Hi everyone. I have a PHP game script that depends on tasks stored in MySQL.
Each task is stored with an end time and a type; when a task's end time is reached, the script is supposed to auto-process it without any issues.
But when there are more than 20 tasks due at the same time from different players, the loop gets stuck for an unknown reason.
Meaning: if we have 20 tasks whose end time is all NOW(), what is supposed to happen is that the script executes all of them at once.
What actually happens is that it processes them one by one, one per page load, which causes a huge delay and breaks the game structure.
Every page load automatically calls processQueue(), which is:
public function processQueue($type, $playerId)
{
    global $gameConfig;
    $this->load_model('Mutex', 'mutex');
    $this->mutex->releaseOnTimeout();
    if ($this->mutex->lock()) {
        $this->processTaskQueue($type, $playerId);
        // to make it weekly, put "/2" after it
        $row = db::get_row("SELECT gs.cur_week w1, CEIL((TO_DAYS(NOW()) - TO_DAYS(gs.start_date))) w2 FROM g_settings gs");
        if (($row['w2'] - $row['w1']) >= 1) {
            db::query("UPDATE g_settings gs SET gs.cur_week = :cur", array(
                'cur' => intval($row['w2'])
            ));
            $allP = db::get_all("SELECT p.id FROM p_players p WHERE 1");
            $Ids2 = "";
            foreach ($allP as $Ids) {
                $Ids2 .= ($Ids2 == "") ? $Ids['id'] : "," . $Ids['id'];
            }
            $Ids2 = "(" . $Ids2 . ")";
            db2::query("UPDATE p_players p SET p.gold_num = p.gold_num + :gold WHERE p.id IN $Ids2", array('gold' => 200));
            $this->setWeeklyMedals(intval($row['w2']));
        }
        $this->mutex->release();
    }
}
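(Side note: I know the ID list above could just be built with implode(), e.g. $Ids2 = "(" . implode(",", array_column($allP, 'id')) . ")"; assuming db::get_all() returns associative rows, which it appears to. But that is not the problem here.)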
As you can see, it depends on a mutex lock to prevent doubled values during execution. The mutex model is:
<?php
define("__QS_LOCK_FS_", MODELS_DIR . "lock");

class Mutex_Model extends Model
{
    // Acquire the queue lock: flip the DB flag, then take an exclusive flock on the lock file.
    public function lock()
    {
        if (0 < db::count("UPDATE g_settings gs SET gs.qlocked=1, qlocked_date=NOW() WHERE gs.qlocked=0")
            && ($fp = fopen(__QS_LOCK_FS_, "r")) != FALSE) {
            if (flock($fp, LOCK_EX)) {
                fclose($fp);
                return TRUE;
            }
            fclose($fp);
        }
        return FALSE;
    }

    public function release()
    {
        $this->_releaseInternal();
        db::query("UPDATE g_settings gs SET gs.qlocked=0");
    }

    // Force-release the lock if it has been held for more than 20 seconds.
    public function releaseOnTimeout()
    {
        if (0 < db::count("UPDATE g_settings gs
                SET gs.qlocked=0
                WHERE gs.qlocked=1
                AND TIME_TO_SEC(TIMEDIFF(NOW(), gs.qlocked_date)) > 20")) {
            $this->_releaseInternal();
        }
    }

    public function _releaseInternal()
    {
        if (($fp = fopen(__QS_LOCK_FS_, "r")) != FALSE) {
            flock($fp, LOCK_UN);
            fclose($fp);
        }
    }
}
?>
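For reference, my understanding is that flock() is only held while the file handle stays open, so the usual pattern keeps the handle around until release, roughly like this (a minimal sketch of the general pattern, not my game's API):

class FileLock
{
    private $fp = false;

    public function acquire($path)
    {
        // keep the handle open; closing it releases the flock
        $this->fp = fopen($path, "r");
        return $this->fp !== false && flock($this->fp, LOCK_EX);
    }

    public function release()
    {
        if ($this->fp !== false) {
            flock($this->fp, LOCK_UN);
            fclose($this->fp);
            $this->fp = false;
        }
    }
}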
If the lock is acquired, the script calls processTaskQueue(), which is responsible for actually processing the tasks:
public function processTaskQueue($type, $playerId)
{
    $result = db::get_all("SELECT
            q.id, q.player_id, q.village_id, q.to_player_id, q.to_village_id,
            q.proc_type, q.building_id, q.proc_params, q.threads, q.execution_time,
            TIME_TO_SEC(TIMEDIFF(q.end_date, NOW())) remainingTimeInSeconds
        FROM p_queue q
        WHERE TIME_TO_SEC(TIMEDIFF((q.end_date - INTERVAL (q.execution_time*(q.threads-1)) SECOND), NOW())) <= 0
        ORDER BY TIME_TO_SEC(TIMEDIFF((q.end_date - INTERVAL (q.execution_time*(q.threads-1)) SECOND), NOW())) ASC");
    foreach ($result as $resultRow) {
        $remain = $resultRow['remainingTimeInSeconds'];
        if ($remain < 0) {
            $remain = 0;
        }
        $resultRow['threads_completed_num'] = $resultRow['execution_time'] <= 0
            ? $resultRow['threads']
            : floor(($resultRow['threads'] * $resultRow['execution_time'] - $remain) / $resultRow['execution_time']);
        if ($this->processTask($resultRow)) {
            // after processing a single task, re-enter processQueue() (which re-acquires the lock) and stop the loop
            unset($result);
            db::free();
            $this->processQueue($type, $playerId);
            break;
        }
    }
    unset($result);
}
I tried something that reduced the sticking a little bit: when the loop breaks, I call the same function again instead of going back through the lock all over again, like this:

if ($this->processTask($resultRow)) {
    unset($result);
    $this->processTaskQueue($type, $playerId);
    break;
}
But that created a bigger issue: values get doubled on every request.
Example: when a player sends 5000 troops, 10000 come back, which is a huge problem.
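To illustrate the race as I understand it: two overlapping requests can both read the same queue row before either one removes it, so processTask() runs twice for the same task. One common pattern I have seen for this (the `claimed` column is hypothetical, it does not exist in my schema) is to claim each row atomically and only process it when the claim succeeds:

// hypothetical: mark a queue row as claimed before processing it.
// If two requests race, only one UPDATE matches the claimed=0 condition
// and reports 1 affected row; the other sees 0 and skips the task.
$claimed = db::count("UPDATE p_queue q SET q.claimed = 1 WHERE q.id = " . intval($resultRow['id']) . " AND q.claimed = 0");
if ($claimed == 1) {
    $this->processTask($resultRow);
}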
Any ideas how to solve this without causing other issues?
Note: there is no slow query; I debugged it, and each task is processed in less than half a second.
Thanks!