
ecore: rely on an event instead of creating one Eo object per future that needs resolving.

Authored by cedric on Apr 6 2019, 6:43 PM.



This was a terrible oversight: the point of having a small native type for futures was
to make them efficient, yet we were using one Eo object per future just to dispatch its
new value. I could have gathered all the dispatching into a single dedicated object, but
in the end we already have one object that notifies us of every loop iteration... the loop
object! And that object has an event we can rely on to trigger the dispatching of futures
without requiring any additional object. So let's do that instead.

Depends on D8566

Diff Detail

rEFL core/efl
Automatic diff as part of commit; lint not applicable.
Automatic diff as part of commit; unit tests not applicable.
cedric created this revision.Apr 6 2019, 6:43 PM
cedric requested review of this revision.Apr 6 2019, 6:43 PM
cedric updated this revision to Diff 21377.Apr 16 2019, 10:03 AM
cedric edited the summary of this revision.

I am wondering if we should use a list here, an eina_array, or even an inarray.


Hmm, if we schedule a new future while we are in this loop, then efl_event_callback_array_del will be executed, and the new schedule call will *not* re-add the event to the loop, even if there is an element in the entries.


Wouldn't it be easier to just do while (entries) { eina_future_cancel(entry->future); }? Then we would have only one place calling free. Additionally, I am not sure this isn't racy, as eina_future_cancel will call recall, which will call free a second time below.


We need to call efl_event_callback_array_del here when loopsched->future_entries is NULL? (I think ^_^)

bu5hm4n requested changes to this revision.Apr 21 2019, 3:04 AM
This revision now requires changes to proceed.Apr 21 2019, 3:04 AM

I am wondering if we should use a list here, an eina_array, or even an inarray.

Array and inarray are inefficient structures when we need to randomly remove values from them, which is necessary here during the execution of recall.


I don't think so, or we are talking about different things. We only do efl_event_callback_array_add when loopsched->future_entries != NULL, which happens before the call to efl_event_callback_array_del above.


I am not sure what you mean, but EINA_LIST_FREE does the while on entries and properly gets the data cast into entry; a plain while wouldn't do that.

As for the race possibility, it is avoided because the entries are a copy of the general list, which is set to NULL before we start triggering any possible callback.


It would be an optimization, as the code in the array handler will do nothing but clean itself up. So I chose fewer lines of code and didn't duplicate the logic here.

bu5hm4n added inline comments.Apr 22 2019, 11:28 PM

Oh, we NULL it out at the top. Now I understand what you mean.


Yeah, I overlooked the NULL assignment to future_entries.


But this *might* add two event submissions when two generations of futures are resolved immediately after each other, instead of waiting for one loop iteration.
Furthermore, if we have code which schedules and recalls a future all the time, then the list of subscribed events would grow a lot. I think the efl_event_callback_array_del should be here.

bu5hm4n added inline comments.Apr 22 2019, 11:33 PM

The problem is still there.

If you execute eina_future_cancel on entry->future while f->scheduled_entry is set on that future, the future is recalled and ecore_future_recall is executed. ecore_future_recall does not check whether the element is still in the list before removing it; after that, it frees the pointer of the scheduled entry. THEN execution comes back to this point: the value of the future is *again* flushed, and the entry is *again* freed.

cedric planned changes to this revision.Apr 23 2019, 9:11 AM
cedric added inline comments.

Oh, I see, good catch.


You are right indeed.

cedric updated this revision to Diff 21573.Apr 23 2019, 9:57 AM

Rebase and fix the recall function.

bu5hm4n accepted this revision.Apr 23 2019, 11:37 AM

Looks good now.

This revision is now accepted and ready to land.Apr 23 2019, 11:37 AM
This revision was automatically updated to reflect the committed changes.