
Long term extension for Position_Manager
Open, TODO, Public

Description

With the latest optimization from @zmike for maps that are moving but not resizing, I think we should be able to easily optimize scrolling by pushing objects that move together into the same buffer. I do not know how best to do it right now (we could go with a limited number of just 2 buffers visible at all times, or with one buffer per object; both have drawbacks). Still, we might want the PositionManager to provide additional information for the Collection/CollectionView to leverage. At this point, I don't know how best to communicate that information (manually request a "buffer" object from the Collection/CollectionView to put items in? Set information on items so they can be grouped? Another buffer?). Anyway, I just want to make sure at this point that we do have a plan for API extension.

cedric created this task. Aug 12 2019, 4:49 PM
cedric triaged this task as High priority.

What do you mean by buffer here?

The same slice trick.

Oh, but wouldn't that also just work if you take the range that is passed via the event and use the API that the collection (view) passes to the position manager for batching?

Not necessarily, as with Group you need to know which objects the Position Manager might move differently.

SanghyeonLee added a comment. Edited Aug 12 2019, 9:40 PM

The Tizen developers have requested this feature from me several times, but we never had enough time to work on it, so we rejected the requests. If we can do this, it will improve scrolling performance very well.
I think we need 3 buffers for this: one in the viewport and the others for the top and bottom...

I still fail to understand what exactly needs to be done here. Can you give a brief explanation of what is expected, in what form, and on which occasion?

@cedric the plan for the group feature is to have one additional field in the function callbacks that gives you the group of the first item. The other group items are just part of the other buffer.
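
(A purely illustrative sketch of what such a field could look like - the struct and field names below are hypothetical, not the actual EFL batching types:)

```
/* Hypothetical sketch only: one extra field in the batch callback result
 * that tells the caller which group the first item of the batch belongs to.
 * Items belonging to another group are expected to live in another buffer.
 * None of these names are real EFL API. */
typedef struct
{
   int first_item_group;  /* group id of the first item in this batch */
   /* ... existing per-batch sizing/placement fields ... */
} Batch_Result_Sketch;
```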

I will continue to work on all that on the 19th. I am still on vacation.

Enjoy your vacation! The idea is to group together a bunch of objects that move together into one smart object that we can turn an Evas map on. This way they become automatically buffered and it will speed up scrolling. But we do not want to put into a group things that would be resized or moved differently. The Position Manager being the only one who knows, we need a way to tell the Collection how to group things.
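
(Roughly, the mechanism looks like this in plain legacy Evas calls - a minimal sketch only, using an Evas box as a stand-in for whatever container the Collection would actually use:)

```
/* Sketch only, not the actual Collection code: put a bunch of item objects
 * into one smart container and move the whole group with a single Evas map,
 * so the group gets buffered and scrolled as one unit. */
#include <Evas.h>

static Evas_Object *
_group_items(Evas *evas, Evas_Object **items, unsigned int count)
{
   Evas_Object *group = evas_object_box_add(evas);
   unsigned int i;

   for (i = 0; i < count; i++)
     evas_object_box_append(group, items[i]);
   evas_object_show(group);
   return group;
}

static void
_group_scroll_to(Evas_Object *group, Evas_Coord y_offset)
{
   Evas_Coord x, y, w, h;
   Evas_Map *map = evas_map_new(4);

   evas_object_geometry_get(group, &x, &y, &w, &h);
   /* draw the whole group shifted by y_offset through one map, so every
    * member moves (and gets buffered) together */
   evas_map_util_points_populate_from_geometry(map, x, y - y_offset, w, h, 0);
   evas_object_map_set(group, map);
   evas_object_map_enable_set(group, EINA_TRUE);
   evas_map_free(map);
}
```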

zmike added a comment. Aug 20 2019, 4:04 AM

@cedric I think you need to lay off the baguettes. I only did optimizations for mask and proxy renders, not map. Do...do I have to look at map too?

In time, maybe, but not necessarily. I have some ideas that could be explored around using proxies instead of maps. Depends. Anyway, that is for later.

zmike edited projects, added Restricted Project; removed efl (efl-1.23). Sep 3 2019, 10:08 AM
zmike lowered the priority of this task from High to TODO.

stop tagging with milestones for release tickets

Before all that kind of optimization, I think we will need to revise the API of the position manager and its internals a bit. The main issues I am seeing:

  • The collection needs to throttle position manager sizing information requests and make sure batch requests are not overwhelming the main loop.
  • Any change to an item size currently throws the entire cache out of the position manager (and in general the position manager gives up on its cache way too easily).
  • There is no way for the collection view to inform the position manager of information like average item size or total viewport size, which would avoid unnecessary batched size requests.

Before all that kind of optimization, I think we will need to revise the API of the position manager and its internals a bit. The main issues I am seeing:

  • The collection needs to throttle position manager sizing information requests and make sure batch requests are not overwhelming the main loop.

There has been a nice way to distinguish between caching sizing calls and real sizing infos. Was that not enough?

  • Any change to an item size currently throws the entire cache out of the position manager (and in general the position manager gives up on its cache way too easily).

Yeah, this is something that has not been optimized yet. I never knew in which direction we should optimize this, what kinds of situations are heading towards us, how much we can already optimize on the side of the collection or CV, etc.

  • There is no way for the collection view to inform the position manager of information like average item size or total viewport size, which would avoid unnecessary batched size requests.

Absolute viewport size - okay. But I do not want to feed the PMs too much implementation-specific information; I fear that one day we will find a better layouting algorithm and then still have to serve average sizes for the sake of backward compatibility.
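
(If only the viewport size ever gets exposed, a hypothetical hint setter could be as small as this - the names are illustrative and not an existing EFL API:)

```
/* Hypothetical sketch only: the collection (view) hands its viewport size to
 * the position manager so the PM can bound how far ahead it asks for item
 * sizes. Neither the type nor the function exists in EFL. */
typedef struct _Pm_Sketch Pm_Sketch; /* stand-in for a position manager */

void pm_sketch_viewport_hint_set(Pm_Sketch *pm, int viewport_w, int viewport_h);
/* e.g. called by the CollectionView whenever its geometry changes */
```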

cedric added a comment. Fri, Nov 8, 3:25 PM

Before all that kind of optimization, I think we will need to revise the API of the position manager and its internals a bit. The main issues I am seeing:

  • The collection needs to throttle position manager sizing information requests and make sure batch requests are not overwhelming the main loop.

There has been a nice way to distinguish between caching sizing calls and real sizing infos. Was that not enough?

The issue is that we have bursts of requests whenever the cache needs to be refreshed. The current strategy is to answer all requests and, if necessary, build objects to do the sizing. The problem I see is that such a burst of requests should actually be properly spread out so that all of them can happen over time without consuming a large amount of the main loop iteration (< 16ms, and maybe < 8ms one day). I can throttle this in the CollectionView, but that needs to be a conscious decision of where to put that logic and why.
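
(A minimal sketch of the kind of throttling meant here, assuming a simple queue drained from an Ecore idler under a per-iteration time budget; the queue and callback names are illustrative, not actual CollectionView internals:)

```
/* Sketch only: drain queued sizing requests from an idler, stopping once a
 * per-iteration time budget is spent so one burst cannot eat a whole frame. */
#include <Ecore.h>
#include <Eina.h>

typedef struct
{
   Eina_List *pending;            /* queued sizing requests (opaque items)  */
   void (*process)(void *item);   /* does the actual sizing for one item    */
   double budget;                 /* seconds per iteration, e.g. 0.008      */
} Sizing_Queue;

static Eina_Bool
_sizing_idler(void *data)
{
   Sizing_Queue *q = data;
   double start = ecore_time_get();

   while (q->pending)
     {
        void *item = eina_list_data_get(q->pending);
        q->pending = eina_list_remove_list(q->pending, q->pending);
        q->process(item);
        if (ecore_time_get() - start > q->budget) break; /* budget spent */
     }
   /* keep the idler alive while work remains, otherwise stop */
   return q->pending ? ECORE_CALLBACK_RENEW : ECORE_CALLBACK_CANCEL;
}

/* usage sketch: ecore_idler_add(_sizing_idler, &queue); and re-add it
 * whenever new requests get queued after the idler has cancelled itself */
```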

  • Any change to an item size currently throws the entire cache out of the position manager (and in general the position manager gives up on its cache way too easily).

Yeah, this is something that has not been optimized yet. I never knew in which direction we should optimize this, what kinds of situations are heading towards us, how much we can already optimize on the side of the collection or CV, etc.

My main concern right now is that any object being deleted or added throws off the entire cache. A dedicated structure mixing access trees and buffers would seriously help here, but it isn't that simple. For example, how do we handle the insertion of a batch of items at a random position? That is where I want to head for MVVM, to support sorting. I would really like the ability to insert a bunch of objects at once. Basically everything needs to be a batch.
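
(A hypothetical shape for such a batch insert entry point - the names below are illustrative and not part of the existing position manager API:)

```
/* Hypothetical sketch of a batch-oriented insert: hand the position manager
 * a contiguous run of new items at an arbitrary index in one call, so its
 * cache can be patched once instead of being thrown away per item.
 * None of these names are existing EFL API. */
typedef struct _Pm_Item_Sketch Pm_Item_Sketch; /* opaque per-item handle */

void
pm_sketch_items_insert(void *pm,
                       unsigned int at_index,   /* insertion position  */
                       Pm_Item_Sketch **items,  /* new items, in order */
                       unsigned int count)      /* how many at once    */
{
   /* a real implementation would shift indices >= at_index by count and
    * splice the run into its access structure, keeping cached sizes for
    * untouched items instead of invalidating the whole cache */
   (void)pm; (void)at_index; (void)items; (void)count;
}
```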

  • There is no way for the collection view to inform the position manager of information like average item size or total viewport size, which would avoid unnecessary batched size requests.

Absolute viewport size - okay. But I do not want to feed the PMs too much implementation-specific information; I fear that one day we will find a better layouting algorithm and then still have to serve average sizes for the sake of backward compatibility.

We will have to look at it, but if we could avoid large batches of requests and only do a minimal one to start, that would clearly be better.