
Eina Debug

Getting started

Eina Debug is a layer that provides a way to debug EFL applications through a transport channel between a debug tool and the applications running on a device.

To interrupt the EFL core as little as possible, the communication is handled in a separate thread. Neither the Ecore loop nor the Ecore helpers (sockets...) are used there.


Two terms are used throughout this page:

  • Client: either an application to debug or a debug tool. From the daemon's point of view, the difference is slight, as both are entities exchanging information with each other.
  • Target: the device where the applications to debug reside and to which the debug tool connects

Modus operandi

When launched, the client tries to connect (only once) to a local daemon. Once connected, it sends details such as its pid and its name, and registers the different operations it supports (evlog, cpu...).

The higher layers of the application can register their own debug capabilities.

When a debug tool is launched, it can connect to a device daemon (local or remote) and get information about and from the connected applications. To establish a dialog, an opcode is stored in the header of each packet. This opcode must be known by both the debug tool and the application.


Connection application-daemon

The connection between the device applications and their daemon is done with UNIX sockets, for simplicity and security.

If the daemon doesn't exist when the application is launched, no connection attempt will be made later.

When the connection is established, the application sends its pid, name and the supported protocol version.

Opcodes and registration

Opcodes are integers sent in each packet, next to the destination id. These opcodes are generated by the daemon.

After an application connects to the daemon, it sends the operations (as strings) that it supports. The daemon searches for an already generated opcode for each string; if none is found, one is generated. The daemon sends the ids back as a response.

The operation strings are not restricted in any way, but it is recommended to follow some rules to avoid conflicts and improve readability:

  • The string should begin with the domain name (e.g. Eo, Eolian, Evlog...).
  • Sub-domains can be used for better granularity.
  • The operation itself should be explicit enough. Avoid names like ls, find, show...

Examples of operation strings:

  • Eo/objects_ids_get
  • Eolian/object/info_get

As mentioned before, an operation will always get the same opcode, no matter the nature of the client.

Two opcodes are not generated by the daemon, as they are needed to establish a first dialog between the application and the daemon:

  • Hello: this is the first packet sent. It gives some info about the app.
  • Opcodes registration.

Function registration example

Here is an example of registering operations:

static const Eina_Debug_Opcode _ops[] =
{
   { "Eo/objects_ids_get", &_eoids_get_op, &_eoids_get },
   { "Eolian/object/info_get", &_obj_info_op, &_obj_info_get },
   { NULL, NULL, NULL } /* sentinel terminating the array */
};

eina_debug_opcodes_register(session, _ops, _status_cb);

As the registration is done asynchronously, a callback (_status_cb) is needed to determine when the opcodes are ready. _eoids_get_op and _obj_info_op are two variables used to store the opcodes; pointers to them are given so they can be filled in later. _eoids_get and _obj_info_get are the callbacks invoked when a packet is received with the corresponding opcode.
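A callback such as _eoids_get then receives the session, the sender's client id and the packet payload. The sketch below paraphrases this shape with stand-in types; the exact signature and types should be checked against eina_debug.h.

```c
/* Stand-in for the real Eina_Debug session type (assumption: the
 * real callback receives the session, the sender's client id and
 * the packet payload). */
typedef struct _Eina_Debug_Session Eina_Debug_Session;

/* Invoked when a packet arrives with the "Eo/objects_ids_get"
 * opcode. Returns 1 (i.e. EINA_TRUE) when the packet was handled. */
int
_eoids_get(Eina_Debug_Session *session, int srcid, void *buffer, int size)
{
   (void)session;
   (void)srcid;
   (void)buffer;
   (void)size;
   /* a real implementation would parse the payload here and send a
    * response back to srcid */
   return 1;
}
```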

Callbacks invocation

When the connection between the client and the daemon is made, a thread is created to handle the packets coming from the daemon. When a packet is received, the dispatcher extracts the opcode to determine which function to call, if any. When a function is found, it is simply invoked.
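The dispatch step can be sketched as a table lookup, with the opcode indexing the registered callbacks. Names and types here are illustrative; the real dispatcher works on Eina_Debug sessions and packets.

```c
/* Minimal sketch of the default dispatcher: the opcode indexes a
 * table of registered callbacks. */
#define MAX_OPCODES 256

typedef int (*Debug_Cb)(void *buffer, int size);

static Debug_Cb _cbs[MAX_OPCODES];

/* Associate a callback with an opcode (done during registration). */
void
dispatch_register(int opcode, Debug_Cb cb)
{
   if (opcode >= 0 && opcode < MAX_OPCODES) _cbs[opcode] = cb;
}

/* Invoke the callback for a received packet, if any is registered. */
int
dispatch(int opcode, void *buffer, int size)
{
   if (opcode < 0 || opcode >= MAX_OPCODES || !_cbs[opcode])
     return 0; /* no callback registered for this opcode */
   return _cbs[opcode](buffer, size);
}

/* Example callback, for demonstration only. */
int
_ping_cb(void *buffer, int size) { (void)buffer; (void)size; return 42; }
```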


Daemon

The daemon is in charge of enabling communication between applications and debug tools. Two applications must not be able to communicate with each other through the daemon, nor two debug tools. To ensure this, two connection types are available:

  • via a UNIX socket: this is for local applications connecting to the daemon
  • via a TCP socket: this is for the debug tools

Additionally, the daemon centralizes the information about the connected processes (name, pid...). It manages the ids attributed to the operations supported by the applications.

Every time a packet is received on a socket, the client id is checked. If the value is 0, the packet is consumed inside the daemon, which is its final destination. Otherwise, the daemon replaces the packet's client id with the source id and sends the packet to the destination. Two applications are not allowed to speak to each other, nor are two debug tools.
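The routing rule above can be sketched as follows. The types, field names and role enum are assumptions for illustration, not the daemon's real data structures.

```c
/* Header fields as described later: size, client id, opcode. */
typedef struct { int size, cid, opcode; } Hdr;

typedef enum { APP, TOOL } Role;

/* Routing decision for one packet. Returns 0 if the daemon consumes
 * the packet, 1 if it must be forwarded (the cid field is rewritten
 * from destination id to source id), -1 if the route is forbidden
 * (app<->app or tool<->tool). */
int
route(Hdr *hdr, int src_id, Role src_role, Role dst_role)
{
   if (hdr->cid == 0) return 0;         /* daemon is the destination */
   if (src_role == dst_role) return -1; /* same-role traffic forbidden */
   hdr->cid = src_id;                   /* destination id -> source id */
   return 1;
}
```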

Connection types


Local connection

All applications connect to the daemon by default, if it exists. As mentioned before, the connection is made via a UNIX or TCP socket, depending on the nature of the client (target application or debug tool). The default connection is always done via the UNIX socket.


Remote connection

The remote connection is needed to debug applications from other devices. It is made by connecting via TCP to localhost on a port given by the user. The user is in charge of establishing a (secured) connection between the devices; for example, an SSH tunnel with local port forwarding (e.g. `ssh -L 6666:localhost:6666 user@target`) can be used.

For the daemon, the connection type determines the role of the client: a TCP connection comes from a debugger, while a UNIX connection comes from a standard application.

When a debugger connects to a local daemon, it indicates that it is a "master", which establishes a TCP connection. The default port for the daemon is 6666.

Packet header

The header consists of:

  • the packet size including this field. It is mainly used to determine the number of bytes to read from the socket.
  • the "client id": the sender sets the destination id in this field. The daemon uses it to route to the right application and changes it to be the source id.
  • the opcode id: the id of the operation. It is generated and returned by the daemon when the application registers its operations.
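Assuming three 32-bit little-endian fields, which matches the description above (the exact layout and field names should be checked against the Eina headers), the header can be pictured as:

```c
#include <stdint.h>

/* Sketch of the packet header: three 32-bit little-endian fields.
 * Field names are illustrative. */
typedef struct
{
   uint32_t size;   /* total packet size, this field included */
   uint32_t cid;    /* destination id on send, source id on delivery */
   uint32_t opcode; /* operation id generated by the daemon */
} Packet_Header;
```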


Dispatcher

The dispatcher is in charge of invoking the callback corresponding to the packet opcode. The opcode-callback relation is established during operation registration. The opcode id is used as an index to find the callback, which is then invoked.

The default dispatcher can be overridden by the application. This is useful if packets have to be transferred to the main loop so the information stored in the payload can be displayed graphically. Clouseau overrides the dispatcher to achieve this goal.


This section is important as all the layers have to take care of it. If not, expect crashes and/or unexpected results.
The endianness issue is well known when two different devices have to communicate. TCP convention is Big Endian. However, most of the devices (laptops, phones...) uses Little Endian processors. If we follow TCP, we would waste our time swapping to/from Big Endian.
So we assume that the packets MUST be little-endian. Most of the time, nothing will have to be done on both sides. It means that the swap is needed only when a big endian machine is involved.
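As an illustration of the little-endian rule, helpers like these read and write a 32-bit field independently of the host byte order (a generic sketch, not Eina's code):

```c
#include <stdint.h>

/* Write a 32-bit value into a buffer as little-endian, regardless of
 * the host byte order. */
void
le32_write(uint8_t *buf, uint32_t v)
{
   buf[0] = v & 0xFF;
   buf[1] = (v >> 8) & 0xFF;
   buf[2] = (v >> 16) & 0xFF;
   buf[3] = (v >> 24) & 0xFF;
}

/* Read a little-endian 32-bit value back into host order. */
uint32_t
le32_read(const uint8_t *buf)
{
   return (uint32_t)buf[0] | ((uint32_t)buf[1] << 8) |
          ((uint32_t)buf[2] << 16) | ((uint32_t)buf[3] << 24);
}
```

On a little-endian host both helpers boil down to a plain copy, which is exactly why the little-endian convention avoids wasted swaps most of the time.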


Timers

Timers are created with eina_debug_timer_add, which takes a timeout, a user callback and user data. Timers run in a dedicated thread.

The thread waits on epoll with the timeout of the first registered timer. Each subsequent timer stores its timeout relative to the previous one.

A pipe, attached to the epoll instance, is used to wake the thread up when the timers list changes or when an exit is required.

As with Ecore_Timer, when the callback returns true, the timer is re-appended to the timers list.
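The relative storage described above can be sketched as a singly linked list where each entry holds its delay relative to the previous one; the epoll timeout is then always the head's delta. The names and insertion logic are illustrative assumptions, not Eina's implementation.

```c
#include <stdlib.h>

/* One pending timer: its delay is relative to the previous entry. */
typedef struct Timer
{
   int delta_ms;
   struct Timer *next;
} Timer;

/* Insert a timer expiring timeout_ms from now, keeping all deltas
 * relative. Returns the new list head. */
Timer *
timer_insert(Timer *head, int timeout_ms)
{
   Timer **p = &head;
   /* walk until the remaining timeout fits before the next timer */
   while (*p && (*p)->delta_ms <= timeout_ms)
     {
        timeout_ms -= (*p)->delta_ms;
        p = &(*p)->next;
     }
   Timer *t = malloc(sizeof(Timer));
   t->delta_ms = timeout_ms;
   t->next = *p;
   if (t->next) t->next->delta_ms -= timeout_ms; /* keep deltas relative */
   *p = t;
   return head;
}
```

With this layout, firing the head timer only requires popping it and waiting on epoll for the next head's delta; no other entry needs to be updated.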


Clouseau

Clouseau is a UI inspection tool for EFL applications where you can see many properties of the UI objects. When Eina Debug appeared, it became evident that Clouseau should use this layer instead of its own to communicate with the target applications.
Although the features have been ported, the UI and the core have been redone from scratch.


User interface

The UI has been designed to be simpler. The connection (local, remote, offline) can be changed on the fly without restarting the application. An application's object information is not retrieved all at once: the list of objects is fetched in one shot, but an object's detailed information is requested only when that object is selected.
Screenshots can be taken multiple times (not only once as in the previous Clouseau). An icon showing the list of screenshots, ordered by time, is available.
The settings are available in the main menu.


Snapshot

The snapshot feature consists of storing all the information about an application in a file so it can be reloaded later without connecting to the application. Screenshots are stored too.
In the previous version of Clouseau, EET was used to store the information as a complex tree.
The new version uses the Eina_Debug layer to its advantage. When a snapshot is requested:

  • Clouseau sends a snapshot command to the target and waits for the packets
  • The Eina Debug thread receives the packet and forwards it to the Clouseau module
  • The Clouseau module creates dummy packets to:
    • get the classes
    • get the objects and their information
  • The Clouseau packet dispatcher, which normally forwards the packets to the main loop, instead stores the packets in a buffer.
  • The Clouseau module indicates to Clouseau that the snapshot is finished
  • Clouseau creates a simple structure where it stores the application information, such as pid, name, opcodes (relative to the stored packets) and the list of screenshots
  • The structure is saved as EET. The packets buffer is stored right after the EET buffer

During loading:

  • Clouseau opens the file and extracts the structure and the packets buffer
  • For each packet, the callback corresponding to the opcode is invoked
  • The screenshots are then linked to the canvases


Extensions

In the first release of Clouseau, extensions appear to be a feature implemented for a specific purpose and not designed much beyond that.

It should give the extension developer full access to Clouseau resources, allowing it to send requests to the application, to display UI elements...

The extension code does not have to live inside the Clouseau tree; it can be loaded at runtime from a specific path given by the user.

Two functions must be implemented:

  • the start function, whose parameter is the environment (session, client id...). The extension is required to fill in some fields of this structure, such as the widget to show on screen and callbacks to import and export data...
  • the stop function
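Putting this together, an extension module might look like the following skeleton. The structure name, its fields and the function names are hypothetical, based only on the description above, not on the real Clouseau extension API.

```c
/* Hypothetical environment passed to an extension's start function;
 * the fields below are assumptions derived from the description. */
typedef struct
{
   void *session;   /* debug session to send requests through */
   int   cid;       /* client id of the inspected application */
   void *ui_widget; /* widget the extension shows on screen */
} Extension_Env;

/* Called when the extension is loaded; fills the fields Clouseau
 * expects (widget, import/export callbacks...). Returns 1 on
 * success. */
int
extension_start(Extension_Env *env)
{
   env->ui_widget = 0; /* placeholder: a real extension creates a widget */
   return 1;
}

/* Called when the extension is unloaded; releases whatever
 * extension_start allocated. */
int
extension_stop(void)
{
   return 1;
}
```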
Last Edited: Jun 5 2017, 12:00 AM