
Chapter 4. Understanding the Edje Layout Engine


The Edje library is a complex and intimidating beast at first sight. It is hard to grasp the full potential of this library, let alone describe what it does in simple terms. In a nutshell, Edje is what really showcases what Evas can do, and it allows the programmer to create live, animated and playful user interfaces. It is perfectly normal to read this chapter and feel that you still do not grasp all the capabilities of Edje.

Edje has multiple roles, and depending on the application it may look like a complex library with multiple purposes, which can confuse a programmer coming into contact with EFL for the first time. As time passes (and the EFL libraries are finally released!) more and more Edje applications will appear that show what Edje can really do.

So depending on who you ask and what application you are using, Edje can be:

  • A layout engine.
  • An animation/effects graphic library.
  • An Interface Description Language.
  • A logic/appearance separation library.
  • A theming framework.
  • A GUI previewer (think glade).
  • An abstraction over Evas.

All the above are different aspects of Edje. You can view Edje as many things, but the fact is that it remains the same library no matter how you look at it. It just happens that Edje is clearly something more than a trivial EFL library. We will explore each role of Edje in turn.

Edje as a Layout Engine

Most applications are built so that, when started, they automatically request a specific size from the window manager. This size is usually the one the application programmer has chosen so that the application fully utilizes the given screen space. Most of the time the user never actually changes the size of an application window. If the application is the sort of utility used only for a brief period, the user might not even move it to a better position, leaving it where the window manager placed it. But what happens if the user resizes the window or even fully maximizes it?

If you are really unlucky the application won't resize at all, since it is a dialog. How many times have you tried to grab a window by its borders only to realize that you cannot change its size? If you are equally unlucky the application window will resize but the window contents will stay the same. This happens because the elite programmer of the application never bothered with window resizing in the first place. Go bug him/her about this.

In most cases, however, the application window will resize smartly. Assuming that the application is document based, its content area along with the status bar and the toolbar will be notified of the new coordinates and will change their layouts to match the new size. Notice anything strange here? Of course not, since 99% of applications resize this way.

The problem stems from the fact that all GUI toolkits are coordinate-based. Buttons and text have fixed sizes, so although the document area will actually grow, the toolbar and status bar will not. Empty space is wasted in several parts of the window. See the following figure:

Figure 4.1. Space wasted after resizing.

Toolkit programmers know this and have provided application developers with several facilities to control where this extra space goes. The buzzwords are containers/boxes/glue space/autofill/constraints etc. Most developers either don't use these and just hardcode their user interface, or, even if they do use them, are never actually happy about the behavior of the application after resizing.

The truth is that all these methods are complex to use and understand. In any case, the fact that not all elements of an application resize evenly when its window is maximized is problematic. Your latest application might look cool on your 19" LCD monitor at 1600x1200, but it looks ugly on your friend's 17" LCD at 1024x768 and is unusable on your aunt's 15" CRT at 800x600.

Edje allows you (if you want it, of course) to give relative coordinates to all your interface elements, so you can create a truly resolution-independent application. You use Edje to describe the relative size and location of the parts of your application. Each time the application window is resized, all its elements will also be resized proportionally.

For each interface element you only need to describe its upper-left and bottom-right corners as they will appear in the application window. The figure below shows this. Let's say that your application is a "scroller" which will show some content and allow the user to go up and down by clicking arrows at the top and bottom of the window. You quickly draw two arrow images (one up and one down) in Inkscape and export them as .png images. Then you decide that you want each arrow to take up 10% of the window height at the top and bottom.

Figure 4.2. Relative positioning in Edje.

To tell Edje where a part of your interface resides, you describe the relative coordinates of the top-left and bottom-right corners of the area it occupies on screen. Coordinates are normalized to the window size of your application (that is why they are called relative). So the top-left corner of the whole window is at x=0.0, y=0.0 while the bottom-right corner is at x=1.0, y=1.0. Negative values and values greater than 1.0 are actually valid too. They describe elements outside the viewable window and are well suited to animations (explained later). The next figure shows the relative coordinates of our example according to this approach.

Figure 4.3. An example of relative positioning in Edje.

This visual representation of the window content translates into the following Edje code.

Example 4.1. Relative coordinates in Edje

part
{
	name, "up_button";
	type, IMAGE;
	description
	{
		state, "default" 0.0;
		rel1
		{
			relative, 0.0 0.0;
			offset, 0 0;
		}
		rel2
		{
			relative, 1.0 0.1;
			offset, 0 0;
		}
		image
		{
			normal, "up.png";
		}
	}
}
part
{
	name, "down_button";
	type, IMAGE;
	description
	{
		state, "default" 0.0;
		rel1
		{
			relative, 0.0 0.9;
			offset, 0 0;
		}
		rel2
		{
			relative, 1.0 1.0;
			offset, 0 0;
		}
		image
		{
			normal, "down.png";
		}
	}
}

The key point here is that the rel1 block corresponds to the top-left corner of something while rel2 is the bottom-right one. Take a moment to compare this code with the image presented before. You should fully understand how the text describes the interface and what the float values (0.0, 0.1, 0.9 and 1.0) mean. Ignore the offset lines for now. The name and type keywords are self-explanatory.

So what have we accomplished so far? Without writing a single C function we have a fully dynamic interface. Check the figure below. No matter how the user resizes the window, the interface will automatically "adapt" to its new size. Window coordinates? Screen coordinates? World coordinates? Translation between them? The programmer needs to write no code for any of them. It is already in Edje (or rather Evas).

Figure 4.4. Adapting automatically to a new window size.

Of course a lot of people will probably step up and shout that this can only work if our graphics are vector based, since that would make scaling a lossless procedure and always produce perfect quality visuals for the interface. If you have read the Evas chapter you already know how to answer them. Evas/Edje has really sophisticated image-resizing algorithms, so in most cases (provided that your image resources are reasonably sized) the loss in quality will never be evident to the end user. And unlike vector graphics, the Edje implementation is really fast.

Edje as Animation/Effects Library

We have seen how Edje can be used to describe the location of interface parts in the application window. This is nice for static applications, but EFL is all about motion. Edje allows you to describe animations for your user interface. And all this in a very natural way.

Let's compare the Edje way with the traditional one. This time we will examine the adventures of Mary Developer. Mary wants to create a simple animation for her latest application: a ball sprite which starts from the top-left corner of the window and goes all the way down to the bottom-right. We choose this animation path because we assume that these are the positive directions of the x and y axes of the application window. Mary wants the animation to last 5 seconds at 20 frames per second. The canvas is a 300x300 rectangle.

Mary carefully studies the documentation of her graphics toolkit. She learns about timers and how to use them (and maybe a little about threads). She spends some time on timing calculations and finally crafts the following code:

Example 4.2. A simple animation (in C)

//Include files which contain implementation
//of linked lists or other data structures.
[...]
//Include files which contain timers 
[...]

int main()
{
	Canvas *a_canvas;
	List *objects_to_be_drawn;
	Timer *animation_timer;
	Image *image;

	//Canvas is 300x300
	a_canvas=create_new_canvas(300,300);

	//Do not forget the paint function!
	//Setup a callback. VERY IMPORTANT
	set_paint_function_of_canvas(a_canvas,my_repaint);

	//We assume that ball.png is 10x10 pixels
	image=create_new_from_file("ball.png");
	set_coord_image(image,0,0);
	//Append the image to the objects drawn by the canvas
	add_image(objects_to_be_drawn,image);

	show_canvas(a_canvas);
	repaint(a_canvas); //Here the my_repaint function is called.

	//Setup a timer to create the animation
	//Schedule the timer to run every 50ms (1000ms / 20 frames per second)
	animation_timer=timer_create(animate,50,image);

	//Continue with the rest of the program
	[...]
}

//Function which smells of X-Windows internals (what happens after an expose event)
void my_repaint(Canvas *where)
{
	canvas_object *current;
	while(objects_to_be_drawn != EMPTY)
	{
		current=get_next_object(objects_to_be_drawn);
		draw_object(where,current); //Finally each object is drawn.
	}
}

//The animation function which is controlled by a timer
//Return 0 if the timer is finished or 1 if
//it is to be rescheduled again
int animate(void *data)
{
	canvas_object *ball;
	int x;
	int y;

	ball=(canvas_object *)data;
	//Again let's say we know it is an image for simplicity
	x=get_image_x(ball);
	y=get_image_y(ball);

	//Advance coordinates by 3 to each direction
	//3 is found by dividing the canvas size (300) by total
	//number of frames we want to show (5 seconds * 20 frames each =100)
	//Therefore 300/100 =3
	x=x+3;
	y=y+3;

	//Update the new coordinates of the object
	set_coord_image(ball,x,y);
	//Make sure that the canvas is updated too
	repaint(a_canvas);

	//We need a check here to detect whether the ball has reached
	//the bottom-right corner. If it has, the timer should be stopped.
	//One could also count the number of frames shown so far.
	//There are more elegant ways to deal with this.
	//The fact remains that a check must exist in one form or another.
	if(x>=300 || y>=300) return 0;

	//The ball has a long way to go.
	//Reschedule the timer
	return 1;
}

That is a lot of code for a simple animation. No wonder most of today's applications seem so static and motionless. Animation libraries do exist, but they are only used in specific application domains (usually games). With Edje this is no longer true. Animation comes to the desktop!

Apart from the sheer size of the code above, there is another important problem. If you have ever coded like this you should see it right away. Mary wants the animation to last 5 seconds at 20 frames per second, so she has to make calculations to find the total number of frames shown and the per-frame progress of the ball sprite. These values are currently hardcoded (the 50ms timer delay and the 3 pixel movement, respectively). These values are only correct for a 300x300 canvas (also hardcoded). But what happens if the user resizes the application window? These values have to be recalculated. Mary has to write additional code which automatically adjusts them so that the animation is smooth no matter the size of the window.

All this becomes too complicated for a simple application and also forces the programmer to deal with canvas management more than she should. There has to be a better way. Actually there is, and it can already be found in most advanced 3D (and some 2D) animation/CAD/modelling programs. If you have ever used one (Blender and Synfig are such open source applications for 3D and 2D respectively) you should already be familiar with it. The concept is called key framing in almost all (3D) animation suites. We would like to digress a bit at this point and mention a little history. You can always skip ahead if you want.

Before computing reached the masses, any kind of animation was a daunting task. The artist had to draw each frame separately. Cinema quality (24 frames per second) animation meant that a single artist could produce only very short clips; more artists were needed for longer films. People came up with two partial solutions. The first was to lower the number of frames to 10, or up to 15 in the best cases. The animation was less smooth, but fewer frames had to be drawn. This kind of animation can still be seen today in animated films destined to be shown on TV to young audiences. The second solution was to draw a small number of frames containing the backgrounds and focus the actual work on the parts of the screen that changed during the animation (characters moving and talking). Of course this meant that full motion action was not an option. Japanese animation (a.k.a. anime) has taken this concept to extremes, featuring extremely detailed static backgrounds and blocky moving characters.

3D animation with the help of computers was a revolution. An artist could spend her time creating complex models of characters and places and then have the computer render the scene from different perspectives. Each frame was calculated by the computer. High quality animation became an option, since the computer could render 30 or even 60 frames per second with no additional effort from the user. A single artist with a single workstation could create anything, and in the case of professional studios, with teams of artists and clusters of computers (render farms), high quality films took over the movie and gaming industry in less than a decade. All this became possible because the artist no longer defines frames but instead defines key positions and commands the computer to compute the animation between the different keys.

The concept works elegantly both in 3D and 2D. The user defines a starting point A for an object and selects some characteristics of the object to be recorded. The location of the object at this point is stored by the computer, but other parameters such as size, rotation, color, textures and even shape can be recorded too. Then a second point B is defined where the object has different values for the recorded parameters. Finally the computer jumps in, automatically creating the in-between frames by interpolating the recorded values between points A and B. The result is a smooth transition, as seen in the next figure. In the case of different shapes this is a very efficient way to create morphing, a technique commonly used in commercials, games and movies.

Figure 4.5. Computer generated animation using keys.

Edje implements this concept in 2D and allows the programmer to specify several key positions for objects displayed on the canvas. Since Evas is a stateful canvas, as already mentioned in the previous chapter, it can automatically compute the frames in between and create smooth transitions between several states of the canvas.

Joe Programmer sees how Mary Developer struggles with her simple animation. He has become an experienced EFL programmer since the last chapter, so he offers to help Mary and introduces her to EFL. He listens to her requirements (20fps and a 5 second animation) and after some coding he presents her with the following listings:

Example 4.3. The same simple animation in Edje

The C code of the program:

int main()
{
	//Code that creates a canvas and an Edje object
	//like any other object (text, image, rectangle e.t.c)
	[...]

	//Specify frames per second to 20
	edje_frametime_set(1.0/20.0);

	//Continue with rest of the program
	[...]
}

Inside the Edje description of the interface: (by now you know it is not C)

part
{ 
	name, "ball";
	type, IMAGE;
	description 
	{ 
		state, "default" 0.0;
		rel1 { 
			relative, 0.0 0.0;
			offset, 0 0;  
		}       
		rel2
		{ 
			relative, 0.05 0.05;
			offset, 0 0;  
		}       
		image
		{
			normal, "ball.png";
		}       
	}       
	description
	{ 
		state, "finish" 0.0;
		rel1 { 
			relative, 0.95 0.95;
			offset, 0 0;  
		}       
		rel2
		{ 
			relative, 1.0 1.0;
			offset, 0 0;  
		}       
		image
		{
			normal, "ball.png";
		}       
	}       
}
program
{
	name, "animate_ball";
	signal, "show";
	action, STATE_SET "finish" 0.0;
	transition, LINEAR 5.0;
	target, "ball";
}

Notice the key positions. Point A is the state called default. This is the name Edje uses for the starting point of an interface element; you cannot change this name. Point B is the state called finish, which places the ball sprite at the bottom-right of the canvas window. Animation is accomplished by the program block, which states that we want a linear transition lasting 5 seconds (the transition keyword) for the ball element (the target keyword), changing its state to finish (the action keyword). We want this animation to be launched when the program loads and the canvas is shown (the signal keyword).

Also notice the lack of extra calculations. Mary's requirement of a 5 second transition at 20fps is transferred directly to code. The number of frames shown and the delay between them are automatically calculated by Edje. The programmer does not need to worry about low level stuff. Finally, notice the lack of any hardcoded sizes. The canvas can be any size and the ball sprite will be 5% of it. This means that no matter the size of the application window, Edje will adjust sizes, delays and frames to satisfy the requirements (20fps for 5 seconds). No extra code is needed for that. The end result is that the user can resize the application any way she likes and the animation quality will be preserved.

Edje is all about power. We stressed in the previous section that you can describe all the elements of your interface in a relative way if you want. If you don't want this, you can still revert to fixed-size graphics. Let's assume that Mary likes Edje-based animations but wants to keep the ball sprite at its original size (10x10) no matter what. That is, the canvas can be any size and the animation will still last 5 seconds at 20fps, but the ball sprite will have a fixed size. This is where the offset keyword comes in. It allows you to describe sizes in pixels instead of percentages. Mary can rewrite the Edje description of the interface as:

Example 4.4. The same simple animation in Edje (fixed size version)

part
{ 
	name, "ball";
	type, IMAGE;
	description 
	{ 
		state, "default" 0.0;
		rel1 { 
			relative, 0.0 0.0;
			offset, 0 0;  
		}       
		rel2
		{ 
			relative, 0.0 0.0;
			offset, 10 10;  
		}       
		image
		{
			normal, "ball.png";
		}       
	}       
	description
	{ 
		state, "finish" 0.0;
		rel1 { 
			relative, 1.0 1.0;
			offset, -10 -10;  
		}       
		rel2
		{ 
			relative, 1.0 1.0;
			offset, 0 0;  
		}       
		image
		{
			normal, "ball.png";
		}       
	}       
}
program
{
	name, "animate_ball";
	signal, "show";
	action, STATE_SET "finish" 0.0;
	transition, LINEAR 5.0;
	target, "ball";
}

Remember that positive values go rightwards in the x direction and downwards in the y direction. You should understand what the numbers 0.0, 1.0, +10 and -10 do. Offsets are, of course, a great way to create depressed buttons in your interfaces. They also allow you to fine-tune the size of your interface elements if you do not like the relative approach.

What you have seen so far is a trivial example of Edje animation. Here the only variable we recorded between the two states of the ball sprite was the location, but Edje allows you to record many more, including size, color, text, alpha value, image etc. You can also have more than two states. You can create complex animations with multiple states acting as intermediate stations between the beginning and end of an animation. Edje also simplifies the creation of animations built from a large number of successive images (e.g. rotating logos). You can do many things in Edje! We have barely scratched the surface.

Last but not least we should mention that apart from the LINEAR transition, Edje also includes ACCELERATE, DECELERATE and SINUSOIDAL transition effects.

Edje as an IDL

Almost all the C code we have shown so far is imaginary. It doesn't show any of the actual API that the EFL libraries provide. This is intentional, of course: this way you can concentrate on the high level concepts instead of being puzzled by why and how the API works. But this does not apply to the Edje "code".

All three Edje snippets shown so far are actual Edje code. You cannot simply copy-paste them for your own tests because some infrastructure blocks are missing, but all of the code is exactly as you would write it inside your programs. If you think that the listings are a bit abstract, it is because Edje is abstract, not because we have removed any important code.

With that in mind, you can see that with Edje you describe your graphical interface. Your code says what your interface looks like and what it does once loaded, but not how it does it. That part of the complexity is handled by Edje. As you saw in the previous section, you can forget about timers and the manual moving and sizing of interface elements. Edje works for you by abstracting away all this low level code. You can keep coding the application logic rather than the application canvas.

In this sense Edje works as an Interface Description Language (IDL). (Do not confuse this term with the Interface Definition Languages commonly found in distributed architectures and RPC platforms.) The concept is not entirely new. The Mozilla foundation uses XUL for describing the interface of Mozilla and Mozilla-based applications. Microsoft also introduced XAML shortly after XUL appeared on the scene. Both XUL and XAML are XML based (notice the X pattern). Edje, of course, is not XML.

The similarity ends there. Edje can do things that XUL and XAML were never meant to do. If this analogy helps you understand what Edje is all about, that is fine. But you would be mistaken to conclude that Edje is the EFL answer to XUL and/or XAML. The architectures also differ a lot in how the interface definition fits into the final program.

Edje code goes into a normal text file with the .edc extension. The name stands for Edje Collections. We mentioned that Edje code looks like C but is not C. You can find the gory details of Edje syntax in the technical documents already released about Edje. We will not focus on the syntax of Edje files. We will, however, deal with the types of blocks (starting and ending with {}) that can be included in an Edje file. These are the elements that you can collect in an Edje file (hence the name).

Table 4.1. Top Level Edje blocks

Block type      Description
Images          Image files that will be used in the interface
Fonts           Font definitions for text and textblock objects
Data items      Simple data entries in key-value pairs
Text styles     Style definitions for text and textblock objects
Color classes   Color definitions to be shared by objects
Collections     The description of the interface

From the name you can guess that the Collections block is the most important one. All the Edje code you have seen so far belongs to the collections block. The part and program blocks already demonstrated are all child blocks inside the collections one. This is why we said before that some infrastructure is missing from the Edje code presented so far. None of the Edje listings shown stand on their own; each must be included in a collections block (or, to be more precise, in a parts block, inside a group block, inside the collections block). You can look up the source code or the Edje book to find the exact hierarchy of blocks inside the collections one. The other blocks (images, fonts, etc.) have a simpler (flat) hierarchy.
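Putting that hierarchy into a skeleton, a minimal .edc file could be laid out as below. The block names follow the table above and the nesting described in the text; the group name is made up and the part and program bodies are elided, so treat this as a sketch of the structure rather than a compilable file.

```
images { }
fonts { }
data { }
styles { }
color_classes { }
collections
{
	group
	{
		name, "example";
		parts
		{
			part
			{
				//parts as shown in the previous listings
			}
		}
		programs
		{
			//programs as shown in the previous listings
		}
	}
}
```

The top-level blocks are flat and optional; only the collections block, with its nested group/parts hierarchy, is needed for the listings shown earlier in this chapter.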

Your typical Edje experience will probably go like this:

When you start playing with Edje and want to discover its abilities you will mainly use rectangles, so your first .edc files will just contain the collections block. You will play a bit with animations and transitions and then decide that you want to include image resources in your interface. This is where the Images block comes in.

Rectangles and images are fine for graphics, but your application will sooner or later need some text. If you have lots of Text objects you will tire of defining the font manually for each one, so you will use the Fonts block to define a font once and use it everywhere. As your application grows you will also tire of defining colors for each element in the collections block, so you will create some common definitions in the Color classes block.

Once you start using TextBlocks for lots of text (and not just single lines) you will appreciate the Text styles block, which you will construct yourself or borrow from other EFL applications (this is open source, after all). Finally, once you start moving interface properties out of hardcoded values in the C code of the application, you will assimilate them into the Edje file in the Data items block. The frame rate of the Edje animations, for example, could live in the Edje file itself rather than in the C code as shown before (more on this separation in the next chapter).

To finish this section we should mention that not all Edje functionality has been revealed. Apart from the blocks shown in the table, which imply a static interface, Edje collection files can also contain scripts that make everything a bit more dynamic. Scripts are written in Embryo (another EFL library). But just as one should first learn (X)HTML before JavaScript, we will not say anything more about Embryo in this document. Try to become familiar with just Edje for now. Just keep an open mind and remember that there is more than meets the eye. If you think that Edje files are always trivial, think again. You can also open some .edc files from other EFL libraries or from the Enlightenment window manager to see what we mean.

Edje as Logic and Appearance separator

So after reading the previous section you have a (possibly blurry) idea of the contents of an .edc file. This file describes your graphical user interface in Edje. It may contain relative positions of objects, some simple animations and maybe some images. But what do you do with it once you have written it?

If you are an experienced programmer you will probably visualize two workflows for .edc files. The static and the dynamic one.

The static approach is the most trivial one. You assume that Edje code segments are just special macros (in C). You include in your C code the "Edje.h" header file, which contains the implementation of these macros, and you compile your program normally like any other C application. This results in one big executable which includes everything: both the user interface and the application logic are in one file. See the following figure.

Figure 4.6. Building the UI statically in the application.

This approach clearly helps the developer. Instead of writing lots of C code for Evas she can quickly describe what she wants to do in Edje, and be more productive because of the abstraction. For the user, however, things remain unchanged. A single executable is the whole application; nothing can be changed without the source code. The great benefits of this approach are of course simplicity (no extra tools, just macros and the compiler) and speed (everything is compiled in the end).

The dynamic approach is to separate the graphical representation of the application from the actual functionality. (You should already be familiar with this concept if you have ever done web programming, where XHTML/CSS and server side code are kept in different files.) So you distribute two files for your application: the binary executable, which is the code, and the text .edc file, which describes the interface. The application binary runs and at runtime dynamically loads the graphical user interface described in the .edc file. This idea already exists in other toolkits, but the text file is mostly written in XML so that it is more human readable.

The dynamic approach has a clear advantage in flexibility. The user can change the text file representing the graphics of the application, and the next time she runs it the application adapts to the changes. No recompiling is necessary. See the following figure. A lot of theming frameworks also depend on this technique. The obvious drawback is lack of speed. The binary file of the application must contain a text parser for reading the file that describes the application's appearance; in the case of XML, a full featured XML parser needs to be included. There are libraries in the open source community which implement XML parsers and thus free the programmer from this burden, but the fact remains that this is added complexity with questionable benefits. It is also clear that while this approach is more convenient for the user, it means more effort for the programmer to achieve the required flexibility.

Figure 4.7. Loading the UI dynamically in the application.

With Edje you don't have to choose between the two approaches (static, dynamic). The Edje approach is a third one which combines the best of both worlds and does away with the disadvantages! The following table summarizes the Edje approach:

Table 4.2. The Edje Appearance/Logic separation

                   Static way              Dynamic way            Edje way
Files distributed  1 binary                1 binary + 1 text      2 binaries
UI resides         Inside main executable  In separate text file  In separate binary file
Speed              Fast                    Slow                   Fast
UI code            compiled (built-in)     text (interpreted)     compiled (loaded)
UI is determined   During compile time     During run time        During run time
UI flexibility     No                      Yes                    Yes

In Edje the .edc text file is compiled into a .edj file via the edje_cc compiler, which ships with the Edje framework. The resulting .edj file, which is binary, is loaded at run time into the main executable, resulting in the final application presented to the user. See the following figure. You should now see why this solution combines the advantages of both approaches: you have a binary file with your UI (fast) that is added dynamically (flexible) to your application. We are not aware of any other technology in the open source world which does the same thing.

Figure 4.8. Separation of GUI and code in Edje.

You may think that the figures presented so far are at too high a level and do not actually explain how all this is accomplished. A great number of programmers want to get down and dirty with code to really understand a concept. Rather than presenting long code listings for an Edje application, we will describe things a bit more concisely. So let's say that you have been enlightened by Edje and want to write your next application with it. What do you actually do?

First you write your .edc file, which contains the user interface. Then you write the C code of your application. The C code only contains the initialization of a canvas and little more (more on this later). When the UI is initialized, you designate the name of the .edj file (the compiled Edje UI) that is to be loaded. This would be something like edje_file_load("my_UI.edj"); the name of the function does not really matter. An Edje object is just a normal canvas object: you create it and put it in a canvas like any other object. The name of the associated .edj file is a property of the Edje object.

Next you rename your UI text file to my_UI.edc. You compile it with edje_cc and the resulting file is a binary one named my_UI.edj. The end user installs the application binary and the .edj file (probably somewhere under /usr/share). When the application runs it dynamically loads the my_UI.edj file and draws itself on screen. If another my_UI.edj file replaces the previous one, the application will use it instead (themes, anyone?).
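The compile step is a single command. Using the file names from the text, the invocation might look like this (a sketch; exact options vary between edje_cc versions):

```shell
edje_cc my_UI.edc my_UI.edj
```

The first argument is the .edc source and the second the .edj output to produce.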

It is rather simple actually. The only thing left to explain is how communication is handled between the UI placed in Edje and the C code of the application. No mystery here: if you already know how to use GTK+ callbacks or Qt signals/slots, you are already set.

In the programs part of an .edc file you can define, apart from animations (as shown previously), signals that are emitted from the user interface in Edje back to the main C code of your application in response to user events. In your .edc file you would include in the programs block code like this:

Example 4.5. Sending a signal from Edje to C code.

program 
{ 
	name: "user_clicked_button"; 
	signal: "mouse,down,*"; 
	source: "ok_button"; 
	action: SIGNAL_EMIT "finished" "ok"; 
}

This should be easy to understand. The asterisk in the signal line denotes any mouse button. So this program runs when the user clicks (mouse down) on the source component (named "ok_button"). The "finished" and "ok" parameters are additional values which are passed back to the C code. Since an OK button usually has only one function these extra values are not strictly needed, but they are shown here for educational purposes.

That is all you have to do in the .edc file of your application. To decide what happens when these signals are emitted, you write in the C code callback functions which register which signals they want to listen to. As you might expect we will not show the full API for this, but you can rest assured that it is just normal C code and nothing extraordinary. The Edje API also provides you with functions which can change the Edje user interface programmatically. Thus, you are given the ability to respond to Edje signals with changes in the Edje object itself (not to be confused with introspection).
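As a hedged sketch of that C side, the Edje API registers a callback for a signal/source pair; the pair below matches the SIGNAL_EMIT of the example program, while the function names with a leading underscore and the printed message are our own choices.

```c
#include <stdio.h>
#include <Edje.h>

/* Called when the UI emits "finished" with source "ok"
 * (the pair from the SIGNAL_EMIT in the .edc program). */
static void
_on_finished(void *data, Evas_Object *obj, const char *emission, const char *source)
{
   printf("UI says: %s / %s\n", emission, source);
}

static void
register_callbacks(Evas_Object *edje)
{
   /* Listen only for this signal/source pair coming from the Edje object. */
   edje_object_signal_callback_add(edje, "finished", "ok", _on_finished, NULL);
}
```

The emission and source strings may contain wildcards, so one callback can cover a whole family of signals if desired.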

So in the end you have an application binary full of functionality (C functions) and an .edj file which contains the user interface. These two communicate dynamically while the application runs. A media player is described as a full example in the Edje book; you should consult it if you want to see in detail how a media player would work the Edje way. All the functionality for music files (load, play, stop, pause) is in C code, while everything graphical (stop and play buttons etc.) is in the Edje object.

After reading this section you have probably concluded that Edje would make a good framework for themes. You are right of course! Edje was built with theming in mind and we devote a separate section to this Edje capability.

Edje as a Theming Framework

Several applications nowadays have a fixed user interface. If a user does not like the look of the UI she cannot do anything about it. To accommodate different preferences, applications began to offer a component-based UI with toolbars, windows, frames and panels which could be resized by the user. But these applications still retained their boring (greyish) colours, which not all users favour. Lately most user applications have introduced changes to how the application looks as well: colours, skins and pixmaps became changeable too. The whole idea is called theming. Several often-used applications (such as media players, web browsers and window managers) are expected to support themes natively. The problem is that theming does not always get the attention it deserves. For several applications theming is just a hack: users simply overwrite the resource files (images) used by the application. Other applications shamelessly advertise theme support when in reality they mean, at best, the ability to change a colour or skin.

Edje brings themes to all applications that use it. Instead of writing custom theming code for your application you simply use the Edje framework. All Edje-based applications are themeable with zero additional effort. If you have ever used the Enlightenment window manager version 16 you already have some idea of what is possible. With the EFL you actually get theming on steroids.

There are many kinds of people who would like to change how an application looks. Many will just need simple colour changes, others will write complete themes. Some will want to change the position of interface elements as well. You cannot predict what people will want to do with your application. If we leave aside programmers who will just download the source code and start sending patches for what they don't like, Edje caters for all the non-programmers who come across your application. Look at the following table:

Table 4.3. Edje themes (viewed by users)

| | Casual users | Artists | Expert users/Themers |
|---|---|---|---|
| Time allocated for theming | Some minutes | Some hours | Some days |
| Wanted changes | colours | colours and images | the complete UI |
| Edje knowledge | None | Limited | Extensive |
| Contributions | New colour combinations | New textures/pixmaps | New themes (.edj files) |

Changing the colours in a theme is trivial. The user finds the appropriate colour class (color_class) lines inside the .edc file and, after making the appropriate changes, recompiles it back to an .edj file. Nothing more to mention here.
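As a sketch of what such a colour class definition might look like in an .edc file (the class name and colour values are hypothetical; parts opt in to a class with a color_class property in their description):

```
color_classes
{
	color_class
	{
		name: "button_bg";      /* hypothetical class name */
		color: 51 153 255 255;  /* r g b a */
	}
}
```

Editing the four colour components and recompiling is all a casual user needs to do to recolour every part that references the class.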

An artist, however, would want to change all the pixmaps of an application. With other theme frameworks the artist would have to either hunt down all the image files inside the distributed application, or download the source code of the application to get the raw images of the user interface. Neither is necessary with Edje. The images block inside an .edc file does not merely contain pointers to image files which will be used by the application; it defines which images will be bundled inside the .edj file. Compiling an .edc file with edje_cc creates a binary file containing all the .edc code that describes the interface together with the image files mentioned in the interface descriptions. This makes the .edj file self-contained. An .edj file is a theme by itself. No more tarballs or zip files for themes. An .edj file is actually an EET file (EET being another EFL library: a general-purpose storage library for keeping arbitrary information inside a single file). The .edj file is architecture independent, and the programmer can choose at creation time whether the images will be compressed inside it, and if so at what level of compression.
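A sketch of such an images block (the file names are illustrative): each listed file is compiled into the .edj, with COMP requesting lossless compression and LOSSY plus a quality value requesting lossy storage.

```
images
{
	image: "ok_button.png" COMP;       /* stored losslessly compressed */
	image: "background.jpg" LOSSY 90;  /* stored lossily at quality 90 */
}
```

Parts in the interface then refer to these images by name, never by a path on the end user's disk.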

So for an artist things are simplified. She uses the edje_decc decompiler to extract from the .edj file that comes with the application all the original resources along with the .edc file. Without looking at a single line of code (Edje or C) she can start replacing the image files (.png, .jpeg etc.) with her own graphics. When finished, she runs edje_cc to recreate an .edj file with the new images and the exact same interface structure as before. Starting the application brings the new interface into effect. The procedure does not get simpler than this. Only if she changes the sizes of the image files does she actually need to edit the .edc file.
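The artist's round trip, assuming the file names used earlier (the exact directory layout produced by edje_decc has varied between EFL versions, so treat this as a sketch):

```shell
edje_decc my_UI.edj           # extract the .edc source and all bundled images
cd my_UI                      # enter the extracted directory
# ... replace the .png/.jpeg files with new artwork of the same sizes ...
edje_cc my_UI.edc my_UI.edj   # rebuild the theme with the new graphics
```

Dropping the rebuilt my_UI.edj in place of the original one is the whole installation step.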

Of course one can edit the .edc file as well as the image files. This gives great power to the themer, and most expert users will appreciate this feature. Since an .edc file contains the complete description of the interface, one can change everything about the application: not only the colours or the images used but the whole structure of the UI. Location of elements, sizes, animations, texts: everything is configurable. One can even remove interface elements (the respective C functions in the code will simply remain unused) or emit the same signal in a different way. The original interface, for example, could emit a "maximize" signal when a certain button is pushed. The themer changes the .edc file to trigger the same signal during application loading. The result is that the application is maximized as soon as it starts. And this change lives entirely inside the theme.
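The maximize-on-start trick could look like the following program in the theme. The "maximize" signal name is the hypothetical one from the example above; "load" is the signal Edje itself emits when a group is loaded.

```
program
{
	name: "maximize_on_start";
	signal: "load";                     /* emitted by Edje when the group loads */
	source: "";
	action: SIGNAL_EMIT "maximize" "";  /* same signal the button would have emitted */
}
```

The C code never knows the difference: it receives the same signal it was already listening for, just at a different moment.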

We saw in the previous section that all functionality is included in the C code and the Edje .edj file contains the interface which handles communication with signals. This decoupling of logic and appearance gives great power to the themer. The experienced themer can change anything at will to the point that the resulting application looks nothing like the original one. Edje does away with the concept of skins. Edje technology is a revolution when it comes to themes.

Unfortunately the EFL killer application which will showcase the full power of Edje themes does not exist yet. The most mature EFL application is the Enlightenment window manager itself. A handful of themes are already available, and these can give you a glimpse of what Edje is capable of. Animated backgrounds, window borders and window buttons are easier than ever. Entrance (a replacement for xdm/kdm/gdm) also has a diverse range of themes. Retractable panels, animated buttons and draggable subwindows can all be seen in the Entrance themes.

So our only example of Edje power will be in the form of an Enlightenment module. You can of course change just the pictures of the Edje file to give a new look to a component that functions the same way as before. See the next figure, which shows the battery module for Enlightenment. The .edc file defines several images for the different states (percent full) of the battery. By changing these images an artist can create her own look for the battery graphic displayed on the screen.

Figure 4.9. Changing skin via a theme.

Another included Enlightenment module is the clock. Changing the graphics of the clock is a trivial task: the background of the clock stays the same and one only needs different images for the clock hands. An experienced Edje themer can, however, change things in more depth. Keeping in mind that time handling stays in C code and all graphics are in Edje, one can create a theme that converts the analog clock to a digital one. See the following figure. Notice that this is just a new theme: there is no special code for analog and digital clocks in Enlightenment. There is also some Embryo scripting involved (which we have not discussed), but the fact remains that Edje themes give you abilities which were previously available only by changing the source code of the application.

Figure 4.10. Changing completely the appearance via a theme.

In summary, theming with Edje works flawlessly. The Edje file (.edj) distributed with an application is the theme itself. Self-contained and self-describing, the .edj file can be changed by anyone. The binary file of the application remains untouched and nobody needs to look at the C code. No recompilation is required. It is also possible to change themes on the fly, but this requires effort from the application programmer to include a way to change the .edj file used by the application via an on-screen option.

We believe that once you start distributing and changing Edje themes, all existing theme frameworks will start to look limited and clumsy compared to Edje elegance, speed and flexibility.

Using Edje to preview your GUIs

Judging by the previous sections, you should have reached the conclusion that Edje accomplishes a lot. What more could you ask of Edje? This question is easily answered: a GUI previewer for debugging and inspecting your Edje (.edj) themes.

You might have come across Glade or Qt Designer. These tools allow you to create your interface in a natural way by dragging components onto your application and setting their properties in dialogs. The only code that remains to be written is the functionality. Such a tool would not really work for Edje, since the power of Edje lies in dynamic interfaces with components that move around and change their appearance. A visual GUI builder would work for EWL/ETK, the toolkit part of the EFL libraries.

Edje provides you with something more straightforward: a way to preview .edj files. Without writing a single line of C code you can launch the edje executable, passing the name of an .edj file as an argument. You get an instant preview of the interface. See the figure below. You can resize the interface to your liking. Clicking, dragging and scrolling work as in the finished application, but of course all signals go nowhere so you cannot get any functionality.
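Assuming the file name used earlier, previewing is then a single command (the exact name of the preview executable has varied across EFL releases, so take this as a sketch):

```shell
edje my_UI.edj
```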

Figure 4.11. Preview an Edje Theme.

{F2696, size=full}

This preview program works well for themers and developers alike. Themers who have an existing application and change its theme (the .edj file) can concentrate on the interface and not on the application. Changing the application theme bit by bit, they can preview it in a simpler way (decoupled from the application) to make sure it works as expected. When finished they can run the application and see how their interface works once deployed.

The Edje preview program also allows one to view individual groups inside an Edje file. So instead of loading a really big application every time they change the theme, and navigating to a deeply nested menu to find what they have changed, they can instantly preview only the interface part they are interested in (provided the application programmer has split the interface into multiple groups).

For developers, Edje debugging becomes easier with the Edje previewer. Apart from the fact that they can construct and test the interface without writing any C code, the Edje previewer has one additional benefit. Once you load the interface and start playing around with it, Edje prints on the console which signals are emitted while you are clicking around. You can see exactly which part of the interface receives the event and what event it actually handles. Then you can write your C code accordingly. That is, the Edje previewer tells you exactly what events you need to capture (and register callbacks for) in order to respond to user actions. The next figure shows this:

Figure 4.12. Live signal testing in Edje.

Nothing more needs to be said here. The Edje previewer provides you with live testing for your themes/Interfaces.

Choosing Edje over Evas

We will finish our discussion on Edje with a more advanced section. Using Edje is clearly a breath of fresh air in the case of themes and animation, but is it always the best tool for the job?

Everything that can be done in Edje can also be done in Evas. Edje is actually using the Evas Canvas for graphics, the EET library for .edj files and the Embryo scripting language for its effects. Edje helps the programmer to be more productive with all the facilities it provides. But that does not prevent the programmer from writing an Evas-only application using only C. Evas is a powerful Canvas on its own as was discussed before.

Edje is an abstraction over Evas that saves the programmer from writing the same code all the time (relative coordinates, themes e.t.c). This abstraction of course comes at a price. Some of the flexibility that Evas provides is lost in the Edje abstraction layer. This may sound strange since in all the previous sections we tried to stress the importance of Edje and the revolutionary features it brings to the table. We still believe this. There is however a (small) percentage of people who will find Edje a bit limited and will stick to C code and Evas or even C code and their favourite Canvas widget. They will prefer to deal with low level graphic code in order to get the maximum flexibility that not even Edje can provide.

The last sentence of the previous paragraph needs some explanation. Let's say that you want to create a custom Canvas object. What do you do with Edje? Edje will not manage this object since it does not know anything about it. You are forced to revert to pure Evas code. Does this mean that you have to leave the benefits of Edje behind? Not at all. The final concept of Edje that we will examine is Edje swallowing.

Everything we have said so far about Edje, refers to using a single Edje object for your application. The C code contains only functionality but not graphics. Your whole interface is the Edje object which fills the entire application window and everything that is GUI specific is described in the .edc file.

If you have a really tricky application which creates multiple canvas objects on the fly (their number is not known beforehand) and moves them around to positions calculated mathematically (dynamically), you are of course welcome to leave Edje behind and use only Evas instead. You can do whatever you want, keeping in mind that everything will be implemented in C code. But this is the extreme case. In most cases you will want to keep Edje around and shift some (but not all) graphic code back into C, where more flexibility is possible and you can calculate several things dynamically.

This can easily be done in two steps. First you split your single Edje object into multiple objects. Multiple Edje collections can be implemented by making use of the groups block in the .edc file. You keep one Edje collection for your main interface and several small ones for graphic parts which behave in a more dynamic way. Then in the main Edje collection you create your parts as before, but instead of using a predefined Edje part type (TEXT, RECTANGLE, IMAGE etc.) you mark a part with type: SWALLOW. Then you compile your .edc file normally.
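A sketch of such a part in the main collection (the group and part names are hypothetical): the SWALLOW part reserves a region of the interface whose contents will be supplied from C.

```
collections
{
	group
	{
		name: "main";
		parts
		{
			part
			{
				name: "custom_area";  /* hypothetical part name */
				type: SWALLOW;        /* contents are supplied from C at run time */
				description
				{
					state: "default" 0.0;
					rel1 { relative: 0.25 0.25; }
					rel2 { relative: 0.75 0.75; }
				}
			}
		}
	}
}
```

The part has geometry and states like any other, only no visual content of its own.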

A SWALLOW Edje part is not a specific object. Instead it defines a container whose contents can be set dynamically from C code. You can fill it with another Edje collection loaded from the same .edj file, another Edje collection loaded from a different .edj file, or even a custom Evas object created dynamically in C. Once the SWALLOW part is filled, Edje modifies its geometry as the application resizes or moves. The technique is illustrated in the next figure:
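On the C side the filling is one call. This hedged sketch uses the Edje API's part-swallow function with a plain Evas rectangle; the part name "custom_area" is an assumption for illustration.

```c
#include <Edje.h>

/* Sketch: fill a SWALLOW part with a dynamically created Evas object. */
static void
fill_swallow(Evas *evas, Evas_Object *edje)
{
   Evas_Object *rect = evas_object_rectangle_add(evas);  /* any Evas object will do */
   evas_object_color_set(rect, 0, 120, 255, 255);
   edje_object_part_swallow(edje, "custom_area", rect);
   /* From here on Edje manages rect's geometry as the UI resizes or moves. */
}
```

The swallowed object could just as well be another Edje object bound to a different group, giving a fully nested layout.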

Figure 4.13. Swallowing an object programmed in C into a UI described in Edje.

The result is a mixed approach. You lose some Edje benefits. Themers cannot change swallowed parts of your interface since they are not inside the .edc file. You cannot use the Edje previewer to see how the interface works since these parts are created during runtime (in the case of C code). Changing anything for these parts requires manual changing of the source code.

On the other hand you have perfect flexibility. You keep Edje around for most parts of your interface and employ sophisticated C code for specific parts, which are as powerful as pure Evas code would be. The choice is yours to make. You can have as many SWALLOW parts as you want. The next table outlines all possible approaches.

Table 4.4. Edje vs Evas

| | Pure Evas | Pure Edje | Edje and Evas |
|---|---|---|---|
| UI complexity in | Evas (.c) | Edje (.edc) | Both |
| Edje objects | None | One | Many |
| Dynamic UI morphing | Maximum | Limited | Available |
| Swallowed Edje parts | Not possible | None | One or more |
| UI and code separation | None | High | Medium |
| Programming effort | High | Low | Medium |
| Theme support | None | Extensive | Partial |

To fully grasp Edje you of course need to write some applications that actually use it. We could spend more time on Edje, its relationship with Evas, or even Embryo, but by now you should have realized the power that Edje gives you. Evas may be the base of all EFL libraries. Evas may be a powerful Canvas. Evas may be fast and optimized. But in the end Edje is the leverage you need in order to do away completely with low-level Canvas code and build powerful graphic applications. Before Edje, programmers were faced with a hard decision: either use a limited toolkit for rapid prototyping of a WIMP (Windows, Icons, Menus, Pointers; that is, 99% of applications) application, or a powerful but low-level Canvas for custom graphics. Edje sits right in the middle, giving you the power that you need without forcing you one way or the other. You can still use only Evas or only EWL/ETK for the two extremes, but with Edje you also keep the best of both worlds.


Prev: Chapter 3. Understanding the Evas Canvas | Next: Chapter 5. Understanding the Ecore Infrastructure Library | Index

Last Author
ajwillia.ms
Last Edited
Jan 29 2018, 5:35 AM