efl.emotion.Emotion

class efl.emotion.Emotion(Canvas canvas, module_name='gstreamer1', module_params=None, **kwargs)

Bases: efl.evas.Object
Changed in version 1.8: Keyword argument module_filename was renamed to module_name.
Parameters:
canvas (Canvas) – Evas canvas for this object
module_name (string) – name of the engine to use
module_params – DEPRECATED! Use video_mute/audio_mute instead. Extra parameters, module specific.
**kwargs – all the remaining keyword arguments are interpreted as properties of the instance
audio_channel
The currently selected audio channel.
int
audio_channel_count
Get the number of audio channels available in the loaded media.
the number of channels
int
audio_channel_get
audio_channel_name_get
Get the name of the given channel.
the name
str
audio_channel_set
audio_handled
True if the loaded stream contains at least one audio track
bool
audio_handled_get
audio_mute
The mute audio option for this object.
bool
audio_mute_get
audio_mute_set
audio_volume
The audio volume.
The current value for the audio volume level. Range is from 0.0 to 1.0.
Sets the audio volume of the stream being played. This has nothing to do with the system volume; this volume will be multiplied by the system volume. E.g. if the current volume level is 0.5 and the system volume is 50%, the effective volume will be 0.5 * 0.5 = 0.25.
Note
The default value depends on the module used. This value doesn’t get changed when another file is loaded.
float
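As a quick illustration of the multiplication described above, here is a small pure-Python helper; the function name and range check are hypothetical, not part of the efl.emotion API:

```python
def effective_volume(stream_volume, system_volume):
    """Model how the stream volume is multiplied by the system volume.

    Hypothetical helper illustrating the documented behaviour; it is
    not part of the efl.emotion API.
    """
    if not 0.0 <= stream_volume <= 1.0:
        raise ValueError("audio_volume range is 0.0 to 1.0")
    return stream_volume * system_volume

# A stream volume of 0.5 with the system volume at 50% plays at 0.25:
print(effective_volume(0.5, 0.5))  # → 0.25
```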
audio_volume_get
audio_volume_set
bg_color
The color for the background of this emotion object.
This is useful when a border is added to any side of the Emotion object. The area between the edge of the video and the edge of the object will be filled with the specified color.
The default color is (0, 0, 0, 0)
tuple of int (r, g, b, a)
New in version 1.8.
bg_color_get
bg_color_set
border
The borders for the emotion object.
This represents the borders for the emotion video object (only when a video is present). The value is a tuple of 4 ints: (left, right, top, bottom).
When positive values are given to one of the parameters, a border will be added to the respective position of the object, representing that size on the original video size. However, if the video is scaled up or down (i.e. the emotion object size is different from the video size), the borders will be scaled respectively too.
If a negative value is given to one of the parameters, instead of a border, that respective side of the video will be cropped.
Note
It’s possible to set a color for the added borders (default is transparent) with the bg_color attribute. By default, an Emotion object doesn’t have any border.
tuple of int (l, r, t, b)
New in version 1.8.
border_get
border_set
buffer_size
The fill level of the buffering cache.
The buffer size is given as a number between 0.0 and 1.0: 0.0 means the buffer is empty and 1.0 means it is full. If no buffering is in progress, or the backend doesn’t support buffering, 1.0 is returned, thus you can always check for buffer_size < 1.0 to know whether buffering is in progress.
float
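The buffer_size < 1.0 check suggested above can be wrapped in a one-line helper (hypothetical name, shown only to make the convention explicit):

```python
def buffering_in_progress(buffer_size):
    """True while the cache is still filling.

    Mirrors the documented convention that buffer_size reports 1.0
    whenever no buffering is in progress (or the backend doesn't
    support buffering). Hypothetical helper, not part of the API.
    """
    return buffer_size < 1.0

print(buffering_in_progress(0.4))   # → True
print(buffering_in_progress(1.0))   # → False
```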
buffer_size_get
callback_add
Add a new function (func) to be called on the specified event.
The expected signature for func is:
func(object, *args, **kwargs)
Note
Any extra params given to the function (both positional and keyword arguments) will be passed back in the callback function.
See also: all the on_*_add() shortcut functions.
event (str) – the name of the event
func (callable) – the function to call
callback_del
Stop the given function func from being called on event.
See also: all the on_*_add() shortcut functions.
event (str) – the name of the event
func (callable) – the function that was previously attached
chapter
The currently selected chapter.
int
chapter_count
Return the number of chapters in the stream.
int
chapter_get
chapter_name_get
Get the name of the given chapter.
chapter (int) – the chapter number
the name of the chapter
str
chapter_set
eject
Eject the media.
event_simple_send
Send a named signal to the object.
event_id (Emotion_Event) – the signal to emit, one of EMOTION_EVENT_MENU1, EMOTION_EVENT_MENU2, EMOTION_EVENT_UP, EMOTION_EVENT_1, or any other EMOTION_EVENT_* definition
file
The filename of the file associated with the emotion object.
The file to be used with this emotion object. If the object already has another file set, this file will be unset and unloaded, and the new file will be loaded to this emotion object. The seek position will be set to 0, and the emotion object will be paused, instead of playing.
If there was already a filename set, and it’s the same as the one being set now, setting the property does nothing.
Set to None if you want to unload the current file but don’t want to load anything else.
str
file_get
file_set
image_get
Get the actual image object (efl.evas.Object) of the emotion object.
This function is useful when you want to get a direct access to the pixels.
New in version 1.8.
image_size
The video size of the loaded file.
This is the reported size of the loaded video file. If a file that doesn’t contain a video channel is loaded, then this size can be ignored.
The value reported should be consistent with the aspect ratio returned by ratio, but sometimes the information stored in the file is wrong. So use the ratio reported by ratio_get(), since it is more likely to be accurate.
tuple of int (w, h)
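Since ratio is the more reliable source, a hypothetical helper can derive a width consistent with it from the reported height:

```python
def corrected_size(reported_h, ratio):
    """Recompute the width from the (more reliable) aspect ratio.

    Hypothetical helper: keeps the reported height and derives the
    width from Emotion's ratio property (width / height).
    """
    return (round(reported_h * ratio), reported_h)

# A file that stores 720x576 pixels but has a 16:9 display ratio:
print(corrected_size(576, 16 / 9))  # → (1024, 576)
```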
image_size_get
keep_aspect
Whether emotion should keep the aspect ratio of the video.
Instead of manually calculating the required border to set with emotion_object_border_set(), and using this to fix the aspect ratio of the video when the emotion object has a different aspect, it’s possible to just set the policy to be used.
The options are:
EMOTION_ASPECT_KEEP_NONE – ignore the video aspect ratio, and reset any border set to 0, stretching the video inside the emotion object area. This option is similar to the EVAS_ASPECT_CONTROL_NONE size hint.
EMOTION_ASPECT_KEEP_WIDTH – respect the video aspect ratio, fitting the video width inside the object width. This option is similar to the EVAS_ASPECT_CONTROL_HORIZONTAL size hint.
EMOTION_ASPECT_KEEP_HEIGHT – respect the video aspect ratio, fitting the video height inside the object height. This option is similar to the EVAS_ASPECT_CONTROL_VERTICAL size hint.
EMOTION_ASPECT_KEEP_BOTH – respect the video aspect ratio, fitting both its width and height inside the object area. This option is similar to the EVAS_ASPECT_CONTROL_BOTH size hint. It’s the effect called letterboxing.
EMOTION_ASPECT_CROP – respect the video aspect ratio, fitting the width or height inside the object area, and cropping the exceeding areas of the video in height or width. It’s the effect called pan-and-scan.
EMOTION_ASPECT_CUSTOM – ignore the video aspect ratio, and use the borders currently set with the border attribute.
Note
Setting this property to any value except EMOTION_ASPECT_CUSTOM will invalidate the borders set with the border attribute.
Note
Using the border attribute will automatically set the aspect policy to EMOTION_ASPECT_CUSTOM.
Emotion_Aspect
New in version 1.8.
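The letterboxing policy (EMOTION_ASPECT_KEEP_BOTH) can be illustrated with an integer-only sketch of the borders Emotion would add. The helper below is hypothetical and takes the video size as two ints to avoid floating-point surprises; the real computation happens inside Emotion when keep_aspect is set:

```python
def letterbox_borders(obj_w, obj_h, vid_w, vid_h):
    """Borders (left, right, top, bottom) that letterbox a vid_w x vid_h
    video inside an obj_w x obj_h area, as EMOTION_ASPECT_KEEP_BOTH does.

    Hypothetical helper, not part of the efl.emotion API.
    """
    if vid_w * obj_h > vid_h * obj_w:
        # Video is wider than the area: bars go on top and bottom.
        disp_h = obj_w * vid_h // vid_w        # displayed video height
        top = (obj_h - disp_h) // 2
        return (0, 0, top, obj_h - disp_h - top)
    # Video is taller than the area: bars go on left and right.
    disp_w = obj_h * vid_w // vid_h            # displayed video width
    left = (obj_w - disp_w) // 2
    return (left, obj_w - disp_w - left, 0, 0)

# A 16:9 video inside a 400x300 (4:3) object gets top/bottom bars:
print(letterbox_borders(400, 300, 16, 9))  # → (0, 0, 37, 38)
```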
keep_aspect_get
keep_aspect_set
last_position_load
Load the last known position, if available.
Using Xattr, Emotion is able, if the system permits it, to store and retrieve the latest playback position. It should trigger a smart callback to let the application know whether it succeeded or failed. Every operation is fully asynchronous and not linked to the actual engine used to play the video.
New in version 1.8.
last_position_save
Save the last position, if possible.
New in version 1.8.
meta_info_dict_get
Get a Python dictionary with all the known info.
all the known meta info for the media file
dict
meta_info_get
Retrieve meta information from the file being played.
This function retrieves information about the file loaded. It can retrieve the track title, artist name, album name, etc.
meta_id (int) – The type of meta information that will be extracted.
The info or None
str
See also: meta_info_dict_get(), and Emotion_Meta_Info for all the possibilities.
on_audio_level_change_add
Same as calling: callback_add('audio_level_change', func, ...)
on_audio_level_change_del
Same as calling: callback_del('audio_level_change', func)
on_button_change_add
Same as calling: callback_add('button_change', func, ...)
on_button_change_del
Same as calling: callback_del('button_change', func)
on_button_num_change_add
Same as calling: callback_add('button_num_change', func, ...)
on_button_num_change_del
Same as calling: callback_del('button_num_change', func)
on_channels_change_add
Same as calling: callback_add('channels_change', func, ...)
on_channels_change_del
Same as calling: callback_del('channels_change', func)
on_decode_stop_add
Same as calling: callback_add('decode_stop', func, ...)
on_decode_stop_del
Same as calling: callback_del('decode_stop', func)
on_frame_decode_add
Same as calling: callback_add('frame_decode', func, ...)
on_frame_decode_del
Same as calling: callback_del('frame_decode', func)
on_frame_resize_add
Same as calling: callback_add('frame_resize', func, ...)
on_frame_resize_del
Same as calling: callback_del('frame_resize', func)
on_length_change_add
Same as calling: callback_add('length_change', func, ...)
on_length_change_del
Same as calling: callback_del('length_change', func)
on_open_done_add
Same as calling: callback_add('open_done', func, ...)
New in version 1.11.
on_open_done_del
Same as calling: callback_del('open_done', func)
New in version 1.11.
on_playback_finished_add
Same as calling: callback_add('playback_finished', func, ...)
on_playback_finished_del
Same as calling: callback_del('playback_finished', func)
on_playback_started_add
Same as calling: callback_add('playback_started', func, ...)
New in version 1.11.
on_playback_started_del
Same as calling: callback_del('playback_started', func)
New in version 1.11.
on_position_load_failed_add
Same as calling: callback_add('position_load,failed', func, ...)
New in version 1.11.
on_position_load_failed_del
Same as calling: callback_del('position_load,failed', func)
New in version 1.11.
on_position_load_succeed_add
Same as calling: callback_add('position_load,succeed', func, ...)
New in version 1.11.
on_position_load_succeed_del
Same as calling: callback_del('position_load,succeed', func)
New in version 1.11.
on_position_save_failed_add
Same as calling: callback_add('position_save,failed', func, ...)
New in version 1.11.
on_position_save_failed_del
Same as calling: callback_del('position_save,failed', func)
New in version 1.11.
on_position_save_succeed_add
Same as calling: callback_add('position_save,succeed', func, ...)
New in version 1.11.
on_position_save_succeed_del
Same as calling: callback_del('position_save,succeed', func)
New in version 1.11.
on_position_update_add
Same as calling: callback_add('position_update', func, ...)
New in version 1.11.
on_position_update_del
Same as calling: callback_del('position_update', func)
New in version 1.11.
on_progress_change_add
Same as calling: callback_add('progress_change', func, ...)
on_progress_change_del
Same as calling: callback_del('progress_change', func)
on_ref_change_add
Same as calling: callback_add('ref_change', func, ...)
on_ref_change_del
Same as calling: callback_del('ref_change', func)
on_title_change_add
Same as calling: callback_add('title_change', func, ...)
on_title_change_del
Same as calling: callback_del('title_change', func)
play
The play/pause state of the emotion object.
bool
play_get
play_length
The length of play for the media file.
The total length of the media file in seconds.
float
play_length_get
play_set
play_speed
The play speed of the media file.
This sets the speed at which the media file will be played: 1.0 represents normal speed, 2.0 double speed, 0.5 half speed, and so on.
float
New in version 1.8.
play_speed_get
play_speed_set
position
The position in the media file.
The current position in the media file, in seconds; this only works on seekable streams. Setting the position doesn’t change the playing state of the media file.
float
position_get
position_set
priority
Raise the priority of an object so it will have privileged access to hardware resources.
Hardware has a few dedicated pipelines that can process video at no cost for the CPU. Especially on SoCs you mostly have one (on mobile phone SoCs) or two (on Set Top Box SoCs) when Picture in Picture is needed. And most applications have just a few video streams that really deserve high frame rate, high quality output. That’s what this call is for.
Note
If Emotion can’t acquire a privileged hardware resource, it will fall back to the no-priority path. This works on a first-ask, first-get basis.
True means high priority.
bool
New in version 1.8.
priority_get
priority_set
progress_info
How much of the file has been played.
Warning
The gstreamer and xine backends don’t implement this (it will return None).
str
progress_info_get
progress_status
How much of the file has been played.
The progress in playing the file, the value is in the [0, 1] range.
Warning
The gstreamer and xine backends don’t implement this (it will return 0).
float
progress_status_get
ratio
The video aspect ratio of the media file loaded.
This function returns the video aspect ratio (width / height) of the file loaded. It can be used to adapt the size of the emotion object in the canvas, so the aspect won’t be changed (by wrongly resizing the object). Or to crop the video correctly, if necessary.
The described behavior can be applied as follows. Consider a given emotion object that we want to position inside an area, which we will represent by w and h. Since we may want to position this object by stretching it, by filling the entire area while overflowing the video, or by fitting the video inside the area while keeping the aspect ratio, we must compare the video aspect ratio with the area aspect ratio:
# an arbitrary value which represents the area where the video
# would be placed
w = 200; h = 300
obj = Emotion(...)
r = w / h
vr = obj.ratio
Now, if we want to make the video fit inside the area, the following code would do it:
if vr > r:  # the video is wider than the area
    vw = w
    vh = w / vr
else:       # the video is taller than the area
    vh = h
    vw = h * vr
obj.size = (vw, vh)
And for keeping the aspect ratio but making the video fill the entire area, overflowing the content which can’t fit inside it, we would do:
if vr > r:  # the video is wider than the area
    vh = h
    vw = h * vr
else:       # the video is taller than the area
    vw = w
    vh = w / vr
obj.size = (vw, vh)
Finally, by just resizing the video to the video area, we would have the video stretched:
vw = w
vh = h
obj.size = (vw, vh)
Note
This function returns the aspect ratio that the video should be, but sometimes the reported size from emotion_object_size_get() represents a different aspect ratio. You can safely resize the video to respect the aspect ratio returned by this function.
float
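The fitting and filling snippets above can be collected into two small helpers (hypothetical names, not part of the API):

```python
def fit_size(w, h, video_ratio):
    """(vw, vh) fitting the video inside a w x h area, keeping aspect."""
    if video_ratio > w / h:      # the video is wider than the area
        return (w, w / video_ratio)
    return (h * video_ratio, h)  # the video is taller than the area

def fill_size(w, h, video_ratio):
    """(vw, vh) filling the whole area, overflowing on one axis."""
    if video_ratio > w / h:      # the video is wider than the area
        return (h * video_ratio, h)
    return (w, w / video_ratio)

# A 2:1 video in the 200x300 area used above is width-bound when fitting:
print(fit_size(200, 300, 2.0))   # → (200, 100.0)
print(fill_size(200, 300, 2.0))  # → (600.0, 300)
```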
ratio_get
ref_file
ref file
str
ref_file_get
ref_num
ref number
int
ref_num_get
seekable
Whether the media file is seekable.
bool
seekable_get
smooth_scale
Whether to use a high-quality image scaling algorithm for the given video object.
When enabled, a higher quality video scaling algorithm is used when scaling videos to sizes other than the source video. This gives better results but is more computationally expensive.
bool
smooth_scale_get
smooth_scale_set
spu_button
The SPU button.
int
spu_button_count
The SPU button count.
int
spu_channel
The currently selected SPU channel.
int
spu_channel_count
Get the number of SPU channels available in the loaded media.
the number of channels
int
spu_channel_get
spu_channel_name_get
Get the name of the given channel.
the name
str
spu_channel_set
spu_mute
The SPU muted state.
bool
spu_mute_get
spu_mute_set
suspend
The state of the object’s pipeline.
Changing the state of a pipeline should help preserve the battery of an embedded device. But it will only work sanely if the pipeline is not playing at the time you change its state. Depending on the engine, not all states may be implemented.
The options are:
EMOTION_WAKEUP – pipeline is up and running
EMOTION_SLEEP – turn off hardware resource usage like overlay
EMOTION_DEEP_SLEEP – destroy the pipeline, but keep full resolution pixels output around
EMOTION_HIBERNATE – destroy the pipeline, and keep half resolution or object resolution if lower
Emotion_Suspend
New in version 1.8.
suspend_get
suspend_set
title
The DVD title from this emotion object.
Note
This function is only useful when playing a DVD.
str
title_get
video_channel
The currently selected video channel.
int
video_channel_count
Get the number of video channels available in the loaded media.
the number of channels
int
video_channel_get
video_channel_name_get
Get the name of the given channel.
the name
str
video_channel_set
video_handled
True if the loaded stream contains at least one video track
bool
video_handled_get
video_mute
The mute video option for this object.
bool
video_mute_get
video_mute_set
video_subtitle_file
The video’s subtitle file path (e.g. an .srt file).
For supported subtitle formats consult the backend’s documentation.
str
New in version 1.8.
video_subtitle_file_get
video_subtitle_file_set
vis_get
vis_set
vis_supported