August 29, 2014

Wow, 7 years….

Originally posted to Collabora co-workers:

7 years since starting the Collabora Multimedia adventure,
7 years of challenges, struggles, and proving we could tackle them
7 years of success, pushing FOSS in more and more areas (I can still hear Christian say “de facto” !)
7 years of friendship, jokes, rants,
7 years of being challenged to outperform oneself,
7 years of realizing you were working with the smartest and brightest engineers out there,
7 years of pushing the wtf-meter all the way to 11, yet translating that in a politically correct fashion to the clients
7 years of life …
7 years … that will never be forgotten, thanks to all of you

It’s never easy … but it’s time for me to take a long overdue break, see what other exciting things life has to offer, and start a new chapter.

So today is my last day at Collabora. I’ve decided that after 17 years of non-stop study and work (i.e. since I last took more than 2 weeks vacation in a row), it was time to take a break.

What’s next? Tackling that insane todo-list one compiles over time but never gets to tackle :). Some hacking on GStreamer (obviously), some other life-related stuff, traveling, visiting friends, exploring new technologies and fields I haven’t had time to look deeper into until now, maybe some part-time teaching, writing more articles and blog posts, taking on some freelance work here and there, … But essentially, being in full control of what I’m doing for the next 6-12 months.

Who knows what will happen. It’s both scary … and tremendously exciting :)

PS 1: While my position at Collabora as Multimedia Domain Lead has already been taken over by the insane(ly amazing) Olivier Crete (“tester” from GStreamer fame), Collabora is looking for more Multimedia engineers. If you’re up for the challenge, contact them :)

PS 2. wtf-meter : http://www.osnews.com/story/19266/WTFs_m

PS 3. My non-Collabora email address is <my nickname>@<my nickname> dot com

August 28, 2014

GStreamer continuous testing (Part 1)

History so far

For the past 6-9 months, as part of some of the tasks I’ve been handling at Collabora, I’ve been working on setting up a continuous build and testing system for GStreamer. For those who’ve been following GStreamer for long enough, you might remember we had a buildbot instance back around 2005-2006, which continuously built and ran checks on every commit. And when it failed, it would also notify the developers on IRC (in more or less polite terms) that they’d broken the build.

The result was that master (sorry, I mean main, we were using CVS back then) was guaranteed to always be in a buildable state and tests always succeeded. Great, no regressions, no surprises.

At some point in time (around 2007 I think?) the buildbot was no longer used/maintained… And eventually subtle issues crept in: you were no longer guaranteed that checkouts would always compile, tests eventually broke, you’d need to track down what introduced a regression (git bisect makes that easier, but avoiding it in the first place is even better), etc…

What to test

Fast-forward to 2013: after talking so much about it, it was time to put such a system back in place. Quite a few things have changed since:

  • There’s a lot more code. In 2005, when 0.10 was released, the GStreamer project was around 400kLOC. We’re now around 1.5MLOC ! And I’m not even taking into account all the dependency code we use in cerbero, the system for building binary SDK releases.
  • There are more usages than we had back then. New modules (rtsp-server, editing-services, orc, …) are now under the GStreamer project umbrella.
  • We provide binary releases for Windows, MacOSX, iOS, Android, …

The problems to tackle were “What do we test ? How do we spot regressions ? How to make it as useful as possible to developers ?”.

In order for a CI system to be useful, you want to keep its signal-to-noise ratio as high as possible. Just enabling a massive bunch of tests/use-cases with millions of things to fix is totally useless. Not only is it depressing to see millions of failed tests, but you also can’t spot regressions easily, and essentially people stop caring (it’s just noise). You want the system to become a simple boolean: either everything passes, or something failed, and if it failed, it was because of the last commit(s). In order to cope with that, you gradually activate/add items to do and check. The bare minimum was essentially testing whether all of GStreamer compiled on a standard Linux setup. That serves as a reference point. If someone breaks the build, it becomes useful: you’ve spotted a regression, you can fix it. As time goes by, you start adding other steps and builds (make check passes on GStreamer core, activate that; passes on gst-plugins-base, activate that; cerbero builds fully/cleanly on Debian, activate that; etc…).

The other important part is that you want to know as quickly as possible whether a regression was introduced. If you need to wait 3 hours for the CI system to report a regression… the person who introduced it will have gone to sleep or been taken up by something else. If you know within 10-15 minutes, then it’s still fresh in their head, they are most likely still online, and the issue can be corrected as quickly as possible.

Finally, what do we test? GStreamer has gotten huge. In that sentence, GStreamer is actually not just one module, but a whole collection (GStreamer core, gst-plugins*, but also ORC, gst-rtsp-server, gnonlin, gst-editing-services, …). Whatever we produce for every release… must be covered. So this now includes the binary releases (formerly from gstreamer.com, but handled by the GStreamer project itself since 1.x). We also need to make sure nothing breaks on all the platforms we target (Linux, Android, OSX, iOS, Windows, …).

To summarize

  1. CI system must be set-up progressively (to detect regressions)
  2. CI system must be fast (so person who introduced the regression can fix it ASAP)
  3. CI system must cover all our offering (including cerbero binary builds)

The result is here (yes, I know, we’re working on fixing the certificates once it moves to the final namespace).

How this was implemented, and what challenges were encountered and handled, will be covered in a future post.

August 07, 2014

Back from GUADEC, conspiracies

As expected, GUADEC in Strasbourg was a terrific event. Huge props to the local organizing team who managed to make things work regardless of last minute curve balls, such as the venue changing or the video recording team (and their equipment) not being able to attend due to visa restrictions. I went with Alexandre Franke to pick up recording equipment only half an hour before the opening session on the first day, and manned the cameras sporadically, but was glad that other volunteers were able to fill the gaps as I was running all over the place.

Since I am independent this year (I now have my own business, as some of you might have seen from my unusual laptop sticker), I came to GUADEC thinking I would allow myself to really relax for a change:

“This is great! Besides my daily two-hours of contract work obligation, and my Pitivi talk, and my two days of GNOME Foundation board meetings, I’m a free bird cat! I’ll be able to focus on watching talks, discuss at length with everybody, and get back to making technical contributions to Pitivi this week! \(• ◡ •)/”


I had a few surprises:

  • Pinpoint decided to go on strike (hey, this is France after all!) and not display half of my images, so I had to waste ridiculous amounts of time redoing my Pitivi presentation entirely with LibreOffice Impress.
  • I was asked to do the opening session of the conference.
  • And later asked to open the Teams Reports session by presenting an overview of the Foundation’s activities for the year prior to my election.
  • And then to do the closing session as well (in which my theatrical talents were revealed by way of an outrageous accent and legendary repartee).

All on short notice. No sweat.

On the night of the event at the Snooker, I hopped into Thibault’s car (the Pitivimobile; five Pitivi hackers were in it) and we tried getting from his flat to the Snooker bar. We never made it to the Snooker. Instead, we spent 2.5 hours going through every street in Strasbourg, which was quite hilarious—thanks to Thibault and Mathieu’s wonderful sense of direction, we got a free city tour and a very entertaining experience!

The picnic event was nice and popular. Thankfully, mosquitoes focused on Europeans and left Canoodians like me alone.

The Pitivi & GStreamer hackfest was productive, with lots of stabilization/bugfixing work done by Mathieu and Thibault as part of their work enabled by the fundraiser. Lubosz worked on Windows component builds because he is a glutton for pain.

As part of my pro-bono/FOSS-related projects at idéemarque, a few months ago I spent time rethinking the contents of the GStreamer website, as well as turning those contents into a working website for demonstration and planning purposes. During the hackfest at GUADEC, I therefore spent a few hours discussing with Tim, Sebastian and Edward about those contents, the target audience, intended structure, etc. Eventually, as time allows, we’ll get to the real implementation soonish.


Some whiteboard notes from the discussion

Unfortunately, in the half day that was left before my return trip, I couldn’t get much done on Pitivi itself aside from some light triaging and testing.

The scheduled talks were great. I was glad that there were only two tracks so that I wouldn’t miss too many of them… Except I still ended up missing most of them as I dealt with last minute issues all the time, so I’ll end up having to watch the recordings anyway! There are over 100 GB of raw recordings, so let’s be patient with those who will have the heroic duty of processing them for publishing.

A hero appears. Photo by Jakub Steiner


I really liked that the venue was right in the heart of the city rather than on the outskirts. Strasbourg is a very pedestrian & bike-friendly city (in case you hadn’t realized from the Pitivimobile story). All the accommodation, restaurants and public transit were close by.

I was very happy to meet with a ton of people I wanted to see again, discuss issues that mattered to me and gather many insights along the way.

In the closing session, I was quite shocked by the unveiling of the Swedish Conspiracy. However, when you start looking more closely at the matter, I think some facts are difficult to overlook indeed:

Pictured: the Italo-Swedish conspiracy

Coincidence? I think not.

All in all, this year’s GUADEC felt like a more modest and informal event compared to years past, while still retaining what makes it GUADEC: great topics, great people, positive energy and one of the most welcoming communities I know.


July 28, 2014

GUADEC 2014

Well hello there, dear Internet.

So yes, it’s been quite some time since the last blogpost. A lot has been going on, but that will be for future blogposts.

Currently at GUADEC 2014 in Strasbourg, France. Lots of interesting talks and people around.

Quite a few discussions with several people regarding future GStreamer improvements (1.4.0 is freshly out, we need to prepare 1.6 already). I’ll most likely be concentrating on the whole QA and CI side of things (more builds, make them clearer, make them do more) and on planning how to do nightly/weekly releases (yes, finally, I kid you not!). We also have a plan to make it faster/easier/possible for end-users (non-technical ones) to get GStreamer in their hands. More on that soon (unless Sebastian blogs about it first).

If you want to come hack on GStreamer and Pitivi, or discuss with the various contributors, there’s a Hackfest taking place at GUADEC from Wednesday to Friday. More info at https://wiki.gnome.org/GUADEC/2014/BOFs/Pitivi

July 02, 2014

Come one, come all to GUADEC!

For the first time in fourteen years, the GUADEC conference is back in France. I encourage you to come in large numbers to this event, which will be held in Strasbourg during the last week of July. I have to cross the Atlantic (swimming) and take several planes and buses to get there, so no excuses for those located less than 6000 km away!

“Is GUADEC of any interest to someone who is not a developer?”

Most of the talks (see the program, to be published shortly) are accessible to the general public with a geeky bent, as long as you have a strong interest in or curiosity about the technologies involved. You won’t usually find Gaspard the baker or Amélie the surgeon there, but everyone is welcome. After all, GUADEC stands for “GNOME Users and Developers European Conference”!

GUADEC 2013 interns lightning talks, by Ana Rey

(photo by Ana Rey)

Speakers often center their talks on the recent progress of their software, design and user experience, or the general direction of a given project, etc. Of course, several topics remain fairly technical, but I believe those who read Planet Libre, Planet GNOME or LinuxFR are already clued-in enough to follow the content. Since the event is an international conference, you do need a decent understanding of English, as it is the language used for the talks.

Besides the scheduled talks, there are also informal hackfests and BoF (“birds of a feather”) sessions, where small groups form around specific topics. For example, there is traditionally a Pitivi hackfest attended by GStreamer developers, GSoC students and other interested members of the community. Bring your computer and your good mood!

In short, GUADEC remains much more “accessible” to non-devs than the GNOME Summit, the latter being a very technical event, closer to a mega-hackfest.

“What can you do as a non-developer?”

  • Meet the contributors you have only known by name for years, whether in the hallways or at the social events (there are usually at least one or two organized evenings).
  • Attend talks in person, ask questions or debate.
  • Chase down your favorite developers and discuss your favorite bugs. There is nothing better for reproducing a hard-to-reproduce bug, or simply for getting the developer to look into the issue by investigating in real time.
  • Politely discuss design without having to write walls of text or risk misunderstandings caused by the limited richness of text-based interaction.
  • Play the role of reporter and provide media coverage of the event.
  • Meet new contributors, especially the GSoC students.
  • Take part in the BoF (“birds of a feather”) sessions, notably the Pitivi Hackfest 2014.
  • Take on Bastien Nocera at football.

I hope to see many of you at this unmissable event!

June 23, 2014

June development update

Good news everyone!

This is the first blog post of a series of updates about our latest development efforts in GStreamer / gst-editing-services / Pitivi.

This post's focus will be on MPEG transport stream, a format now nearly twenty years old, originally developed and still widely used for and by the broadcasting industry. In the mid-2000s, some people decided it would be a great idea to use this format in camcorders, stuffed a rather useless timestamp in there for good measure and started to ship AVCHD camcorders like it was the greatest thing since sliced bread.

It was not. See, most modern video codecs such as h264 rely on the notion of keyframes: to compress video streams as much as possible, most frames are encoded as their difference from the previous frame; we call these frames delta units. Sparsely distributed in the encoded stream are keyframes: these frames can be decoded without any reference to past frames, and when trying to seek in such a stream, decoding has to start from a keyframe and progress through all the delta units up to the requested frame.

Video editing implies accurate seeking, for example if you only want to include the 10 last frames of a 2-hour clip, decoding the whole file to obtain these few frames would be a pointless waste of time.

Failing to start decoding from a keyframe when seeking creates artefacts (garbled frames): the decoder is missing the information needed to decode the delta unit, tries to provide a frame nevertheless, and fails to do so until the next keyframe has been reached. Containers that are readily usable for editing contain information about the location of keyframes, in one form or another. This is not the case for MPEG TS, of which AVCHD is a subset. Locating the keyframes thus becomes a rather involved process, as one needs to parse the video streams in order to do so.

Backtracking to the introduction of this post: good news everyone! We just did that, and here is a before / after video to demonstrate our changes. We can now ensure full support of AVCHD, enjoy :D

The next two posts will be respectively focused on our refactoring of our video / audio mixing stack, and our ongoing work on gnonlin, our non-linear editing engine.

June 19, 2014

Hi, ImageSequenceSrc!

Hello guys. I’ve written an element in gst-plugins-bad which I called GstImageSequenceSrc. It works very similarly to GstMultiFileSrc, but there are some differences.

Differences with GstMultiFileSrc.

* Handles a list of filenames instead of a printf pattern as GstMultiFileSrc does.
* Having a list of filenames is useful because you can set the filenames you want, in the order you want. For example, you can add more filenames or sort the list.
* There are two ways to create this list:
    a) Setting the location property. Its value could look like:

  1. '/some/folder/with/images/' or '.'
  2. '/a/path/img.jpg,/other/path/img2.jpg' or 'img1.png,img2.png'
  3. '/a/path/*.JPEG' or '*.png'

    b) Setting the filename-list (GList) directly.

* Creates an "imagesequence://" protocol which allows playbin to play this element. It handles a ‘fake’ URI, but it is useful in the end.

gst-launch-1.0 playbin uri="imagesequencesrc:///some/path/*.jpg?framerate=20/1&loop=1"

* It “detects” the mimetype of the images. You only have to worry about the framerate.
* It calculates the duration.

Things to improve:

* Seeking: it sometimes seeks to the wrong image (actually, after many seeks).
* The way the duration is calculated.

PS: stormer, the guy who hangs out on #gstreamer, was telling me to support png *and* jpeg files (both at the same time), but I don’t see a use case.

You can see the most stable version of this element at
https://github.com/cfoch/gst-plugins-bad/tree/sequences/gst/sequences

The development branch (currently the same) is at:
https://github.com/cfoch/gst-plugins-bad/tree/sequences-develop/gst/sequences

June 16, 2014

Transforming Video on the GPU

OpenGL is very suitable for calculating transformations like rotation, scale and translation. Since the video will end up on one rectangular plane, the vertex shader only needs to transform 4 vertices (or 5 with GL_TRIANGLE_STRIP) and map the texture to it. This is a piece of cake for the GPU, since it was designed to do that with many many more vertices, so the performance bottleneck will be uploading the video frame into GPU memory and downloading it again.

The transformations

GStreamer already provides some separate plugins that are basically suitable for doing one of these transformations.

Translation

videomixer: The videomixer does translation of the video with the xpos and ypos properties.

frei0r-filter-scale0tilt: The frei0r plugin is very slow, but it has the advantage of doing scale and tilt (translate) in one plugin. This is why I used it in my 2011 GSoC. It also provides a “clip” property for cropping the video.

Rotation

rotate: The rotate element is able to rotate the video, but it has to be applied after the other transformations, unless you want borders.


Scale

videoscale: The videoscale element is able to resize the video, but has to be applied after the translation. Additionally it resizes the whole canvas, so it’s also not perfect.

frei0r-filter-scale0tilt: This plugin is able to scale the video and leave the canvas size as it is. Its disadvantage is being very slow.

So we have some plugins that do transformation in GStreamer, but you can see that using them together is quite impossible and also slow. But how slow?

Let’s see how the performance of gltransformation compares to the GStreamer CPU transformation plugins.

Benchmark time

All the commands are measured with “time”. The tests were done on the nouveau driver, using Mesa as the OpenGL implementation. All GPUs should have similar results, since not much is actually calculated on them. The bottleneck should be the upload.

Pure video generation

gst-launch-1.0 videotestsrc num-buffers=10000 ! fakesink

CPU 3.259s

gst-launch-1.0 gltestsrc num-buffers=10000 ! fakesink

OpenGL 1.168s

Cool, the gltestsrc seems to run faster than the classical videotestsrc. But we are not uploading real video to the GPU! This is cheating! Don’t worry, we will do real world tests with files soon.

Rotating the test source

gst-launch-1.0 videotestsrc num-buffers=10000 ! rotate angle=1.1 ! fakesink

CPU 10.158s

gst-launch-1.0 gltestsrc num-buffers=10000 ! gltransformation zrotation=1.1 ! fakesink

OpenGL 4.856s

Oh cool, we’re twice as fast in OpenGL. This is without uploading the video to the GPU though.

Rotating a video file

In this test we will rotate an HD video file with a duration of 45 seconds. I’m replacing only the sink with fakesink. Note that the CPU rotation needs videoconvert elements.

gst-launch-1.0 filesrc location=/home/bmonkey/workspace/ges/data/hd/fluidsimulation.mp4 ! decodebin ! videoconvert ! rotate angle=1.1 ! videoconvert ! fakesink

CPU 17.121s

gst-launch-1.0 filesrc location=/home/bmonkey/workspace/ges/data/hd/fluidsimulation.mp4 ! decodebin ! gltransformation zrotation=1.1 ! fakesink

OpenGL 11.074s

Even with uploading the video to the GPU, we’re still faster!

Doing all 3 operations

Ok, now let’s see how we perform when doing translation, scale and rotation together. Note that the CPU pipeline does have the problems described earlier.

gst-launch-1.0 videomixer sink_0::ypos=540 name=mix ! videoconvert ! fakesink filesrc location=/home/bmonkey/workspace/ges/data/hd/fluidsimulation.mp4 ! decodebin ! videoconvert ! rotate angle=1.1 ! videoscale ! video/x-raw, width=150 ! mix.

CPU 17.117s

gst-launch-1.0 filesrc location=/home/bmonkey/workspace/ges/data/hd/fluidsimulation.mp4 ! decodebin ! gltransformation zrotation=1.1 xtranslation=2.0 yscale=2.0 ! fakesink

OpenGL 9.465s

No surprise, it’s still faster and even correct.

frei0r-filter-scale0tilt

Let’s be unfair and benchmark the frei0r plugin. It has one advantage: it can do translation and scale correctly, but rotation can only be applied at the end, so no rotation around different pivot points is possible.

gst-launch-1.0 filesrc location=/home/bmonkey/workspace/ges/data/hd/fluidsimulation.mp4 ! decodebin ! videoconvert ! rotate angle=1.1 ! frei0r-filter-scale0tilt scale-x=0.9 tilt-x=0.5 ! fakesink

CPU 35.227s

Damn, that is horribly slow.

The gltransformation plugin is up to 3 times faster than that!

Results

The gltransformation plugin does all 3 transformations together in a correct fashion and is fast in addition. Furthermore, three-dimensional transformations are possible, like rotating around the X axis or translating in Z. If you want, you can even use orthographic projection.

I also want to thank ystreet00 for helping me to get into the world of the GStreamer OpenGL plugins.

To run the test yourself, check out my patch for gst-plugins-bad:

https://bugzilla.gnome.org/show_bug.cgi?id=731722

Also don’t forget to use my python testing script:

https://github.com/lubosz/gst-gl-tests/blob/master/transformation.py

Graphene

gltransformation utilizes ebassi’s new graphene library, which implements the linear algebra calculations needed for modern OpenGL without the fixed-function pipeline.

Alternatives worth mentioning for C++ are Qt’s QMatrix4x4 and of course g-truc’s glm. These are not usable with GStreamer, and I was very happy that there was a GLib alternative.

After writing some tests and ebassi’s wonderful and quick help, Graphene was ready for usage with GStreamer!

Implementation in Pitivi

To make this transformation usable in Pitivi, we need some transformation interface. The last one I did was rendered in Cairo. Mathieu managed to get this rendered with the ClutterSink, but using the GStreamer OpenGL plugins with the clutter sink is currently impossible. The solution will either be to extend the glvideosink to draw an interface over it, or to make the clutter sink work with the OpenGL plugins. But I am not really a fan of the clutter sink, since it has introduced problems in Pitivi.


May 27, 2014

GstVideoOverlay Gtk+ OpenGL Sink Example in Python 3

I often forget how exactly this works, so here is a minimalist example, translated from the C example in gst-plugins-bad.

Please note that this currently only works in X11, since the XID is needed.
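
Roughly, the pattern looks like the sketch below (my own reconstruction rather than the original listing, assuming a gltestsrc ! glimagesink pipeline and a GtkDrawingArea as the render target):

#!/usr/bin/env python3
# Minimal sketch (assumed pipeline: gltestsrc ! glimagesink, assumed render
# target: a GtkDrawingArea). X11 only, since we hand the sink an XID.
import gi
gi.require_version('Gtk', '3.0')
gi.require_version('GdkX11', '3.0')
gi.require_version('Gst', '1.0')
gi.require_version('GstVideo', '1.0')
# GdkX11 is needed for get_xid(), GstVideo for set_window_handle()
from gi.repository import Gtk, GdkX11, Gst, GstVideo

Gst.init(None)

window = Gtk.Window(title="GstVideoOverlay example")
window.connect("destroy", Gtk.main_quit)
drawing_area = Gtk.DrawingArea()
window.add(drawing_area)
window.show_all()

# The widget must be realized before we can query its XID.
xid = drawing_area.get_window().get_xid()

pipeline = Gst.parse_launch("gltestsrc ! glimagesink")

def on_sync_message(bus, message):
    # The sink asks for a window handle right before it would create its own.
    struct = message.get_structure()
    if struct and struct.get_name() == "prepare-window-handle":
        message.src.set_window_handle(xid)

bus = pipeline.get_bus()
bus.enable_sync_message_emission()
bus.connect("sync-message::element", on_sync_message)

pipeline.set_state(Gst.State.PLAYING)
Gtk.main()
pipeline.set_state(Gst.State.NULL)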

You can exchange the sinks and sources. For example videotestsrc and xvimagesink.

Working with the Clutter sink requires a little more work.


May 01, 2014

GSoC 2014 projects for Pitivi: transformation, chroma keying, image sequences

Thanks to GNOME, we will be able to get some reinforcements for Pitivi this summer.

We’re very pleased to have Lubosz Sarnecki making a comeback! In 2011 he implemented the cairo-based clip transformation (zoom/resize/crop) feature in the viewer. Lubosz is quite experienced with OpenGL, Blender and GStreamer, as you can see on his blog and the variety of projects he contributes to.

lubosz working on the transformation UI in 2011

Lubosz working on Pitivi at the 2011 Desktop Summit

This year, his project is:

  1. Evaluate the need for an OpenGL-based Ken Burns effect in GStreamer, vs an approach combining the CPU-based transformation plugins (videocrop, videoscale, videorotate and videomixer).
  2. Resurrect and extend our clip transformation user interface in Pitivi, to support rotation and animatable properties.
  3. Implement a color picker and custom chroma keying UI.

We’re also introducing Fabian Orccon as a GSoC student this year. His project will be to implement a user interface for handling image sequences (timelapses, stopmotion animation, output from Blender, etc.) as regular video clips in Pitivi. This is a fair amount of UI work, with possibly some work directly in GStreamer to improve the MultiFile element in gst-plugins-good. Depending on the remaining available time, he may also tackle bug fixing and implement various features like copy-pasting clips. You can read Fabian’s initial blog post with mockups of how the image sequence UI/workflow could look.

April 25, 2014

How would you like the Pitivi image-sequence feature to look?

Hello! I’m Fabián Orccón. I’ve been accepted to Google Summer of Code 2014 for GNOME, working on Pitivi. My project consists of adding an image-sequence feature to Pitivi. The use cases could be creating stop-motion films or 2D animations, for example.
The idea of this feature came to my mind when I wanted to film a homemade stop-motion movie with my sister. I had Ubuntu, and at that time Pitivi was the default video-editing tool. I realized that Pitivi didn’t have this feature implemented, and I wanted to do it.
I’m not an expert in the art of cinematography. I was talking with Pitivi people and Bassam Kurdali, director of the first open movie, about how Pitivi could handle this feature. After mixing and organizing ideas I came up with this mockup. It is just a draft; I don’t know if I’m missing something.

I would like you to tell me how you would like this feature to work. Your opinion as a user is very important. Pitivi wants to be the best option for you!




April 21, 2014

gst-validate: A suite of tools to run integration tests for GStreamer

Collabora Ltd. has been developing a tool that allows GStreamer developers to check that the GstElements they write behave the way they are supposed to: GstValidate. The tool was first started to provide plugin developers with a tool to check that they use the framework the proper way. Since the beginning, it has been available in gst-devtools, a gst module where we collect a set of tools to facilitate GStreamer development and usage.

Well, what is it about?

The GstValidateMonitor

Basically gst-validate allows us to monitor everything that is happening inside a GstPipeline. For example if you want to check that every component of a pipeline is properly behaving, you can create a GstValidatePipelineMonitor that will track that pipeline. Then each time a GstElement is added to the pipeline, a GstValidateElementMonitor will be instantiated and start tracking that element. Then when a GstPad is added to that GstElement a GstValidatePadMonitor will be monitoring that pad.

This monitoring logic allows us to check that what those elements do respect some rules GStreamer components have to follow so all the elements of a pipeline can properly interact together. For example, a GstValidatePadMonitor will make sure that if we receive a GstSegment from upstream, an equivalent segment is sent downstream before any buffer gets out. You can find the whole list of tests currently implemented here.

The GstValidateRunner

Then there is an issue reporting system, so that each issue found during the execution of the pipeline is reported with as much detail as possible, so that users understand what the detected misbehaviour is about and can fix it efficiently.

In terms of code, the only thing to do in order to get a pipeline monitored is:

int
main (int argc, gchar ** argv)
{
 GstElement *pipeline;
 GstValidateRunner *runner;
 GstValidateMonitor *monitor;

 gboolean ret = 0;

 /* Initialize GStreamer and GstValidate */
 gst_init (&argc, &argv);
 gst_validate_init ();

 /* Create the pipeline and make sure it is
  * monitored */
 pipeline = gst_pipeline_new (
     "monitored-pipeline");
 runner = gst_validate_runner_new ();
 monitor = gst_validate_monitor_factory_create (
     GST_OBJECT (pipeline),
     runner,
     NULL);

 /* HERE you can do anything you want with that
  * monitored pipeline */

 /* Now print the errors on stdout.
  * The return value of that function
  * is != 0 if critical errors occurred
  * during the execution of the pipeline */
 ret = gst_validate_runner_printf (runner);

 /* Cleanup */
 gst_object_unref (pipeline);
 gst_object_unref (monitor);
 gst_object_unref (runner);

 return ret;
}

The result of gst_validate_runner_printf will look something like:

issue : buffer is out of the segment range Detected on theoradec0.srcpad at 0:00:00.096556426
        
Details : buffer is out of segment and shouldn't be pushed. Timestamp: 0:00:25.000 - duration: 0:00:00.040 Range: 0:00:00.000 - 0:00:04.520
Description : buffer being pushed is out of the current segment's start-stop  range. Meaning it is going to be discarded downstream without any use

Here we can see that an issue occurred on the src pad of theoradec as it outputted a buffer that was not inside the segment it last pushed. This is an interesting piece of information and clearly shows an error in the element. (Note: This issue does not actually exist)

How should it be used?

GstValidate command line tools

In order to make gst-validate simple to use, we created command line tools that allow plugin developers to test their elements in many use cases from a high level perspective.

The gst-validate pipeline launcher

This is a command line pipeline launcher similar to gst-launch. The tool uses the gst-launch pipeline description syntax and makes sure that the pipeline is monitored and that users get all the information reported by the GstValidate infrastructure. As you would expect, you can monitor the playback of a media file using playbin as follows:

gst-validate-1.0 playbin uri=file:///.../file.ogg

You will then be able to see all the issues GstValidate found.

The gst-validate-transcoding tool

A command line tool that allows testing media file transcoding with a straightforward syntax. You can, for example, transcode any media file to Vorbis and VP8 in WebM by doing:

gst-validate-transcoding-1.0 
    file:///./file.ogg 
    file:///.../transcoded.webm 
    -o 'video/webm:video/x-vp8:audio/x-vorbis'

It will report what issues happened during the execution of that pipeline.

The gst-validate-media-check tool

A command line tool that checks that media file discovery works properly with gst-discoverer. Basically it needs a reference text file containing valid information about a media file (which can be generated with the same tool); it can then check that this information corresponds to what gst-discoverer reports over new runs. For example, given that we have a valid reference.media_info file, we can run:

gst-validate-media-check-1.0 
  file:///./file.ogv 
  --expected-results reference.media_info

That will then output any errors found and return an exit code different from 0 if an error was found.

GstValidateScenarios

As you may have noticed, those tools let us test static pipeline execution, but not whether the pipeline reacts properly to actions from the end user during execution, such as seeking or changing the pipeline state, etc… In order to make that possible and easy to use, we introduced the concept of scenarios.

A scenario is a set of actions to be executed on the monitored pipeline, serialized in a text file. An action (GstValidateAction) is just a function call executed at a precise time (usually based on the playback position of the pipeline).

An example of scenario:

# Just some metadata describing the scenario
# The format is the GstStructure serialization
# syntax
description, seek=true, duration=3.0

# When the pipeline reaches 1.0 second of
# playback it launches an accurate flushing
# seek to 10.0 seconds
seek, playback_time=1.0, start=10.0, 
flags=accurate+flush

# Send EOS to the pipeline
# so it stops and the application
# knows about it.
eos, playback_time=12.0

You can find more examples of scenarios here

gst-validate-launcher

This all looks right, but shouldn’t those tests be executed automatically on a large number of samples and with the various existing scenarios? This is where gst-validate-launcher comes into play. It is basically a python application that launches the tools described above with the proper parameters, monitors them, checks their results and serializes those results into a JUnit XML file (more formatters could obviously be implemented). The tool is pretty simple: it is only a matter of setting the media samples to run the tests on and choosing which scenarios you want to run.

Where is it used?

GStreamer Editing Services

As part of the GStreamer Editing Services project, I have made sure that ges-launch (a command line tool that launches timelines, video editing projects, etc…) works with GstValidate, if compiled against it. This means that we can launch scenarios and test GES sharing the same infrastructure as the rest of GStreamer. It is also very interesting to be able to monitor dynamic pipelines (within GES the GstPipeline changes very dynamically) to discover element misbehavior in that stressful use case. For the time being we do not have many GES-specific GstValidateActions implemented, but we will implement more as needed (mostly timeline editing actions, i.e. moving clips around the timeline, changing effect properties, etc…). As part of the Pitivi fundraiser, we are also investigating how to serialize user actions in Pitivi so that we could easily reproduce user issues outside of the application (and thus outside the python interpreter).

The GStreamer jenkins integration server

On the GStreamer continuous integration server, we are running gst-validate-launcher on a set of media samples in major formats after (almost) each commit on any of the components of the GStreamer stack. This is just the beginning and we are slowly adding more tests there, making sure they pass and tracking regressions.

A lot of work has been done around that tool. We still need to clean up some parts, review the few APIs we have, and a particular effort has to be made on the documentation. But now that a good basis is there, we should just keep adding more tests to detect regressions in GStreamer as soon as possible. If you are interested in using that tool, please come talk to us in #gstreamer on freenode!

April 14, 2014

First delivery of the pitivi fundraiser: universal daily bundles for Linux

The Pitivi community is very happy to announce the availability of easy to use, distro-independent Linux bundles to test the latest version of the application. This eliminates dependency problems and allows quicker testing cycles. Our entire stack is bundled, so the only requirement is glibc ≥ 2.13.

Simply download the bundle and run it!

This is the first delivery of the Pitivi Fundraiser—as you can see, we are already well on our way to delivering what was promised in our detailed planning. You can have a look at what is happening with the "daily build" bundles on our Jenkins instance (main server hosting donated by Collabora—thanks!).

To build the bundles we use Cerbero, which is the build and packaging system used by Collabora and Fluendo to construct the GStreamer SDK, and is also used by the GStreamer community to deliver GStreamer 1.x binaries for Android, iOS, Mac OS X and Windows. It is a very useful and well-designed technology, which will allow us to easily create packages for Windows and Mac OS X in the future.

This does not only apply to us, of course: the work that has been done to create Linux distro bundles allows anyone to easily create bundles for their applications with Cerbero. It has not been merged just yet, but that should happen quite soon. If you want to bundle your app using Cerbero, do not hesitate to ask us, even though it should already be really straightforward!

We raised half of the amount we are targeting for the Pitivi fundraising campaign, and we are in very good shape to be able to deliver everything on time. We need your help to reach that target.

Please donate now to make sure we will be able to provide the community with the great video editing app it deserves!

April 13, 2014

How to play a video using GES?

Hello. I want to show you an example of a command-line video player using GES. This program is very simple and my purpose is that you learn the basics. I paste the entire code here if you just want to read it.

from gi.repository import Gtk
from gi.repository import GES
from gi.repository import Gst
from gi.repository import GObject

import signal

video_path = "/home/cfoch/Videos/samples/big_buck_bunny_1080p_stereo.ogg"

def handle_sigint(sig, frame):
    print "Bye!"
    Gtk.main_quit()

def busMessageCb(unused_bus, message):
    if message.type == Gst.MessageType.EOS:
        print "EOS: The End"
        Gtk.main_quit()

def duration_querier(pipeline):
    print pipeline.query_position(Gst.Format.TIME)
    return True

if __name__ == "__main__":
    GObject.threads_init()
    Gst.init(None)
    GES.init()

    video_uri = "file://" + video_path

    timeline = GES.Timeline.new_audio_video()
    layer = timeline.append_layer()

    asset = GES.UriClipAsset.request_sync(video_uri)
    clip = layer.add_asset(asset, 0, 0, asset.get_duration(), GES.TrackType.UNKNOWN)

    timeline.commit()

    pipeline = GES.Pipeline()
    pipeline.set_timeline(timeline)

    pipeline.set_state(Gst.State.PLAYING)

    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", busMessageCb)
    GObject.timeout_add(300, duration_querier, pipeline)

    signal.signal(signal.SIGINT, handle_sigint)

    Gtk.main()

We’re going to start by checking these lines.

    Gst.init(None)
    GES.init()

We always need to use these lines to initialize GES. And you have to follow this order: initialize Gst before GES. If you change the order of these lines, your program won’t work.

The next important line is

    timeline = GES.Timeline.new_audio_video()

With this line we create a Timeline. I think of the timeline as the general container: it will contain all the elements necessary to play the (edited) video or audio we want. A timeline contains GESVideoTracks and/or GESAudioTracks.

For example, if we want to play a video (with sound), our Timeline will contain both, a GESVideoTrack and a GESAudioTrack. If we want to play our favorite song, the timeline will contain only a GESAudioTrack.

In this case, this line creates a GESTimeline and adds a GESVideoTrack and a GESAudioTrack to it. It is a shortcut for this:

    videotrack = GES.VideoTrack.new()
    audiotrack = GES.AudioTrack.new()
    timeline = GES.Timeline.new()
    timeline.add_track(videotrack)
    timeline.add_track(audiotrack)

The Timeline not only contains tracks; it also contains GESLayers. Our program will contain only one GESLayer, but a Timeline is a stack of layers: the topmost layer has the highest priority, and by default that highest priority is 0. But why do we need a GESLayer? This is where we add our clips, our effects and transitions, so a GESLayer is another container. With the following line we not only create a GESLayer, we also append the created layer to our timeline:

    layer = timeline.append_layer()
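
As a quick aside, here is a hypothetical illustration of that layer stack (the second layer below is only for demonstration and is not used by this player):

    # Appending more layers stacks them; priority 0 is the topmost layer.
    layer2 = timeline.append_layer()
    print(layer.get_priority())   # 0 (topmost, takes precedence)
    print(layer2.get_priority())  # 1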

What is a bit confusing to me is the GESAsset. I see it as an abstract representation of the information about, for example, the video or audio file we will use. However, a GESAsset is not limited to audio and video: it can hold information about an effect or about a GESVideoTrack/GESAudioTrack.

In this case we’ll take advantage of the request_sync method of GESUriClipAsset, which allows us to create a GESAsset from a real file located on our computer, for example a ‘video.ogv’. This method gathers information such as the duration of the video.

    asset = GES.UriClipAsset.request_sync(video_uri)

The following line will generate the clip from the asset and add it to the layer we’ve created before.

    clip = layer.add_asset(asset, 0, 0, asset.get_duration(), GES.TrackType.UNKNOWN)

And to apply our changes…

    timeline.commit()

To be able to display our video on the screen we need the GESPipeline. The GESPipeline allows us to play or render the result of our work, the result of our timeline. So a pipeline needs to be linked to a timeline.

    pipeline = GES.Pipeline()
    pipeline.set_timeline(timeline)

By default, the state of the pipeline is NULL. However, we can change this state to PLAYING, or switch the pipeline into RENDER mode. When its state is PLAYING, the pipeline will display the result on the screen. In RENDER mode it will ‘export’ the result of our work into an out_file.ogv video, for example. We will set the pipeline to PLAYING because we only want to play the video.

    pipeline.set_state(Gst.State.PLAYING)
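
As an aside, rendering instead of playing would use the pipeline’s RENDER mode together with an encoding profile. The sketch below is hypothetical (the output URI and the Ogg/Theora/Vorbis profile are assumptions for illustration, not part of this example):

    # Hypothetical sketch: render the timeline to a file instead of playing it.
    from gi.repository import GstPbutils

    container = GstPbutils.EncodingContainerProfile.new(
        "ogg", None, Gst.Caps.from_string("application/ogg"), None)
    container.add_profile(GstPbutils.EncodingVideoProfile.new(
        Gst.Caps.from_string("video/x-theora"), None, None, 0))
    container.add_profile(GstPbutils.EncodingAudioProfile.new(
        Gst.Caps.from_string("audio/x-vorbis"), None, None, 0))

    pipeline.set_render_settings("file:///tmp/out_file.ogv", container)
    pipeline.set_mode(GES.PipelineFlags.RENDER)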

If we removed the lines between lines 43 and 48 and ran the program, we could still see the video playing. But these lines are necessary to handle when to stop the program. The program is an active process until something stops it: if we don’t stop it, it will still be running after our video has finished, and we don’t want that. We would like to stop/close/finish our program at the right moment. OK… but… what do we have to stop? We have to stop the loop. What loop?

    Gtk.main()

Every pipeline contains a GstBus by default. A GstBus is a system that watches for certain messages; these messages could be, for example, the EOS of the video, or other messages. See http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/gstreamer-GstMessage.html#GstMessageType. We need to add a message watch to the bus of our pipeline, which we do with bus.add_signal_watch().

    bus.add_signal_watch()

By the way, if you want to know more about the GstBus, look at this link: http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/chapter-bus.html

We’ve already added the message watch, but now we need to add the function we’re going to use to handle the messages. This function is busMessageCb (‘Cb’ stands for Callback). We connect the bus to this function with the following line.

    bus.connect("message", busMessageCb)

The function busMessageCb is created by the developer. The goal of this function is to stop the loop created by Gtk.main() when playback of the video has finished.

def busMessageCb(unused_bus, message):
    if message.type == Gst.MessageType.EOS:
        print "EOS: The End"
        Gtk.main_quit()

We’re almost done. The following line is just going to tell us, every 300 milliseconds, which nanosecond of the video the program is currently playing. Every 300 milliseconds our program executes the duration_querier function, which prints the current position of the video, in nanoseconds, to the console.

    GObject.timeout_add(300, duration_querier, pipeline)

Finally, as an extra, I want this program to be able to close when the user presses “Ctrl+C”. The line below allows it. We’re taking advantage of the Python signal library, and the program knows when the “Ctrl+C” keys have been pressed because of the signal.SIGINT parameter. When these keys are pressed, it calls the function handle_sigint(sig, frame), which says "Bye!" and stops the loop.

    signal.signal(signal.SIGINT, handle_sigint)

April 08, 2014

How do you visually represent a project’s timeline?

Here is a fun example to illustrate why software development in general is a complex endeavour:

  1. You think you’re going to fix a tiny problem: “hey, maybe we could make Pitivi’s welcome dialog look a bit nicer“.
  2. Eventually, someone proposes a design or idea that looks interesting, and you realize that to truly achieve it, you should also implement an audacious new feature: a way to visually represent an entire timeline as a thumbnail (that one is an open question, by the way; if you have some clever ideas, feel free to share them)
  3. …and to display new feature B properly, you should also consider—ideally—being a good citizen and implementing feature C upstream, in the toolkit you use instead of doing your own thing in your corner.

This kind of serendipity and interdependence happens regularly in FLOSS applications like Pitivi where we prioritize quality over “meeting shareholders’ deadlines and objectives”, which is why we sometimes take more time to flesh out a solution to a problem: we aim for the best user experience possible, all while negotiating and working with the greater software ecosystem we live in, instead of silently piling up hacks in our application… and we depend on the involvement of everyone for things to progress.

March 21, 2014

GStreamer Hackfest and the first Beta release of Pitivi, "Ra is a happy"

Last weekend, part of the Pitivi team went to the GStreamer Hackfest at Google's offices in Munich to work with twenty other GStreamer hackers on various important technical issues. A big thanks to Google and Stefan Sauer for hosting the event! Keep your eyes peeled: we will soon blog about the results of the work the Pitivi team accomplished during the hackfest.

During the hackfest a very important milestone was reached: the first stable versions of GStreamer Editing Services, GNonLin and gst-python in the 1.X branch were released. This means that these very central components of the Pitivi project are now considered stable.

While this backend work was essential to the beta release, we also want to specifically thank Alexandru Balut for his impressive involvement during the 0.92 -> 0.93 cycle. He provided a staggering amount of bug fixing and cleanup patches in Pitivi itself, and has greatly helped the project reach a beta state. Any inquiries regarding the 0.93 release codename must be sent in his general direction.

This release will be the basis on which we will start our work for our ongoing fundraiser. We've done that work in our spare time, and we're excited about what we'll be able to accomplish once we start working full time! Thibault Saunier has already been preparing bundles for the release, more on that in the next post!

Once again, you can help us in producing a rock solid stable release, by donating and spreading the message!

Pitivi 0.93 released

Last week, a flash snowstorm brought me around 2ft of snow overnight. I thought, “If I’m going to clear that much snow, might as well have some fun and make a timelapse out of it”, and so I did. While watching it, I realized, “Hmm… that’s an interesting metaphor for the huge amount of preparatory and cleanup work we’ve been doing in the past few years”:

Today, we’re very happy to announce another great Pitivi release. It brings a truckload of bug fixes and refinements, which you can read about in the 0.93 release notes (prepared by yours truly). This release now brings us to a quality level where we can let go of the “alpha” status and call this a “beta”. Many nasty bugs are gone and people are increasingly relying on it for their own projects. Besides the video above, the 2014 fundraiser‘s video and the Pitivi showcase, I was quite pleased to see someone using Pitivi to easily make a nice video for a commercial booth at a technology tradeshow!

0.93 is the result of continued efforts in our spare time—occasional hacking during vacations, nights and week-ends. Just imagine what could be achieved if Mathieu and Thibault could be funded to work full-time towards bringing us to 1.0!

Update: you may also want to take a look at this blog post.

March 04, 2014

Help PiTiVi reach version 1.0

PiTiVi, the open source video editor, has started a fundraiser in order to reach its 1.0 version.

As many Linux users may know, there are currently few open source programs for editing video, and none of them are very good. Over the last two years, PiTiVi has moved to a powerful video editing library called GStreamer Editing Services (GES); PiTiVi is now a UI for this library. The two developers who worked on GES have started an online fundraising campaign to make a stable version of PiTiVi, offer a quality program and release PiTiVi 1.0. So far, more than a third of the total (35,000 euros) has already been raised in less than two weeks.

PiTiVi’s motto is “allowing everyone on the planet to express themselves through filmmaking, with tools that they can own and improve”. The team wants to build a flexible, high-end video editing application that can be used by professionals and hobbyists alike.

From a technical point of view, PiTiVi is probably the most promising open source video editor (available on Linux) because it is backed by GStreamer (a multimedia framework), which is used by most Linux distributions. It has required a lot of time from the GES and GStreamer developers. Using a high-end video editing library like GES makes it relatively easy to create a video editor tailored to specific needs, for example. All applications that use GStreamer (such as Totem) will benefit from this development.

This is not a one-person initiative. First of all, two is more than one. Secondly, GES has been developed in very close collaboration with the GStreamer team. Moreover, there are other developers and contributors involved, and every summer the students taking part in Google Summer of Code show up. The community is very active!

The role of this campaign is to speed up development so that we reach our goals faster. Donations go through the non-profit GNOME Foundation, which means that donations in the USA are tax-deductible. Please watch the campaign video and donate at http://fundraiser.pitivi.org

Thanks :o)

March 03, 2014

Pitivi Fundraiser Week One Update (And A Great Piece Of News)

Greetings Pitivi supporters!

We hope everyone had a great week! We've had a rather hectic one, and hopefully that's just the beginning. This is the first update for our fundraising campaign, be sure to check our blog weekly for more ;)

Announcement!

We are happy to announce that the GStreamer maintainers decided to show us their faith and support, by allocating 2 500 € to our project from GStreamer funds! This is great news for several reasons:

  • It's obviously nice to get such an amount of money as it represents seven percent of the total needed to get a 1.0 release or, to put it another way, three weeks of full time development!

  • GStreamer is the central component of our architecture, and it is the one on which we plan to spend most of our time during the push to 1.0. Pitivi really is just the tip of the iceberg: to put things in perspective, it is now a mere 25,000 lines of Python code, whereas GStreamer and its plugins represent around 1.5 million lines of code.

    Our work really benefits every other project that uses GStreamer (for example, accurate seeking in ogg files? That was us!), and it is meaningful to see the GStreamer maintainers acknowledge that "officially", many thanks to them! And then many more for the road. They're awesome and a big reason why we love working on Pitivi.

  • We really hope this donation will help everyone that cares about Open Source, be they individuals craving for flawless multimedia handling on Linux or companies interested in building products around GStreamer, to see that we are an integral part of the community, and that donating to the campaign is not only about getting a great video editor, but also about improving the core multimedia engine shared by most if not all the Linux distributions!

Acknowledgement!

We would like to thank each and every one of the 350+ backers who already donated to the campaign and helped us break the 10 000 euro bar during this last week. 11 000 € is a great amount of money, sufficient to cover our expenses for three months of full-time development! With your help, we have already made it to a third of our first goal, and with your help we can make it to Pitivi 1.0 and beyond. Anything helps, be it blogging, tweeting and sharing on social networks, or getting the word out to journalists.

DistroWatch also decided to make us the recipient of their monthly donation and granted us 280 euros; it's a great honor for us to be listed among the previous recipients of that donation!

Appetizement!

The dictionary doesn't seem to agree that this word should exist, but it's here nevertheless. Next week should see an interesting announcement for all the fans of Python, Romania and clean code, make sure to stay tuned on our twitter or to add this blog to your RSS feeds ;)

February 27, 2014

Pitivi status update for Q1 2014, fundraiser launch

Since my previous technical update in January, I haven’t had time to touch Pitivi’s code. Thankfully though, Alexandru Băluț has been filling the gap with a ton of refactoring work: around 150 commits! That took a fair amount of time to review and merge, believe me. Besides code cleanup, he also finished the port of the viewer to a cluttersink, and fixed font and theme color detection for the timeline (so it looks fine even if you’re not using the Adwaita GNOME theme).

Alexandru took the opportunity to not only fix some bugs, but also make some visual refinements to the timeline ruler: it now shows hours and milliseconds only when needed (depending on the zoom level) and subtly grays out units that did not change from one “tick” to another, so your eyes can focus on what actually changed:


More interestingly though, with the limited time available to me, I helped prepare Pitivi’s 2014 fundraising campaign. It involved a huge amount of groundwork from an administrative and marketing point of view. It’s been months in the making.

  • I’m glad that Mathieu and Thibault were able to spearhead the work of setting up the infrastructure, preparing the majority of written materials and setting up their legal and financial relationship with the GNOME Foundation.
  • Huge props to Fateh Slavitskaya and Bassam Kurdali for the great work they put into making the fundraiser video (I’m a filmmaker, believe me, I know how hard and time-consuming it is to get this done right, especially when you have voice-overs and 3D animation involved. Oh, and all the editing was done with Pitivi, and the VFX with Blender). Thanks to Thomas Vecchione for cleaning up the voice recordings and doing proper sound mixing for the video; I can definitely hear the difference.
  • Thanks to Oliver Propst for drafting initial content for the FAQ and the various emails that we had to send for outreach! It is always easier for me to start from existing content rather than stare at a blank page (you could say I hate reinventing the wheel :))

And thanks to the 300 backers who believed in us and broke the 7 000 € bar within five days of the launch! This is a great start—it could pay for two months of full-time work for one developer such as Mathieu—but we need to reach the minimum of 35 000 € and the target of 100 000 € if we really want to achieve the goals we’ve set. Get involved and spread the word!

You may also be interested in the following blog posts that did not show up on Planet GNOME (but are visible on Planet Pitivi):

Oh, and besides our G+ page, we’re on Twitter now. I reclaimed control of our “brand” by kindly asking Twitter’s legal department (yes, really). Until now, the @pitivi account was held by someone unknown to us, inactive and unreachable since 2010, so it was not helping anybody:

2014-02-26-2

It is now linked from the main website alongside other social gadgetry.

Also, some upcoming events:

February 26, 2014

Using GStreamer to make smooth slow motion!

This is a very good example of what our developers can do! There has been some preliminary work on bringing slow and fast motion to GStreamer and Pitivi, and a plugin has been created to allow for frame interpolation, which means that you and I, with our regular 24 frames per second cameras, will be able to get smooth slow motion from Pitivi in the future!

All that work has not yet been merged and thoroughly tested, and we need your help to make it happen!

To help you understand the difference between regular and smooth slow motion, here is a video created by Alexandre Prokoudine showing both types side by side. The difference is quite stunning!


Slowmo video effect with GStreamer from Alexandre Prokoudine on Vimeo.

February 25, 2014

Votes: a tool of engagement and development.

During the creation of the campaign, we debated what kind of perks we should offer. The thing is, we are not t-shirt creators, we are software developers and UI designers.

We believe people who give us money do so in order for us to develop good software, and thus we tried to focus on perks that made real sense. What could we offer to the community that would help us in making the software that they truly want? Our answer to that question: a voice, simple as that.

Though we already have a very active community and listen to feedback from our users, we were missing a way to quantify the priority of feature requests for the people to whom our software matters enough to sponsor our work.

We decided on two types of perks we would offer: invitations to hangouts organized monthly (we'll tell you more about that in a future post), and the subject of today's post, our voting system!

We wanted to grant the ability to vote to anyone who donated, starting from the lowest amount available, and decided to weight each vote proportionally to the amount given, with steps at which the curve flattens a bit, to make sure people who can't donate 300 euros still have a reasonable chance of having their voice heard.
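
No implementation details are given in this post (they were promised for a later one), so purely as an illustrative sketch, with made-up brackets and factors, such a flattening weight curve could look like this:

    def vote_weight(donation_eur):
        # Purely illustrative: the weight grows with the donation, but each
        # euro in a higher (made-up) bracket counts for less than a euro in
        # a lower one, so the curve flattens as the amount grows.
        brackets = ((20, 1.0), (80, 0.5), (200, 0.25), (float("inf"), 0.1))
        weight = 0.0
        remaining = donation_eur
        for bracket_size, factor in brackets:
            portion = min(remaining, bracket_size)
            weight += portion * factor
            remaining -= portion
            if remaining <= 0:
                break
        return weight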

Later I'll post implementation details for those interested on my personal blog. Suffice it to say that it works and, more excitingly, that people are already making good use of it!

This brings me to the exciting news we want to share: we made the current vote results public! Obviously, as we're in the early stages of the campaign, they're not nearly as significant as they will be later, but we think it's already interesting data, and you can have a look at it live right now.

Now to our own analysis of the data so far:

The first exciting figure for us is that, even though we're not yet guaranteed to reach a funding point that can put the vote results into development (you can help us get there faster), one third of our backers have already taken the time to go through the form and rate features on the 0 to 10 scale, and we clearly expect that ratio to grow if we reach the 35 000 € bar.

We interpret that as a sign that our voting system answers a real demand. This figure is a clear success in our effort to create sustainable community engagement to support responsive and dynamic Pitivi development, alongside the growing number of people who choose to put their faith in us and donate.

Some points in the feature ranking results caught our attention:

  • The clear first place of hardware-accelerated decoding and encoding. This is really interesting to the engineers among us, who are already salivating at the prospect of implementing it!

    It also goes to show that performance is critical to people using video editing software, and it reassures us in our architectural choices: the decision to ally with GStreamer means a lot of the heavy lifting is done as part of a partner project and doesn't have to be written by Pitivi developers from scratch. Instead, we contribute to GStreamer while also reaping its huge benefits, and that means we can focus better on the video editing side of our code (making sure dynamic pipelines work with hardware-accelerated decoders/encoders, and adapting and extending our integration test suites to ensure it keeps on being true).

  • The very pragmatic second place of copy paste, a small but oh-so-helpful feature, which goes to show that our backers are sensible, productivity and detail-oriented people.

  • The low ranking of Windows and Mac ports, which is certainly due in part to the fact that awareness about our campaign is pretty much limited to the Free Software community for now.

  • Finally, something we don't really know how to interpret on the spot, but that is interesting to note nevertheless, is that the last three spots are occupied at the time of writing by support for external project formats, such as Final Cut projects.

I'll repeat that these rankings are absolutely not definitive, as existing backers can change their votes and new backers will hopefully continue to give their opinion on what matters to them.

The conclusion is: "Don't like these rankings? Donate and you can help change them!"

We're really interested in your analysis of these early results, and we hope a discussion about them will take place in the comments section or on our IRC channel (#pitivi on freenode)!

Thanks for reading, the Pitivi team

February 21, 2014

Give some love to Pitivi !

Today we're thrilled to announce a crowdfunding campaign to support the development of Pitivi!

We have chosen not to use one of the major crowdfunding platforms such as Kickstarter, for multiple reasons, and to instead partner with the GNOME Foundation, which is ideologically aligned with us and will support our financial infrastructure.

We are proud of that partnership, as we share their objective of "creating a free software computing platform for the general public", and the foundation is excited as well:

"GNOME is proud to host the Pitivi campaign. Pitivi fills a real need for stable and approachable high quality video editing. Its software architecture and UI design share the same sleek and forward-thinking approach that we value in the GNOME project." — Karen Sandler, executive director of the GNOME Foundation

With that campaign, our aim is to provide everyone with a rock-solid Free Software video editor, based on the awesome technology that is GStreamer. We have spent a lot of time working on the campaign website, and it holds a lot more content than a simple blogpost could.

We know that what we want to do is the right thing, and requesting money for quality and stabilization first is the correct and honest thing to do. We obviously encourage you to donate to the campaign, but we also hope that you will be willing to spread the message, and explain why what we do is important and good.

Free and Open Source video editing is something that can help make the world a better place, as it gives people all around the world one more tool to express themselves creatively, fight oppression, create happiness and spread love.

Hoping you'll spread the love too, thanks for reading !

February 15, 2014

Applying for a GSoC project is all about early involvement and commitment

This year, Pitivi‘s focus for GSoC projects will be a little bit different than in 2013. As you can see in our preliminary ideas page, there is much less GStreamer (or GES) work involved, as we tried to focus on Pitivi UI work — easier, concrete projects, mostly only in Python. Most of the hardcore backend work we needed to accomplish was done throughout 2013. Of course, there are still some hardcore project ideas around if you feel like you’re up for a challenge. Anyway, the list of ideas is just that: a list of ideas. We’re more than happy for you to come up with your own ideas (thinking outside the box is a positive trait, feel free to impress us).

More importantly, here’s how our approach differs this year:

  • We changed the application “procedure” (see the “How to apply and get started” section on the ideas page). Essentially, “discuss your ideas with us” has become step #3 instead of step #1. The first steps are now get your build working, test and use the application, figure out what you love and hate about it, and then start contributing some fixes immediately. This came out of the realization that this is what we actually tell everybody who shows up on IRC asking “I don’t know what to get started with as a contribution”: get the dev version, use it, then you’ll certainly see things you want to improve to get your feet wet.
  • Like one of the maintainers of Inkscape once said, I will not believe anything you write in your GSoC application. Your formal application is only 10% of the equation. We will base our decision 90% on your involvement and demonstrated ability to contribute code to Pitivi. The more good-quality patches you’ve made, the better your chances. As such, we expect you to start getting involved in February (March at the latest). If you only start hanging around by the time the GSoC application period starts, you will need to work very hard to convince us that you have what it takes.

January 29, 2014

Presenting and hacking Pitivi at Pycon 2014

Pycon, or what I’d call the “main Pycon” (vs one of the sub-Pycons), has traditionally been held in the United States of America. This year, it will be held in Montréal.

pycon2014-logo

The most astute among you might have noticed that Montréal (Québec) is not actually in the USA (because trying to capture Québec City during a snowstorm on New Year’s Eve is a terrible idea).

Since Montréal is my home base (and Pycon happening there might be the event of a lifetime), I decided to showcase… you guessed it: the new version of the Pitivi video editor.

  • I will be making a “scientific congress”-sized poster (122×122 cm, that’s 4×4 feet for you southerners). I enjoy distilling marketing contents into visually compelling print materials, so I’m pretty excited about making an unconventional, jaw-dropping poster. I’ll be using some Pitivi donation money to cover the printing costs (around 25$ according to UQÀM’s estimates, other places start at 90$…). If you have ideas of things I should have on the poster (besides nice charts/graphs, screenshots and artwork), feel free to comment.
  • I won’t be just talking in front of an unusually large piece of paper: it will be a live demo booth with Pitivi running in front of the poster (maybe I’ll need to get something better than a netbook…).

Besides talks, tutorials and poster sessions, Pycon also has sprints (which, as I understand it, are what I usually refer to as “hackfests”). So I will most certainly be looking to host a Pitivi sprint/hackfest from April 14th to 17th.

If you are attending Pycon this year, I hope you will drop by to say hi, try out the coolest pythonic open-source video editor out there or get started hacking on the code!

January 20, 2014

Announcing Pitivi's fundraising campaign!

Our team

I've been a part of the Pitivi story on and off for three years now, whenever I could find the time really. I've loved every moment I've spent in the community: I've made friends, I've learnt good engineering practices, how to fit into a team, how to communicate clearly, and so much more I can't even start to list.

I'm not going to tell the whole story because that would be boring to write and even more so to read, but eventually and naturally I became a maintainer alongside Thibault and Jean-François. Jean-François has been around the project for 10 years now; he's awesome, a really dedicated guy. Thibault and I have been friends for a long time, since well before I started programming; I've been his padawan for the best part of my initiation to programming and Free Software, and he's a great Jedi!

Recently, Alexandru Balut has also started to work with us again. He has committed so many things in the last two months that we've had a hard time reviewing it all while preparing the campaign at the same time! I don't know him as well on a personal level as I know Jeff and Thibault, but I like working with him a lot: he makes great, clean patches and has a seemingly boundless dedication to cleaning up the code and making it elegant.

All this to say I'm proud and happy to be part of such a team. Free Software's most important asset isn't the code, the bug trackers or the continuous integration servers; it's the people, and these folks are great. I can't stress that enough.

Our project

You might have guessed by now that our project isn't to grow genetically modified potatoes in the Southern part of Italy, even though that seems like a compelling idea at first sight. Give it a second thought and you'll realize the hygrometry of that region is absolutely not appropriate, give it a third one for good measure then forget about it.

I'll briefly explain what makes the Pitivi project, in my very humble opinion, the most exciting open source video editing project. The reason comes down to our design choices. We've played the long game and based ourselves on the GStreamer set of libraries and plugins. For a visual and greatly simplified explanation of why that choice is a good thing, you can refer to this animation, then have a look at the impressive list of plugins that we can tap into. GStreamer is where most of the companies interested in open source multimedia invest their money and their time; it's where most of the exciting stuff happens, and it's definitely where an ambitious video editing application has to look.

We have also made the choice of clearly splitting our editing core, our model, from our view, and made it a library with an awesome API: gstreamer-editing-services, directly usable from C, C++, JavaScript, Python and every language supported by introspection, and possibly any other language provided someone writes bindings for it. That choice was the right one: decoupling components always pays off in the long term, and we are finally starting to see the benefits. Pitivi has seen its size divided by two while gaining in stability. This makes it much easier for new contributors to come in, and for us to maintain it.

tl;dr: GStreamer rocks, and GES is great. That said, we are aware that stabilization is not yet over. Pitivi is in a beta state, and it still needs intensive work to kill the bugs and make sure they never come back. To do this, we must extend our test suites, continue collaborating with GStreamer devs, and create better ways for users to share failing scenarios with us. For all this we've got great ideas, but what we lack is the ability to work full-time on the project, which basically means we need money, for reasons I don't think I have to detail!

I'm afraid this might sound a little boring, as we all tend to be more attracted to feature promises and shiny things, and that's obviously what we all deserve, but I think it's not what we need right now (hope I got the quote right). Fortunately, we estimate that phase to be around 6 months of full-time work for one person; we did *a lot* of the groundwork already, and we just have to expand on that and track down the corner cases, because the devil is in the details, and he knows how to hide damn well. After that, we will be ready to unleash GStreamer's power, come up with great features in no time, and ride on the work of others to get, for example, hardware acceleration basically for free. From that moment, once we have released 1.0, things will get seriously real, and our backers will be able to vote on the features they care about the most. I've worked on the voting system and I think it's a great thing to have; I'm really impatient to see it used in real life (and hopefully not break). I think I'll write a more technical blogpost on its implementation.

How you can help.

I'm writing this the day before launching the campaign, and I have the website in the background, taunting me with its "0 € raised, 0 backers" message. Fortunately I also have the spinning social widgets to cheer me up a bit, but that's not exactly enough to rid me of my anxiety. I know that what we do is right, and that requesting money for stabilization first is the correct and honest thing to do. Obviously, I hope that you will donate to the campaign, but I also hope that after taking the time to read this rather lengthy blogpost in its entirety, you will be able to spread the message and explain why what we do is important and good. Free and Open Source video editing is something that can help make the world a better place, as it gives people all around the world one more tool to express themselves, fight oppression, create happiness and spread love. Hoping you'll spread the love too, thanks for reading!

January 04, 2014

Scratching some Media Library itches

Besides looking for a job, catching a cold and shovelling snow, this holiday season I spent some time scratching itches in Pitivi. For starters, thumbnail generation: if you’ve been using the new Pitivi, you certainly ran into this:

medialibrary - partial thumbs

As you can see, thumbnails were not necessarily being shown for all clips. That was not the result of a buggy algorithm or anything like that (I don’t code often, but when I do, it’s perfect, of course), simply the fact that my solution for bug 432661 was partial: it would fetch the thumbnails if they already existed, but not create new ones.

Well, this is solved now. No more need to browse the folder with Nautilus to generate thumbnails! And it does not block the user interface, thanks to Alexandru Băluț, who helped me finish the job by replacing my incorrect attempt at threading with his own implementation.
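
As a minimal sketch of the general pattern (not Alexandru’s actual implementation; make_thumbnail and the callback are hypothetical placeholders), the slow pixbuf work happens in a worker thread and each result is handed back to the GTK main loop, since GTK must only be touched from that loop:

    import threading
    from gi.repository import GLib

    def generate_thumbnails_async(uris, on_thumbnail_ready):
        # Sketch only: run the expensive thumbnailing off the main thread,
        # then schedule the UI update back onto the GTK main loop.
        def worker():
            for uri in uris:
                pixbuf = make_thumbnail(uri)  # hypothetical slow helper
                GLib.idle_add(on_thumbnail_ready, uri, pixbuf)
        threading.Thread(target=worker, daemon=True).start()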

medialibrary - complete thumbs

The only downside of the threaded version is that you lose the simple benchmarking system I had put in place, but no matter: I’ve measured the numbers already, so let me share them with you.

As you know, I’m pretty adamant about maintaining good performance and responsiveness in Pitivi, especially for operations such as importing clips or loading projects. I know that iconview can be slow, so my commit (before Alex’s changes) had various timers to check if searching through the iconview or updating its items was a significant performance problem. I tried with two scenarios: importing 68 clips to thumbnail, and importing 485 clips to thumbnail. Here are the resulting processing times in seconds:

                   68 thumbnails    485 thumbnails
Generation         20.131           141.059
Model search        0.1186            4.8595
Model insertion     0.0432            0.4

thumbnail pixbuf iconview model insertion benchmark

As you can see, for reasonable amounts of clips, the amount of time spent interacting with the iconview/treeview model, even using a naïve linear search, is insignificant:

  • The proportion of time spent searching increases when you have to generate hundreds of thumbnails, but it’s still mostly negligible (in the “worst case” shown here, it rises from 0.6% to approximately 3.4%).
  • Surprisingly, the time spent updating the model (inserting the pixbufs) is even more negligible than the time spent searching.
  • In the real world, it is unlikely that each and every thumbnail needs to be generated, searched and inserted. This happens only once in a while for a portion of your clips, not every time you import a clip or load a project. There is no point in obsessing and complicating the code with caching, fancy search algorithms or data structures.
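
For reference, the naïve linear search measured above amounts to nothing more than walking the rows of the model. A rough sketch (not the actual Pitivi code; COL_URI is a placeholder column index):

    import time

    def find_row_by_uri(model, uri, COL_URI=0):
        # Walk the Gtk.ListStore rows until the URI column matches,
        # timing the lookup the same way the benchmark above did.
        start = time.monotonic()
        found = None
        for row in model:
            if row[COL_URI] == uri:
                found = row.iter
                break
        return found, time.monotonic() - start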

I’m hoping that we eventually replace both the iconview and listview with a single “responsive” GTK FlowBox widget, if FlowBox has performance better than (or equal to) IconView’s, but this might take a while. If someone wants to tackle this or wants to help, please get in touch.

Better importing progress indication and error handling

First, the way we counted progression was wrong from a psychological standpoint. A bit tricky to explain, but the label needs to represent the position in the process (so eventually it reaches 100%), whereas the progressbar must hint at the amount of work remaining to complete (so it never goes to 100%).

Second, the error infobar’s “Close” button took an insane amount of space, which became quite apparent in verbose languages like French. GTK+ really didn’t want me to compact that button no matter what I tried, and using an icon instead of text didn’t save as much space as using a single Unicode character: ✖

Third, the error infobar was a bit of a tease: it told you that errors occurred but did not tell you how many:

pitivi medialibrary error infobar - after

Well, now it does (you can also see the more balanced layout with the new ✖ button):

pitivi medialibrary error infobar - before

As you can see, when it comes to designing a good user interface, I’m kind of obsessive about details.

HeaderBar

I also have a branch where Pitivi uses the client-side decorations and HeaderBar widget from GTK+ 3.10, however I’m blocked by the fact that GStreamer’s video overlay doesn’t work with client-side decorations:

2014-01-04

In other news:

  • Alexandru Băluț fixed importing and timeline thumbnailing for MPEG-TS clips. He also spent a great deal of time and effort on refactoring various modules.
  • Mathieu Duponchelle made rendering to MP4 and MPEG-TS possible again.
  • Jakub Steiner’s new super-high-res logo icon for Pitivi has been merged.
  • Tomas Karger got started on updating the user manual. It’s great to see someone new getting involved to finally give the user manual the attention it deserves! Thanks, Tomas.
  • I looked again into reimplementing the missing codecs installer dialog, but I’m blocked by the wrong information being provided by the discoverer error signals. On hold for now.
  • I implemented a very nice playback performance boost feature, but I’m blocked by the fact that it would pause the pipeline every time a backup is saved in the background. I need to think of a solution for that without resorting to questionable hacks.

November 24, 2013

Wanted: documentation specialist

The Pitivi project is looking for someone who loves teaching/writing, to fill the position of “documentation specialist” among our team! The documentation specialist will be responsible for updating our fantastic user manual (online version here) based on release notes of 0.91 and 0.92 as well as your own imagination/ideas for improvement.

cat writing

The user manual is written in Mallard, but I can even accept contributions in plaintext (I would then do the markup myself). You only have to write it in English; other languages are handled by the hyperactive GNOME translation teams. You will receive:

  • No pay
  • No company-provided car, gym or insurance
  • Praise, hugs, possibly some beers at events such as GUADEC
  • Something great to put in your résumé/CV as a technical writer and contributor to open-source
  • The feeling of a job well done in a project that matters for the world
  • An autographed biography of Richard Stallman in French, if you’re into that sort of thing. I’m not even kidding, there’s a guy actually offering that as a perk on the Internet.
  • A gentle entry into the world of contributing to Pitivi if you want to go further

Apply now on #pitivi on irc.freenode.net !

November 04, 2013

Pitivi 0.92 and the wandering opensourcerer

The past few weeks have been pretty crazy.

At the last minute, I ended up going to the GStreamer Conference in Edinburgh, thanks to the GStreamer project sponsoring my attendance. As always, it was a fantastic event and it was great to meet up with old friends and see great topics being discussed. I was pretty impressed by the amount of attendees too. I’m feeling guilty for having missed good talks while being dragged into hallway discussions or being hammered by jetlag, but our pals at Ubicast recorded everything so I should be able to catch up later. My good friend Luis summarized the event much better than I could, so I won’t go into detail here. Except that Thibault won a bottle of whiskey and was unable to claim it, so I picked it up and brought it back for him (I don’t hack on GStreamer itself, so I don’t need the whiskey):

2013-10-25

After gstcon, I embarked on a pretty hectic journey throughout Europe, with the goal of prospecting sustainable development opportunities and eventually crashing at Mathieu Duponchelle’s home in order to spend time on Pitivi together.

shadow of the colossus horse ride

Since I’m someone who tends to plan and optimize everything, hopping from city to city in a semi-improvised way over the past two weeks has been somewhat stressful. Especially the part where I found out the hard way that some capitals shut down their public transportation to the airport at night, or the part where I had to sleep on the floor at the entrance of an airport that was also closed at night, even though it had flights departing before sunrise. That was fun. It also means a significant amount of transportation costs piling up (even though I’m an expert at minimizing them), so if you’d like to help offset some of the costs resulting from our current Pitivi activities, please feel free to tip a few bucks ;)

Managing and maintaining an established open-source project does not just mean coding: there is a lot of administrivia and planning involved, which has been consuming my free time lately. After the discussions with Karen and Jim at the GNOME Montréal Summit, the post-GSoC stuff to deal with, the big research project I’ve been tackling and the research around the infrastructure needed for it, all on top of paperwork needing to be done at home during my absence, there was pretty much no time left for me to look at code. I somehow still managed to squeeze in a few commits to fix some blockers and a few trivial things, but that means that larger projects (even something as “simple” as reviewing a new set of icons) had to be renegotiated in my TODO list.

2013-11-02

While staying with Mathieu in the past few days, we made some good progress on a big project coming up for Pitivi (more on that later) and released 0.92 to provide some fixes and polish over the earth-shattering 0.91. As you can see, this release comes just one month after the biggest release in years, so we are indeed aiming to get back onto a “release early, release often” cycle now. We hope you like it.

October 05, 2013

Pitivi 0.91 “Charming Defects”

And so it has come to this.

2013-10-05

A few days ago, we stealthily published the tarball of the first Pitivi video editor release based on the GES engine. Incredible but true! 0.91 is finally out! In case you were living under a rock for the past two years, this release is the result of a major rework of the entire Pitivi architecture, and as such it is considered an “alpha” release. From a very (VERY) high level, it includes:

  • Replacing the core of Pitivi by GES; 20 thousand lines of code removed
  • Porting to GStreamer 1.x
  • Porting to GTK+ 3.x
  • Replacing GooCanvas by Clutter for the timeline
  • An automated UI test suite, with many checks for mission-critical parts
  • Fixing hundreds of bugs
  • Implementing many new features
  • UI polish all over the place
  • Refactoring pretty much the entire codebase

Since 0.15.2, over 1300 changes have been committed in Pitivi alone. That’s not even counting the incredible amount of work we’ve done in GES and GStreamer. See the details in the 0.91 release notes.

I would like to extend my deepest thanks to all the people who, through their contributions and the trust they have put in us over the past two years, have helped us achieve this milestone. The adventure continues however: we need your involvement more than ever to continue on the path we have set! If you’ve been shy about getting involved because of the huge amount of architectural changes we were going through, now is the perfect time to start contributing on fresh grounds.

Oh, and one more thing: “PiTiVi” is now known as “Pitivi”. Our website, documentation and infrastructure have been updated to reflect this small branding adjustment (the only part I didn’t update yet is the logo image). Moreover, in order to support open standards, free ourselves from YouTube (or any third party, really) and to make it easier to transition to a responsive design (more on that later), we now use HTML5 videos in open formats all across the website. Woohoo!

September 04, 2013

Fix it thrice

Some of you may be familiar with the good old “fix it twice” adage: fix the problem and then ensure it never happens again.

Last year, when I made Pitivi’s automatic backup feature work, I asked someone to write extensive automated tests for it (with Dogtail), so that I could feel confident about this feature never being broken again, even if it underwent massive refactoring or if everything else changed around it.

Turns out that it did break again this year. After the port to GES Assets and GES Project, loading a backup project would not actually populate the media library and timeline! The tests were not strict enough to check those kinds of “details”, so this flew completely under the radar.

I thought I’d fix the whole thing again… however, in addition to not wanting to see it break again later I wanted to be more efficient, so I did things differently this time: test-driven programming. I refactored our automated UI tests, made them much more exhaustive and sadistic, made them fail, and then spent time fixing the code until the tests would pass. And whenever I found another bug in that branch, I wrote tests to fail before I would fix the issue.
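The stricter checks have a simple shape. Purely as an illustration (the helpers and attributes below are hypothetical, not Pitivi’s real test suite), the point is to assert the user-visible outcome rather than just “it didn’t crash”:

    import unittest

    class BackupRecoveryTest(unittest.TestCase):
        def test_restored_backup_populates_project(self):
            # Written to fail first: loading a backup must actually repopulate
            # the media library and the timeline, not merely open a window.
            project = load_backup("sample.xges~")  # hypothetical helper
            self.assertGreater(len(project.media_library), 0)
            self.assertGreater(len(project.timeline_clips), 0)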

Now it works, and I can be confident about it again. I have effectively fixed it thrice.

Testing a bulletproof vest, 1923 - cropped

Why do I care so much about making it bulletproof? Because projects are Pitivi users’ most important data. It is critical to ensure their safety through unsaved changes warnings, disaster-proof automatic backups, backup restoring that actually works, and preventing accidental overwrites: Pitivi tries to gently steer you away from overwriting important stuff (you need to go through the “Save As” process and confirm the overwrite, if you consider the restored version to be the correct one). All that, and much more, is part of my automated tests.

Other recent stuff

Software Center description

I’ve been watching closely as Richard Hughes put forth a proposal for a new specification to describe applications in the GNOME Software Center. Here is what I wrote for Pitivi.

Summary:

A video editor that aims to appeal to hobbyists and professionals alike, with a strong focus on usability, efficiency and quality

Description:

Integrating well with the GNOME desktop environment and other applications, Pitivi sports a beautiful user interface designed to be powerful yet easy to learn.

With a non-modal editing workflow and a framerate-independent, playhead-centric timeline, Pitivi allows you to quickly and accurately trim, split and review your scenes. Pitivi’s ripple and roll editing features let you spend more time on storytelling and less time on “pushing clips around”.

Some other features include:

  • Accepts any file formats supported by the GStreamer multimedia framework
  • Can animate hundreds of special effects and filters with keyframable properties
  • Ability to set custom aspect ratios, framerates and rendering presets
  • Easy to use crossfades and SMPTE transitions
  • Multihead-friendly with detachable user interface components

Hopefully this should capture the essence of what it can do.

On a sidenote, we currently use INTLTOOL_XML_RULE to make this file translatable, and I heard that’s terribly uncool. Please help us by providing a patch to make it work using itstool instead!

New project render dialog, filesize estimates

It feels great when you pick up a patch you wrote “blindfolded” four months ago (back then, it was impossible to test) and see it “just work”: I had a small “Knuth” moment when it not only rebased cleanly, but worked as intended with only one or two Python syntax adjustments. The patch in question is a little feature I wanted, to make it easier to figure out whether the resolution/quality/bitrate settings I chose will give me the kind of filesize I want: output filesize estimation.

2013-08-31 23:26:09

The algorithm I devised is naïve, but it does the trick for the vast majority of cases. I cheat a little bit by waiting long enough to have gathered enough data for the prediction to be somewhat significant, and by fuzzing the estimates — you’d be surprised how inaccuracy can improve perceived accuracy!
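
As a rough sketch of that kind of estimate (the thresholds and rounding steps below are invented for illustration, not the values Pitivi uses), you extrapolate the final size from the fraction of the timeline already rendered and then round the number coarsely so it doesn’t jitter:

    def estimate_filesize(bytes_written, position_ns, duration_ns, min_fraction=0.02):
        # Illustrative sketch: extrapolate the final size from progress so far,
        # then fuzz it so the number does not jump around on every update.
        fraction = position_ns / duration_ns
        if fraction < min_fraction:
            return None  # too early; don't show a wildly wrong number
        mib = (bytes_written / fraction) / (1024 * 1024)
        # Coarser rounding for bigger files: the vaguer the figure, the more
        # "accurate" it feels when the real size lands nearby.
        step = 1 if mib < 100 else 10 if mib < 1000 else 50
        return round(mib / step) * step  # estimated size in MiB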

While I was doing that, I started playing around to make the rendering dialog prettier and more informative. For starters, the “Rendering movie” header felt wrong for various reasons, and this became quite apparent once you add the “Estimated filesize” text right below it. Rewording the header to say something like “Your movie is now being rendered and encoded” didn’t help much.

That header was basically still duplicating what was in the window title. I pondered what could be put in its place; emptiness would look a bit silly, as the dialog would just be a titlebar, a progressbar and buttons. I then decided to put some informational text for educational purposes: few people actually understand what factors influence the encoding performance (or the speed at which your project renders), yet many people are surprised when it takes a long time, so I thought I’d add a contextual tip:

2013-09-01 00:14:09

2013-09-01 00:12:17

“What’s that ridiculous amount of empty space below the progressbar”, you say? A bug in GTK, it seems. I spent quite some time changing things all over the place until I discovered that making the window non-resizable “fixes” the issue. My guess is that GTK3’s size calculations are based on the height of the window before its width gets set by the “default width” property in my dialog… and then the height never shrinks. How odd. Anyway, the new text paragraph does prevent the window from being too narrow now, so the dialog doesn’t really need to be resized by users. Here’s how it looks now:

2013-09-01 09:54:22

Certainly more useful than what we started out with, isn’t it? Paying attention to little details like that — having empathy for your userbase — allows creating what I call “intuitive” or “self-documenting” software.

Brace yourselves, 0.91 is coming

Oh, did I mention that Pitivi actually renders now? Well yes, that’s because we (especially Mathieu) have been doing a massive amount of work all across the GStreamer, GES and Pitivi stack to make this happen.

We have been incredibly busy in the past few months, pushing really hard to finally get a release out of the door by the end of the summer. In recent times, we have committed around 250 changes in Pitivi, 180 in GES, and I don’t know how many hundreds in GStreamer. My spare time was spent swimming through emails, having many technical discussions on IRC and dealing with hundreds of bug reports/issues/TODOs.

Working hard at GUADEC

With a few issues remaining (mainly related to scaling in videomixer), we’re now at the point where stuff… works. Audio mixing and video compositing are back (and keyframable), effect properties are keyframable, deadlocks and crashers are pretty much gone (as far as we can tell), etc. I’m starting to run out of bugs. Amazing!

We can finally make an alpha release in the near future.

How you can help with 0.91

0.91 will be our first alpha release based on GES. It’s a very exciting release with many improvements, bug fixes and new shiny things. You can help by:

  • Trying out the development version right now. Come have a chat and let us know how well it works for you!
  • Documenting. The user manual is quite outdated right now (it is still based on the 0.15 featureset). We need writers!
  • Helping me prepare the huge release notes. If you’re crazy enough.
  • Creating simple and lightweight test projects for Mathieu
  • Helping with packaging efforts (ensuring this can be packaged nicely, fixing build/setup issues etc.). Two issues you can tackle immediately are:
  • Revising the existing bug reports and doing a big triage cleanup
  • Hacking! As usual, we’ll be happy to help you get started.
  • Spreading the news

August 28, 2013

View Side-by-Side Stereoscopic Videos with GStreamer and Oculus Rift

Screenshot from 2013-08-28 16:36:17

GStreamer can do a lot. Most importantly, it has the exact functionality I was looking for when I wanted to play a stereoscopic video on the Oculus Rift: decoding a video stream and applying a GLSL fragment shader to it.

Available Players

I found a few solutions that try to achieve that goal, but they were very unsatisfactory. Mostly they failed to decode the video stream or didn’t start for other reasons. They are not maintained that well, since they are recent one-man projects with binary-only releases posted on forums. And worst of all, they only support Windows.

Surprisingly, I got the best results with OculusOverlay and VLC Player, which transforms a hardcoded part of your desktop in a very hacky way with XNA. It also works with YouTube.

VideoPal is a player written in Java and using JOGL. In theory it could work in Linux but:

Exception in thread “main” java.lang.UnsatisfiedLinkError: Could not load SWT library. Reasons:
no swt-win32-3740 in java.library.path

Yeah… no time for JAR reverse engineering, and there is no link to the source. I was able to run it on Windows, but it couldn’t open an H264 video.

There is also OculusPlayer, which uses libvlc but does not release its source. The idea is good, but it didn’t work.

VR Player is a GPLv2-licensed player written in C#. It also couldn’t decode the stream.

Another player is VR Cinema 3D, which does not play stereo video but simulates a virtual cinema showing a 2D film. Funny idea.

Get some Stereo Videos

You can search for stereoscopic videos on YouTube with the 3D search filter. There are tons of stereoscopic videos, like this video of piranhas.

Download the YouTube video with a YouTube downloader of your choice that supports 3D videos, for example PwnYouTube.

For convenient usage in the terminal you should rename the file to something short and without spaces.

Using GStreamer

The minimal GStreamer pipeline for playing the video stream of an mp4 file (QuickTime / H.264 / AAC) looks like this:

$ gst-launch-1.0 filesrc location=piranhas.mp4 ! qtdemux ! avdec_h264 ! autovideosink

It contains the GStreamer elements for file source, QuickTime demuxer, H264 decoder and the automatic video sink.

If you want more information on the elements, try gst-inspect:

$ gst-inspect-1.0 qtdemux

If you want audio, you need to name the demuxer and add an audio queue with a decoder and an audio sink.

$ gst-launch-1.0 filesrc location=piranhas.mp4 ! qtdemux name=dmux ! avdec_h264 ! autovideosink dmux. ! queue ! faad ! autoaudiosink

Let’s add some Oculus Rift distortion now. We will use a GLSL fragment shader and the glshader element from the gst-plugins-gl package for that. Since the GStreamer GL plugins are not released yet, you need to build them yourself. You could use my Arch Linux AUR package or cerbero, the GStreamer SDK build system. Here is a tutorial on how to build GStreamer with cerbero.

In the GLSL shader you can change some parameters like video resolution, eye distance, scale and kappa. This could also be done with uniforms and set by a GUI.

The final GStreamer pipeline looks like this. Since we are using a GL plugin, we need to use the glimagesink.

TL;DR

$ gst-launch-1.0 filesrc location=piranhas.mp4 ! qtdemux name=dmux ! avdec_h264 ! glshader location=distortion.frag ! glimagesink dmux. ! queue ! faad ! autoaudiosink

Seeking and full screen are features that could be achieved in a GStreamer Python application.
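
Here is a minimal sketch of that (assuming GStreamer 1.x with PyGObject installed, and reusing the piranhas.mp4 and distortion.frag files from above); the pipeline string is the one from this post, and seeking becomes a single call:

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)
    pipeline = Gst.parse_launch(
        "filesrc location=piranhas.mp4 ! qtdemux name=dmux "
        "! avdec_h264 ! glshader location=distortion.frag ! glimagesink "
        "dmux. ! queue ! faad ! autoaudiosink")

    pipeline.set_state(Gst.State.PLAYING)
    pipeline.get_state(Gst.CLOCK_TIME_NONE)  # wait for preroll before seeking

    # Jump 30 seconds into the video (flush so playback resumes immediately).
    pipeline.seek_simple(Gst.Format.TIME,
                         Gst.SeekFlags.FLUSH | Gst.SeekFlags.KEY_UNIT,
                         30 * Gst.SECOND)

    # Block until end-of-stream or an error, then clean up.
    bus = pipeline.get_bus()
    bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                           Gst.MessageType.EOS | Gst.MessageType.ERROR)
    pipeline.set_state(Gst.State.NULL)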


July 26, 2013

Prague, Brno, London, GNOME + Pitivi

Guess where I am going?

2013-07-25--22.30.51

That moment when you think “jeez, I wish I could bring less equipment with me for once”…

To the end of the world and back

To that event “where nobody goes anymore”. I’m not entirely sure what’s in the air this year. I’m not talking about what’s been going on in the industry and the public perception of GNOME in various circles; that might not be the whole picture. There may also be something else I’d like to assess while I’m there. While I deeply disagree with the use of sensationalistic and “demotivational” titles like that of Lionel’s last blog post, some of those concerns are quite valid and important to our sustainability. I, for one, am looking forward to participating in the ecosystem BoF.

Taking it easy. Maybe.

  • Before GUADEC, I will spend a few days visiting Prague with friends. I’m pretty bad at this “taking vacations” thing however; my obsessive self might spend time trying to fix some Pitivi issues instead of being a good tourist.
  • I will not be working while at GUADEC. I will be devoting my mind to attending and giving (two!) presentations, interacting with new and old community members and having fun (might even try playing futbol this year, but I fear Bastien will kick my arse for all those years of pestering him with bug reports).
    • I might have a twisted definition of “fun” though: I have 30+ items in GTG tagged “@guadec”, which means I will be chasing down people all over the place. I’m looking at you, Allan, Bastien, Karen, Cosimo, Benjamin, Zeeshan, Jon, Lionel, Jakub, Sebastian, Tim, Edward,… gosh, the list is endless. I’m forgetting a ton of people. I also have to catch up with Igalians, Lanedians, Pitivites… the fact that I didn’t mention your name doesn’t imply that you’re safe.
  • Unfortunately, special circumstances will call me back to work in the last few days of GUADEC. I will be in London on the 8th and 9th.

If you want to catch me in Prague or London, let me know.

2013-07-25--23.13.44

All the equipment shown previously fits in these two carry-on bags. Nekohayorganization at its finest.

Pitivi happenings

  • Pitivi hackfest @ GUADEC 2013. Be there.
  • I think I’m at the point where I really need someone to succeed me in the role of writing the Pitivi user guide and release notes for the upcoming release. Please get in touch if you love writing and are interested in helping out. If you’re at GUADEC, we can even get started together!
  • Interesting results for the 2013 Pitivi user survey, pretty much confirming what I’ve been thinking for many years. Hopefully I’ll touch upon that in my talk.
  • For those interested in vastly improved playback performance with super-high-definition footage: take a look at our spec for proxy editing and let us know what you think. Anton, one of our GSoC students, is working on laying the foundations for that.
    • I haven’t had time to blog about the big question around video codecs, but I’ll get to that eventually. Grab me at GUADEC to discuss this, if the feature matters to you or if space-performance tradeoffs are a problem that you find interesting to think about.
  • Brace yourselves, a new website is coming. It’s friggin’ awesome. Not releasing it yet though, so you can relax for now.
  • Mathieu Duponchelle made a small script to do special video compositing of a dance clip… in ten minutes! That’s how powerful GES is: it’s not just about making full-fledged video editing apps, it’s also a powerful scripting and automation engine. See his explanations (with source code!) here.

June 25, 2013

2013 open source video editor user survey

What was initially planned as a one-question referendum for Pitivi users (how critical is it for us to have perfect xptv import in the upcoming release?) became a full-fledged survey to give us a clearer picture of what users care about the most these days. If you’re a fan of Free Software and video editing, please take a few seconds to fill out this survey. And please share it with everyone you know who is interested in Free and Open-Source video editing. Thanks!

Version française: la prochaine version de Pitivi approche rapidement. Suite à une discussion concernant nos priorités à court terme afin de pouvoir sortir une nouvelle version au cours de l’été (avec un peu de chance), nous avons concocté un court sondage sur votre utilisation des logiciels de montage vidéo libres. S’il-vous-plaît, veuillez prendre quelques secondes pour répondre à ce délicieux questionnaire, et n’hésitez pas à en parler à tous ceux autour de vous qui s’intéressent à l’édition vidéo libre!

June 08, 2013

Status update — new Pitivi timeline, GSoC projects, etc

Dear shareholders fans, here is the quarterly report from the frontlines of Pitivi, your favorite futuretrocyberpunk video editor.

2013-06-08

A typical day in my life as of late

I will cover the following from a very high-level view (I’ll have to make separate blog posts to cover them in detail, there’s too much to say):

  • The state of our multimedia stack
  • Our new timeline canvas and how you can help
  • This year’s accepted Summer of Code projects
  • Upcoming GUADEC presentation

Also, a small announcement: for those who also want some shorter, less formal status updates and occasional feedback probes, you can look at the new PiTiVi G+ page.

Clutter timeline canvas

In preparation for the Summer of Code, but mostly just to help us and demonstrate how much of a badass he is, Mathieu Duponchelle killed our goocanvas-based timeline and redid the whole thing with Clutter. In two weeks. This is what it looks like at the moment:

The state of our multimedia stack

We fixed some initial bugs in GNonLin in Milan. Since then, further investigation revealed that most of the issues we are encountering are actually generic bugs in GStreamer. Mathieu is now working full time on fixing bugs everywhere in the stack. We hope to have something showable (beta) for GUADEC and a release this fall — as you can imagine however, I’m unable to make solid promises at this point.

There are still many things that we need to rearchitect in GNonLin, but that’s a story for another blog post.

We recently started using GitHub’s bug tracking tool to keep a more easily manageable list of issues we have to deal with in the development version. Please note:

  • This is not a replacement for GNOME Bugzilla. It is meant as a temporary measure for our extended development version, for stuff that is too small, fluid or uncertain to be filed as proper bug reports. Once we release, we’ll move remaining issues to Bugzilla. Upstream issues in GStreamer, once properly identified, are always filed in GStreamer’s (GNOME) Bugzilla.
  • We welcome your help in fixing or investigating these issues. There’s also an “easy” tag for those of you looking for bitesized stuff.

Stuff you can fix in Pitivi, right now.

If working on GStreamer and GES is not your cup of tea, we still have a ton of fun little projects for you to do on Pitivi itself, including:

  • A bunch of little nitpicks/papercuts for our Clutter timeline
  • During the port to Clutter, Daniel Thul helped Mathieu by rearchitecting and fixing the timeline clip thumbnailer module. It works, but there are some remaining, significant performance issues. This is a great project if you’re looking for finite, concrete, self-contained work to improve performance with a highly visible impact on the UI. You would get a ton of love points for this.
  • We have a title editor UI, but it needs some love. You don’t want us to ship with an unrefined UI, do you?
  • UI tests!

This Summer of Code’s projects

We have four students working on Pitivi this summer, thanks to GNOME kindly offering us an extra slot to be able to achieve our mission. Here’s our ambitious list of project goals:

  • General bug fixing in the entire GStreamer + GES + Pitivi stack, to be able to make a Pitivi release.
  • Motion ramping (keyframable fast/slow motion, allowing the speed of clips to be changed over time, ideally with smooth frame interpolation)
  • Finish the enablement of our new timeline layers management UI
  • Reimplement audio waveforms. Better, faster, stronger.
  • Video compositing
  • Proxy editing. I’m currently writing a design and API requirements brainstorming page to help plan this feature. I will share it in my next blog post, where I’ll present some of the dilemmas I’m encountering (particularly around codecs).

Presenting at GUADEC

Come to GUADEC and attend my presentation at the beginning of August. Plans are a bit fuzzy for the time being, but you can expect my typical award-winning presentation style. Also, as usual, we’ll be having a Pitivi hackfest there. Czech it out.

Fun with videomixer

Introduction

When you've spent the whole week painstakingly fixing bugs, coding something just for fun is a welcome breath of fresh air. These last few weeks, one of my areas of work has been the GStreamer "videomixer" element. It needed some love, and still does, but I've been able to fix some of the issues we had. When we first ported gstreamer-editing-services and gnonlin to GStreamer 1.0, even the most basic editing became impossible. That was quite frustrating to say the least, and being able to edit once again feels extremely good!

One of the great things about extracting Pitivi's editing code into GES is that you can now write fancy scripts to do automated editing, and with a little luck you won't encounter a bug on your way. At the end of this article, you will find a video showing an example result.

There haven't been many tutorials about using GES; the only ways to learn it are to look at the examples in the git repo or to look directly at Pitivi's source code. With this blogpost I'm going to try to present that cool library while coding something fun. The idea for the script came from this video: http://vimeo.com/35770492, linked to me by Nicolas Dufresne. We won't be able to reproduce the most advanced bits of that video, as it also seems to be content-aware at some points, but we will make a fun script nevertheless!

Sounds sweet, where's the code ?

Looks fine, explain it now !

I'll pick out the meaningful bits, assuming you know Python well enough. If not, this is easily translatable to C or any other language that can take advantage of GObject introspection's dynamic bindings.

First, let’s look at the main.


    timeline = GES.Timeline.new_audio_video()

This convenience function will create a timeline with an audio and a video track for us.


    asset = GES.UriClipAsset.request_sync(sys.argv[1])
    audio_asset = GES.UriClipAsset.request_sync(sys.argv[3])

This is part of the new API. Thanks to it, GES will only discover the file once ("discovering" meaning learning what streams the media contains, how long it lasts and other info). Previously, we would discover the file each time we created an object from it, which was inefficient. request_sync is not what you would use in a GUI application; there you would want request_async, then take action in a callback.
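
A hedged sketch of that asynchronous variant (the URI and the callback body are illustrative; it relies on GES.UriClipAsset.new and GES.Asset.request_finish, and assumes GES.init() was already called as in the script above):

    from gi.repository import GES

    def discovery_done(source, result, user_data):
        # Illustrative callback: a real GUI would update its media library
        # here instead of blocking the main loop during discovery.
        try:
            asset = GES.Asset.request_finish(result)
        except Exception as error:
            print("Could not load the asset:", error)
            return
        print("Discovered", asset.get_id(), "duration:", asset.get_duration())

    # Kick off discovery without blocking; the callback fires from the main loop.
    GES.UriClipAsset.new("file:///path/to/some/clip.webm", None, discovery_done, None)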

Now, let’s look at createLayers, which is where the magic happens.


    layer = timeline.append_layer()

A timeline is a stack of layers with ascending “priorities”. Thanks to these layers, we can for example decide whether a transition has to be created between two track objects, or, if two clips both have an alpha of 1.0, which one will be the “topmost” one.


    clip = layer.add_asset(asset, i * Gst.SECOND * 0.3, 0, asset.get_duration(), GES.TrackType.UNKNOWN)

This code is very interesting. We are basically asking GES to: create a clip based on the asset we discovered earlier, set its start at i * 0.3 seconds, its inpoint (the place in the file from which it will be played) to 0, and its duration to the original duration of the file. The last argument means: for every kind of stream found in the asset, add it if the timeline contains an appropriate track (here, audio and video). We could have decided to only keep the VIDEO streams, but this was a good occasion to show that option.

With that logic, we can now see that the resulting timeline is going to be a sort of “canon”: one video mixed with n earlier versions of itself.


    for source in clip.get_children():
        if source.props.track_type == GES.TrackType.VIDEO:
            break
 
    source.set_child_property("alpha", alpha)
    alpha = mylog(alpha)

Here, I browse the children of the clip, and when I find the video source, I set its alpha and then update the alpha value for the next layer. The log here makes it so each layer ends up with the same perceived opacity.
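
The mylog() helper is not shown in this excerpt. Purely as an illustration of the underlying idea, and not the formula the script actually uses: with straight "over" compositing, you can give every layer an equal contribution by deriving its alpha from its depth in the stack instead of updating a running value:

    def equal_contribution_alpha(depth_from_bottom):
        # Hypothetical helper, not the mylog() from the original script: with
        # "over" blending, giving the k-th layer (counting from the bottom,
        # which stays fully opaque at k = 1) an alpha of 1/k makes every
        # layer contribute the same amount to the final image.
        return 1.0 / depth_from_bottom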

Afterwards, we create a pipeline to play our timeline and, if needed, we set it to render mode; that code is quite self-explanatory.

We now just have to wait until the EOS, or until the user interrupts the program.

I use Gtk.main() out of pure laziness; a GLib mainloop would work as well.

What does it look like, then?

I really hope this example made you want to learn more about GES. It’s a great library that lets you do awesome stuff in very few lines of code; we’re in active development and the best is still to come!

Here is the promised video:

Note that the code was only tried with MP4 files containing H264; feel free to report any issues with other codecs on my GitHub!

April 28, 2013

No more stuck rendering dialogs!

If you’ve tried rendering projects with Pitivi 0.15 or older, chances are you’ve encountered one of these dreadful situations where the rendering process would get stuck:

  • …at the beginning, with the progressbar saying it’s currently “estimating” — which was a lie that I corrected a little while ago.
  • …at the very end. Extra trolling points for having made you waste a huge amount of time to get a 0 bytes output file (if we’re lucky, that bug is gone).
  • …somewhere in the middle, because caps negotiation failed, some elements were not linked, GStreamer thinks you ran out of available RAM, or because you’ve been very naughty.

In any such case, the rendering dialog just sat there and smiled at you, as if everything was fine in the world. Well, no more:

slap

Pitivi is going to give you the honest, brutal truth.

This is the result of a horrifying thought suddenly springing to my mind yesterday night: “Hey, what if the code was not even checking for errors in the pipeline when rendering?”

Indeed, it wasn’t. How silly is that! I have thus prepared a simple fix to improve the situation: catch pipeline error messages, abort the render (you really don’t want to ignore a GStreamer error) and display an error dialog. This will at least let people know that something is wrong and that they should start writing patches to GStreamer instead of accusing Pitivi of hurting kittens. You’d be surprised how many people can sit for hours in front of that stuck progressbar.
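
The fix boils down to watching the pipeline’s bus. A minimal sketch of the idea (not the actual Pitivi patch; show_error_dialog is a hypothetical UI helper):

    from gi.repository import Gst

    def watch_render_errors(pipeline):
        # Listen for error messages posted on the bus while rendering.
        bus = pipeline.get_bus()
        bus.add_signal_watch()
        bus.connect("message::error", on_render_error, pipeline)

    def on_render_error(bus, message, pipeline):
        error, debug_details = message.parse_error()
        pipeline.set_state(Gst.State.NULL)               # abort the render
        show_error_dialog(error.message, debug_details)  # hypothetical UI helper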

Before I commit the fix however, I would need your feedback on the usability of that dialog:

2013-04-27

This is not terribly pretty, but it’s better than nothing. A few things to consider:

  • In that screenshot, all the text except the window title (“Error While Rendering Project”) comes from the GStreamer pipeline error message (the error and the error’s details). I know that the error details look ugly, but I suspect they wouldn’t be useful to GStreamer/Pitivi developers if we don’t have them “verbatim”. Maybe we could try to mangle the error details string (split using “:” and take only the first and last two items of the resulting list? see the sketch after this list) and encourage the user to run from a terminal to get better debug info, but that feels a bit backwards.
  • We should probably have some less-scary text to accompany the actual error details. Something that guides the user towards an action that can be done to address the problem (ex: reporting a bug). Maybe it can be placed between the header and the details (above the “qtdemux.c” line)? The problem is finding a universal text to be used.
  • If we consider the route where we suggest the user to report bugs, where should we point to? The Pitivi bugs investigation page? Pitivi bugzilla? GStreamer bugzilla? The distro’s bug tracker?
  • Let’s keep this simple, both visually and in terms of code/implementation.
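
To illustrate the mangling idea from the first bullet, here is a hypothetical helper (shorten_error_details is made up, purely to show the split-on-“:” approach):

def shorten_error_details(details):
    # Keep the first item (usually the source file) and the last two items
    # (the actual error text) of the colon-separated details string.
    parts = [part.strip() for part in details.split(":")]
    if len(parts) <= 3:
        return details
    return ": ".join([parts[0]] + parts[-2:])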

What do you think? Is the current approach sufficient or is there something better that we can easily do?

Update: here’s an alternative dialog with some more comprehensible text, where the actual error (as seen in the previous screenshot) gets shoved under the rug by putting it in a GTK expander widget (clicking “Details” reveals the error’s details as above):

2013-04-29

April 11, 2013

PiTiVi and the 2013 Summer of Code

This year will be a little bit different. In a rather unexpected turn of events, PiTiVi has been accepted as a mentoring organization but GStreamer has not. Fear not however, as GStreamer has no better ally than the PiTiVi team when it comes to pushing our favorite multimedia framework to its limits and beyond. As you may know, PiTiVi makes heavy use of the GStreamer Editing Services library and, in turn, GNonLin and the rest of GStreamer. With the switch to GES and the irrevocable shedding of our old skin, any backend work done for the sake of the PiTiVi project ends up benefitting GStreamer and other projects.

One way to look at things is that there is no such thing as a PiTiVi backend anymore. PiTiVi is a frontend that pushes the latest and greatest open-source multimedia technologies forward.

With the GES port nearing completion, this is the first time that we can truly say there are three interrelated components to contribute to. This new reality sets the tone for a different way to look at PiTiVi project ideas this year: you can finally…

Choose your character class

pitivi hacker style

Are you a ninja? A spellcaster? A tank? While most projects are a balance of backend and UI work, we know that some people prefer to lean more to one side or another of the continuum — that’s why I created a new visual notation for our ideas page this year. Instead of an “easy/hard” system (which would be inaccurate and misleading, as perceived difficulty is measured differently for everybody), we simply provided a visual indication of the expected involvement in the various components for a given project idea (for example, “PiTiVi: ◼◼◻◻◻ GES: ◼◼◼◼◼  GStreamer: ◼◼◼◻◻”). So if you were looking for something closer to a hardcore GStreamer GSoC project, you can spot ideas that might interest you here.

Not a programmer? You can help raise awareness about this. Maybe you know a brilliant hacker friend/relative or a top-notch computer science student waiting for a chance to make a big difference in the world. Tell that person about how cool and welcoming PiTiVi is and how getting involved is the best way to advance free, powerful and intuitive video editing for everyone!

February 17, 2013

Setting up Supybot with the Bugzilla plugin

Supybot is an IRC bot, an application which can connect to a specific IRC channel and do stuff there. For example, with the Bugzilla plugin, Supybot can report on the channel whenever a new bug is filed, or if somebody mentions "bug 1234" in the conversation, it will print details about bug 1234.

Install supybot

First, you have to install Supybot. If you are using Arch Linux, get supybot from the AUR; otherwise, read the INSTALL file.

Create a supybot user on your system, or on a virtual machine where you want to run the bot.

groupadd --system supybot
useradd -m --system -g supybot supybot

Install the supybot Bugzilla plugin

If you want your bot to announce when new bugs are created, you need to set up an email account, register it on Bugzilla and configure it to receive mails when bugs are created for your project. For this, go to Bugzilla -> Preferences -> Email Preferences and read the User Watching section!

Once the email account is receiving emails from Bugzilla, set up getmail on your machine so it downloads the messages from that account to /var/mail/supybot.

touch /var/mail/supybot
chown supybot.supybot /var/mail/supybot
chmod g+w /var/mail/supybot
chmod o-rwx /var/mail/supybot

Set the getmail job to download the messages using POP and to delete them from the server after retrieving them.

[destination]
type = Mboxrd
path = /var/mail/supybot

[options]
verbose = 2
message_log = ~/.getmail/gmail.log
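
Note that the file also needs a [retriever] section telling getmail where to fetch the mail from, and a "delete = true" line under [options] if you want the messages removed from the server after retrieval. A minimal sketch of such a section, assuming a POP3-over-SSL account (the server, username and password are placeholders):

[retriever]
type = SimplePOP3SSLRetriever
server = pop.example.com
username = supybot-bugmail@example.com
password = your-password-here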

Now add yourself to the supybot group (so your getmail cronjob can write the Bugzilla emails to /var/mail/supybot). For this to take effect, the best option is to log out and log back in!
usermod -a -G supybot YOUR_USERNAME

Add a crontab entry to your own account (not supybot's) to fetch the Bugzilla mail every minute.

* * * * * getmail -d -q -r /path/to/your/getmail/config/file

Setup your bot

Next you have to create a config, and here it gets even more tricky. Unfortunately, most of the Supybot documentation is gone: the supybot.com website is dead and only redirects to the project's SourceForge page, where Supybot can be downloaded. Create the config using supybot-wizard:

mkdir /home/supybot/pitivi
cd /home/supybot/pitivi
# I suggest acting as an "advanced user" when the wizard asks you
# what kind of user you are!
supybot-wizard
[...]
# Now look at the conf file it created; feel free to rename it to bot.conf. ;)
ls -l *.conf


Check out the Supybot Bugzilla plugin into a plugins directory.

cd /home/supybot/pitivi
mkdir plugins
cd plugins
bzr co bzr://bzr.everythingsolved.com/supybot/Bugzilla


Copy the Bugzilla section from this sample config file to your bot.conf file, or copy it from pitivibot's bot.conf below.

###
# Determines the bot's default nick.
#
# Default value: supybot
###
supybot.nick: pitivibot

###
# Determines what alternative nicks will be used if the primary nick
# (supybot.nick) isn't available. A %s in this nick is replaced by the
# value of supybot.nick when used. If no alternates are given, or if all
# are used, the supybot.nick will be perturbed appropriately until an
# unused nick is found.
#
# Default value: %s` %s_
###
supybot.nick.alternates: %s` %s_

###
# Determines the bot's ident string, if the server doesn't provide one
# by default.
#
# Default value: supybot
###
supybot.ident: supybot

###
# Determines the user the bot sends to the server. A standard user using
# the current version of the bot will be generated if this is left
# empty.
#
# Default value:
###
supybot.user: pitivibot

###
# Determines what networks the bot will connect to.
#
# Default value:
###
supybot.networks: freenode

###
# Determines what password will be used on freenode. Yes, we know that
# technically passwords are server-specific and not network-specific,
# but this is the best we can do right now.
#
# Default value:
###
supybot.networks.freenode.password:

###
# Determines what servers the bot will connect to for freenode. Each
# will be tried in order, wrapping back to the first when the cycle is
# completed.
#
# Default value:
###
supybot.networks.freenode.servers: chat.eu.freenode.net

###
# Determines what channels the bot will join only on freenode.
#
# Default value:
###
supybot.networks.freenode.channels: #pitivi

###
# Determines what key (if any) will be used to join the channel.
#
# Default value:
###
supybot.networks.freenode.channels.key:

###
# Determines whether the bot will attempt to connect with SSL sockets to
# freenode.
#
# Default value: False
###
supybot.networks.freenode.ssl: False

###
# Determines how timestamps printed for human reading should be
# formatted. Refer to the Python documentation for the time module to
# see valid formatting characters for time formats.
#
# Default value: %I:%M %p, %B %d, %Y
###
supybot.reply.format.time: %H:%M %Y-%m-%d %Z

###
# Determines whether elapsed times will be given as "1 day, 2 hours, 3
# minutes, and 15 seconds" or as "1d 2h 3m 15s".
#
# Default value: False
###
supybot.reply.format.time.elapsed.short: True

###
# Determines the absolute maximum length of the bot's reply -- no reply
# will be passed through the bot with a length greater than this.
#
# Default value: 131072
###
supybot.reply.maximumLength: 131072

###
# Determines whether the bot will break up long messages into chunks and
# allow users to use the 'more' command to get the remaining chunks.
#
# Default value: True
###
supybot.reply.mores: True

###
# Determines what the maximum number of chunks (for use with the 'more'
# command) will be.
#
# Default value: 50
###
supybot.reply.mores.maximum: 50

###
# Determines how long individual chunks will be. If set to 0, uses our
# super-tweaked, get-the-most-out-of-an-individual-message default.
#
# Default value: 0
###
supybot.reply.mores.length: 0

###
# Determines how many mores will be sent instantly (i.e., without the
# use of the more command, immediately when they are formed). Defaults
# to 1, which means that a more command will be required for all but the
# first chunk.
#
# Default value: 1
###
supybot.reply.mores.instant: 1

###
# Determines whether the bot will send multi-message replies in a single
# message or in multiple messages. For safety purposes (so the bot is
# less likely to flood) it will normally send everything in a single
# message, using mores if necessary.
#
# Default value: True
###
supybot.reply.oneToOne: True

###
# Determines whether the bot will reply with an error message when it is
# addressed but not given a valid command. If this value is False, the
# bot will remain silent, as long as no other plugins override the
# normal behavior.
#
# Default value: True
###
supybot.reply.whenNotCommand: False

###
# Determines whether error messages that result from bugs in the bot
# will show a detailed error message (the uncaught exception) or a
# generic error message.
#
# Default value: False
###
supybot.reply.error.detailed: False

###
# Determines whether the bot will send error messages to users in
# private. You might want to do this in order to keep channel traffic to
# minimum. This can be used in combination with
# supybot.reply.error.withNotice.
#
# Default value: False
###
supybot.reply.error.inPrivate: False

###
# Determines whether the bot will send error messages to users via
# NOTICE instead of PRIVMSG. You might want to do this so users can
# ignore NOTICEs from the bot and not have to see error messages; or you
# might want to use it in combination with supybot.reply.errorInPrivate
# so private errors don't open a query window in most IRC clients.
#
# Default value: False
###
supybot.reply.error.withNotice: False

###
# Determines whether the bot will send an error message to users who
# attempt to call a command for which they do not have the necessary
# capability. You may wish to make this True if you don't want users to
# understand the underlying security system preventing them from running
# certain commands.
#
# Default value: False
###
supybot.reply.error.noCapability: False

###
# Determines whether the bot will reply privately when replying in a
# channel, rather than replying to the whole channel.
#
# Default value: False
###
supybot.reply.inPrivate: False

###
# Determines whether the bot will reply with a notice when replying in a
# channel, rather than replying with a privmsg as normal.
#
# Default value: False
###
supybot.reply.withNotice: False

###
# Determines whether the bot will reply with a notice when it is sending
# a private message, in order not to open a /query window in clients.
# This can be overridden by individual users via the user configuration
# variable reply.withNoticeWhenPrivate.
#
# Default value: False
###
supybot.reply.withNoticeWhenPrivate: False

###
# Determines whether the bot will always prefix the user's nick to its
# reply to that user's command.
#
# Default value: True
###
supybot.reply.withNickPrefix: False

###
# Determines whether the bot should attempt to reply to all messages
# even if they don't address it (either via its nick or a prefix
# character). If you set this to True, you almost certainly want to set
# supybot.reply.whenNotCommand to False.
#
# Default value: False
###
supybot.reply.whenNotAddressed: False

###
# Determines whether the bot will allow you to send channel-related
# commands outside of that channel. Sometimes people find it confusing
# if a channel-related command (like Filter.outfilter) changes the
# behavior of the channel but was sent outside the channel itself.
#
# Default value: False
###
supybot.reply.requireChannelCommandsToBeSentInChannel: False

###
# Supybot normally replies with the full help whenever a user misuses a
# command. If this value is set to True, the bot will only reply with
# the syntax of the command (the first line of the help) rather than the
# full help.
#
# Default value: False
###
supybot.reply.showSimpleSyntax: False

###
# Determines what prefix characters the bot will reply to. A prefix
# character is a single character that the bot will use to determine
# what messages are addressed to it; when there are no prefix characters
# set, it just uses its nick. Each character in this string is
# interpreted individually; you can have multiple prefix chars
# simultaneously, and if any one of them is used as a prefix the bot
# will assume it is being addressed.
#
# Default value:
###
supybot.reply.whenAddressedBy.chars:

###
# Determines what strings the bot will reply to when they are at the
# beginning of the message. Whereas prefix.chars can only be one
# character (although there can be many of them), this variable is a
# space-separated list of strings, so you can set something like '@@ ??'
# and the bot will reply when a message is prefixed by either @@ or ??.
#
# Default value:
###
supybot.reply.whenAddressedBy.strings:

###
# Determines whether the bot will reply when people address it by its
# nick, rather than with a prefix character.
#
# Default value: True
###
supybot.reply.whenAddressedBy.nick: False

###
# Determines whether the bot will reply when people address it by its
# nick at the end of the message, rather than at the beginning.
#
# Default value: False
###
supybot.reply.whenAddressedBy.nick.atEnd: False

###
# Determines what extra nicks the bot will always respond to when
# addressed by, even if its current nick is something else.
#
# Default value:
###
supybot.reply.whenAddressedBy.nicks:

###
# Determines whether the bot will unidentify someone when that person
# changes his or her nick. Setting this to True will cause the bot to
# track such changes. It defaults to False for a little greater
# security.
#
# Default value: False
###
supybot.followIdentificationThroughNickChanges: False

###
# Determines whether the bot will always join a channel when it's
# invited. If this value is False, the bot will only join a channel if
# the user inviting it has the 'admin' capability (or if it's explicitly
# told to join the channel using the Admin.join command)
#
# Default value: False
###
supybot.alwaysJoinOnInvite: False

###
# Determines what message the bot replies with when a command succeeded.
# If this configuration variable is empty, no success message will be
# sent.
###
supybot.replies.success: The operation succeeded.

###
# Determines what error message the bot gives when it wants to be
# ambiguous.
###
supybot.replies.error: An error has occurred and has been logged. Please\
contact this bot's administrator for more\
information.

###
# Determines what message the bot replies with when someone tries to use
# a command that requires being identified or having a password and
# neither credential is correct.
###
supybot.replies.incorrectAuthentication: Your hostmask doesn't match or your\
password is wrong.

###
# Determines what error message the bot replies with when someone tries
# to accessing some information on a user the bot doesn't know about.
###
supybot.replies.noUser: I can't find %s in my user database. If you didn't\
give a user name, then I might not know what your\
user is, and you'll need to identify before this\
command might work.

###
# Determines what error message the bot replies with when someone tries
# to do something that requires them to be registered but they're not
# currently recognized.
###
supybot.replies.notRegistered: You must be registered to use this command.\
If you are already registered, you must\
either identify (using the identify command)\
or add a hostmask matching your current\
hostmask (using the "hostmask add" command).

###
# Determines what error message is given when the bot is telling someone
# they aren't cool enough to use the command they tried to use.
###
supybot.replies.noCapability: You don't have the %s capability. If you think\
that you should have this capability, be sure\
that you are identified before trying again.\
The 'whoami' command can tell you if you're\
identified.

###
# Determines what generic error message is given when the bot is telling
# someone that they aren't cool enough to use the command they tried to
# use, and the author of the code calling errorNoCapability didn't
# provide an explicit capability for whatever reason.
###
supybot.replies.genericNoCapability: You're missing some capability you\
need. This could be because you\
actually possess the anti-capability\
for the capability that's required of\
you, or because the channel provides\
that anti-capability by default, or\
because the global capabilities include\
that anti-capability. Or, it could be\
because the channel or\
supybot.capabilities.default is set to\
False, meaning that no commands are\
allowed unless explicitly in your\
capabilities. Either way, you can't do\
what you want to do.

###
# Determines what error messages the bot sends to people who try to do
# things in a channel that really should be done in private.
###
supybot.replies.requiresPrivacy: That operation cannot be done in a channel.

###
# Determines what message the bot sends when it thinks you've
# encountered a bug that the developers don't know about.
###
supybot.replies.possibleBug: This may be a bug. If you think it is, please\
file a bug report at <http://sourceforge.net/tracker/?func=add&group_id=58965&atid=489447>.

###
# A floating point number of seconds to throttle snarfed URLs, in order
# to prevent loops between two bots snarfing the same URLs and having
# the snarfed URL in the output of the snarf message.
#
# Default value: 10.0
###
supybot.snarfThrottle: 10.0

###
# Determines the number of seconds between running the upkeep function
# that flushes (commits) open databases, collects garbage, and records
# some useful statistics at the debugging level.
#
# Default value: 3600
###
supybot.upkeepInterval: 3600

###
# Determines whether the bot will periodically flush data and
# configuration files to disk. Generally, the only time you'll want to
# set this to False is when you want to modify those configuration files
# by hand and don't want the bot to flush its current version over your
# modifications. Do note that if you change this to False inside the
# bot, your changes won't be flushed. To make this change permanent, you
# must edit the registry yourself.
#
# Default value: True
###
supybot.flush: True

###
# Determines what characters are valid for quoting arguments to commands
# in order to prevent them from being tokenized.
#
# Default value: "
###
supybot.commands.quotes: "

###
# Determines whether the bot will allow nested commands, which rule. You
# definitely should keep this on.
#
# Default value: True
###
supybot.commands.nested: True

###
# Determines what the maximum number of nested commands will be; users
# will receive an error if they attempt commands more nested than this.
#
# Default value: 10
###
supybot.commands.nested.maximum: 10

###
# Supybot allows you to specify what brackets are used for your nested
# commands. Valid sets of brackets include [], <>, and {} (). [] has
# strong historical motivation, as well as being the brackets that don't
# require shift. <> or () might be slightly superior because they cannot
# occur in a nick. If this string is empty, nested commands will not be
# allowed in this channel.
#
# Default value: []
###
supybot.commands.nested.brackets: []

###
# Supybot allows nested commands. Enabling this option will allow nested
# commands with a syntax similar to UNIX pipes, for example: 'bot: foo |
# bar'.
#
# Default value: False
###
supybot.commands.nested.pipeSyntax: False

###
# Determines what commands have default plugins set, and which plugins
# are set to be the default for each of those commands.
###
supybot.commands.defaultPlugins.addcapability: Admin
supybot.commands.defaultPlugins.capabilities: User
supybot.commands.defaultPlugins.disable: Owner
supybot.commands.defaultPlugins.enable: Owner
supybot.commands.defaultPlugins.help: Misc
supybot.commands.defaultPlugins.ignore: Admin

###
# Determines what plugins automatically get precedence over all other
# plugins when selecting a default plugin for a command. By default,
# this includes the standard loaded plugins. You probably shouldn't
# change this if you don't know what you're doing; if you do know what
# you're doing, then also know that this set is case-sensitive.
#
# Default value: Plugin Admin Misc User Owner Config Channel
###
supybot.commands.defaultPlugins.importantPlugins: Plugin Admin Misc User Owner Config Channel
supybot.commands.defaultPlugins.list: Misc
supybot.commands.defaultPlugins.reload: Owner
supybot.commands.defaultPlugins.removecapability: Admin
supybot.commands.defaultPlugins.unignore: Admin

###
# Determines what commands are currently disabled. Such commands will
# not appear in command lists, etc. They will appear not even to exist.
#
# Default value:
###
supybot.commands.disabled:

###
# Determines whether the bot will defend itself against command-
# flooding.
#
# Default value: True
###
supybot.abuse.flood.command: True

###
# Determines how many commands users are allowed per minute. If a user
# sends more than this many commands in any 60 second period, he or she
# will be ignored for supybot.abuse.flood.command.punishment seconds.
#
# Default value: 12
###
supybot.abuse.flood.command.maximum: 12

###
# Determines how many seconds the bot will ignore users who flood it
# with commands.
#
# Default value: 300
###
supybot.abuse.flood.command.punishment: 300

###
# Determines whether the bot will defend itself against invalid command-
# flooding.
#
# Default value: True
###
supybot.abuse.flood.command.invalid: True

###
# Determines how many invalid commands users are allowed per minute. If
# a user sends more than this many invalid commands in any 60 second
# period, he or she will be ignored for
# supybot.abuse.flood.command.invalid.punishment seconds. Typically,
# this value is lower than supybot.abuse.flood.command.maximum, since
# it's far less likely (and far more annoying) for users to flood with
# invalid commands than for them to flood with valid commands.
#
# Default value: 5
###
supybot.abuse.flood.command.invalid.maximum: 5

###
# Determines how many seconds the bot will ignore users who flood it
# with invalid commands. Typically, this value is higher than
# supybot.abuse.flood.command.punishment, since it's far less likely
# (and far more annoying) for users to flood with invalid commands than
# for them to flood with valid commands.
#
# Default value: 600
###
supybot.abuse.flood.command.invalid.punishment: 600

###
# Determines the default length of time a driver should block waiting
# for input.
#
# Default value: 1.0
###
supybot.drivers.poll: 1.0

###
# Determines what driver module the bot will use. Socket, a simple
# driver based on timeout sockets, is used by default because it's
# simple and stable. Twisted is very stable and simple, and if you've
# got Twisted installed, is probably your best bet.
#
# Default value: default
###
supybot.drivers.module: default

###
# Determines the maximum time the bot will wait before attempting to
# reconnect to an IRC server. The bot may, of course, reconnect earlier
# if possible.
#
# Default value: 300.0
###
supybot.drivers.maxReconnectWait: 300.0

###
# Determines what directory configuration data is put into.
#
# Default value: conf
###
supybot.directories.conf: /home/supybot/pitivi/conf

###
# Determines what directory data is put into.
#
# Default value: data
###
supybot.directories.data: /home/supybot/pitivi/data

###
# Determines what directory temporary files are put into.
#
# Default value: tmp
###
supybot.directories.data.tmp: /home/supybot/pitivi/data/tmp

###
# Determines what directory backup data is put into.
#
# Default value: backup
###
supybot.directories.backup: /home/supybot/pitivi/backup

###
# Determines what directories the bot will look for plugins in. Accepts
# a comma-separated list of strings. This means that to add another
# directory, you can nest the former value and add a new one. E.g. you
# can say: bot: 'config supybot.directories.plugins [config
# supybot.directories.plugins], newPluginDirectory'.
#
# Default value:
###
supybot.directories.plugins: /home/supybot/pitivi/plugins

###
# Determines what directory the bot will store its logfiles in.
#
# Default value: logs
###
supybot.directories.log: /home/supybot/pitivi/logs

###
# Determines what plugins will be loaded.
#
# Default value:
###
supybot.plugins: Web Admin Misc Bugzilla User Owner Config Channel

###
# Determines whether this plugin is loaded by default.
###
supybot.plugins.Admin: True

###
# Determines whether this plugin is publicly visible.
#
# Default value: True
###
supybot.plugins.Admin.public: True

###
# Determines whether this plugin is loaded by default.
###
supybot.plugins.Bugzilla: True

###
# Determines whether this plugin is publicly visible.
#
# Default value: True
###
supybot.plugins.Bugzilla.public: True

###
# Determines whether the bug snarfer will be enabled, such that any
# Bugzilla URLs and bug ### seen in the channel will have their
# information reported into the channel.
#
# Default value: False
###
supybot.plugins.Bugzilla.bugSnarfer: True

###
# Users often say "bug XXX" several times in a row, in a channel. If
# "bug XXX" has been said in the last (this many) seconds, don't fetch
# its data again. If you change the value of this variable, you must
# reload this plugin for the change to take effect.
#
# Default value: 300
###
supybot.plugins.Bugzilla.bugSnarferTimeout: 300

###
# The fields to list when describing a bug, after the URL.
#
# Default value: bug_severity priority target_milestone assigned_to bug_status short_desc
###
supybot.plugins.Bugzilla.bugFormat: bug_severity priority target_milestone assigned_to bug_status short_desc

###
# The fields to list when describing an attachment after announcing a
# change to that attachment.
#
# Default value: type desc filename
###
supybot.plugins.Bugzilla.attachFormat: type desc filename

###
# How various messages should be formatted in terms of bold, colors,
# etc.
###

###
# When the plugin reports that something has changed on a bug, how
# should that string be formatted?
#
# Default value: teal
###
supybot.plugins.Bugzilla.format.change: teal

###
# When the plugin reports the details of an attachment, how should we
# format that string?
#
# Default value: green
###
supybot.plugins.Bugzilla.format.attachment: green

###
# When the plugin reports the details of a bug, how should we format
# that string?
#
# Default value: red
###
supybot.plugins.Bugzilla.format.bug: red

###
# The number of results to show when using the "query" command.
#
# Default value: 5
###
supybot.plugins.Bugzilla.queryResultLimit: 5

###
# A path to the mbox that we should be watching for bugmail.
#
# Default value:
###
supybot.plugins.Bugzilla.mbox: /var/mail/supybot

###
# How many seconds should we wait between polling the mbox?
#
# Default value: 10
###
supybot.plugins.Bugzilla.mboxPollTimeout: 10

###
# Various messages that can be re-formatted as you wish. If a message
# takes a format string, the available format variables are: product,
# component, bug_id, attach_id, and changer)
###

###
# What the bot will say when somebody adds a new attachment to a bug.
#
# Default value: %(changer)s added attachment %(attach_id)d to bug %(bug_id)d
###
supybot.plugins.Bugzilla.messages.newAttachment: %(changer)s added attachment %(attach_id)d to bug %(bug_id)d

###
# What the bot will say when a new bug is filed.
#
# Default value: New %(product)s bug %(bug_id)d filed by %(changer)s.
###
supybot.plugins.Bugzilla.messages.newBug: New %(product)s bug %(bug_id)d filed by %(changer)s.

###
# How should we describe it when somebody requests a flag without
# specifying a requestee? This should probably start with "from." It can
# also be entirely empty, if you want.
#
# Default value: from the wind
###
supybot.plugins.Bugzilla.messages.noRequestee: from the wind

###
# The various Bugzilla installations that have been created with the
# 'add' command.
#
# Default value:
###
supybot.plugins.Bugzilla.bugzillas: gnome

###
# Determines the URL to this Bugzilla installation. This must be
# identical to the urlbase (or sslbase) parameter used by the
# installation. (The url that shows up in emails.) It must end with a
# forward slash.
#
# Default value:
###
supybot.plugins.Bugzilla.bugzillas.gnome.url: https://bugzilla.gnome.org/

###
# Additional search terms in QuickSearch format, that will be added to
# every search done with "query" against this installation.
#
# Default value:
###
supybot.plugins.Bugzilla.bugzillas.gnome.queryTerms:

###
# Should *all* changes be reported to this channel?
#
# Default value: False
###
supybot.plugins.Bugzilla.bugzillas.gnome.watchedItems.all: False

###
# Whose changes should be reported to this channel?
#
# Default value:
###
supybot.plugins.Bugzilla.bugzillas.gnome.watchedItems.changer:

###
# What components should be reported to this channel?
#
# Default value:
###
supybot.plugins.Bugzilla.bugzillas.gnome.watchedItems.component:

###
# What products should be reported to this channel?
#
# Default value:
###
supybot.plugins.Bugzilla.bugzillas.gnome.watchedItems.product: pitivi

###
# The names of fields, as they appear in bugmail, that should be
# reported to this channel.
#
# Default value: newBug, newAttach, Flags, Attachment Flags, Resolution, Product, Component
###
supybot.plugins.Bugzilla.bugzillas.gnome.reportedChanges: newBug

###
# Some Bugzilla installations have gdb stack traces in comments. If you
# turn this on, the bot will report some basic details of any trace that
# shows up in the comments of a new bug.
#
# Default value: False
###
supybot.plugins.Bugzilla.bugzillas.gnome.traces.report: False

###
# Some functions are useless to report, from a stack trace. This
# contains a list of function names to skip over when reporting traces
# to the channel.
#
# Default value: __kernel_vsyscall raise abort ??
###
supybot.plugins.Bugzilla.bugzillas.gnome.traces.ignoreFunctions: __kernel_vsyscall raise abort ??

###
# How many stack frames should be reported from the crash?
#
# Default value: 5
###
supybot.plugins.Bugzilla.bugzillas.gnome.traces.frameLimit: 5

###
# If commands don't specify what installation to use, then which
# installation should we use?
#
# Default value:
###
supybot.plugins.Bugzilla.defaultBugzilla: gnome

###
# Determines whether this plugin is loaded by default.
###
supybot.plugins.Channel: True

###
# Determines whether this plugin is publicly visible.
#
# Default value: True
###
supybot.plugins.Channel.public: True

###
# Determines whether the bot will always try to rejoin a channel
# whenever it's kicked from the channel.
#
# Default value: True
###
supybot.plugins.Channel.alwaysRejoin: True

###
# Determines whether this plugin is loaded by default.
###
supybot.plugins.Config: True

###
# Determines whether this plugin is publicly visible.
#
# Default value: True
###
supybot.plugins.Config.public: True

###
# Determines whether this plugin is loaded by default.
###
supybot.plugins.Misc: True

###
# Determines whether this plugin is publicly visible.
#
# Default value: True
###
supybot.plugins.Misc.public: True

###
# Determines whether the bot will list private plugins with the list
# command if given the --private switch. If this is disabled, non-owner
# users should be unable to see what private plugins are loaded.
#
# Default value: True
###
supybot.plugins.Misc.listPrivatePlugins: False

###
# Determines the format string for timestamps in the Misc.last command.
# Refer to the Python documentation for the time module to see what
# formats are accepted. If you set this variable to the empty string,
# the timestamp will not be shown.
#
# Default value: [%H:%M:%S]
###
supybot.plugins.Misc.timestampFormat: [%H:%M:%S]

###
# Determines whether or not the timestamp will be included in the output
# of last when it is part of a nested command
#
# Default value: False
###
supybot.plugins.Misc.last.nested.includeTimestamp: False

###
# Determines whether or not the nick will be included in the output of
# last when it is part of a nested command
#
# Default value: False
###
supybot.plugins.Misc.last.nested.includeNick: False

###
# Determines whether this plugin is loaded by default.
###
supybot.plugins.Owner: True

###
# Determines whether this plugin is publicly visible.
#
# Default value: True
###
supybot.plugins.Owner.public: True

###
# Determines what quit message will be used by default. If the quit
# command is called without a quit message, this will be used. If this
# value is empty, the nick of the person giving the quit command will be
# used.
#
# Default value:
###
supybot.plugins.Owner.quitMsg:

###
# Determines whether this plugin is loaded by default.
###
supybot.plugins.User: True

###
# Determines whether this plugin is publicly visible.
#
# Default value: True
###
supybot.plugins.User.public: True

###
# Determines whether this plugin is loaded by default.
###
supybot.plugins.Web: False

###
# Determines whether this plugin is publicly visible.
#
# Default value: True
###
supybot.plugins.Web.public: True

###
# Determines whether the bot will output the HTML title of URLs it sees
# in the channel.
#
# Default value: False
###
supybot.plugins.Web.titleSnarfer: False

###
# Determines what URLs are to be snarfed and stored in the database in
# the channel; URLs matching the regexp given will not be snarfed. Give
# the empty string if you have no URLs that you'd like to exclude from
# being snarfed.
#
# Default value:
###
supybot.plugins.Web.nonSnarfingRegexp:

###
# Determines the maximum number of bytes the bot will download via the
# 'fetch' command in this plugin.
#
# Default value: 0
###
supybot.plugins.Web.fetch.maximum: 0

###
# Determines whether the bot will always load important plugins (Admin,
# Channel, Config, Misc, Owner, and User) regardless of what their
# configured state is. Generally, if these plugins are configured not to
# load, you didn't do it on purpose, and you still want them to load.
# Users who don't want to load these plugins are smart enough to change
# the value of this variable appropriately :)
#
# Default value: True
###
supybot.plugins.alwaysLoadImportant: True

###
# Determines what databases are available for use. If this value is not
# configured (that is, if its value is empty) then sane defaults will be
# provided.
#
# Default value: anydbm cdb flat pickle
###
supybot.databases:

###
# Determines what filename will be used for the users database. This
# file will go into the directory specified by the
# supybot.directories.conf variable.
#
# Default value: users.conf
###
supybot.databases.users.filename: users.conf

###
# Determines how long it takes identification to time out. If the value
# is less than or equal to zero, identification never times out.
#
# Default value: 0
###
supybot.databases.users.timeoutIdentification: 0

###
# Determines whether the bot will allow users to unregister their users.
# This can wreak havoc with already-existing databases, so by default we
# don't allow it. Enable this at your own risk. (Do also note that this
# does not prevent the owner of the bot from using the unregister
# command.)
#
# Default value: False
###
supybot.databases.users.allowUnregistration: False

###
# Determines what filename will be used for the ignores database. This
# file will go into the directory specified by the
# supybot.directories.conf variable.
#
# Default value: ignores.conf
###
supybot.databases.ignores.filename: ignores.conf

###
# Determines what filename will be used for the channels database. This
# file will go into the directory specified by the
# supybot.directories.conf variable.
#
# Default value: channels.conf
###
supybot.databases.channels.filename: channels.conf

###
# Determines whether database-based plugins that can be channel-specific
# will be so. This can be overridden by individual channels. Do note
# that the bot needs to be restarted immediately after changing this
# variable or your db plugins may not work for your channel; also note
# that you may wish to set
# supybot.databases.plugins.channelSpecific.link appropriately if you
# wish to share a certain channel's databases globally.
#
# Default value: True
###
supybot.databases.plugins.channelSpecific: False

###
# Determines what channel global (non-channel-specific) databases will
# be considered a part of. This is helpful if you've been running
# channel-specific for awhile and want to turn the databases for your
# primary channel into global databases. If
# supybot.databases.plugins.channelSpecific.link.allow prevents linking,
# the current channel will be used. Do note that the bot needs to be
# restarted immediately after changing this variable or your db plugins
# may not work for your channel.
#
# Default value: #
###
supybot.databases.plugins.channelSpecific.link: #

###
# Determines whether another channel's global (non-channel-specific)
# databases will be allowed to link to this channel's databases. Do note
# that the bot needs to be restarted immediately after changing this
# variable or your db plugins may not work for your channel.
#
# Default value: True
###
supybot.databases.plugins.channelSpecific.link.allow: True

###
# Determines whether CDB databases will be allowed as a database
# implementation.
#
# Default value: True
###
supybot.databases.types.cdb: True

###
# Determines how often CDB databases will have their modifications
# flushed to disk. When the number of modified records is greater than
# this part of the number of unmodified records, the database will be
# entirely flushed to disk.
#
# Default value: 0.5
###
supybot.databases.types.cdb.maximumModifications: 0.5

###
# Determines what will be used as the default banmask style.
#
# Default value: host user
###
supybot.protocols.irc.banmask: host user

###
# Determines whether the bot will strictly follow the RFC; currently
# this only affects what strings are considered to be nicks. If you're
# using a server or a network that requires you to message a nick such
# as services@this.network.server then you should set this to False.
#
# Default value: False
###
supybot.protocols.irc.strictRfc: False

###
# Determines what user modes the bot will request from the server when
# it first connects. Many people might choose +i; some networks allow
# +x, which indicates to the auth services on those networks that you
# should be given a fake host.
#
# Default value:
###
supybot.protocols.irc.umodes:

###
# Determines what vhost the bot will bind to before connecting to the
# IRC server.
#
# Default value:
###
supybot.protocols.irc.vhost:

###
# Determines how many old messages the bot will keep around in its
# history. Changing this variable will not take effect until the bot is
# restarted.
#
# Default value: 1000
###
supybot.protocols.irc.maxHistoryLength: 1000

###
# A floating point number of seconds to throttle queued messages -- that
# is, messages will not be sent faster than once per throttleTime
# seconds.
#
# Default value: 1.0
###
supybot.protocols.irc.throttleTime: 1.0

###
# Determines whether the bot will send PINGs to the server it's
# connected to in order to keep the connection alive and discover
# earlier when it breaks. Really, this option only exists for debugging
# purposes: you always should make it True unless you're testing some
# strange server issues.
#
# Default value: True
###
supybot.protocols.irc.ping: True

###
# Determines the number of seconds between sending pings to the server,
# if pings are being sent to the server.
#
# Default value: 120
###
supybot.protocols.irc.ping.interval: 120

###
# Determines whether the bot will refuse duplicate messages to be queued
# for delivery to the server. This is a safety mechanism put in place to
# prevent plugins from sending the same message multiple times; most of
# the time it doesn't matter, unless you're doing certain kinds of
# plugin hacking.
#
# Default value: False
###
supybot.protocols.irc.queuing.duplicates: False

###
# Determines how many seconds must elapse between JOINs sent to the
# server.
#
# Default value: 0.0
###
supybot.protocols.irc.queuing.rateLimit.join: 0.0

###
# Determines how many bytes the bot will 'peek' at when looking through
# a URL for a doctype or title or something similar. It'll give up after
# it reads this many bytes, even if it hasn't found what it was looking
# for.
#
# Default value: 4096
###
supybot.protocols.http.peekSize: 4096

###
# Determines what proxy all HTTP requests should go through. The value
# should be of the form 'host:port'.
#
# Default value:
###
supybot.protocols.http.proxy:

###
# Determines whether the bot will ignore unregistered users by default.
# Of course, that'll make it particularly hard for those users to
# register or identify with the bot, but that's your problem to solve.
#
# Default value: False
###
supybot.defaultIgnore: False

###
# A string that is the external IP of the bot. If this is the empty
# string, the bot will attempt to find out its IP dynamically (though
# sometimes that doesn't work, hence this variable).
#
# Default value:
###
supybot.externalIP:

###
# Determines what the default timeout for socket objects will be. This
# means that *all* sockets will timeout when this many seconds has gone
# by (unless otherwise modified by the author of the code that uses the
# sockets).
#
# Default value: 10
###
supybot.defaultSocketTimeout: 10

###
# Determines what file the bot should write its PID (Process ID) to, so
# you can kill it more easily. If it's left unset (as is the default)
# then no PID file will be written. A restart is required for changes to
# this variable to take effect.
#
# Default value:
###
supybot.pidFile:

###
# Determines whether the bot will automatically thread all commands.
#
# Default value: False
###
supybot.debug.threadAllCommands: False

###
# Determines whether the bot will automatically flush all flushers
# *very* often. Useful for debugging when you don't know what's breaking
# or when, but think that it might be logged.
#
# Default value: False
###
supybot.debug.flushVeryOften: False

###
# Determines what the bot's logging format will be. The relevant
# documentation on the available formattings is Python's documentation
# on its logging module.
#
# Default value: %(levelname)s %(asctime)s %(name)s %(message)s
###
supybot.log.format: %(levelname)s %(asctime)s %(name)s %(message)s

###
# Determines what the minimum priority level logged to file will be. Do
# note that this value does not affect the level logged to stdout; for
# that, you should set the value of supybot.log.stdout.level. Valid
# values are DEBUG, INFO, WARNING, ERROR, and CRITICAL, in order of
# increasing priority.
#
# Default value: INFO
###
supybot.log.level: INFO

###
# Determines the format string for timestamps in logfiles. Refer to the
# Python documentation for the time module to see what formats are
# accepted. If you set this variable to the empty string, times will be
# logged in a simple seconds-since-epoch format.
#
# Default value: %Y-%m-%dT%H:%M:%S
###
supybot.log.timestampFormat: %Y-%m-%dT%H:%M:%S

###
# Determines whether the bot will log to stdout.
#
# Default value: True
###
supybot.log.stdout: True

###
# Determines whether the bot's logs to stdout (if enabled) will be
# colorized with ANSI color.
#
# Default value: False
###
supybot.log.stdout.colorized: True

###
# Determines whether the bot will wrap its logs when they're output to
# stdout.
#
# Default value: True
###
supybot.log.stdout.wrap: False

###
# Determines what the bot's logging format will be. The relevant
# documentation on the available formattings is Python's documentation
# on its logging module.
#
# Default value: %(levelname)s %(asctime)s %(message)s
###
supybot.log.stdout.format: %(levelname)s %(asctime)s %(message)s

###
# Determines what the minimum priority level logged will be. Valid
# values are DEBUG, INFO, WARNING, ERROR, and CRITICAL, in order of
# increasing priority.
#
# Default value: INFO
###
supybot.log.stdout.level: DEBUG

###
# Determines whether the bot will separate plugin logs into their own
# individual logfiles.
#
# Default value: False
###
supybot.log.plugins.individualLogfiles: False

###
# Determines what the bot's logging format will be. The relevant
# documentation on the available formattings is Python's documentation
# on its logging module.
#
# Default value: %(levelname)s %(asctime)s %(message)s
###
supybot.log.plugins.format: %(levelname)s %(asctime)s %(message)s

###
# These are the capabilities that are given to everyone by default. If
# they are normal capabilities, then the user will have to have the
# appropriate anti-capability if you want to override these
# capabilities; if they are anti-capabilities, then the user will have
# to have the actual capability to override these capabilities. See
# docs/CAPABILITIES if you don't understand why these default to what
# they do.
#
# Default value: -owner -admin -trusted
###
supybot.capabilities: -owner -admin -trusted

###
# Determines whether the bot by default will allow users to have a
# capability. If this is disabled, a user must explicitly have the
# capability for whatever command he wishes to run.
#
# Default value: True
###
supybot.capabilities.default: False

Make it dance

Now start your bot from the command line. This way you can see the debug log messages, in case you need to figure out why it does not work.

sudo -u supybot supybot /home/supybot/pitivi/bot.conf


Finally, create /lib/systemd/system/supybot.service to have it started automatically.
[Unit]
Description=Pitivi IRC bot

[Service]
User=supybot
ExecStart=/usr/bin/supybot /home/supybot/pitivi/bot.conf
UMask=0007
Restart=on-abort
StartLimitInterval=5m
StartLimitBurst=1

[Install]
WantedBy=multi-user.target

Remember to start it and enable it to start automatically.
systemctl start supybot
systemctl enable supybot


Alternatively, create a file in /etc/init/ so Supybot is started automatically when the system starts. I probably copied the file below from somewhere, but I don't remember from where.

# This IRC bot serves #pitivi.
description "Pitivi IRC bot"

# I got this from mysql.conf, you might want to have a look
# at other files in /etc/init/ and copy a section which
# looks appropriate. Basically it should start the daemon
# after the network is started.
start on (net-device-up
and local-filesystems
and runlevel [2345])
stop on runlevel [016]

# Restart if it dies unexpectedly. Should not happen.
respawn

# Make sure the binary exists.
# It's recommended you install supybot from the current git
# HEAD, because no features are being worked on, and it
# should be stable: http://sf.net/projects/supybot/develop
# In this case, the binary is in /usr/local/bin/supybot.
# This may differ if you are using the package included in
# your Linux distribution, for example.
pre-start script
test -x /usr/local/bin/supybot || { stop; exit 0; }
end script

# Output to the console whatever it outputs.
console output

# Run the bot with the specified config file.
#
# The bot does not need to run as a daemon, unless
# there are other jobs depending on its successful start,
# for example. If there are, you should add "expect fork",
# and specify --daemon to the command line, and hope that
# it works, because:
# "One needs to be very careful with the expect stanza
# to avoid confusing Upstart: the program must fork in
# precisely the expected way."
#
# To create the supybot system user and system group, and
# add yourself to the group so you can easily edit files
# run:
# addgroup --system supybot
# adduser --system --ingroup supybot supybot
exec sudo -u supybot /usr/local/bin/supybot /home/supybot/pitivi/bot.conf


Congratulations! Send me an email and tell me how happy you are that you have your bot. ;)

September 03, 2012

KSE 0.1 released !

I just finished fixing the blocker bugs for the open-source asset manager KSE, and I pushed a release here: https://github.com/MathieuDuponchelle/Kerious-Resource-Editor on the 0.1 branch.

I recorded this screencast to make a demo of the features included in this release.



I want to say thank you to Simon Corsin for his work on the Windows port and manual management.

I'd be glad to see pull requests on GitHub, or comments, criticism and feature requests here.

Here is the text of the video for those with no audio:

Hi, my name is Mathieu Duponchelle and I am glad to present to you the first release of KSE, which is a tool to simplify asset management, written in Python.

This release covers spritesheet creation, sprite naming, automatic sprite detection in an existing spritesheet,
animation referencing and tons of other stuff.

Spritesheet creation.

Let's create a new spritesheet, and start importing resources.
First we create what we call an "atlas", specify its width and height and the location of the resulting file.
Then we can specify the size we wish the next sprite to have, and start adding sprites by double-clicking.
The next step is to start naming all these sprites; for now the naming part is a bit tedious, but it works.

Animation referencing.
Another cool feature of KSE is that we can reference animations easily, using multiple selection.
Let's say these sprites belong to the same animation.

We now save the project, and have a look at the xml that was produced.

We can see that a simple XML parser could easily extract the coordinates for a given name, here X.

Automatic sprite detection.

Let's now say that we want to use an already existing spritesheet. I have this one, which will make for a good
demonstration.
A really cool feature is automatic sprite detection, which can be configured with this button.
Match-size will specify the desired width and height multiples for the resulting sprites.
Now let's try it out !
As you can see, KSE detected the sprites and automatically highlighted them. However, if the position or size of any sprite does not match what we intend, we can easily resize and move them using the mouse or, if we want more fine-grained control, the comboboxes we have here.


Conclusion.

Some friends and I have already developed a space shooter using KSE, which helped me find out which features were most needed.
It can greatly improve the collaboration between designers and developers by reducing the number of back-and-forth trial-and-error rounds: the programmer only parses the XML file, looking up a sprite or an animation by name instead of hardcoding coordinates, and the designer can try things out without having to modify the code or ask the programmer to do so.

KSE now works on both Linux and Windows; it has not been tested on OS X yet.

My thoughts for future features are listed in the poll on the right-hand side of the blog; don't hesitate to comment if you have other ideas, remarks or criticism. Thanks for watching!

July 17, 2012

Selectable layers

Finally, it’s done. Thanks to some hints in the source code of Jokosher I was able to implement selectable layers in PiTiVi.
The problem is that some widgets in Gtk+ don’t have their own GDK window. This means that you cannot change their background color, which is really important for showing whether a layer is selected or not.

The solution to this problem is to derive the widget from EventBox, which makes it possible to modify the background using the modify_bg(…) function.
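
Here is a minimal PyGTK sketch of the idea; the class name and the color are made up, and this is not the actual PiTiVi widget:

import gtk

class LayerControl(gtk.EventBox):
    """A layer control whose background reflects whether it is selected."""

    def set_selected(self, selected):
        if selected:
            # EventBox has its own GdkWindow, so modify_bg() actually takes effect.
            self.modify_bg(gtk.STATE_NORMAL, gtk.gdk.color_parse("#4a90d9"))
        else:
            # Passing None restores the theme's default background.
            self.modify_bg(gtk.STATE_NORMAL, None)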

Based on that work I was able to implement proper selectable layers. So if you click on a layer control widget or a
layer in the canvas now, the layer gets selected and changes its background color. You can see a selected layer in the following screenshot. Note that I disabled all non-functional widgets (the entry, sliders, buttons…) for now, so this isn’t the final look.

Selected video layer

This work builds the base for more nice stuff like drag and drop layer reordering or menu actions on layers. I’ll cover these in the next post.


June 27, 2012

Week 1 & 2

Back from holidays I worked on including the newly created widgets in the user interface.

Before, all video or audio layers only had one label. Now we want one widget for every audio/video layer. Therefore I had to change the internal logic quite a bit.

Afterwards I noticed that hard-coded layer heights didn’t work well with the new widgets. The more layers you had, the more you saw that they weren’t properly aligned. My solution for this problem is to adjust the layer height to the height of the layer control widget. This also gives us basic folding support.

The current UI now looks like this:

PiTiVi screenshot with new layer control widgets.

You can find my branch on github. I’m also happy to hear your comments, hints or whatever you want to say :)


June 19, 2012

Week -1

Because exams and vacation kept me from working on my project right after the official start, I started working in the weeks before.

The first part consisted of designing and implementing new control widgets for layers with more functionality. After some initial attempts that didn’t look good, nekohayo (btw my super helpful mentor) jumped in and helped me review and improve the designs. We ended up with these mockups in Glade:

Then I started implementing them using PyGTK. Unfortunately we still have to use Gtk+ 2, so we don’t get the cool new style. Furthermore, I haven’t yet found a way to make a container widget selectable, so I added a ‘Delete’ button to delete layers; this should be gone in the final version.

The next step was to use these new widgets in the PiTiVi interface. I’ll cover this in the next post!


May 20, 2012

Advanced Layers for PiTiVi

I’m happy to announce that my project proposal for the Summer of Code 2012 has been accepted!

I’ll be working on creating an advanced layer system for the PiTiVi video editor under the umbrella of the GStreamer project! Thanks to everyone who helped me!

About my project:

The current approach of layers in PiTiVi does not provide more functionality than grouping video and audio sources in different levels. I want to add an advanced layer system to the PiTiVi video editor to improve the video editing workflow and user experience. This is based on two ideas. On the one hand putting more functionality into layers helps to avoid repetitive tasks like adjusting volume and panning, muting or applying effects to clips. On the other hand improving the management of layers by offering to name, reorder, fold and resize them makes working with layers more convenient.


April 27, 2012

Getting video properties out of GStreamer

GStreamer is a pretty nice piece of software for handling almost all of your multimedia needs but the time will come that you get to the point where seemingly easy things are pretty hard to figure out. I hit this point when working on the thumbnailing code of PiTiVi and realizing that there is no […]

October 28, 2011

Homework :)

I had plenty of other things to do today, so I iterated on this code: http://encrypt3d.wordpress.com/2007/06/19/level-order-traversal/, which I found pretty elegant.

Function of the code:

Breadth-first tree traversal: display each node's level and whether it is the first node of its level (starting from the left).

Missing:

As it was homework, I had to provide the is_first property. It would be better to display the node's position within its level.
I'm also wondering about the best way to display a tree.
Maybe starting from the bottom?


Code:


#include <stdlib.h>

/* Classical binary tree node: left child, right child and the data. */
typedef struct s_btree
{
  struct s_btree *left;
  struct s_btree *right;
  void *item;
} t_btree;

typedef struct s_infos
{
  int is_first;
  int level;
  t_btree *node;
} t_infos;

t_infos *create_new(t_infos *par, t_btree *n, int seen_on_level)
{
  t_infos *q;

  q = malloc(sizeof(t_infos));
  q->node = n;
  /* A node is the first of its level if nothing was queued on that level yet. */
  q->is_first = seen_on_level ? 0 : 1;
  if (par)
    q->level = par->level + 1;
  else
    q->level = 0;
  return (q);
}

void levelorder(t_btree *p,
                void (*applyf)(void *item, int current_level, int is_first_elem),
                int *size, int *qptr)
{
  t_infos **queue;
  int *tab;
  t_infos *q;

  if (!p)
    return;
  /* Fixed-size scratch space: enough for trees of up to 100 nodes. */
  queue = malloc(sizeof(t_infos *) * 100);
  /* tab[l] becomes 1 once a node of level l has been queued. */
  tab = calloc(100, sizeof(int));
  q = create_new(0, p, 0);
  while (q)
  {
    applyf(q->node->item, q->level, q->is_first);
    if (q->node->left)
    {
      queue[*size] = create_new(q, q->node->left, tab[q->level + 1]);
      *size += 1;
      tab[q->level + 1] = 1;
    }
    if (q->node->right)
    {
      queue[*size] = create_new(q, q->node->right, tab[q->level + 1]);
      *size += 1;
      tab[q->level + 1] = 1;
    }
    free(q);
    /* Stop once every queued node has been processed. */
    q = (*qptr < *size) ? queue[*qptr] : 0;
    *qptr += 1;
  }
  free(queue);
  free(tab);
}

void btree_apply_by_level(t_btree *root,
                          void (*applyf)(void *item, int current_level, int is_first_elem))
{
  int size;
  int qptr;

  size = 0;
  qptr = 0;
  levelorder(root, applyf, &size, &qptr);
}

Usage:

Let root be the root of the tree, and applyf the address of a function returning void and taking as arguments the data of each processed node, its level and its is_first property (crap, how useless is that..). Also, t_btree is a classical node structure containing the left child, the right child and the data.

Also, today I coded a function that lets you insert data into a red-black tree.
I'll try to post that when I have time; it's a little longer but I think it's worth it.

September 28, 2011

PiTiVi 0.15 is out

The PiTiVi team is proud to announce the 0.15 release. This will be the last release using the "traditional" core/engine of PiTiVi. The next releases will be based on GStreamer Editing Services (GES) and should thus depart significantly from this release in terms of performance, features and stability.

Jean François did a great screencast that sums up what has been done during the last 2 years:

He also published the video of the 2011 desktop summit and PiTiVi/GES hackfest for your enjoyment:

All those videos are also accessible in HTML5 versions on the showcase page.



Thanks to everybody who made this release possible!

July 09, 2011

Weekly report number 6 and 7 for my GSoC

Hi everyone !

These last two weeks were pretty busy for me.
First off, last week I reached my mid-term evaluation goal, which was to load a GESTimeline with xptv projects thanks to my GESPitiviFormatter, then load a GESTimelinePipeline with the timeline, then connect it to the viewer in Pitivi, all of this using my python bindings for GES. This was a long sentence but quite a short job, and this was still pretty hacky.
Afterwards, I played around a little with projects, only to realize that I had a nasty bug with the interaction of effects and transitions, which obviously came from my formatter. This was in the middle of last week.

After this discovery, I moved into another flat, and the end of the week was not really filled with work.

I started to tackle this bug at the beginning of this week, and I must admit it took me a long time to figure out where it came from (the respective priorities of transitions and effects, which determine which object is "above" and which is "under", had to be set in the appropriate way), and longer still to fix it. Once I worked out how everything had to fit together, I also realized that the calculation of which transitions are needed had to be refactored: I needed to compare the sorted track objects one after another, and not the track objects belonging to the sorted timeline objects one after another. I hope I'm clear enough; anyway, I had to add a function to the GESTrack API to achieve this, which makes me think I'll also have to update the bindings aha.

Also, during this week I broke my scooter, and that didn't help me in concentrating on my problems. It now lies in pieces in my yard waiting for me to fix it or for someone to steal it xD.

Anyway.

These bugs are now fixed, and I can now attack the rest of the integration with a free mind.

I will now explain how to check out my work and give it a try. If you find bugs, don't hesitate to contact me.

This is if you don't use jhbuild :

First, you'll need pitivi. After that, you'll need the latest gst-python, then you'll have to get to my GES repository, which is at https://github.com/Mathieu69 as usual. You'll want to check out the "integration" branch, autogen it *without* a prefix (this is due to the CONFIGURED_PYTHON_PATH in pitivi, will fix that when I have time), make and make install.

To test the bindings: start ipython and run "from gst import ges". If it imports, fine! You're nearly done. Otherwise, mail me the traceback.
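
If you want to go one step further than the import test, loading a project from the bindings looks roughly like this (a sketch from memory; the exact class and method names, and the project path, are assumptions and may differ slightly in the actual bindings):

import gst
from gst import ges

ges.init()                              # assumed wrapper around ges_init()
timeline = ges.Timeline()               # the formatter fills in tracks and layers
formatter = ges.PitiviFormatter()       # the .xptv formatter
formatter.load_from_uri(timeline, "file:///tmp/project.xptv")

pipeline = ges.TimelinePipeline()       # playback pipeline for the timeline
pipeline.add_timeline(timeline)
pipeline.set_state(gst.STATE_PLAYING)   # press play and profit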

Afterwards, all you'll have to do is go to my pitivi repo and check out the "intcheck" branch. Just switch to it, launch pitivi, and load a project *using the startup wizard*. You'll then be able to press play and profit :D You can also pause, undock the viewer, and that's pretty much all xD.

Anyway, what's interesting is to compare playback between master and my branch: you'll see the difference in smoothness ;) GES makes playback nearly smooth; it only lags a little when transitions start, or if you think a clip is way cooler with agingtv, chromium, a color lookup table filter, vertigo TV, and whatever other effect *at the same time*. In pitivi, on my computer, you only see one picture every 2-3 seconds with no transitions; when transitions or effects kick in, you just want to pause and step through by hand, it's nearly smoother that way ;)

If you have issues at any point of the testing, mail me or directly come to pitivi's IRC channel, my nick is Mathieu_Du.

Voila !

June 16, 2011

PiTiVi: Let's go faster!

As I said in the last release announcement post, the PiTiVi community needs changes in the releasing process, and as newly appointed release manager I have the primary goal of making the development as dynamic as possible. The last release cycle lasted 8 months, which is way too long when we are supposed to be following the "release early, release often" philosophy. Avoiding that is something we really want to work on, but I think a little explanation of why it happened is needed:
  • We had blocker bugs that could only be fixed by the main developers, and it was difficult for them to do so.
  • The person in charge of releasing was much too busy to do the job, which is why I have now been appointed as new release manager and will do my best to find the time to do it well.
  • We did not have any release schedule, which meant that we did not have any obligation concerning releases.
Also I want to explain a few of the reasons why I do believe this is something we really want to avoid:
  • Users think nothing is happening, that the project is dying. Yet, as the activity on our git repository shows, rumors of our death have been greatly exaggerated.
  • Potential contributors are hesitant to invest in the project because they think the project is dormant.
  • Without a feature freeze, bugs tend to accumulate as new features land.
  • Contributors may be discouraged by the fact that they do not see their work reach the users quickly enough.
  • There are many more reasons, as you can guess, but listing them all here would take too much time and I would have to delay the release ;-)
Therefore, we decided to take a set of measures concerning our release process to keep our current developers active and attract as many new contributors as possible. So here are the changes that are going to occur in the near future:
  • First and foremost, starting from now, PiTiVi should follow the Gnome release schedule. This is not going to be easy, since our manpower is quite limited, but this is something that is really needed to ensure a healthy development pace.
  • Have an official patch review policy: we want to "guarantee" that any patch that is sent to us will be reviewed *within 3 weeks*. This is also a great opportunity for new contributors who have experience in programming in general and would like to start by reading and improving other's code. Collaboration towards making a great patch is a great motivator! Upholding this goal will be a challenge, and your help in reviewing patches is very welcome.
  • We want to make sure that deprecated libraries or broken/unmaintained features get removed as fast as possible. Dead code must die.
The project is pretty active these days, and I believe it is the right time to get those changes done. The feature list is becoming bigger and bigger and, thanks to our very close relationship with the GStreamer multimedia framework, implementing new ones is becoming very simple. Also, the GStreamer community is very active and we are looking forward to the GStreamer 1.0 release, which should come out soon and will bring us new opportunities toward our goal of creating a video editor that can serve the needs of professional video editors.

So if you are interested in helping us make those changes happen, you are very welcome to have a look at our wiki page and our task list. I also warmly encourage you to join us in #pitivi on the freenode IRC server; I am sure people will be happy to help you get involved in this great software development project!

June 10, 2011

Calendar Update Part II - After The Release

Following up on my promise to detail some of my future plans. But first, a word about that nasty memory leak.

Memory leak in pycairo or pango/cairo.


I am 80% confident that there's a bug in the python bindings shipped on ubuntu 10.10. I developed a minimal example (full source here) which just draws some text repeatedly in a window. The relevant bit of this example is the following function:

# needs: import pango, pangocairo
def draw_text(self, cr, text, x, y, width):
    pcr = pangocairo.CairoContext(cr)             # new pangocairo context on every call
    lyt = pcr.create_layout()                     # new layout on every call
    lyt.set_text(text)
    lyt.set_width(pango.units_from_double(width))
    lyt.set_wrap(pango.WRAP_WORD_CHAR)
    cr.move_to(x, y)
    pcr.show_layout(lyt)
    return lyt.get_pixel_size()[1]                # height of the rendered text

This little script will om nom nom your memory -- over 100 MB after just a few minutes, and I think we can all agree that even if this isn't the most efficient example, its memory usage should be nearly constant. I haven't yet opened a bug, because I still don't know which component(s) contain the actual leak. At least I feel better knowing that it's probably not something I'm doing wrong, which gives me the confidence to move forward with the release.

To mitigate the leak, I threw in an optimization that ensures only one pangocairo context and one text layout are created per expose event, per widget. This seems like a reasonable thing to do in general, so I will probably leave this optimization in even after the bug is fixed. Unfortunately, all this does is slow down the leak: it is still there, which means you still wouldn't want to leave the calendar application running indefinitely.
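
The optimization boils down to something like this (a simplified sketch, not the actual calendar code; text_runs is a made-up list of the strings to draw):

import pango
import pangocairo

def do_expose(self, widget, event):
    cr = widget.window.cairo_create()
    pcr = pangocairo.CairoContext(cr)        # one pangocairo context per expose
    lyt = pcr.create_layout()                # one layout per expose, reused below
    for text, x, y, width in self.text_runs: # made-up list of strings to draw
        lyt.set_text(text)
        lyt.set_width(pango.units_from_double(width))
        lyt.set_wrap(pango.WRAP_WORD_CHAR)
        cr.move_to(x, y)
        pcr.show_layout(lyt)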

In passing I'd like to say that the idea that a pango layout needs to be associated with a particular context bothers me a lot. I wonder if I can get away with re-using layouts with different pangocairo contexts.

A better grammar

The current parser for recurrence expressions could use some serious help. The parser for the existing grammar is in the "proof-of-concept" stage of development. It's intuitive for simple things, but you will notice that parentheses are needed to get correct parsing of more complicated examples. This is due to unresolved ambiguities in the grammar.

Most of the ambiguity could likely be resolved with associativity and precedence declarations, but when you're developing a new notation it's difficult to know in advance what the intuitive associativity is. Basically I didn't bother trying to get it right at all, and rely on parentheses as a crutch. I recognize that this is neither intuitive nor natural. Even I am surprised at how wrong the unparenthesized parse can be.

I am also aware that this approach is horribly anglo-centric. You would have to write separate parsers for every language you wanted to support, and it's a good bet that language constructs don't always map onto each other so neatly.

I am also playing around with graphical approaches to the same problem, but to be perfectly honest they've all sucked so far. I might do a separate posting on one of them just to show you how god-awful it is.

A mouse-driven way to add recurrence exceptions.

When dragging a recurring event, the current behaviour is to shift the entire recurrence pattern. This is a deliberate choice on my part: it's a novel feature that I want to highlight. Unfortunately, many times what you actually want to do is shift just the occurrence you've selected, and occasionally you want to move all the events after the selected occurrence.

Note, you can do this now; however, you have to go through the text interface. Even I find this quite tedious. The solution most obvious to me is to introduce control and meta modifier keys, but I have found that other users do not share my enthusiasm for mouse modifiers. In usability parlance, we say that modifiers are less discoverable than more explicit features.

Another idea would be to add some toolbar buttons that silently modify the recurrence rules of the selected event. This would be fairly easy to do, but I have a couple of misgivings. The first is that the toolbar is getting crowded already. The second is that it won't be obvious what those buttons do at first. Clicking on the "except this occurrence" button will not produce an immediately visible change, only modify the behavior of future interaction. This would be especially confusing if you hit the button by accident.

A more radical idea would be to place special "handles" on the clip itself. That way dragging from a handle could have a different meaning from dragging the clip itself. I am leaning towards this approach, but it is not without drawbacks of its own. For example, the handles compete for space with the event text. They will necessarily be small, which could hurt usability on a tablet.

Integration with other calendars

  • Syncing with Google Calendar and/or EDS
  • Importing from, and exporting to, .ics

This work ought to be pretty straight-forward given how the code is organized, but I haven't really looked in detail at the APIs / File formats I'd be working with. If you're interested in helping out with that, you know where to find me ;)

Event alarms

This might not be necessary if I can use something like EDS as the default back-end. But we'll see. Depending on how hard it is to map the concepts in my application onto the APIs exposed by EDS, it might be easier to roll my own implementation.

June 09, 2011

Calendar Update



What a crazy couple of months it's been. I have been back in San Francisco since late February, but not found the motivation or time to post until now. Among other things, I've been to racing school, and I've taken up drumming as a new hobby. I also attended the meego conference here, where I was showing off my QML GES demo (another post about this is still in progress).

I thought I'd take some time to write about the work I've done on my desktop calendar application over the past several months. This will stop me from pestering my friends about it, which will make them a lot happier.

  • Major cosmetic face-lift.
  • Added an infinite-scrolling month view.
  • Added support for all-day events, which are displayed in a pane along the top row.
  • Added support for recurring events via a natural-ish language parser which supports english-like recurrence expressions such as "every two days from now until october", or "the 3rd wednesday of every month except this wednesday"
  • You can easily shift the entire recurrence pattern by dragging a single occurrence with the mouse
  • Possible to select the same block of time across multiple days.
  • Added support for editable event text.
  • Removed dependency on goocanvas. Since all the drawing code is custom, and the entire widget is re-painted on every expose, there really is no point in using goocanvas.




I am considering making an initial release, but would like to address the following issues:
  • I need to come up with a name. As my music teacher once told me, "Name your children." Anything is better than cal.py.
  • Adopt some system for installation, most likely distutils.
  • Currently the calendar widget leaks about 100 kB of memory for every expose event. After a few minutes of scrolling and dragging it's pushing 100 MB. It's obviously a bug, and I've narrowed it down to the functions I wrote to simplify drawing text. Disabling these functions reduces the memory consumption to a much more reasonable 15-ish MB. Either I am doing something memory-inefficient in python, or I've found a leak in pango / cairo. I would very much like to understand that better.
  • I recently added all-day events. You can create an event as an all-day event, or you can drag an existing event to the all-day area. But you can't drag an all-day event back into the timed area.
I did some "usability testing" on a hapless friend of mine, and the results were encouraging but showed there was a great deal of room for improvement. Unfortunately, my select-then-add paradigm of creating new events is confusing for those used to applications like google calendar and evolution. In evolution I find the click-to-create behaviour frustrating. Google calendar, on the other hand, seems to get it right.

After the release
This post is getting long enough already. I've also got some ideas for subsequent improvements, which I will summarize here. TTFN

  • A better grammar
  • A mouse-driven way to add recurrence exceptions.
  • Integration with other calendars
  • Event alarms

June 03, 2011

Second weekly report for GSoC

Here is my weekly report for the second week of GSoC :

Hi !

I've spent my week refactoring my formatter, and after successfully integrating the effects handling, I've banged my head against the track objects wall.

My first version worked fine, but I had worked around the following problem in a way that was not ideal:

When you create a timeline object, the track objects are not immediately created, and when timeline objects are ungrouped in Pitivi, you want to access the track objects of the timeline file sources to move them accordingly. I worked around that by creating two file sources when needed, setting one mute and the other blind.

My new version added a "track-object-added" signal to GESTimelineObject, which I catch to move the track objects accordingly. This worked fine, but now there is another problem: transitions added after the track objects were created are not taken into account, even though GST_DEBUG tells me they really were added. I have a good test case that shows this, and I pushed it yesterday for bilboed and thiblahute to have a look.
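
For the curious, catching the new signal from python looks roughly like this (a sketch only: the callback arguments are an assumption, and the real handling in the formatter is done on the C side):

def on_track_object_added(timeline_object, track_object):
    # reposition the freshly created track object here, so ungrouped
    # sources end up where the .xptv project expects them
    print "track object added:", track_object

timeline_object.connect("track-object-added", on_track_object_added)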

I just started to have a look at the unhandled argtypes in the bindings, and I will work this weekend to get back on schedule.

I'll be able to use the old version of the formatter to start integrating it anyway, for the API will not change when the refactoring is done.

By the next report, I plan to have finished the bindings and to have them merged soon, because other GSoCers will need them. I will then start the integration, to reach the goal of my mid-term evaluation: having pitivi load projects and show them in the viewer.

Have a good week-end :)

June 01, 2011

PiTiVi 0.14 "No longer kills kittens"

The PiTiVi team is proud to announce the immediate availability of our new 0.14 release, with major new features, bug fixes and usability improvements. We hope you can use and enjoy the improvements in PiTiVi 0.14, and report bugs you may encounter (bugs are a fact of life! Help us hunt them out!).

Some of the major new features deserve to be highlighted here:
  • Audio and video effects: with PiTiVi 0.14, it is now possible to add all the effects provided by GStreamer and some other libraries, which, depending on what is installed on your system, means "a lot". The user interface for managing those effects will improve in future releases.
  • Ability to preview video, audio and image files before importing: the file chooser dialog now lets you directly preview the files you want to import. This is pretty useful when you don't remember the name of a file you wanted to import, or when you're just "looking around" for media to use in your project.
  • Welcome dialog: when starting PiTiVi, a startup assistant now helps you load recently used projects in two clicks or create a new project. Don't feel like using this dialog? Just click its Close button or press the "Escape" key on your keyboard. However, we trust that this little fella will make you much more efficient, not less. We hate useless windows. See also this post, for some explanation of the design process that went into this welcome dialog.
...And more. See the full release notes for a more exhaustive list of goodies.

This release cycle has been very eventful. As the list of contributors reveals, lots of people got involved in the software development. I also want to thank all the people who worked hard on translating PiTiVi in a timely manner and reporting localization issues.

We are now entering a new cycle on the road to maturity. Four students are working full time on PiTiVi this summer, thanks to the Google Summer of Code program, and we will try to increase the general development pace to keep the great momentum we had in the past few weeks. Stay tuned for another blog post on how we plan to achieve this!

To celebrate, I personally wanted to do some dogfooding by making this little video, with the additional constraint of using only footage coming from my own camera (no, I am not a video artist!):

But this is, for sure, not the best PiTiVi can do. See our showcase page for examples of videos we found to be aesthetically interesting, and feel free to submit your own! Post links to your videos in comments, and if they're really cool, they could end up in our showcase! ;)

PiTiVi pre-release

The whole PiTiVi team is pleased to announce a pre-release of the PiTiVi video editor.

This pre-release contains the past 7 months of work, implementing this pretty nice new feature list:
  • Audio and video effects
  • Completely redesigned project settings dialog, with the ability to create presets
  • Completely redesigned rendering dialog
  • Welcome dialog that helps you start a project or load recent projects in two clicks
  • Ability to preview video, audio and image files before importing
  • Add a "best fit" zoom button
  • Ability to jump to an exact position in the timeline
  • Ability to specify custom aspect ratios and framerates
  • Show a progress bar when loading projects
  • 300% faster project timeline loading
  • Search bar in the Media Library
  • Ability to detach all the tabs and the previewer
  • New manpage
  • Commandline render mode
  • Use the standard infobar widget all around
  • And lots of bug fixing

Unless anything critical pops up or regressions appear, expect the 0.14 release next Tuesday (May 31st).

Please test it and abuse it and report bugs at:

Tarballs are available here:
Thanks for testing and helping us make this the best release ever :)

May 27, 2011

I have been lurking and hacking Pitivi's source code for quite a while now, and I think it is now time for me to sum up my opinions about it.


The first time I launched it, I had just installed Ubuntu and the version I had was 0.13.4, without the effects or anything. I was new to coding, so the only thing I could judge was the user interface and how everything fitted together. And I must admit that I thought: "Wow! This is not what I would use if I wanted to make a short movie or anything..."

Anyway, I started messing with the code, making my way through the architecture with wild prints and crazy changes, and I quickly realized that Pitivi was only the visible part of a greater framework, namely GStreamer, and that this framework really rocked and gave Pitivi an unprecedented advantage over what existed before: a solid community with hundreds of active members, all of them dedicated to taming the wild beast which is media handling, writing a great number of plugins ranging from codec handling to effect creation or object transformation!

Pitivi was nothing but the visible, editing tip of the iceberg, and a part which was still to grow to its real size. After all, the version naming is not random: the 0.13.x name means that it really is not mature yet and is missing a bunch of basic features essential to video editing; listing them would be too tedious.

But the whole point of my argument is that I think Pitivi has bet on the right horse, because now that I'm beginning to know a little more about GStreamer's internals, I realize the power that lies in its architecture: flexibility and modularity, through the use of combinable elements to form new elements, which in turn can be combined to finally make complex pipelines, ready to be played or rendered at will.

I don't know the other available frameworks well, MLT for example, but what I do know is that I love GStreamer, and a lot of other people seem to love it as well, making it by far the biggest media framework community.

That's why I've stuck with pitivi until now, eventually applying for a Summer of Code to be able to give it still more of my time, and I can say now that I'm happy with my choice. Indeed, a long-awaited release has just arrived, shipping a whole lot of goodies, the first of them being effects handling, and I can see some lines moving. Pitivi's IRC channel has never been so active, and new enthusiastic developer wannabes pop up every day. On top of that, GStreamer and Gnome have allocated no less than 4 Summer of Code slots to us (against only one last year), which means we're going to be able to take this software to a new level:
  1. One GSoCer is going to implement complex object recognition algorithms (robust estimation of Optical flow), which will make pitivi conscious of the different layers (background, foreground) and moving objects in a video.
  2. Another will use Gstreamer filters to provide object's transformation options to the user, and also give titling support to Pitivi, another essential feature that it was lacking until now.
  3. The third one will allow users to upload videos to streaming websites such as youtube or dailymotion, and also provide them with rendering presets (IPod, YouTube, Android Phone etc..).
  4. I will integrate a new C library in pitivi, which will replace a lot of its backend with optimized C functions, making it more performant in the process and allowing the integration of new features (for example new kinds of transitions, the default one for now being the classic, overused crossfade).
This is exciting, isn't it? I can't wait to see the next release, which will hopefully come way faster than the previous one. Kudos to thiblahute for bringing it to us!

My Google summer of code weekly report number 1.

Hi everyone !

I started working on my project before the actual Summer of Code began, and I have already successfully compiled the static bindings with a code coverage of about 90%, written the beginning of a test suite for them and a basic example script.

Since then, effects have been merged in GES master, so the .xptv file formatter I had written has become outdated, and since it's a critical component of my project, I decided to update it before going on with the bindings, in the hope of getting near 100% code coverage.

This has involved some massive refactoring, since I had not clearly envisioned the way the formatter would work with the effects (shame on me ;), and I'm not done yet: effects are integrated, but I still have to re-integrate the transitions properly.

However, work is going well, and I'm not blocked anymore on my way to integrating GES in pitivi, given that one of the members of the IRC channel gave me a hand solving the problem of GST argtypes that were not properly handled by the code generator I use to create the bindings.

By the end of next week, I will surely have finished my work on the formatter (code cleanup, fixing the tests), and I hope to have solved the GST argtypes problem completely as well (this could be a matter of 2 or 3 hours of coding if everything goes as intended). With some luck, I'll be able to write overrides for the last 10 or 12 functions as well, which will leave me free to attack the real meat of my project, i.e. integration into pitivi proper.

Most of my problems this week were very specific to my project, and I don't think they'd be very useful to anyone.

December 18, 2010

Working for Collabora Multimedia :)



This month is my first month working (part time) for the multimedia team at Collabora. I think you all know about this company, which specializes in open source software, particularly GStreamer and all the major Gnome/Freedesktop technologies.

So I am now working on GStreamer related technologies and in particular on PiTiVi to make it rock more and more!

I am very glad to be part of this team and am looking forward to the good times I am going to have hacking as a multimedian :)

December 10, 2010

Change in the plans !

YouTube law-kicked us so I had to find more suitable streamers.

I just implemented archive.org and blip.tv and it rocks.
Archive has a very serious, research-oriented (but not only) database, with private collections and plenty of available formats for most videos.
The only problem I see with it is the 'picture coming soon' on 3/4 of the thumbnails..
I hope it's true :)
That is not a problem with blip.tv, which is more youtube-like. On the other hand, you only get .flv files.
Anyway, I'm now looking for another friendly streaming site. Any suggestions accepted :)




November 27, 2010

Drag and drop would be difficult though :)

First blog: Introduction.

Hi! I'm making this blog to present the results of my work on Pitivi, the open source video editor that is shipped with ubuntu 10.04. My git branches (not always up to date :) are available here:
https://github.com/Mathieu69.
As I am currently unemployed, I have found quite a lot of time to mess with pitivi's source code, and have produced some enhancement features that were suggested in the Pitivi_love wiki.
At the moment, I'm working on a feature that no doubt people will like: the ability to browse youtube's videos within pitivi (see screenshot), directly stream them (a bit like totem's plugin, which actually does not work for me :) and download them. They are then added to the sourcelist, where you can edit them like normal clips (to make a cool video playlist with fancy transitions, for example?).
The retrieval of a video can also be done directly with its url, and I just implemented the processing of dailymotion's urls as well. Direct streaming is not available for the moment with this method, but it can be implemented quite easily. One could also add a lot of ways to retrieve content online with this method. The search is powered by the Google gdata API, but a download address can also be figured out from google video and other kinds of online streamers. GData also provides an API to search Picasa; I don't know to what extent that would be useful in a video editor. Any feedback appreciated, have a nice day :)
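
For the curious, the search part is little more than a thin wrapper around the gdata client, along these lines (a rough sketch from memory, not the exact code in my branch):

import gdata.youtube.service

def search_videos(terms, max_results=25):
    service = gdata.youtube.service.YouTubeService()
    query = gdata.youtube.service.YouTubeVideoQuery()
    query.vq = terms                      # the search terms typed in the pitivi UI
    query.max_results = str(max_results)
    feed = service.YouTubeQuery(query)
    # return (title, watch page url) pairs to fill the source list
    return [(entry.media.title.text, entry.media.player.url)
            for entry in feed.entry]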
