Tag Archives: Plone

Dexterity vs. Archetypes

TL;DR: migrating your Archetypes content to Dexterity shrinks your Data.fs considerably!

I’ve started looking into migrating Archetypes content in one of the sites we’re running to Dexterity. But before I started coding I wanted to make sure that the juice is worth the squeeze.

The site contains roughly 130k content objects. Most of them are of the default Plone File type, with a few additional string fields (added with archetypes.schemaextender) and a custom workflow. Nothing fancy, just your standard integration project. New content is imported into the site in batches of ~10k items every now and then with the help of the awesome collective.transmogrifier.

To test how Dexterity compares to Archetypes for our use-case I first created a new Dexterity type that matched the feature-set of the Archetypes type. Then I created a fresh instance, used Transmogrifier to import 13k items (10% of our production data) and ran some numbers.
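For a sense of what that type looks like, here is a minimal sketch of such a Dexterity schema: a file field plus a couple of extra string fields (the interface and field names below are made up for illustration):

# A minimal sketch of a Dexterity schema mirroring an Archetypes File type
# extended with a few string fields; the names below are hypothetical.
from plone.namedfile.field import NamedBlobFile
from plone.supermodel import model
from zope import schema


class IImportedFile(model.Schema):
    """Dexterity replacement for the schemaextender-extended File type."""

    file = NamedBlobFile(
        title=u"File",
        required=True,
    )

    source_id = schema.TextLine(
        title=u"Source ID",
        required=False,
    )

    category = schema.TextLine(
        title=u"Category",
        required=False,
    )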

The results are pretty amazing. With Archetypes, an import of 13k items into a fresh Plone 4.2 site took 61 minutes and the resulting Data.fs was 144 MB. With Dexterity, the same import took only 18 minutes and the resulting Data.fs was 64 MB. That’s a whopping 70% decrease in import time and a 55% decrease in Data.fs size!

More than enough reason to invest in rewriting our types and writing that migration script.

Raspberry PI boot to browser

Here at NiteoWeb, we use various SaaS monitoring and logging providers such as Librato Metrics and Papertrail to keep on top of our Plone and Pyramid projects. Hence the need for a wall-mounted screen to display the various graphs and outputs from these services. What better way to drive the screen than with a Raspberry Pi!

Getting the Raspberry Pi to boot into X and connect to the net was fairly trivial, just follow the official docs. However, getting the Pi to boot directly into a browser (also called “kiosk” mode) required some research. This is how I’ve done it in the end:

  1. Disable screen sleep — so the screen stays on
    $ sudo nano /etc/lightdm/lightdm.conf
    # add the following lines to the [SeatDefaults] section
    # don’t sleep the screen
    xserver-command=X -s 0 -dpms
  2. Hide cursor on inactivity
    $ sudo apt-get install unclutter
  3. Configure LXDE to start the Midori browser on login
    $ sudo nano /etc/xdg/lxsession/LXDE/autostart 
    # comment everything and add the following lines
    @xset s off
    @xset -dpms
    @xset s noblank
    @midori -e Fullscreen -a http://plone.org

That’s it! Reboot and you are done!

Convert z3c.form field desc. to tooltip

Let’s say you have a typical form with some input fields. By default, z3c.form displays a description (if provided in the form definition) above each form field, as in the screenshot below:

Original form

Sometimes however, you might want to make things a little bit different. Perhaps you want to save some screen space by hiding field descriptions, but still want to provide helpful additional information on form fields to your users. One way of achieving this is to display field descriptions as tooltip text:

Modified form with tooltips

Not bad.

And it’s easy too. Simply set the title attribute of a corresponding form field and you’re done.

I’ll show you how we did it for one of our clients. The form was called List Events, because, well, it lists upcoming events based on search criteria.

# file eventlist.py in browser directory
from z3c.form import form

class ListEventsForm(form.Form):
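    # (fields, label and form actions omitted for brevity)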

    def updateWidgets(self):
        """Move fields' descriptions to title attributes of HTML form elements."""
        super(ListEventsForm, self).updateWidgets()
        for name, widget in self.widgets.items():
            widget.title = widget.field.description

In the ListEventsForm class we override the updateWidgets method to iterate through all form widgets and copy each field’s description into the widget’s title attribute.
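The tooltip itself is just the browser displaying the HTML title attribute. A rendered widget ends up looking roughly like this (the field name is made up and the markup is simplified):

<!-- simplified output; the actual z3c.form markup carries more classes -->
<input type="text" id="form-widgets-keywords" name="form.widgets.keywords"
       class="text-widget" title="Filter events by keyword." value="" />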

Note that this makes the tooltips show up, but the field descriptions themselves are still visible. We hide them with the following CSS rule (“eventlist-form” being our form’s ID):

#eventlist-form div.formHelp {
    display: none;
}

And that’s all, folks!

How to change element’s ID with Diazo?

A common scenario: on your website all subpages share a common header, but you want a different header on the front page. Let’s say you differentiate between the two header versions by their ID attributes and define a different set of CSS rules for each version.

When applying Diazo rules to a theme file, you therefore need to change the header element’s ID, depending on whether the front page or one of the subpages was requested. Here’s a snippet from rules.xml which does exactly that:

<!-- change header's ID attribute -->
<prepend css:theme="#header-index">
    <xsl:attribute name="id">header-subpage</xsl:attribute>
</prepend>

The ID of the header as defined in the theme file (header-index) is changed to header-subpage after the rule above is applied.

The rule basically says “match the #header-index element in the theme file and apply some inline XSL to it”. The “xsl:attribute” tag sets the attribute named “id” on the current element (with “current element” being the #header-index element matched by the outer <prepend> tag). The content of the “xsl:attribute” tag becomes the new value of the ID attribute.
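For context, here is roughly how the snippet sits in rules.xml. Note that the XSL namespace has to be declared on the root <rules> element for <xsl:attribute> to work (the theme filename below is just an example, and a condition such as css:if-content can restrict the rule to certain pages):

<rules xmlns="http://namespaces.plone.org/diazo"
       xmlns:css="http://namespaces.plone.org/diazo/css"
       xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

    <theme href="index.html" />

    <!-- change header's ID attribute -->
    <prepend css:theme="#header-index">
        <xsl:attribute name="id">header-subpage</xsl:attribute>
    </prepend>

</rules>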

Nothing spectacular here indeed, but I lost quite some time trying to figure out how to do it in a simple yet effective way. Very frustrating for such a small task. Various suggestions found on Google simply didn’t work or were a bit too complicated (there must be an easier way to do it, right?).

So to help you avoid all that trouble, I decided to write this blog post. Too bad it didn’t exist before. 😉

Robot on Travis – uploading results to S3

This is a walkthrough of how to upload the screenshots and other output files that Robot Framework produces during a Travis CI build to Amazon S3. The reason we want to do this is to be able to inspect what Robot sees and have more information when a test fails. It’s written with some things specific to Plone development; nevertheless, it should still be useful for any other framework/language supported by Travis.

Preparing Amazon S3

  1. Go to http://aws.amazon.com/ and sign up & log in.
  2. Go to http://aws.amazon.com/s3/ and click “Sign Up Now” to enable Amazon S3 for your account.
  3. Go to https://console.aws.amazon.com/s3/home and click “Create Bucket” named “my-travis-builds” or something similar. Travis will upload screenshots inside this bucket.
  4. Go to https://console.aws.amazon.com/iam/home, click “Users” in the left navigation sidebar and then click the “Create New Users” button. Enter “travis” as the username and keep the “Generate an access key for each User” option checked. This is the user that Travis CI will use to upload files to your Amazon S3 account. When the user is created, click “Download Credentials” — we’ll need those credentials later.
  5. Now click on the “travis” user, select the “Permissions” tab and click “Attach User Policy”. Select “Custom Policy” and click “Select”. Enter “travis-upload-only” or similar as the Policy Name and paste the following into the Policy Document field:
      "Statement": [
          "Action": [
          "Effect": "Allow",
          "Resource": [
          "Action": [
          "Effect": "Allow",
          "Resource": [
          "Action": [
          "Effect": "Allow",
          "Resource": [

    This policy gives the “travis” user just the permissions needed to list buckets and upload files into the “my-travis-builds” bucket with s3cmd (if your s3cmd version needs additional actions, extend the policy accordingly). We are now ready to start uploading!


Preparing s3cmd

Travis will later use s3cmd to upload files to Amazon S3. Before moving on to configuring Travis, you need to add a “.s3cfg” file to your repository. This file configures s3cmd with access credentials. Open the “credentials.csv” you downloaded earlier when creating the “travis” user through Amazon IAM and paste the access and secret keys into “.s3cfg”. Commit & push.
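A minimal “.s3cfg” looks something like this (the two values below are just placeholders; use the keys from your credentials.csv):

[default]
access_key = AKIAIOSFODNN7EXAMPLE
secret_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY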



Configuring Travis CI

I’m assuming you already have “.travis.yml” in your repository and you are already running builds on Travis CI. If this is not the case, check out the Travis CI getting-started docs to get up to speed.

Then, if you haven’t yet, add the following two lines to “before_script” in your .travis.yml file to enable the virtual X framebuffer (Xvfb), so the Robot tests have a display to run against.

  - "export DISPLAY=:99.0"
  - "sh -e /etc/init.d/xvfb start"

Moving on, you’ll need “s3cmd” installed on your Travis VM, so add the following to your .travis.yml.

  - sudo apt-get install s3cmd

Now, as the last step, add the following line to “after_script” in your .travis.yml file. This uses the s3cmd installed above and the .s3cfg added in the previous section to upload screenshots created by Robot to the “my-travis-builds” bucket on S3, inside the #<travis_job_id> folder.

  - s3cmd put --acl-private --guess-mime-type --config=.s3cfg selenium-screenshot* s3://my-travis-builds/#$TRAVIS_JOB_ID/
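
Putting the pieces together, the relevant parts of .travis.yml end up looking something like this (which section the apt-get line goes into, “install” here versus “before_install” or “before_script”, depends on how your existing build is organized):

install:
  - sudo apt-get install s3cmd

before_script:
  - "export DISPLAY=:99.0"
  - "sh -e /etc/init.d/xvfb start"

after_script:
  - s3cmd put --acl-private --guess-mime-type --config=.s3cfg selenium-screenshot* s3://my-travis-builds/#$TRAVIS_JOB_ID/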

Now go back to https://console.aws.amazon.com/s3/home and bask in the glory of having Robot test screenshots in your S3 bucket!

Robot screenshots