• Why unpaid corporate hack days are a bad idea

    August 07, 2014

    There has been a lot of discussion about corporate hack days, prompted by the Unilever hack day called Re.Hack with the tagline “reinventing Commerce”.

    I know people working for design agencies who do spec (speculative) work. It’s risky and disheartening, but impossible to avoid for some companies. One person I spoke to a long time ago used to spend two thirds of their time doing spec work, and win only a third of the contracts. Not a desirable position to be in if you have several employees.

    For a freelancer, attending a two day event is not as damaging as putting a whole agency on a project for a week, but that time still costs money in missed opportunities.

    I’ve heard of day rates in tech from as low as £200 per day to well over £1500. Two days not working for other clients works out at anything between £400 and £3000+. That’s not including any time spent working in the evening or even through the night.

    3beards, the organisers of Re.Hack, pointed out in their blog post that intellectual property rights remain with the participants, that they all have a chance to win prizes, and that a commercial contract with Unilever is a possibility. I don’t think this is enough: simply retaining ownership of what you produce is not adequate payment for consultancy services. Nor is a chance to win a Fitbit.

    I don’t mean to pick on this one event; it’s far from the worst I’ve seen. But it’s a good recent example of a multinational corporation being either misguided or downright cynical in their approach to generating new ideas for as little as possible.

    The Cadbury Olympic hack day springs to mind. It generated so much vocal criticism that the rules of the event were changed to highlight the benefits of participation, mainly through stipulating what prizes could be won and that participants would retain the IP.

    And can you remember the one where you had to pay cash money (i.e. buy a ticket) to do some free work for McDonald’s (event is now password protected)?

    Big companies often cite the freshness of the ideas coming from hack days as the primary reason for hosting one, and organising that kind of event is a perfectly valid way to explore some challenges. However, there’s no reason for these events to be exploitative to achieve their aims.

    One approach is to organise internal hack days, where people who already have domain knowledge, or people from other departments, can use their strengths to bring about new ways of thinking. They don’t have to work for free, you keep all the IP, and your costs are low. There are additional benefits: tightening up of existing internal networks, creation of new ones, generation of new interests and enthusiasm, and the opportunity to try out and learn new things. All those benefits are retained within the organisation. It’s such a successful model that even the government organises events like that.

    Another one is simply paying people for their time. The BBC runs an event called Connected Studio (which I have participated in) where groups are given a subject area. Then they have some time to develop and pitch ideas. Interesting ones may be further developed in conjunction with the BBC and potentially commissioned. All participants are paid a fixed day rate for their work.

    I have been to hack days of both kinds, and some that mixed the two approaches: bringing in professionals to an internal hack day can bring some new ideas and approaches and encourage collaboration. I have a very positive view of what was accomplished during the events that I have attended.

    But organising a hack day is not the only way to get similar results. You can, for example, make multiple small commissions. When the Royal Shakespeare Theatre wanted to see lots of small projects developed for the World Shakespeare Festival, they commissioned a number of tiny projects from technologists, artists and students. Instead of everything happening at a single event, the projects were developed over a longer period and provided many interesting ways to think about Shakespeare’s work and influence. Thayer Prime suggests a similar approach in her blog post too:

    As well as paid hack days, companies/brands could consider another professional alternative: do prototyping days with developers who can create apps and products for you in quick time frames at relatively low cost compared to fully developed projects, so you can fail fast on stuff you don’t like and move forward with the innovations you like.

    Because there are so many ways to both spend little money and source a variety of ideas, it is really disappointing when companies think they can get away with soliciting work for a promise of a chance to win something, exposure, and so on. For a company like Unilever, which made €5.3bn in profit just last year, it smacks of cynical opportunism. They clearly value the work so little that they are unwilling to commit to paying a fair price for it. Either that or it has become normal to assume that developers have so much spare time and expertise that they will produce original ideas (or, as the Unilever event page puts it, “Ideas need to be 100% original and not based on existing products”) for some beers and pizza. I doubt the event hosts, organisers and judges are working for the same remuneration.

    Many people in the industry are passionate enough about the things they care about that they will give up their weekends to tackle some challenges, or think about problems and possible solutions, and I’m not criticising that. NHS Hackdays regularly bring together people who want to help solve real problems faced by the health system. Other events often attract large numbers of participants, dealing with questions of privacy, civic involvement, communication of complex issues, collaboratively learning something, making music, and so on. Participants are unpaid, but they get some clear value out of those events: meeting new people, learning new things, thinking about challenges that interest them. These are all great motivations, as long as they are not exploited by for-profit companies to generate more value for their shareholders at the cost of participants’ expertise and spare time.

  • Jerusalem

    August 05, 2014

    I don’t often have the opportunity to play with the latest browser technologies in client projects, unless they’re quick prototypes. When Artangel approached me about a project bringing Paul Pfeiffer’s work to the Space, I realised that it would be best done using the Web Audio API. At the time, the Web Audio API was so new that on the day I began building the prototype, Mozilla shipped a way of inspecting the audio nodes in Firefox Aurora for the first time.

    Paul Pfeiffer’s work is based around footage from the 1966 World Cup match between England and West Germany, from which most of the players were algorithmically removed in cooperation with machine vision expert Brian Fulkerson. Jerusalem isn’t the first piece in which Paul Pfeiffer erases elements from iconic imagery, but it’s the first to be presented exclusively online.

    Jerusalem blends the archival, reworked footage with audio and video clips that bring in context from 1966 and link it to other themes present in Paul’s work.

    In technical terms, it’s a media player built in JavaScript that plays the main track forever, and allows playback of additional audio and video clips over the top.
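
    In rough terms, the core of that player could be sketched like this (a minimal sketch only; the element id and the loopCount variable are illustrative names, not the production code):

    // Keep the main track playing forever, and count how many times it has
    // looped so the rules described below can refer to the play-through number.
    var main = document.querySelector('#main-video'); // hypothetical id
    var loopCount = 1;

    main.addEventListener('ended', function () {
      loopCount += 1;
      main.currentTime = 0;
      main.play();
    });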

    When I began working on the first prototype there were a few ideas about when the audio tracks should become available to play. I decided to implement simple rulesets which could be combined to express complex relationships between each track and the main video. For example, some tracks might appear only on the second play-through, but only to 40% of viewers, and only if the viewer had already heard at least two of the other tracks. This was intended to give the work a bit of unpredictability and make it seem a little different every time it’s viewed.

    Rules were expressed as data attributes on each additional audio and video track, meaning that combining or changing them was trivial to do. It also meant that adding new types of rules was straightforward.
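
    To illustrate, reading and combining such rules might look roughly like the sketch below. Only data-on-loops appears in the actual markup (see the example further down); data-audience-fraction and data-min-tracks-heard are hypothetical attributes invented here to stand in for rules like “40% of viewers” and “at least two other tracks heard”.

    // Rolled once per page view, so a "fraction of viewers" rule stays stable
    // for the duration of a visit.
    var audienceRoll = Math.random();

    function trackRules(el) {
      return [
        function onLoops(state) {
          var loops = el.dataset.onLoops;
          return !loops || state.loopCount === parseInt(loops, 10);
        },
        function audienceFraction() {
          var fraction = el.dataset.audienceFraction; // hypothetical attribute
          return !fraction || audienceRoll < parseFloat(fraction);
        },
        function minTracksHeard(state) {
          var min = el.dataset.minTracksHeard; // hypothetical attribute
          return !min || state.tracksHeard >= parseInt(min, 10);
        }
      ];
    }

    // A track only becomes available when every rule attached to it passes.
    function isAvailable(el, state) {
      return trackRules(el).every(function (rule) {
        return rule(state);
      });
    }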

    As the work began taking shape, most of my initial assumptions about the rules were abandoned because the resulting experience felt too confusing. Instead, we decided to present the viewer with an interface to play each additional audio track after thirty seconds. To help viewers understand what the interface represents, some of the audio tracks play for a set duration at certain times, highlighting the UI elements - little previews, if you like. Again, that information is communicated through rules in the markup. Here’s an example:

    <audio id="bees" preload="auto" data-offset="30" data-preview-length="19" data-on-loops="1">
      <!-- sources -->
    </audio>
    

    This track plays 30 seconds into the main track, lasts 19 seconds and only appears on the first play of the main track.
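
    As a rough sketch of how those attributes might drive such a preview (main and loopCount are the looping video element and play-through counter from the earlier sketch; the other names are mine, not the production code):

    var bees = document.querySelector('#bees');
    var offset = parseFloat(bees.dataset.offset);               // 30 seconds in
    var previewLength = parseFloat(bees.dataset.previewLength); // 19 seconds long
    var previewPlayed = false;

    main.addEventListener('timeupdate', function () {
      // Only preview on the loop requested by data-on-loops ("1" here).
      if (loopCount !== parseInt(bees.dataset.onLoops, 10)) { return; }

      if (!previewPlayed && main.currentTime >= offset) {
        previewPlayed = true;
        bees.play();
        setTimeout(function () { bees.pause(); }, previewLength * 1000);
      }
    });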

    There are also video interruptions, over which the viewer has no control. These are timed using information specified in the markup.

    Specifying rules this way turned out to be very flexible and accommodated many of the changes made since the initial prototype. Even though I threw out a bunch of the sample rules I dreamed up at the beginning, the structure for defining and applying them made it easy to expand and develop the work.

    Only very light browser feature detection is done, mainly to check for HTML5 audio and video support. If it’s not possible to play media natively, the work cannot be viewed. If it is, then depending on Web Audio API availability the viewer gets either a basic or a slightly richer experience.
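
    The detection amounts to something along these lines (a sketch only; the exact checks in the real code may differ, and showUnsupportedMessage, initRicherExperience and initBasicExperience are hypothetical names for the outcomes described above):

    var audio = document.createElement('audio');
    var video = document.createElement('video');
    var hasMedia = !!(audio.canPlayType && video.canPlayType);

    var AC = window.AudioContext || window.webkitAudioContext;

    if (!hasMedia) {
      showUnsupportedMessage();       // no native media: the work cannot be viewed
    } else if (AC) {
      initRicherExperience(new AC()); // Web Audio API available
    } else {
      initBasicExperience();          // HTML5 audio and video only
    }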

    In the basic version the audio tracks are crossfaded using just the volume property on the audio elements. When the Web Audio API is available, a subtle low pass filter is also applied during the crossfade and to the main track, which is always audible in the background, making it sound a little muffled.
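
    Roughly, the two paths look like this (a sketch with illustrative values rather than the exact production node graph):

    // Basic version: fade by nudging the media elements' volume directly.
    function crossfadeStep(fromEl, toEl, step) {
      fromEl.volume = Math.max(0, fromEl.volume - step);
      toEl.volume = Math.min(1, toEl.volume + step);
    }

    // Web Audio version: route the main track through a low pass filter,
    // which makes it sound muffled while another clip plays over the top.
    function muffle(context, mediaElement) {
      var source = context.createMediaElementSource(mediaElement);
      var filter = context.createBiquadFilter();
      filter.type = 'lowpass';
      filter.frequency.value = 800; // illustrative cutoff in Hz
      source.connect(filter);
      filter.connect(context.destination);
      return filter;
    }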

    AudioParam, part of the Web Audio API, allows you to do a bunch of really cool things: for example, you can ramp the value of an effect to a specific target at a specific time, either linearly or exponentially, which gives fine-grained control over the sound. Though Jerusalem is relatively simple when it comes to audio manipulation, I can see how powerful the new features would be in the hands of someone who really knew what they were doing.
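
    For example, assuming the context and filter from the sketch above, the low pass cutoff could be swept back up over two seconds like this:

    var now = context.currentTime;
    filter.frequency.setValueAtTime(800, now);
    filter.frequency.linearRampToValueAtTime(12000, now + 2);
    // or exponentially (the target value must be greater than zero):
    // filter.frequency.exponentialRampToValueAtTime(12000, now + 2);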

    I don’t know very much about audio, so it’s hard for me to imagine just what is possible with the new browser capabilities, but Chris Lowis showcases interesting projects and new Web Audio API features in his weekly newsletter.

  • The Startup Game

    July 16, 2014

    Stories of startups are most often told using superlatives. These are stories of founders dedicating themselves to a single idea, prepared to risk it all for a chance of success. Teams putting in extraordinary effort: sweat, late nights, and swapping any social life outside of work for obsessively improving their products.

    These are always heroic tales of turning talent and youth into success and money. They are so pervasive that in some circles it is automatically assumed that your goal is to either have your own startup eventually, or be a part of one destined to make you rich.

    Even outside of the tech bubble many believe the hype. The government is happy to claim to have played a part in the success of Silicon Roundabout, whatever that success may be. Young people skilled at programming are hailed as “the next Zuckerberg”, their companies as “the next Google”.

    The sad truth is that most startups fail, but those stories are only ever told as post-mortems: we did this, and it hurt us in that way. We ran out of money. We couldn’t scale.

    As Nikki Durkin points out, “the startup press glorify hardship”. As a founder, you’re expected to put on a brave face.

    Ask any founder how they’re doing and you’ll hear something positive. Whether that’s the truth or not, that’s what we’re trained to say.

    I found postmortems of startups outlining what didn’t work and why the company went under, but I was hard pressed to find anything that talked about the emotional side of failure — how it actually feels to invest many years of your life and your blood, sweat and tears, only for your startup to fall head first off a cliff.

    Of the personal stories my friends tell, most are not the happy ones of hardship, survival and success. Mostly they’re stories of workplaces that, in the name of innovation and moving fast, throw out healthy working practices. I’ve heard of startups doing “agile” by having three stand-ups a day. Ones which insisted they were still too small for HR departments despite complaints. Companies where managers were so scared of losing control that they had to micromanage everyone below them. Places which enforced permanent “crunch time”. Stories of unfair dismissals, unforeseen firings, and burnout resulting in months off work.

    Anecdotally at least, those stories seem to be just as common as the success stories, but they’re not told as often, though they should be. That’s why I made the Startup Game (mirror here).

  • Goodbye Flickr

    June 01, 2014

    Last year I felt an itch to start saving my digital records from obliteration the next time a service I use gets shut down or sold. For ages I’ve been thinking of dipping a toe into the world of hosting all my own data. parallel-flickr by Aaron Straup Cope sounded like a good project to try.

    parallel-flickr is a really thoughtful piece of software. You still put your photos on Flickr, and keep up with your friends there (or rather, you used to, when your friends were still using it), but on your own server you keep a replica, a copy that keeps track of your contacts, favourites, and permissions. You have to sign into it using your Flickr account and it will know who can see which photos.

    As the readme states:

    parallel-flickr is not a replacement for Flickr. It is an effort to investigate – in working code – what it means to create an archive of a service as a living, breathing “shadow” copy rather than a snapshot frozen in time.

    Aaron wrote up a talk he gave about it, which is worth reading in full.

    I thought that having my own parallel copy of each service I use — Foursquare, Instagram, Flickr, Twitter — would be a good way to preserve my data without hassle, and if I ever decide to delete my account, or for some other reason can no longer use it, at least I’d already have a complete backup in place.

    The moment when parallel-flickr came in handy happened sooner than I anticipated.

    I created my main account in 2010, but have another two which date back to 2008 and 2006. Flickr lets you personalise your URL, so that’s what I did, every time, and I’ve grown to regret it. The URL on my main account has a name in it that I don’t go by anymore. It annoys me because it denotes gender that doesn’t match mine, so I want to change it.

    But Flickr wasn’t built with that kind of flexibility in mind. You can change your username, and all the other details, but not the URLs. Once you choose one, that’s it, game over. Permanence of the URLs is more important than your comfort, your life, your future choices and changes you cannot anticipate.

    There are a million reasons why you’d want to change the URL other people find you by. Perhaps it was funny 7 years ago when you called yourself grandma_disco_fever but now you no longer do. Maybe mother_of_three was a good moniker a while back, but now you need to increment that number. What if you called yourself wife-of-someone but you’ve since divorced? Maybe these things don’t immediately spring to mind in the first year of the service’s existence, but once you’ve spent many years building an archive, full of connections and relationships, it becomes hard to leave it behind just so you don’t have to be a prisoner of your past URL choice.

    We don’t arrive in the world as fully formed, unchangeable entities. We’re not finished and permanent, and when dealing with URLs of people it seems deliberately harsh to demand permanence. Especially when the HTTP spec accommodates redirection to new ones.

    So, Flickr, I think this is goodbye.

  • Chrome obfuscates the URLs, Google benefits

    May 05, 2014

    Chrome Canary introduced a new feature which obfuscates the website’s URL.

    A member of the Chrome team mentions that “the whole point is to prevent phishing”.

    Here’s a screenshot from Canary in which the feature is enabled:

    (Image: Chrome Canary and its URL hiding feature)

    You’re viewing a specific blog post, but you wouldn’t know it.

    The only way to get the URL is to click on the domain name. When you hover over it you get a hint that it’s clickable, but otherwise it’s not obvious.

    (Image: Accessing the URL in Chrome Canary)

    Clicking the omnibar hides the domain name entirely and prompts you to search for a phrase or enter another URL.

    So who benefits from this?

    It’s not the people using the browser; it’s the search engine makers and big social networks. The ones so well-established that people willingly put their share buttons on their own websites.

    Let me repeat that: the parties benefitting from this change are the search engine vendors and big, monolithic social networks that strive to create walled gardens.

    If this approach becomes the default, people visiting sites will be unaware of where they are. To share interesting things on the web they will have to use the actions provided by the site maker. Often these are reduced to sharing on Twitter and Facebook. Gone are the days when people were offered more than a handful of options. Any new players on the market, any new bookmarking and sharing tools, will simply not stand a chance, because adoption of their sharing buttons will depend entirely on how big they already are.

    (Image: A huge slew of sharing and bookmarking tools)

    Beyond the options provided by each individual site creator, sharing will be limited to power users and those who understand what URLs are (and how to find them). That group will get progressively smaller as URLs become unfamiliar and are rarely encountered in their full form.

    Marketing campaigns already frequently suggest searching for a specific phrase instead of (or alongside) providing a URL. If you can’t simply point to a URL, then you have to maintain first position in the search results of all dominant search engines to help people find your site. The search engine becomes the necessary—and only—mediator of this interaction, a buffer zone, a lobby, with total control over who gets access to the thing you made. It’s no surprise really that it was the Google Chrome team who came up with this.

    There’s been much debate about whether the URLs are ‘ugly’ or ‘beautiful’ and whether people really understand them. This debate misses the point.

    The URLs are the cornerstone of the interconnected, decentralised web. Removing the URLs from the browser is an attempt to expand and consolidate centralised power.

    If you really care about usability, there are better ways of highlighting the domain without obfuscating the URL entirely. As Josh Emerson points out, IE has been doing this for a while:

    (Image: IE already highlights the domain part of the URL)

    This is the solution that Remy Sharp is proposing. Highlight the key part of the URL and truncate the rest if necessary, but keep it visible, helping the person viewing the site to see where they really are.