Windows’ cmd is disgusting

After many years working under Linux, and less than a month with a Mac as my personal computer, I found myself in a Symfony2 training using Windows' cmd program. During the final part of the training we were learning MongoDB, a non-relational database. To follow the examples we needed to do many copy/paste operations, especially with document IDs, which in MongoDB are 12-byte ObjectIDs shown as 24 hexadecimal characters.

Working with the cmd tool is practically impossible. Copy/paste operations are so hard to do that I decided simply not to follow the examples. I'm sincerely disgusted.

But I'm still happy: I know that once I'm out of this great training I'll be back to my little black-on-white console, either at home or at the office.

BTW, thanks a lot to ManWin, who paid for the Symfony2 training, and to Hugo Hamon, who did a great job during it. We programmers tend to be very proud and to reject other people's ideas, so training developers is not a simple task, and he did an excellent job with us.

SEO Tips: Zero Result Pages

Search Engine Optimization is one of the musts for a website. There are many ways to maintain good SEO, like the ones we all know: only one H1 per document, good keywording, etc.

For websites offering a search feature, it's good to show some links to common or important searches. The typical "cloud of tags" is a good example: it provides links to pages with search results.

But what happens when we don't have any results on such pages? Let's say we have a tag "expired" grouping all the documents, products or posts that we consider expired but still want to keep available to users, so the content stays online. In this case a search for "expired" will return documents. One good day we decide not to show those results anymore, so the page /search/expired, which was already indexed by all the search engines, will now simply show "Sorry, but there are no results for your query".

To solve that we have an excellent tool at hand. Instead of returning a "200 OK" status, we'll return "404 Not Found", but we'll still render a page saying "There are no results, but here are a lot of suggestions". We offer different options, but our page won't be indexed by Google anymore.

That's it: when no results are found for an indexed search URL, that is, when Google tries to crawl a ZRP (Zero Result Page), you should reply with a list of suggestions and a nice "404 Not Found" header:

header("HTTP/1.0 404 Not Found");

This way, no search engine will keep that page in its index.

Local Storage

In the good old days we used to do many things to persist form information between pages. All of us have filled in, at least once, a form to apply for a job, and we all needed to complete that enormous list of fields: education, previous experience, etc. Imagine what could happen if for some reason (Windows?) the browser crashed and all the form information was lost. That could be a good reason simply not to look for a job, couldn't it? So, as I was saying, in the good old days pages with many fields were split into two or more pages: first you fill in your email, username, password, first and last name; submit that form; and on the second page you enter your age, your gender, your birth date, etc. How were the values persisted on the second page? There was a set of options:

* Cookies: storage on the client side. Every request to the domain includes these cookies, which means your browser requests contain much more information than needed.
* Sessions: information stored on the server side and retrieved by a session ID, which is a cookie on the client side. On each request your browser sends only the session ID, and the rest of the information is fetched on the server.
* Hidden fields: the first page contains a form with, as said before, email, username, password, first and last name. The second page contains the input boxes for age, gender, birth date, etc., but also a set of hidden fields carrying the values for email, username and so on. When the second form is submitted, all the fields are sent to the server.

All these options are valid, still work and can be used; each one has advantages and disadvantages. But what I want to introduce now is the LocalStorage feature provided by HTML5. LocalStorage is not cookie based, is not session based and is not part of the form. LocalStorage is just, as the name says, local storage of information on the client side, and it is not sent to the web server with the requests. So, for example, I start writing an email and the browser crashes. When I open the same page again, I still have the form exactly as it was before the crash.
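As a sketch of that crash-recovery idea (the names saveDraft, loadDraft and the key jobFormDraft are mine, not any standard API), a page could serialize its form values on every change and restore them on load. The in-memory fallback only exists so the snippet also runs outside a browser:

```javascript
// Use the real localStorage when available, otherwise a tiny in-memory
// stand-in with the same three methods, so this sketch runs anywhere.
const storage = (typeof localStorage !== 'undefined') ? localStorage : (() => {
  const data = {};
  return {
    setItem: (key, value) => { data[key] = String(value); },
    getItem: (key) => (key in data ? data[key] : null),
    removeItem: (key) => { delete data[key]; }
  };
})();

// Save the current form values (a plain object) as one JSON string.
function saveDraft(fields) {
  storage.setItem('jobFormDraft', JSON.stringify(fields));
}

// Restore them after a crash; returns null when there is no draft.
function loadDraft() {
  const raw = storage.getItem('jobFormDraft');
  return raw === null ? null : JSON.parse(raw);
}

saveDraft({ email: 'me@example.com', firstName: 'Nico' });
const draft = loadDraft(); // the same object we saved, survived the "crash"
```

In a real page you would call saveDraft from the form's input events and loadDraft once on page load to refill the fields.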

LocalStorage works like a JS array and can only store strings in each key. This means that you cannot store an array in a key unless you serialize it. Numbers are stored as strings as well, so you must parseInt() or parseFloat() them back. You can also iterate over the elements of the localStorage object with the help of the classic .length property. Also, content in LocalStorage never expires; it remains in the browser until you clear your private data.

You can see an example at http://www.nmac.com.ar/examples/localstorage.php; but basically you can operate on it in any of these ways:

Set a value
localStorage['country1'] = 'Canada';
localStorage.setItem('country2', 'Canada');
localStorage['cities'] = JSON.stringify(['Montreal', 'Toronto', 'Vancouver']); // We can not store an array, but we can store a string

Get a value
country1 = localStorage['country1'];
country2 = localStorage.getItem('country2');
cities = JSON.parse(localStorage['cities']); // Parse the stored string back into an array
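The same goes for numbers: everything comes back as a string, so you need an explicit conversion. A small sketch (again, the in-memory fallback is only there so it runs outside a browser):

```javascript
// localStorage values always come back as strings, even when you
// stored a number, so convert them back with parseInt/parseFloat.
const store = (typeof localStorage !== 'undefined') ? localStorage : (() => {
  const data = {};
  return {
    setItem: (key, value) => { data[key] = String(value); },
    getItem: (key) => (key in data ? data[key] : null)
  };
})();

store.setItem('year', 2011);        // stored as the string "2011"
const raw = store.getItem('year');  // typeof raw is "string", not "number"
const year = parseInt(raw, 10);     // back to the number 2011
```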

Just remember that before using localStorage you should check its availability; I suggest using Modernizr to do so.
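If you prefer not to pull in Modernizr, a common hand-rolled check (a sketch of the usual pattern, not Modernizr's exact code) is to actually try a write, since some browsers expose localStorage but throw on setItem in private browsing:

```javascript
// Returns true when localStorage exists and is actually writable.
function localStorageAvailable() {
  try {
    const testKey = '__storage_test__';
    localStorage.setItem(testKey, 'ok');
    localStorage.removeItem(testKey);
    return true;
  } catch (e) {
    // localStorage missing, disabled, or write refused
    return false;
  }
}

if (localStorageAvailable()) {
  // safe to use localStorage here
}
```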

Links:
Reference: http://dev.w3.org/html5/webstorage/
Detailed and friendly description: http://diveintohtml5.org/storage.html

Arrived in Montréal

I started to work for ManWin Canada, my first work experience in a foreign country. ManWin is a big company which runs first-line websites such as Brazzers.com, the Playboy of the 21st century. As stated in my CV, I'm working (so far) with PHP, Zend Framework, PHPUnit, Phing and many other common tools such as SVN, shell scripting and MySQL. Currently I'm mostly working on an internal tool which will become the official deployment tool. This application will let developers notify QA (Quality Assurance) about new versions of each project, and QA will be able to approve or reject each revision based on their tests. Once the application is approved by QA, the tool can deploy it to the live servers and notify the email addresses specified in the project details. Of course many tasks run while deploying: PHPUnit tests, syntax checking and rsyncing are the most common.

So here's the small welcome post on the ManWin blog which includes my name: http://blog.manwin.com/?p=1332

The complete and up-to-date list of projects and languages I use is in my on-line CV.

Universal Music France launch

After a lot of time reading, rewriting, planning and, more than anything, coding, we finally got the new and better-than-ever Universal Music France launched. June 21st is World Music Day, and that date was selected to launch the website.

This version includes a new design that keeps the V1 version in mind (available for a few more hours at http://www.alloclips.com); a new search engine, now based on Exalead; Facebook Login; news about the user's favorite UM artists; and it is much faster than the previous version.

As a member of the team that made this possible, I'm very glad to have worked with such people. So many thanks to Sébastien Borget from Ipercast, and to Gastón Musante and Pablo and Juan Grandinetti from GotVertigo, who made a lot of effort to achieve this goal.