Gnome Shell Overview with a mouse click on Ubuntu

A few weeks ago I bought a new keyboard and mouse combo, the Logitech MK710 keyboard with the Logitech M705 optical mouse. It is a tremendously nice combination, so much so that I actually bought a second combo to use at home.

Mouse Logitech M705 in Ubuntu, using all buttons

As a Gnome Shell user, I wanted to take advantage of the side button the mouse has. It’s that little line you see in the image above. So my idea was to trigger the Overview mode with this button, as if the Windows (or Super) key had been pressed.

Gnome Shell Overview

So, how to do this under Ubuntu 13.10? Essentially by binding the button to a command, and making that command “press a key”.

First, install the following packages: xbindkeys xautomation
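Both packages are in the official Ubuntu repositories, so something like this should be enough:

sudo apt-get install xbindkeys xautomation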

Then, use the xev command to find the number of the mouse button. Run xev from a terminal (yes, it needs to be run from a terminal) and a little white box will appear. Put the mouse pointer inside the box, press the button and take note of the value. In the terminal you’ll see some output similar to this:

ButtonPress event, serial 33, synthetic NO, window 0x3c00001,
root 0x28d, subw 0x0, time 89142314, (89,93), root:(2011,171),
state 0x10, button 10, same_screen YES

ButtonRelease event, serial 33, synthetic NO, window 0x3c00001,
root 0x28d, subw 0x0, time 89142454, (89,93), root:(2011,171),
state 0x10, button 10, same_screen YES

What matters is the “button 10” part.

Now we need to bind “button 10” to a key. This cannot be done directly. What we can do is bind the button to a command, that is, execute a command when the button is pressed; then we make that command a “send key” command.

To test it, first type this in the console

xte 'key Super_L'; sleep 1; xte 'key Super_L';

You should see the Overview mode, and after one second it will revert back. If this works, we can now bind the button. We’re going to use xbindkeys for this. If there’s no configuration file yet -most likely that’s your case- you can create a default one by typing

xbindkeys --defaults > $HOME/.xbindkeysrc

Then edit this file with your favorite editor, and add this

"xte 'key Super_L'"
b:10+release

“b:10” is the button number you found before. The +release makes the command trigger when the button is released rather than when it’s pressed.
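To try the new binding right away, without logging out, you can (re)start xbindkeys by hand:

killall xbindkeys 2>/dev/null; xbindkeys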

Last, you’ll want xbindkeys to be executed at every login. For this, you can create an entry in the ~/.config/autostart folder named “xbindkeys.desktop” with the following content:

[Desktop Entry]
Name=XBindKeys
Comment=XBindKeys
Exec=/usr/bin/xbindkeys
Icon=solaar
StartupNotify=false
Terminal=false
Type=Application
X-GNOME-Autostart-enabled=true

Now you can log out, log back in, and enjoy your new button!


BitTorrent Sync, a geeky Dropbox alternative, much better!

This tool is a bit old already, but I guess it’s worth mentioning:

http://www.extremetech.com/computing/161584-bittorrent-sync-released-the-secure-cloud-avoiding-sync-tool-youve-been-waiting-for

It’s about synchronizing files across many computers using BitTorrent. You can think of it as a Dropbox alternative, with the following differences:

1-No limits; the limit is your hard drive space. As of now, I have more than 40GB synchronized across all my computers (music, personal documents and part of the /etc/ directories).

2-It does not need the cloud. If you have computers in a LAN and no access to the internet, they will sync internally at amazing speed.

3-BitTorrent based, so as soon as a computer has a part of the file, it can become a source for that part of the file in the network.

4-Again, it does not need the cloud. The files are never stored on a remote third-party service like Dropbox, Ubuntu One, Google Drive and so on.

5-Keeps up to 30 revisions of the files

It is not open source, but the application is completely free to download and use. The Free Software Foundation has even listed getting an open source replacement for it as a priority. And last month the developers made the API public, allowing developers to build GUIs to manage the service; it is available at http://www.bittorrent.com/sync/developers

Happy syncing!

Ubuntu: solve the Broadcom STA slowdown on battery power

I found myself forced to replace a fairly new -nine months old- D-Link router, model DIR-835, because all its Ethernet ports stopped working. I will never buy another D-Link device, ever. Well, past that point, I bought a Linksys N750 router, even being aware of all the security concerns the Cisco Cloud software has raised.

After doing the setup and everything, I found that all YouTube videos on my MacBook 6,2 running Ubuntu Raring 13.04 were loading painfully slowly. I, of course, blamed the new router, which received a good amount of resets, setting changes and so on. I tried plugging in the Ethernet cable and testing the speed that way, only to find it was working great.

After a good while of testing, the laptop’s battery drained. So I plugged in the charger -while running a speed test- and saw the download speed increase the moment I plugged it in.

I said to myself “no, this cannot be true” and kept plugging and unplugging the charger, just to watch the speed go up and down accordingly. Then I issued the iwconfig eth1 command, and here’s what I got:

With charger:

eth1 IEEE 802.11abgn ESSID:"JuanRomanV2"
Mode:Managed Channel:38 Access Point: C8:D7:19:21:3F:E0
Bit Rate=162 Mb/s Tx-Power:24 dBm
Retry min limit:7 RTS thr:off Fragment thr:off
Power Management:off
Link Quality=5/5 Signal level=-56 dBm Noise level=-88 dBm
Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
Tx excessive retries:0 Invalid misc:0 Missed beacon:0

Without charger:

eth1 IEEE 802.11abgn ESSID:"JuanRomanV2"
Mode:Managed Channel:38 Access Point: C8:D7:19:21:3F:E0
Bit Rate=243 Mb/s Tx-Power:24 dBm
Retry min limit:7 RTS thr:off Fragment thr:off
Power Management mode:All packets received
Link Quality=4/5 Signal level=-59 dBm Noise level=-88 dBm
Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
Tx excessive retries:0 Invalid misc:0 Missed beacon:0

Oh… so this nice card has a “Power Management” feature that, in order to manage the power, manages to make me feel like I’m back in the 90s with those dial-up (DUN) connections, huh?

I investigated the issue and found that Broadcom included this feature in the recent Broadcom STA Linux driver for the BCM43224 -and possibly many more models. On my other laptop, a Dell Vostro 3360 also running Raring, I also have a Broadcom wireless card and it doesn’t seem to have this problem.

So, how to solve this?

It seems there’s a daemon watching the battery status and changing the wireless card settings accordingly. After some tries, my solution was to add this content to the /etc/pm/power.d/wireless file:

#!/bin/sh
# Give the power-management daemon time to apply its own change first
sleep 2
# Then turn wireless power management off on the Broadcom interface
/sbin/iwconfig eth1 power off
exit 0

Why the sleep? Because somewhere there’s another daemon taking care of the change. With the sleep 2, I give the other daemon enough time to be happy, to think its change has been done and to quit; then my script saves the day by letting the card eat all the battery power it wants. But well, at least YouTube works decently.
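One detail: as far as I remember, pm-utils only runs the hooks in /etc/pm/power.d/ that are marked as executable, so make sure the file is:

sudo chmod +x /etc/pm/power.d/wireless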

Last, sorry Linksys! I blamed you hard for nothing…

Gnome Shell 3.8: how to move the title bar buttons to the left?

There are a couple of simple methods. For the easiest one, in a Terminal, type:

gsettings set org.gnome.shell.overrides button-layout "close"

The alternative is with dconf-tools. Install dconf-tools with

sudo apt-get install dconf-tools

Then start dconf-editor by typing dconf-editor, and change org > gnome > shell > overrides > button-layout to

close,minimize,maximize:
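Everything before the colon goes on the left side of the title bar, everything after it on the right. You can double-check the current value from a terminal with:

gsettings get org.gnome.shell.overrides button-layout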


Oh, le goluboi

Nowadays anybody who knows how to open Notepad and manages to save a file as .php puts “PHP developer” on his or her CV. I’ve seen a lot of that.


Here I’m bringing a real example of code I found in one of my projects, written by a junior pretend-to-be-a-senior developer who, luckily for the health of the company, was moved to one of the lowest-priority projects across the board.

See for yourself the original code and the fixed version.

doubtful quality version

Now, what are the problems with this code?

  1. Variable name inconsistency. You have $key_name right next to $outputFields. Why use both camelCase and underscored variable names in the same code, picking one or the other at random?
  2. The value to be returned is in the variable $tmp_objects. The main reason the function exists, the reason it was written, is to return a value, and you call that value temporary? If it is the return value, it is not temporary. It should be named $result, $return, $output; whatever, even $charles or $william; but never $tmp_objects.
  3. Does the developer need to wait until the end of the function to return an empty array? Isn’t it better to return the value right away when we know nothing will change, before loading the configuration and the additional modules? See the sketch right after this list.
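To make it concrete without pasting the original here, this is an invented miniature of the same pattern next to how I’d rather see it written (the names are made up, not taken from the real code):

<?php
// Invented miniature of the original pattern (not the real code)
function extractValues_bad($rows, $key_name) {
	$tmp_objects = array();
	if (!empty($rows)) {
		foreach ($rows as $row) {
			$tmp_objects[] = $row[$key_name]; // $key_name here, camelCase elsewhere
		}
	}
	return $tmp_objects; // the whole point of the function is... "temporary"?
}

// Same logic: return early, keep one naming style, and name the result honestly
function extractValues($rows, $keyName) {
	if (empty($rows)) {
		return array();
	}
	$result = array();
	foreach ($rows as $row) {
		$result[] = $row[$keyName];
	}
	return $result;
}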


Here’s my untested version of the code, with some rough comments. Not the best thing in the world, not a big change -it’s not my job to fix crappy code; my job is to show my team what kinds of things should be avoided.

decent version


So, to conclude, please follow these simple rules:

  1. Return the value as soon as possible, for readability reasons.
  2. Name methods, attributes and variables in a consistent and descriptive way; avoid_doing_this and then doing somethingLikeThis.
  3. Name the return variable descriptively; you should know, across the whole function, what you are going to return.


Leaky bucket algorithm implemented on Redis and PHP

One of the biggest problems of web development is programmers who are not fully aware of the infrastructure they work with. An average PHP developer will not know the limitations the infrastructure has, the maximum number of connections the web site can handle, and so on. The PHP developer just programs in PHP, refreshes the page, runs the unit tests, and if everything is OK, considers the work finished. However, once the application is deployed, everything starts to depend on the actual infrastructure.

In many cases the pages will not be ready to handle an excessive amount of traffic, and will noticeably degrade service quality under heavy -and not so heavy- loads. For example, with APIs heavily used by external developers -who may not be aware of what caching means-, or when the site is hit by scrapers who want to populate their site with your data, or by a botnet.

This is a recurrent problem at my office, so I proposed to start using the well-known leaky bucket system to protect our sites from scrapers. My manager liked the initiative, but he wanted to apply it at the infrastructure level and not at the application level. Even though I strongly agree with that as a definitive solution, I think there’s no reason to implement the whole thing at the infrastructure level without knowing how bad the current situation is. What would we say? “Block all IPs making more than X requests per minute”? Yes, that’s the idea, but what would that X be?

What I wanted to do instead is to apply the leaky bucket at the application level for testing purposes. That will not take too much time, and by logging how many requests and how many different clients would have been blocked, we get interesting data for the definitive implementation at the infrastructure level. It would also allow us not only to log who would be blocked, but to add some Analytics event tracking codes. That way we would see in Analytics how many real users would have been blocked with the chosen settings, allowing us to tune them. Besides the server-side logging, we also want to know what percentage of those clients are real browsers and not scrapers.

That is how I came up with these small PHP files that make the whole implementation for testing purposes.

The code is split into two parts: the request script and the clean-up script. Request is basically the doorman: “yes, come in” or “no, get the f*ck out of here”. The clean-up script is the one that reviews the list every so often and takes some drops out of the bucket. The whole thing uses the Flexihash library for consistent hashing, splitting the data across as many sets as you need. The example is fully dynamic, but you can hardcode the set names to make it faster.

That said, please get the Flexihash files from http://github.com/pda/flexihash and point the include line to the proper place.

Then, let’s start with the initial inc.php file which should be included in both request.php and cleanup.php:

<?php
include 'flexihash-0.1.9.php';

// Redis extension from nicolasff, thanks bro!
$redis = new Redis();
$redis->connect('localhost');

$hasher = new Flexihash();

$numberOfSets = 10;

// Build the list of set names used as consistent hashing targets
$sets = array();
for($i = 1; $i <=$numberOfSets; $i++){
	$pad = str_pad($i,strlen($numberOfSets),'0',STR_PAD_LEFT);
	$sets[] = 'set'.$pad; // To create sets like set01, set02;
	// or set0001, set0999 if $numberOfSets is 1000
}

$hasher->addTargets($sets);

This part is easy, right? We prepared the Redis connection and the hasher.
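Just to illustrate what the hasher gives us: for a given id, lookup() always returns the same set name, so the keys get spread consistently across the sets. Something like:

echo $hasher->lookup('10.0.0.5-search'); // always the same set for this id, e.g. "set07"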

Now, let’s see request.php

<?php
require('inc.php');

if(sizeof($argv) < 3){
	die('2 parameters: clientId, actionId; ie A search');
}

$id = $argv[1];
$action = $argv[2];

$id = $id.'-'.$action;

$set = $hasher->lookup($id);

$period = 30; // In seconds
$limit = 6; // How many hits allowed every $period seconds

$actualHits = $redis->zscore($set, $id);

if($actualHits >= $limit){
	echo "Not allowed.  {$actualHits} hits done, only {$limit} are allowed every {$period} seconds\n";

	// Log that this request would have been locked.
	$redis->zIncrBy('locked', 1, $id);
	die();
}


list($actualHits) = $redis->multi()->zIncrBy($set, 1, $id)->zAdd($set.':control', -1, $id)->exec();
$available = $limit - $actualHits;
echo "Approved, you have {$available} hits\n";

In short, the script checks whether the requesting client (in most cases this will be an IP) has enough shots left to perform the requested action. If it does not, we log the attempt for statistical purposes and fine tuning. If it does, we increase its number of requests by 1, we add it with score -1 to the control set -to be used later on- and we let the client know how many hits remain.

Now, let’s see the cleanup script; that should be executed periodically in a cronjob. We’ll go back to that subject later on, no worries.

<?php
require('inc.php');

// Loop across all sets
foreach($sets as $set){
	// Remove all entries with score <= 1 from all sets. This will
	// reduce the size before further processing
	echo "Set has ".$redis->zCard($set)." elements, before cleanup.";
	$redis->zRemRangeByScore($set, '-inf', '1');
	echo " Now it has ".$redis->zCard($set).".\n";
	
	// Remember the control set we created on request.php? That sorted set contains all entries
	// the set has, but with score as -1. The goal of that zset is to reduce by 1 all scores
	// storing user hits, by intersecting set and set:control.
	echo "Control set had ".$redis->zcard($set.':control') . ' before cleanup, now it has ';
	$redis->zinterstore($set.':control', array($set.':control', $set), array(1,0), 'SUM');
	echo $redis->zcard($set.':control') ."\n";
		
	// Now do the interstore on set, subtracting one.
	// Remember in the request.php file we add the client to the control set with score -1?
	// That's there to be used with a zInterStore, thus subtracting 1 from all
	// scores. The trick here is the SUM aggregation function combined with those weights.
	$redis->zinterstore($set, array($set, $set.':control'), array(1,1), 'SUM');
	echo "Control applied, all entries were reduced by 1.\n";
}

Well, that’s all! Now the trick is how to run the PHP cleanup script every 5 seconds or so. In my case I will run it every 30 seconds at first, by adding something like this to the crontab:

* * * * * php cleanup.php;
* * * * * sleep 30; php cleanup.php;

Why are these scripts good?
First, they are very light. Second, they use sharding to speed up the checks. Third, they are reusable.

What can be improved?
1-Make it compatible with different buckets. You might want one bucket for APIs, one bucket for login attempts, one bucket for premium customers who are allowed to hit the APIs more often.
2-Convert it to OOP 😉 That’s the homework, guys; if you convert it to OOP, drop me a line.
3-Apply it at a lower level, so a blocked client does not even hit the web server because it is stopped at the network level.
4-Use more than one server -one master and one or more slaves- or many masters in a sharded setup based on set names. Remember Redis is a single-threaded server, so you will definitely benefit from running one instance per free core on your server. That way your limit won’t be CPU but storage and RAM.

If this small project gets approved, I will apply these checks right after the framework and routing rules are loaded. That way I will have access to the Redis configuration files and I can set the “action” names on a per-route basis. All category, keyword, tag and search pages will be grouped under the “listing” action. Login, registration and password reset will be grouped under the “users” action. Embed and on-site player pages will be grouped under the “player” action. Voting, commenting and adding to favorites will be grouped under the “rating” action. This will, for sure, make our sites much more stable and give better response times to normal users.

Data for everyone

Rule No. 2 – The guaranteed access rule

Every data item must be logically accessible through a lookup that combines the table name, its primary key and the column name.

This means that given a table name, a primary key value and the name of the required column, one and only one value must be found. For this reason, defining primary keys for all tables is practically mandatory.

Rule No. 6 – The view updating rule

All views that are theoretically updatable must be updatable by the system itself.

Most RDBMSs allow updating simple views, but they block attempts to update complex views.


Codd’s rules

New ‘tax’: Internet Explorer 7 customers will be charged extra

Kogan.com, an Australian retailer, has decided to “tax” its customers who use Internet Explorer 7 or lower.

Ruslan Kogan, chief executive officer of Kogan.com, explained that they need to find a way to cover the extra cost of designing their pages to look properly in a very old browser.

Every month the tax will increase by 0.1%, because IE7 will be one month older.

More information at http://www.bbc.com/news/technology-18440979

“Anyone who is involved with the internet and web technology would know the amount of time that is wasted to support all these antiquated browsers,” Kogan said. “You have to make all these work-arounds all the time to make sure the site works properly on it.”


How does Pornhub use Redis?

Since Eric Pickup’s talk at Confoo 2012, a great and informative session, I’ve seen lots of comments and posts in many blogs and forums about Redis and different sites, many of them owned by Manwin. Redis is a very interesting technology that opens the door to different and new features and capabilities and, even more interesting, to new ways of thinking about web-based applications. As a Pornhub developer -that’s what I am- I will clarify how Redis is used on our end, and why.

The first feature we used Redis for is counting video plays. Every time a video is played, the hit is recorded in a big hash by increasing the value by one. Every few hundred plays we store the actual value in MySQL, which is used later on for searches and statistical information. We also keep track of monthly and daily views, along with ratings and votes, so visitors can sort videos by most viewed today or most viewed this month as well.
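Very roughly -with made-up key names and batch size, a simplified sketch rather than our production code- the counting part looks something like this:

<?php
$redis = new Redis();
$redis->connect('localhost');

function registerVideoPlay($redis, $videoId) {
	// One big hash with a counter per video; hIncrBy returns the updated value
	$plays = $redis->hIncrBy('video:plays', $videoId, 1);

	// Every few hundred plays, persist the counter to MySQL for searches and stats
	if ($plays % 500 === 0) {
		// saveVideoPlaysToMySQL($videoId, $plays); // hypothetical persistence call
	}

	// Daily and monthly counters, so visitors can sort by most viewed today / this month
	$redis->zIncrBy('video:views:' . date('Ymd'), 1, $videoId);
	$redis->zIncrBy('video:views:' . date('Ym'), 1, $videoId);

	return $plays;
}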

We also use Redis for part of the social-network features, the ones that keep relations between users, such as “friends” or “blocked people”. Let’s say you block a user: we’ll store that relation in both Redis and MySQL. When reading the relations, we’ll try Redis first, and if the information is not there we’ll fall back to MySQL.

The implementation I’ve proposed here is a bit tricky and takes the best of Redis and the best of Memcache in a nice in-house implementation.

We have one big sorted set named blocks:lastUpdate, having the user ID as score, and a timestamp as value. And for each user we have a sorted set named “user:[userId]:blocks”, containing the blocked user ID as value and the timestamp when it was blocked as score.

When we need to know who is blocked by a certain user, we first read the “last update” value from the blocks:lastUpdate set and add a Time To Live value we’ve set in our configuration file. If that value (lastUpdate + TTL) is lower than now, we consider the user:[userId]:blocks key expired, so we clear it and reload it from MySQL. If the key was not considered expired we simply use it, thus avoiding an access to MySQL.
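As a rough sketch -treating blocks:lastUpdate as a plain hash to keep it short, and hiding the MySQL part behind a hypothetical loadBlocksFromMySQL() helper- the read path looks more or less like this:

<?php
function getBlockedUsers($redis, $userId, $ttl) {
	$key = "user:{$userId}:blocks";

	// When was this user's blocks key rebuilt for the last time?
	$lastUpdate = (int) $redis->hGet('blocks:lastUpdate', $userId);

	if ($lastUpdate + $ttl < time()) {
		// Considered expired: clear it and reload it from MySQL
		$redis->del($key);
		foreach (loadBlocksFromMySQL($userId) as $blockedId => $blockedAt) {
			$redis->zAdd($key, $blockedAt, $blockedId);
		}
		$redis->hSet('blocks:lastUpdate', $userId, time());
	}

	// Fresh enough (or just rebuilt): serve it straight from Redis, no MySQL access
	return $redis->zRange($key, 0, -1);
}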

Why not just use Redis’ key expiration (EXPIRE) instead?

This is a good question, and it caused some discussion within the team. The main reason is that when you set the expiration time, you cannot increase or decrease it as you need. In case of data loss we can recover the Redis information from MySQL, or the MySQL data from the Redis information. For example, right now our TTL in the configuration file is one month (!). This means that we refresh the information stored in Redis once per month per user. However, if for any reason I need to drastically reduce the access to the DB, I can change the TTL setting to 3 or 4 months; and from that moment on I’m 100% sure that no access will be made to the DB, because all keys will still be valid.

In case of a MySQL crash and data loss, we can change the setting to 1 year, and write a small script that will read from Redis and then repopulate the concerning MySQL table.

While investigating this I found an excellent Redis command, OBJECT. It returns information about the key you pass as a parameter, and one of its subcommands is IDLETIME. That subcommand returns how long the key has been idle, that is, the time since the last read or write. This could be useful in many cases, but not in ours, since we would need the time when the key was created. Having that information at the Redis level would allow us to get rid of the blocks:lastUpdate set.

Many other features use Redis, and many more will come in the next months, so stay tuned for more info on how Pornhub evolves with this interesting technology.