Gnome Shell Overview with a mouse click on Ubuntu

A few weeks ago I bought a new keyboard and mouse combo, the Logitech MK710 keyboard with the Logitech M705 optical mouse. It's a tremendously nice combination, so much so that I bought a second combo to use at home.

Mouse Logitech M705 in Ubuntu, using all buttons

As a Gnome Shell user, I wanted to take advantage of the mouse's side button, the little line you can see in the image above. My idea was to trigger the Overview mode with that button, as if the Windows (or Super) key had been pressed.

Gnome Shell Overview

So, how to do this under Ubuntu 13.10? Essentially by binding the button to a command. That command will execute an action, which is "press a key".

First, install the following packages: xbindkeys xautomation
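
On Ubuntu, something like this should install both (xautomation provides the xte command we'll use below):

sudo apt-get install xbindkeys xautomation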

Then use the xev command to find the number of the mouse button. Run xev from a terminal and a little white box will appear. Put the mouse pointer inside the box, press the button and take note of the value. In the terminal you'll see some output similar to this:

ButtonPress event, serial 33, synthetic NO, window 0x3c00001,
root 0x28d, subw 0x0, time 89142314, (89,93), root:(2011,171),
state 0x10, button 10, same_screen YES

ButtonRelease event, serial 33, synthetic NO, window 0x3c00001,
root 0x28d, subw 0x0, time 89142454, (89,93), root:(2011,171),
state 0x10, button 10, same_screen YES

What matters is the “button 10” part.

Now we need to bind "button 10" to a key. This cannot be done directly. What we can do is bind the button to a command, that is, execute a command when the button is pressed, and then make that command a "send key" command.

To test it, first type this in a terminal:

xte 'key Super_L'; sleep 1; xte 'key Super_L';

You should see the Overview mode appear and, after one second, revert back. If this works, we can now do the binding with xbindkeys. If there's no configuration file yet, which is most likely your case, you can create a default one by typing:

xbindkeys --defaults > $HOME/.xbindkeysrc

Then edit this file with your favorite editor, and add this

"xte 'key Super_L'"
b:10+release

"b:10" is the button number you found before. The +release makes the command trigger when the button is released rather than when it's pressed.
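
To try the binding right away, you can (re)start xbindkeys by hand, something like:

killall -q xbindkeys; xbindkeys

Then press the side button: the Overview should show up.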

Lastly, you'll want xbindkeys to run on every login. For this, create an entry in the ~/.config/autostart folder named "xbindkeys.desktop" with the following content:

[Desktop Entry]
Name=XBindKeys
Comment=XBindKeys
Exec=/usr/bin/xbindkeys
Icon=solaar
StartupNotify=false
Terminal=false
Type=Application
X-GNOME-Autostart-enabled=true

Now you can log out, log back in, and enjoy your new button!


BitTorrent Sync, a geeky Dropbox alternative, much better!

This tool has been around for a while, but I think it's worth mentioning:

http://www.extremetech.com/computing/161584-bittorrent-sync-released-the-secure-cloud-avoiding-sync-tool-youve-been-waiting-for

It synchronizes files across many computers using BitTorrent. You can think of it as a Dropbox alternative, with the following differences:

1-No size limits; the only limit is your hard drive space. As of now I have more than 40GB synchronized across all my computers (music, personal documents and part of the /etc/ directories).

2-It does not need the cloud. If you have computers on a LAN with no internet access, they will sync with each other at amazing speed.

3-It is BitTorrent based, so as soon as a computer has a part of a file, it becomes a source for that part on the network.

4-Again, it does not need the cloud. The files are never stored on a remote third-party service like Dropbox, Ubuntu One, Google Drive and so on.

5-It keeps up to 30 revisions of your files.

It is not open source, but the application is completely free to download and use. The Free Software Foundation has listed a free replacement for it as a high-priority project. And last month the developers made the API public, allowing third parties to build their own GUIs to manage the service; it's available at http://www.bittorrent.com/sync/developers

Happy syncing!

CTRL-R and Bash shortcuts

When in a Bash shell you can do a backwards search: a search through the commands you've previously typed, to re-execute them easily. Try it now: go to a Bash window, press CTRL-R and start typing part of a command.

When you find the command you want to run, just press Enter.
If you want to edit the command, move through it with the left and right arrows.
If you want to move to the next matching command, press CTRL-R again, until you find the right command.
To leave the search, press CTRL-G
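
For example, pressing CTRL-R and typing "ssh" could look like this (the matched command is just a made-up example; you'll see whatever is in your own history):

(reverse-i-search)`ssh': ssh user@myserver.example.com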

Here are the common Bash key bindings that will help you in your daily work. Some of these also work in the backwards search:

Ctrl-r History reverse search
Ctrl-a Jump to BOL
Ctrl-e Jump to EOL
Ctrl-l Clear terminal
Ctrl-k Delete from cursor to EOL
Ctrl-_ Undo last edit operation
Ctrl-m Return
Ctrl-w Delete word left from cursor
Ctrl-u Delete from BOL to cursor
Ctrl-x Ctrl-e Open the current line in the default editor ($EDITOR); the edited command runs on exit
Ctrl-p Previous command in history
Ctrl-n Next command in history
Ctrl-f Move forward a char
Ctrl-b Move backward a char
Alt-f Move forward a word
Alt-b Move backward a word
Ctrl-d Delete char under cursor. Exit shell if empty.
Alt-d Delete forward word
Ctrl-y Paste content of the kill ring
Ctrl-t Swap current char with previous char
Alt-t Swap current word with previous word
Alt-u Uppercase word at cursor
Alt-l Lowercase word at cursor
Ctrl-s Freeze terminal
Ctrl-q Restore frozen terminal
Shift-PgUp Scroll screen up
Shift-PgDn Scroll screen down
Most of these key bindings work in Emacs too.
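
If you're curious, Bash can also print every binding currently active in your shell (the grep just trims the noise):

bind -p | grep -v self-insert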

Ubuntu: solving the Broadcom STA slowdown on battery power

I found myself forced to replace a fairly new (9 months old) D-Link router, model DIR-835, because all its Ethernet ports stopped working. I will never buy another D-Link device again. Past that point, I bought a Linksys N750 router, even being aware of all the security concerns the Cisco Cloud software has raised.

After finishing the setup, I found that all YouTube videos on my MacBook 6,2 running Ubuntu Raring 13.04 were loading painfully slowly. I, of course, blamed the new router, which received a good amount of resets, setting changes, etc. Then I tried plugging in the Ethernet cable and testing the speed that way, only to find it was working great.

After a long while of testing and retesting, the laptop battery drained. So I plugged in the charger -while running a speed test- and saw the download speed change at the very moment I plugged it in.

I said to myself "no, this cannot be true" and kept plugging and unplugging the charger, watching the speed go up and down accordingly. Then I issued the iwconfig eth1 command, and here's what I got:

With charger:

eth1 IEEE 802.11abgn ESSID:"JuanRomanV2"
Mode:Managed Channel:38 Access Point: C8:D7:19:21:3F:E0
Bit Rate=162 Mb/s Tx-Power:24 dBm
Retry min limit:7 RTS thr:off Fragment thr:off
Power Management:off
Link Quality=5/5 Signal level=-56 dBm Noise level=-88 dBm
Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
Tx excessive retries:0 Invalid misc:0 Missed beacon:0

Without charger:

eth1 IEEE 802.11abgn ESSID:"JuanRomanV2"
Mode:Managed Channel:38 Access Point: C8:D7:19:21:3F:E0
Bit Rate=243 Mb/s Tx-Power:24 dBm
Retry min limit:7 RTS thr:off Fragment thr:off
Power Managementmode:All packets received
Link Quality=4/5 Signal level=-59 dBm Noise level=-88 dBm
Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
Tx excessive retries:0 Invalid misc:0 Missed beacon:0

Oh… so this nice card has a "Power Management" feature that, in order to manage the power, manages to make me feel like I'm back in the 90's with those dial-up connections, huh?

I investigated the issue and found that Broadcom included this feature in the recent Broadcom STA Linux driver for the BCM43224 model, and possibly many more. On my other laptop, a Dell Vostro 3360 also running Raring, I also have a Broadcom wireless card, yet I don't seem to have this problem.

So, how to solve this?

It seems there's a daemon watching the battery status and changing the wireless card settings accordingly. After some tries, my solution was to put this content in the /etc/pm/power.d/wireless file:

#!/bin/sh
# Give the power-management daemon time to apply its own setting first
sleep 2
# Then force power management off on the wireless interface
/sbin/iwconfig eth1 power off
exit 0
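
Remember to make the file executable, otherwise it won't be run:

sudo chmod +x /etc/pm/power.d/wireless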

Why the sleep? Because somewhere there's another daemon taking care of the change. With the sleep 2 I give that other daemon enough time to be happy, to think the change has been made, and to quit; then my script saves the day by letting the card eat all the battery power again. But well, at least YouTube works decently.

Lastly, sorry Linksys! I blamed you hard for nothing…

Gnome Shell 3.8: how to move the title bar buttons to the left?

There are a couple of simple methods. The easiest one: in a terminal, type

gsettings set org.gnome.shell.overrides button-layout "close,minimize,maximize:"

The alternative is dconf-editor. Install the dconf-tools package with

sudo apt-get install dconf-tools

Then start dconf-editor by typing dconf-editor and change org > gnome > shell > overrides > button-layout to

close,minimize,maximize:


Oh, le goluboi

Nowadays anybody who knows how to open Notepad and manages to save a file as .php puts "PHP developer" on his/her CV. I've seen a lot of that.


Here's a real example of code I found in one of my projects, written by a junior pretend-to-be-senior developer who, luckily for the health of the company, was moved to one of the lowest-priority projects across the board.

See for yourself: the original code and the fixed version.

[Screenshot: the doubtful-quality version]

Now, what are the problems with this code?

  1. Variable name inconsistency. You have $key_name alongside $outputFields. Why use both camelCase and underscored variable names in the same code, picking between them at random?
  2. The value to be returned lives in a variable called $tmp_objects. The function's whole reason for being, the reason it was written, is to return a value, and you call that value temporary? If it is the return value, it is not temporary. It should be named $result, $return, $output; whatever, even $charles or $william; but never $tmp_objects.
  3. Does the developer need to wait until the end of the function to return an empty array? Isn't it better to return right away when we know nothing will change, before loading the configuration and the additional modules? (See the sketch below.)
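
Since the original code is only shown as a screenshot, here's a minimal hypothetical PHP sketch (all names invented) of the early-return idea from point 3:

<?php
function getObjects(array $ids)
{
	// Nothing requested? Return the empty array immediately,
	// before loading configuration and additional modules.
	if (empty($ids)) {
		return array();
	}

	loadConfiguration(); // hypothetical heavy setup
	$result = array();   // the return value, named as such
	foreach ($ids as $id) {
		$result[] = loadObject($id); // hypothetical loader
	}
	return $result;
}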


Here's my untested version of the code, with some rough comments. Not the best thing in the world, not a big change -it's not my job to fix crappy code, my job is to show my guys what kind of things should be avoided.

[Screenshot: the decent version]


So, to conclude, please follow these simple rules:

  1. Return the value as soon as possible, for readability's sake.
  2. Name methods, attributes and variables in a consistent and descriptive way; avoid_doing_this and then doing somethingLikeThis.
  3. Name the return variable descriptively; it should be obvious throughout the whole function what you are going to return.


Leaky bucket algorithm implemented on Redis and PHP

One of the biggest problems in web development is programmers who are not fully aware of the infrastructure they work with. An average PHP developer will not know the limitations of the infrastructure, the maximum number of connections the web site can handle, etc. The PHP developer just programs in PHP, refreshes the page, runs the unit tests and, if everything is OK, considers the work finished. However, once the application is deployed, everything starts to depend on the actual infrastructure.

In many cases the pages will not be ready to handle an excessive amount of traffic, and will noticeably degrade service quality under heavy -and not so heavy- loads. For example, with APIs heavily used by external developers -who might not be aware of what caching means-, or when the site is hit by scrapers that want to populate their own sites with your data, or by a botnet.

This is a recurrent problem at my office, so I proposed to start using the well-known leaky bucket algorithm to protect our sites from scrapers. My manager liked the initiative, but he wanted to apply it at the infrastructure level and not at the application level. Even though I strongly agree with that as a definitive solution, I think there's no point in implementing the whole thing at the infrastructure level without knowing how bad the current situation is. What would we say? "Block all IPs making more than X requests per minute"? Yes, that's the idea, but what would that X be?

What I wanted to do instead was to apply the leaky bucket at the application level for testing purposes. That would not take too much time, and by logging how many requests and how many distinct clients would have been blocked, we would get some interesting data for the definitive implementation at the infrastructure level. It would also allow us not only to log who would be blocked, but to add some Analytics event tracking codes. That way we would see in Analytics how many real users would have been blocked with the chosen settings, allowing us to tune them up. Besides the server-side logging, we also want to know which percentage of those clients are real browsers and not scrapers.

That is how I came up with these small PHP files that implement the whole thing for testing purposes.

The code is split in two parts: the request script and the clean-up script. Request is basically the doorman: "yes, come in" or "no, get the f*ck out of here". The clean-up script is the one that reviews the list every so often and takes some drops out of the bucket. The whole thing uses the Flexihash library for consistent hashing, splitting the data across as many sets as you need. The example is fully dynamic, but you can hardcode the set names to make it faster.

That said, please get the Flexihash files from http://github.com/pda/flexihash and point the include line to the proper place.

Then, let's start with the initial inc.php file, which should be included by both request.php and cleanup.php:

<?php
include 'flexihash-0.1.9.php';

// Redis extension from nicolasff, thanks bro!
$redis = new Redis();
$redis->connect('localhost');

$hasher = new Flexihash();

$numberOfSets = 10;

$sets = array();
for($i = 1; $i <= $numberOfSets; $i++){
	$pad = str_pad($i, strlen($numberOfSets), '0', STR_PAD_LEFT);
	$sets[] = 'set'.$pad; // Creates sets like set01, set02;
	// or set0001 ... set1000 if $numberOfSets is 1000
}

$hasher->addTargets($sets);

This part is easy, right? We just prepare the Redis connection and the hasher.
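
Just to illustrate what the hasher gives us (hypothetical id; the actual set name depends on the hash):

// The same id always maps to the same set, e.g. "set07"
$set = $hasher->lookup('203.0.113.7-search');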

Now, let's look at request.php:

<?php
require('inc.php');

if(sizeof($argv) < 3){
	die('2 parameters: clientId, actionId; e.g. "A search"');
}

$id = $argv[1];
$action = $argv[2];

$id = $id.'-'.$action;

$set = $hasher->lookup($id);

$period = 30; // In seconds
$limit = 6; // How many hits allowed every $period seconds

$actualHits = $redis->zscore($set, $id);

if($actualHits >= $limit){
	echo "Not allowed.  {$actualHits} hits done, only {$limit} are allowed every {$period} seconds\n";

	// Log that this request would have been locked.
	$redis->zIncrBy('locked', 1, $id);
	die();
}


list($actualHits) = $redis->multi()->zIncrBy($set, 1, $id)->zAdd($set.':control', -1, $id)->exec();
$available = $limit - $actualHits;
echo "Approved, you have {$available} hits\n";

In short, the script checks whether the requesting client (in most cases this will be an IP check) has enough shots left to perform the requested action. If it does not have shots, we log the attempt for statistical purposes and fine-tuning. If it does, we increase its number of requests by 1, add a -1 entry to the control set -to be used later on- and let the client know how many hits remain.
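
For example, calling it from the command line with a made-up client id (and the $limit of 6 above) goes like this:

$ php request.php 203.0.113.7 search
Approved, you have 5 hits
$ php request.php 203.0.113.7 search
Approved, you have 4 hits
...
$ php request.php 203.0.113.7 search
Not allowed.  6 hits done, only 6 are allowed every 30 seconds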

Now, let's see the cleanup script, which should be executed periodically as a cronjob. We'll come back to that subject later, no worries.

<?php
require('inc.php');

// Loop across all sets
foreach($sets as $set){
	// Remove all entries with score <= 1 from the set. This will
	// reduce its size before further processing
	echo "Set has ".$redis->zCard($set)." elements, before cleanup.";
	$redis->zRemRangeByScore($set, '-inf', '1');
	echo " Now it has ".$redis->zCard($set).".\n";
	
	// Remember the control set we created in request.php? That sorted set contains
	// all the entries the main set has, but with score -1. The goal of that zset is to
	// reduce by 1 all the scores storing user hits, by intersecting set and set:control.
	echo "Control set had ".$redis->zCard($set.':control') . ' entries before cleanup, now it has ';
	$redis->zInterStore($set.':control', array($set.':control', $set), array(1,0), 'SUM');
	echo $redis->zcard($set.':control') ."\n";
		
	// Now do the interstore on the main set, subtracting one from every score.
	// Remember that in request.php we added each client to set:control with score -1?
	// That's used here with a zInterStore to subtract 1 from all the
	// scores. The trick is the SUM aggregation function combined with the weights.
	$redis->zInterStore($set, array($set, $set.':control'), array(1,1), 'SUM');
	echo "Control applied, all entries were reduced by 1.\n";
}

Well, that's all! Now the trick is how to run the PHP cleanup script every few seconds. In my case I will run it every 30 seconds at first, by adding something like this to the crontab:

* * * * * php cleanup.php;
* * * * * sleep 30; php cleanup.php;

Why are these scripts good?
First, they are very light. Second, they use sharding to speed up all the checks. Third, they are reusable.

What can be improved?
1-Make it compatible with different buckets. You might want one bucket for APIs, one for login attempts, one for premium customers who could have the right to hit the APIs more often.
2-Convert it to OOP 😉 That's the homework, guys; if you convert it to OOP, drop me a line.
3-Apply it at a lower level, so a blocked client does not even hit the web server, being stopped at the network level.
4-Use more than one server -one master and one or more slaves- or many masters in a sharded setup based on set names. Remember Redis is a single-threaded server, so you will definitely gain by running one instance per free core on your server. That way your limit won't be CPU but storage and RAM.

If this small project gets approved, I will apply these checks right after the framework and routing rules are loaded. That way I will have access to the Redis configuration files and will be able to set the "action" names on a per-route basis. All category, keyword, tag and search pages will be grouped under the "listing" action. Login, register and password reset will be grouped under the "users" action. Embed and on-site player pages will be grouped under the "player" action. Voting, commenting and adding to favorites will be grouped under the "rating" action. This will, for sure, make our sites much more stable and give better response times to normal users.

Good Developer vs Bad Developer

From Guy Nirpaz’s blog:

Good developer is an artist, a craftsman who enjoys the process of creation. Bad developer considers himself as a programmer, responsible for generating lines of code.

Good developer understands the problems of the customers. Bad developer understands only the technical problem at hand. Good developer does not define the why, but constantly strives to understand why. He’s responsible for the how, and still sees the big picture. Bad developer is focused on building classes and methods and configuration files, but does not get the big picture.

Good developer understands the complete architecture of the product. Bad developer knows only the components he’s written. Good developer fully understands the technologies that are used within the product. He understands what they are used for, and how they work internally.

Good developer is not afraid of new technologies but embraces them by quickly getting a grip. Bad developer only sticks to what he knows. His immediate reaction to any technical change is negative.

Good developer is constantly learning and improving his skills. Good developer reads technical articles, and finishes several technical books a year. Bad developer does not have time to learn. He’s always too busy with other stuff.

Good developer cares about the product quality. He is also very much concerned with the process quality. Good developer pushes himself to create bug-free code; bad developer leaves it to QA to find bugs to fix.

Good developer develops features which create value for customers. Bad developer completes tasks. Good developer will never claim the requirements are incomplete, and will make sure to fully understand the features he’s working on. Bad developer will wait until the finest details are available. To emphasize: good developer is the CEO of the feature – he’s going to make sure he always has the information needed to accomplish the feature, and in case information is missing he’ll make sure he gets it.

Good developer is not afraid to go into anyone’s code. Bad developer is afraid of others looking into his. Good developer understands that it shouldn’t take more time to write self-explanatory and well-documented code. Bad developer always needs to allocate extra time to document and simplify.

Good developer will never feel his code is good enough, and will always continue to clean and fix. Good developer always strives to create elegant solutions but understands that his job is to deliver value to customers. Bad developer thinks only about the elegance of his code and leaves the job of delivering value to others.

Coding with style. Excessive style, actually.

We were having a bug where a DIV was being hidden by an asynchronous JavaScript bar at the top of the page. So one of my front-end developers came up with this solution: move the DIV a bit lower so it shows completely. This is coding with a lot of style, isn't it?

Double style tag? Are you kidding me, bro?