Thursday, June 30, 2011

Google+ Has Some Pluses, But Facebook Needn’t Worry

The threat to Facebook posed by the Google+ project became a reality Tuesday with its official rollout — or did it?




While reaction to the debut of Google+ was mixed, the consensus was that Mark Zuckerberg will still have a job when he wakes up Wednesday morning.


Here are some highlights of the online reaction to Google+ and how it stacks up against Facebook, starting with the introduction of the project on the official Google blog:


Among the most basic of human needs is the need to connect with others. With a smile, a laugh, a whisper, or a cheer, we connect with others every single day.


Today, the connections between people increasingly happen online. Yet the subtlety and substance of real-world interactions are lost in the rigidness of our online tools.


In this basic, human way, online sharing is awkward. Even broken. And we aim to fix it.


We’d like to bring the nuance and richness of real-life sharing to software. We want to make Google better by including you, your relationships, and your interests. And so begins the Google+ project.


Not all relationships are created equal. So in life, we share one thing with college buddies, another with parents, and almost nothing with our boss. The problem is that today’s online services turn friendship into fast food — wrapping everyone in “friend” paper — and sharing really suffers:


It’s sloppy. We only want to connect with certain people at certain times, but online we hear from everyone all the time.


It’s scary. Every online conversation (with over 100 “friends”) is a public performance, so we often share less because of stage fright.


It’s insensitive. We all define “friend” and “family” differently — in our own way, on our own terms — but we lose this nuance online.


In light of these shortcomings we asked ourselves, “What do people actually do?” And we didn’t have to search far for the answer. People in fact share selectively all the time — with their circles.


From close family to foodies, we found that people already use real-life circles to express themselves, and to share with precisely the right folks. So we did the only thing that made sense: We brought Circles to software. Just make a circle, add your people, and share what’s new — just like any other day.





From GigaOM:


I don’t think Facebook has anything to worry about. However, there is a whole slew of other companies that should be on notice.


One of the reasons why I think Facebook is safe is because it cannot be beaten with this unified strategy. Theoretically speaking, the only way to beat Facebook is through 1,000 cuts. Photo-sharing services such as Instagram can move attention away from Facebook, much like other tiny companies that can bootstrap themselves based on the Facebook social graph and then build alternative graphs to siphon away attention from Facebook. Google could in theory go one step further — team up with alternative social graphs such as Instagram, Twitter, and Tumblr and use those graphs to create an uber graph.


From TechCrunch:


“We believe online sharing is broken, and even awkward,” Google senior vice president of social Vic Gundotra said. “We think connecting with other people is a basic human need. We do it all the time in real life, but our online tools are rigid. They force us into buckets — or into being completely public. Real-life sharing is nuanced and rich. It has been hard to get that into software.”


From the little that I’ve seen so far, Google+ is by far the best effort in social that Google has put out there yet. But traction will be contingent upon everyone convincing their contacts to regularly use it. Even for something with the scale of Google, that’s not the easiest thing in the world — as we’ve seen with Wave and Buzz. There will need to be compelling reasons to share on Google+ instead of Facebook and/or Twitter — or, at the very least, along with all of those other networks. The toolbar and interesting communication tools are the most compelling reasons right now, but there will need to be more of them. And fast.


From Silicon Alley Insider:


Yes, it has been hard to get sharing into software. That’s why Facebook was created seven years ago. That’s why Facebook CEO Mark Zuckerberg has been trotting around the world for the past five years telling everyone that the company’s mission is to facilitate “sharing.” That’s why Facebook is now used by nearly 700 million people worldwide. That’s why Facebook is basically subsuming the Internet.


From AdAge Digital:


The major difference between Facebook and Google+ is that instead of having a massive friend list, users collect each other into groups — called “Circles” — like family, work, and friends. This context has been missing from Facebook and has gotten some people in hot water — for example, those who post their wild weekend party photos that may be seen by family and colleagues. And on Google+, there are no friend requests. People do not need to agree to be friends with one another and can view updates without sharing their own.


From paidContent:


Circles: This seems a clear poke at Facebook’s groups and lists features, which are not the easiest thing in the world to use. Google has created a way to let Google+ users create groups of friends, colleagues, and family members that’s almost exactly like creating a new folder on your hard drive and adding pictures. Simply drag the name of a friend or connection into a newly created circle to assign them to that group, and when you create a new post, you can select which circle will receive that update, allowing you to share the latest off-color South Park clip with your close friends (but not your uptight boss) and your goofy family reunion pictures with those who won’t judge (and not that first date that you’re hoping will turn into a second).


From Mashable:


Circles is well-implemented. It’s far easier than creating a Twitter List or a Facebook Friend List. The drag-and-drop functionality is a welcome addition, and the cute animations that appear when you perform actions give the product personality. That doesn’t necessarily mean users will take the time to create friend groups.


From Wired Epicenter:


Parts of it certainly seem to appear similar to what we’ve seen before. One significant component is a continuous scroll called “the stream” that’s an alternative to Facebook’s news feed — a hub of personalized content. It has a companion called “Sparks,” related to one’s specified interests. Together they are designed to be a primary attention-suck of Google users. Google hopes that eventually people will gravitate to the stream in the same way that members of Facebook or Twitter constantly check those continuous scrolls of personalized information.


The Buzz disaster came just as Facebook began to look like it may make good on its goal of signing up every human on the planet — creating a treasure trove of information inaccessible to Google’s servers. People at Google began to worry that Facebook could even leverage the information its users shared to create a people-centric version of search that in some cases could deliver more useful results than Google’s crown jewel of a search engine.


From Silicon Alley Insider:


Apparently, Facebook got wind of the Google+ feature, now called Circles, that allows users to share information with only select groups of friends, rather than their entire Facebook network.


Mark Zuckerberg took a personal interest in meeting this threat from Google, and put a team on it last summer. The result: Facebook Groups, which launched in October.


It doesn’t seem to have taken off — at least not like hugely popular features like Facebook Photos — which suggests maybe this is a solution to a problem most people don’t worry about. That doesn’t bode well for Circles.




Tuesday, June 28, 2011

How to Use the Google +1 Button Callback Parameter to Unlock Exclusive Content

Google +1 Button Callback Parameter


Since the release of the Google +1 button for websites in early June, many webmasters have been trying to figure out the best ways to implement it across their sites. In its most basic form, the +1 button is relatively easy to add to a webpage. You can grab two lines of code, add them to your webpage, and be on your way. That said, Google has provided several parameters you can use with the +1 button that control how the button looks, what it displays, which URL should receive the +1, and which function you want to call when someone clicks the +1 button. Wait, did you catch that last part? Google added a mechanism for webmasters to trigger a JavaScript function when someone clicks a +1 button. The mechanism I’m referring to is the “callback” parameter of the +1 button, and it opens up a world of opportunity for webmasters. Let’s explore the parameter in greater detail, including what it is, how to use it, and how to avoid problems down the line.


What is the Callback Parameter?

As I mentioned earlier, you can implement the basic +1 button on your site with just a few lines of code. You need to include a JavaScript tag and then the +1 button tag. It’s essentially two lines of code and you’ll have a +1 button on a webpage. But, if you review the Google Code page for the +1 button, you’ll notice several other parameters. You have count, size, and href, which control the display of the +1 button, as well as identify the URL that should receive the +1. Then you have the callback parameter, which takes the name of a JavaScript function as its value. The JavaScript function you trigger can do anything you want (OK, not anything), and I’ll cover more about this soon.


Here is what the Google +1 button code would look like when using the callback parameter (together with the standard script include):

    <script type="text/javascript" src="https://apis.google.com/js/plusone.js"></script>
    <g:plusone callback="helloWorld"></g:plusone>


When you include the callback parameter in the +1 tag, you provide the name of a JavaScript function that will be triggered when someone clicks the +1 button. In this example, the function called “helloWorld” will be triggered. Note, helloWorld() needs to be part of the global namespace, meaning it needs to be included in the page or referenced in the html file via a script tag. The function will receive a JSON object, which includes both an “href” value and a “state” value. “href” will include the URL that received the +1 and “state” is either on or off (where on represents a +1 and off means someone removed a +1). That information is good to know and you can handle each situation separately. More about this soon.


Example: A Simple JavaScript Function

Below, I have included a very basic JavaScript function that’s called when someone clicks a +1 button. It simply throws an alert displaying the state of the button when clicked. Note, this function could either reside in the page itself or it could reside in an external JavaScript file that’s referenced in your html page (via a script tag).


    function helloWorld(plusone) {
        window.alert('+1 Triggered, State=' + plusone.state);
    }


How the Callback Parameter Can Be Used

By adding the callback parameter to the +1 button, Google enables webmasters to use the functionality creatively to interact with users. For example, you could reward users that +1 a page on your site. There are some rules, though. Remember, +1’s impact rankings, so you don’t want to “buy” rankings. I attended a Google webinar last week that covered the best ways to implement the +1 button, and Google made it very clear that you should not pay for +1’s. That means you shouldn’t incentivize users with money, product, or services based on those users clicking a +1 button on your site. Here is the actual language from Google’s policy page:


“Publishers should not promote prizes, monies, or monetary equivalents in exchange for +1 Button clicks.”


The reason Google doesn’t want publishers incentivizing users with prizes or money is simple. +1’s impact rankings, rankings should not be manipulated in any way, and paying for +1’s is like paying for links. Don’t do it.


Unlocking Content is OK

Although you can’t provide products or services, Google explains that you can unlock exclusive content. Here is the language in Google’s policy regarding enabling content and functionality:


“Publishers can direct users to the +1 Button to enable content and functionality for users and their social connections.”


If someone +1’s your new blog post, you could unlock exclusive content for that user (and you can use this approach creatively, depending on your specific industry, business, etc.). For example, you could provide a study that goes deeper into a topic, additional tutorials on the subject matter, additional news about the topic, and so on. Just make sure you wouldn’t ordinarily charge for that content. Yes, this seems like a slippery slope, since exclusive content might already have a price tag associated with it. As a webmaster (or marketer), you might need to build new content that could be part of your +1 program.
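As a rough illustration, here is a minimal callback sketch. The function name and the “exclusive-content” element are hypothetical; the +1 tag would reference the function via callback="unlockContent":

    function unlockContent(plusone) {
        // Only react when a +1 is added, not removed
        if (plusone.state === 'on') {
            // Reveal a hidden container holding the bonus material
            document.getElementById('exclusive-content').style.display = 'block';
        }
    }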


An Alternative Approach – Catching +1 Removals

Earlier in this post, I mentioned the “state” value that gets passed to your JavaScript function in the JSON object. That value will tell you whether someone +1’d a page or removed a +1. Knowing that someone just removed a +1 is important information, and you can act on it using the callback parameter of the +1 button. For example, maybe you can ask the person why they removed the +1, ask them to reconsider the removal, or redirect them to a page that takes a more creative approach to catching +1 removals. Now, you don’t want to go overboard here. If someone just removed a +1, they obviously had a reason. You don’t want to add fuel to the fire and push the limits of getting that +1 back. That said, the right messaging could act as a legitimate confirmation step when a user removes a +1, which could potentially save some of those votes. It would be interesting to test this out to see how many +1’s you can gain back by using the callback parameter.
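Here is a hedged sketch of what that might look like, assuming a hidden prompt with the made-up id “plusone-feedback”:

    function plusOneCallback(plusone) {
        if (plusone.state === 'off') {
            // The visitor just removed their +1, so show a gentle, one-time prompt
            document.getElementById('plusone-feedback').style.display = 'block';
        }
    }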


Unlock Content, Get More +1’s?

As you can see, the callback parameter can be a helpful addition to the +1 button code. Depending on the “state”, you can either reward users with exclusive content, or you can address the removal of a +1. Remember, +1’s impact search rankings, so they can be extremely valuable to your organic search traffic. Just be careful about what you’re giving away to users that +1 content on your website. Make sure you aren’t giving away prizes, money, or services. The last thing you want is for a creative use of +1 to get you penalized. And if history has proven anything, you can bet that some webmasters are going to try and manipulate the system to gain more +1’s. As I said earlier, don’t go down this path. It’s not worth it. Play by the rules, be creative, and gain more +1’s the right way.


By the way, have you +1’d this post yet? :)

Tuesday, June 14, 2011

The Photo Strip: Facebook’s Most Underused Free Ad

Many brands trick out their Facebook pages with flashy apps while ignoring some of the most valuable (and free!) tools available. Case in point: the often-neglected photo strip that came as part of the new Facebook Pages format rolled out in March. When done wrong, the photo strip makes an otherwise impressive page fall flat. When done right, the photo strip creates a stunning page design. A little bit of creativity and upkeep can transform the photo strip into a powerful branding tool.


How it Works


By default, the most recent five photos uploaded to a page – either as wall photos or in a photo album – are displayed in the photo strip. Although only 5 images will show at one time, an unlimited number of photos can be set to appear in the photo strip. New images uploaded to the wall or photo albums can be hidden from appearing in the photo strip by clicking on the X in the top right corner of the image. Hiding all but a select 5 photos enables page administrators to control which photos appear in the photo strip.


Clicking the X in the corner of an image hides it from the photo strip.





Unlike with personal profiles, the images on a page’s photo strip appear in random order. Every time the page is refreshed, the photo order is shuffled. Presumably, Facebook did this to keep brands from using this as static advertising space. Many pages get around this challenge by embracing the randomness and using images that work regardless of order.


The Tropical Northern Queensland tourism board's photo strip has Nemo move each time the page is refreshed.


Here’s where it gets tricky: the image that appears in the photo strip is actually a thumbnail cropped from a section of the uploaded image. Even more tricky: the cropped area used in the thumbnail cannot be chosen (unlike with a profile picture). Instead, Facebook automatically crops an off-center part of the image as the thumbnail. Figuring out exactly which part of the photo is cropped is difficult, but properly formatting images is crucial since some photos appear unrecognizable when resized as thumbnails.


Dairy Queen's Photo Strip: When Bad Cropping Happens to Good Pages




Alternatively, making photos the same dimensions as the thumbnail (98 x 68 pixels) prevents them from being cropped. The downside to this method is that the image appears very tiny in the photo viewer.


Kool-Aid uses images already sized as thumbnails to prevent cropping.


After much weeding through Facebook pages, I’ve found some brilliant ways brands are using the photo strip.


1. Incorporate the Profile Picture


It takes a great concept and well-designed photos to pull this off, but the results are awesome.


Secret’s creative use of inner tubes melds the profile picture into the photo strip in this celebration of reaching a million fans.



Lysol’s banner tying in the profile picture is on-point with their “Mission for Health” initiative that promotes weaving healthy habits into communities.



2. Product Placement


This one’s obvious: feature products in the photo strip! Another no-brainer: including a description and link to the product in the photo caption. However, making the product photos too promotional may turn off fans. Focus should be on adding aesthetic value to the page and clarifying what the brand is about.


A white background and consistently sized images make Nikon’s cameras pop across the top of their page.



Chevrolet uses the same color cars and labels each model in these attractive, brand-focused photos.




3. Show Gratitude


When reaching a milestone, such as X number of fans or overwhelming participation in a contest, the photo strip space can be used as a thank you.


Dove said thank you in different languages when they reached a million fans. This is also an example of selecting more than 5 photos to use in the strip, since this design can be used to spell out “thank you” in many languages.



Nutella also used the space as a thank you when they reached 10 million fans.



4. Highlight New Stuff


Placing upcoming products or services in the photo strip builds buzz and awareness around a new product launch.


Chicken McNuggets swimming in sauce isn’t the most appetizing visual, but these colorful, consistent images are still a nice plug for McDonald’s new dipping sauces.



In a more indirect approach, Panda Express advertised their new extended hours with a night sky.



5. Be Useful


Since this is the first thing fans notice on the wall, why not make it a quick reference tool?


Redbox’s photo strip allows fans to check the page each week for new releases.



Dole Bananas features photos of recipes made with bananas. The photos’ captions are links to the recipe on Dole’s site.



6. Say it with Words


Using words instead of pictures can effectively showcase brand messaging, services and products, or compel fans to take action.


Involver uses compelling words to promote their Social Markup Language.



These simple words entice fans looking for discounts and coupons.




7. Play with Color


Creatively using colors can really make the photo strip pop.


UNICEF USA uses splashes of their signature cyan color to pull together these photos of children from around the world.



Crystal Light’s same photo in different colors has a powerful effect.



8. Sequential Randomness


In contrast to playing up randomness, using images that belong in sequential order is a playful way to get fans refreshing the page continuously to put the photos in the correct order.


Seattle’s Best Coffee is not only nicely using product placement, but their numbered levels of coffee make for a great out-of-order set of pictures. I may have spent a few minutes trying to put these in order (unsuccessfully).



9. Do It All


Why use the photo strip to promote one thing when you can promote everything?


Kraft Macaroni & Cheese uses the photo strip as a cross-promotional tool for their site’s recipes, new products, other social profiles, and a Facebook app. This manages to not feel overly promotional since it is so well-designed and subtle.


Saturday, June 4, 2011

Facebook’s New Ads Power Editor Replaces the Bulk Uploader with a Streamlined GUI

This week, Facebook launched its new Ads Power Editor desktop software for buyers who work directly with Facebook ads representatives. The new multi-pane graphical user interface presents a streamlined way to create and manage multiple Facebook ads simultaneously. It also integrates with Excel, and replaces Facebook’s bulk uploader ads tool, which will be deprecated on June 30th, 2011.


Alloffacebook has provided a functionality overview of the Facebook Ads Power Editor and presents some questions regarding how the enhanced native tool impacts third-party tool providers working off of the Facebook Ads API.



Until now, Facebook provided four main ways of purchasing and managing ads: the public self-serve graphical user interface ads tool; the bulk uploader for managing ads through Excel; the Facebook Ads API for programmatically managing ads; and a direct relationship with Facebook ad sales representatives for the site’s biggest advertisers. Now, those working with ads reps have access to the Power Editor, which combines and strengthens the features of the self-serve and bulk uploader tools.


While Facebook has continued to augment the self-serve ads purchasing tool and Ads Manager with more conversion and reach metrics, new ad units such as Sponsored Stories, and new targeting options such as broad category targeting, the design of the graphical user interface has for the most part remained stable over the past few years. For those that needed to create and manage large-scale optimized ad campaigns, the self-serve tool and bulk uploader were a bit too clumsy.


While the Power Editor doesn’t support Sponsored Stories, it makes generating and editing multiple ads at once much simpler.


New Features


Downloading the Power Editor


For now, the bookmark for the Power Editor only appears in the ads accounts of ad buyers who work with Facebook ad sales representatives and are running the Google Chrome internet browser. Those who qualify can download and run the software locally from their Windows, Mac, or Linux machine. Users then download their existing ad account and campaigns into the software from Facebook.


Multi-Pane Interface


Power Editor users see three panes, shown in the image above:



  • Left pane (A) – Select between ads accounts and their campaigns

  • Main pane (C) – Use tabs (B) to view all the campaigns or ads from the account or campaign selected in the left pane

  • Bottom pane (D) – View editable fields for the campaigns or ads selected in the main pane


This tiered interface makes it easy to navigate between and edit a huge number of ads from different accounts and campaigns. The old Ad Manager required many more clicks and page loads to access all of this information.


Performance Metrics Settings


Users can check boxes to select which metrics will appear in the main pane. These include standard metrics such as clicks, impressions, and bid, as well as new metrics such as Facebook content and errors, and basic targeting attributes such as age and sex. Users must set a date range with the stats drop-down to load the new metrics.



Creating New Campaigns


Users can create new campaigns in three ways:



  • ‘Create Campaign’ flow – Fill out various fields inline

  • Duplicate – Clone an existing campaign and then edit fields

  • Copy from Excel – Copy a campaign from the Power Editor into Excel, edit it, and paste it back into the Power Editor


Creating New Ads


Users can create new ads in four different ways:



  • ‘Create Ad’ flow – While in the ‘Ads’ tab in the main pane of the desired campaign, click ‘Create Ad’. Fill out fields inline using typeahead functionality, and select an image from the image library or upload a new one

  • Duplicate – Select an ad in the ‘Ads’ tab of the main pane and click ‘Duplicate’, then edit fields

  • Copy from Excel – Copy an ad from the Power Editor into Excel, edit it, and paste it back into the Power Editor

  • Import from Excel – Create multiple new ads or new campaigns in Excel, import the spreadsheet by copying it into the Power Editor or clicking the ‘Bulk Import’ button, and upload a zip file of images



The Power Editor is backwards compatible with the Bulk Uploader, so spreadsheets from the Bulk Uploader can be imported the same way as they are from Excel. Whenever edits are made in the Power Editor, the ‘Upload’ button must be clicked to sync the changes with a Facebook Ads account. Changes since the last Facebook account upload or download can be undone using the Revert Changes button.


Power Editor and the Facebook Ads API


The Power Editor provides some of the basic functionality offered by tools built by third-party developers on the Facebook Ads API. Specifically, the ability to create and manage multiple ad variants for A/B testing can now be accomplished through Facebook’s native tools. This to some degree commodifies a core selling point of third-party tools — namely that a significant level of efficient A/B testing could not be achieved without an Ads API tool.


However, many Ads API tools provide better ad creation than the Power Editor, with visual trees and the ability to cross several creative and targeting variables to instantly produce permutations. Third-party tools also provide deeper analytics, cost per fan and conversion-based optimization models, auto-optimization algorithms, and support for Sponsored Stories. This means that for now there should be plenty of value for Ads API tool developers to offer big ad buyers.


Alloffacebook is following up with Facebook about the direction of both the self-serve tools, including the Power Editor, and the Ads API. We’ll return with insights into how advertisers should choose the solution that’s best for them, and how Ads API developers should look to differentiate their tools from Facebook’s native tools.

Site Speed For Dummies Part 1 – Why Bother?

Since starting in SEO, I have followed countless blogs and read a huge amount of information (most of which I don’t understand). What I always found irritating was reading a blog post, knowing the actions you need to take but lacking the technical know-how to put those actions in place. Take site speed, for example.

Site speed is a funny old thing. Ever since Google first announced in April 2010 that they were using it as a ranking signal, everyone has known that they need to optimise it. The problem is that most people just don’t know how to do it (like me: personally, as soon as I see code, I run for the hills).

To that end, this blog post is for all those SEOs out there that know they should be improving site speed, but instructions like “combine images into CSS sprites” mean nothing to them. Originally this was going to be one blog post, but after researching, it’s turned into what will likely be three posts. Part one will cover the reasons behind wanting to improve site speed, part two will be getting down and dirty with all the techy code stuff, and part three will be how you can actually get developers and customers to buy into it and how to influence change.

So let’s get started!

How Important Is It?

When looking at how important site speed is, I decided it was important to look at it from various points of view:

  1. Is it better for the user?
  2. Is it better financially for the business owner?
  3. Does it result in higher natural search engine ranks?

Let’s take these one at a time…

1. Relationship Between Site Speed & User Satisfaction

I think it’s fair to say that common sense applies here; people don’t like browsing at slow speeds, and that’s why we don’t use dial-up any more. (Plus the dial-up noise was really annoying.)

Site Speed For Dummies

Image Source: Here

So we already know that people prefer to surf as fast as possible, but since we are geeks, we need proof, right?

For starters, Google, and in particular Larry Page, are obsessed with speed. Have a look at this article on the BBC (from 2009). The “Fast Flip” concept is mentioned, the idea being that we would be able to flip through online content as quickly and easily as we can flip through a physical magazine.

Additionally, Google themselves ran a test to see if slowing down the search process affected users’ behaviour. They essentially made Google slow down very marginally for a set period of time. They had two groups: a control group, who used Google as normal, and a test group. The test group’s searching habits were monitored over a period of 6 weeks while delays of between 100 and 400 milliseconds were applied to their search environment. The test simply showed that the longer the delay, the fewer searches people were likely to do.

The drop in searches ranged from 0.2% to 0.59%, and what’s even more interesting is that with the longer delays, even after users were returned to Google’s regular fast speed, they maintained their newly modified search habits.

Put simply, the effect of a slow user experience changed their search habits for an extended period of time. Bing also found that a 2-second slowdown changed queries per user by -1.8% and revenue per user by -4.3%. That’s pretty amazing. These numbers might not seem like a big difference, but think how quick Google is already: if we were dealing with a site with delays of 4 or 6 seconds, this could have a much bigger impact.

So I think that answers the first part. Yes, a fast site is definitely better for the user… duh.

Oh, yeah – Google also like to remind you just how quick they are every time you do a search:

Site Speed For Dummies

2. Will a fast site make you more money?

So we now know that it’s better for the user, but do happier users actually make us more money? Is improved site speed worth implementing?

I dug around a little and found a great article, Velocity & The Bottom Line, which has some great case studies that are worth having a look at. I picked a couple that are great examples of how site speed can affect your bottom line.

AOL

AOL conducted experiments to see how their site speed affected the way people viewed their site. For each visitor, they monitored the average load time of the pages. They then broke this down into percentiles and examined the number of page views of each group, from fastest to slowest. This showed some nice numbers across a range of industries.

Site Speed For Dummies 3

Image Source: Here

As you can see from the image above, the faster the site is, the more pages the user navigates through. This is true across all the areas of the site shown. More page views on a site like AOL are very important, as more page views = higher advertising revenue.

Shopzilla

Another site which is used as a case study in that article is Shopzilla. If you are not familiar with Shopzilla (I wasn’t until writing this), they are a product comparison site which allows you to compare prices of products and helps you get the best deal. They got some great results from speeding up their site, even in some areas you would never expect. Before the change, their site was averaging a load time of between 4 and 6 seconds per page.

After making the changes they were averaging less than 1 second consistently, and this had a dramatic overall effect.

Site Speed For Dummies 4

This proves that if your site is fast, people are:

  1. more likely to want to spend time there,
  2. less likely to leave it, and
  3. likely to feel more secure. Since they don’t have to wait about as much, they don’t have time to panic when they hit the buy button.

Can you imagine the testing you would need to do to achieve a conversion increase of 7-12%? Then what about the infrastructure costs? 50% is huge, especially with a site this size. Most small businesses won’t have such high infrastructure costs, but it’s still interesting to know that money can be saved in areas you don’t expect. This is the ideal situation, as sales increase + operational costs decrease = WIN.

I think you would agree this shows that faster sites can definitely increase your bottom line.

3. Does it increase your search rankings?

The short answer to this is yes. Google have publicly said it’s a ranking factor, so improvement here will definitely increase your rankings.

What’s not so clear is how high up your to-do list it should be. According to Google, in April 2010 site speed affected only 1% of queries. That’s not a huge result, but I think we can assume that this number will increase over time.

But if there’s still doubt, I always refer to common sense. Browsing at speed is better for user experience, therefore Google should prefer fast speeds. Building on that, the following is based on my opinion and has not been tested, although I welcome anyone to test and let me know the results.

Personally, I think the increase in ranking may not be a direct effect of increasing site speed. I don’t think Google place enough weight on this single factor to cause dramatic rises in the SERPs. Still, consider all the positive things that happen as a secondary effect of speeding a site up:

  • Users use the site more
  • They view more pages
  • Conversions increase
  • Less downtime
  • Lower abandonment rates

These are all excellent things from a user point of view, which is obviously Google’s number one priority.

And a close second is making money. When you look at the size of Google’s AdSense network, for example, do you think they would be in favour of getting a 25% page view increase across the network as a whole by way of increasing speed? Of course they would. Results that big could mean millions of pounds every year.

It makes sense that Google would track site user stats as well. If a site gains all the above stats, that site deserves to rank higher.

I think the secondary increase in relevancy and user experience would drive the increase in rankings, rather than the primary increase in site speed.

It’s all about the big picture in my opinion.

Site Speed For Dummies Part 2 – How To Do It

Let me first apologise for the size of this post. It’s massive, but learning ain’t easy. A lot of this does get pretty techy, and you may have to accept that you just can’t do some of the stuff unless you’re a developer. So to make this post as actionable as possible, I wanted people to know what things are achievable for them, based on their knowledge and ability. For this reason I developed an extremely sophisticated algorithm that will automatically let you know which stuff you can do and what you will get the highest ROI from. Without further ado, I give you “The Dummy Scale”:

This little guy means that almost anyone should be capable of doing the task. It involves mostly copy and paste and will give you the tasks you can do with the least amount of time. Examples would be installing analytics code.

Next up is this guy with the sticky up hair. He’s not quite as simple as the previous character, so if you have a little bit of knowledge of code you should be able to handle it.

Lastly, this guy has glasses: need I say more? This stuff is pretty heavy-going. Don’t go near these ones without at least six cans of Red Bull and a couple of all-nighters planned. OK, it’s not that bad, but they do involve things like the .htaccess file, which I wouldn’t recommend going near unless you know what you’re doing!

OK, let’s get cracking…

I’m going to run through an example, use some tools that Google recommends and see what the outcome is. Once I have a list of recommendations from the tools I’m going to talk through actually implementing them.

I have picked three sites that are all in the same (kind of) market. The three sites are WHSmith, Waterstones, and Penguin.

Compare Site Speed

Step 1

To compare site speed, I used a great tool at http://www.webpagetest.org which allows you to add a URL and time how long it takes to load the page. What’s cool, though, is that it allows you to compare multiple sites side-by-side. Once a site is fully loaded, the screen goes grey to help easily identify which sites load first and last. So have a look at the video below comparing the three sites:

  1. WHSmith with a load time of 9.9 seconds
  2. Waterstones at 10.3 seconds
  3. Penguin with a whopping 31.4 seconds

To be clear, even though it won, WHSmith is by no means the Usain Bolt of the internet. Amazon, for example, renders in a cool 4.4 seconds! So what can we do about this? Let’s analyse each of them using some cool tools.

Step 2

Download Google’s Page Speed tool: http://code.google.com/speed/page-speed/

Step 3

Go to the site you want to analyse. In this case, we’ll look at Penguin. Open Page Speed and click Analyse. You will then get an output that looks something like this:

Woo-hoo! All you need to do now is use efficient CSS selectors, combine images into CSS sprites and enable compression! Ehh… what?! If you are like me, that means nothing to you. I don’t know how to do a single thing on that list. But I’m going to find out how.

So in no particular order….

Use Efficient CSS Selectors

This took me a while to get my head around, so I’ll try to explain it the best I can. Apparently, when you load a page, the browser scans that page looking for information that it can put into a tree-shaped diagram. This is known as an “internal document tree” and looks like this:

This helps the browser break the page down into its simplest elements and organise them in a way it can read. Reading the image above, we can see that on this page there is a body element and within that there are another two elements, the <h1> and the <p>. Then under each of those there is an <em> element.

Now let’s assume that I tell you I want to make the words in the <em> element within <p> blue. If you write the rule em { color: blue }, not only would it make the <p> <em> blue, but it would also make the <em> element under the <h1> blue, as well as any other <em> elements on that page. In order to select only the <em> under <p>, I would need to write p em { color: blue }. That would only colour the <em> element in the <p> area. What we have just done is written a descendant selector.

Put simply, a descendant selector means isolating one element within another element. This is done most efficiently when you don’t make the browser keep looking for something when it’s already found it. To be honest, though, it’s pretty techy, and unless you’re a developer, you’re not going to be able to correct your code. This site explains the process pretty well.
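To illustrate with a made-up example (not Penguin’s actual stylesheet): browsers match selectors from right to left, so the less work each rule forces, the better.

    /* Inefficient: every <em> on the page is checked against the whole chain */
    body div#content p em { color: blue; }

    /* Better: a short descendant selector */
    p em { color: blue; }

    /* Best: a dedicated class, matched directly */
    .intro-emphasis { color: blue; }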

Prefer Asynchronous Resources

Doing things “asynchronously” is a strange concept. You would think that in order to make your site load quicker, you’d want to do as many tasks as possible at once, so the page could load faster. Although multi-tasking generally makes things happen faster, when it comes to loading a web page, doing more things at the same time can actually slow you down. It’s much better to load pages asynchronously, which simply means not doing everything at the same time.

Doing things asynchronously allows you to prioritise the items which you would like to load first. When opening a new page, the only information you need to load immediately is the information that’s above the fold. The rest of the stuff isn’t visible to the user until they scroll down, so it makes sense that the priority should be given to making the stuff above the fold load first. Likewise, some items like tracking scripts are never visible to the user anyway, so it would make sense to prioritise all visible content ahead of that.

So our original Penguin site speed report shows two items which should be changed:

The image above shows that the Facebook info and the Google Analytics could be changed to load asynchronously and allow the rest of the page to load more quickly. The Analytics one is an easy win: all they need to do is update the code to the latest Analytics code, and it will be asynchronous. As for the Facebook part, you would need to go down the Google-suggested route and use a script DOM element.
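For reference, here is roughly what the asynchronous Analytics snippet looks like (UA-XXXXX-X is a placeholder account ID); note that it uses exactly the script DOM element technique just mentioned:

    <script type="text/javascript">
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXX-X']);
        _gaq.push(['_trackPageview']);
        // Create a script DOM element so ga.js downloads without blocking the page
        (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
        })();
    </script>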

Get Site Speed Stats In Google Analytics

This doesn’t speed your site up, but the news broke as I was researching this post, so I thought it was best to include it (plus it’s really easy to do). It’s possible to get site speed tracking data straight from within your Google Analytics account.

  • Step 1 – Install the latest version of the Google Analytics asynchronous code on your site.
  • Step 2 – Add the line: “_gaq.push(['_trackPageLoadTime'])” to your analytics code.
  • Step 3 – Enjoy all your new juicy data.
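Putting those steps together, the top of the asynchronous snippet would look something like this (the account ID is again a placeholder):

    var _gaq = _gaq || [];
    _gaq.push(['_setAccount', 'UA-XXXXX-X']);
    _gaq.push(['_trackPageLoadTime']); // the extra line that switches on site speed data
    _gaq.push(['_trackPageview']);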

Specify Image Sizes

This one is nice and simple and actually makes sense. When loading a page with no image dimensions in the source code, the browser needs to guess where to put everything else around that image until it’s finished downloading. Then when the image does download, it needs to go back and do reflows and repaints (kind of like loading it again) and place it at the correct size and reshuffle the page to make it fit.

If, however, you specify image dimensions, the browser doesn’t have this problem. It’s kind of like saving a seat for your mate in a really busy pub. When he finally gets to the pub, he already knows where his seat is and can go straight to the bar and get a nice cold beer! Mmmm, beer!

OK, so how can we get these poor images on the Penguin site some cold beer? Let’s look at the current page and source code for one of the images.

One of the images that the test highlighted as not having dimensions was the image of Jeremy Clarkson. So let’s look at the source code for it.

<a href="http://itunes.apple.com/gb/app/iclarkson/id406162322?mt=8" alt="iClarkson" title="iClarkson" target="_blank"><img src="http://www.penguin.co.uk/static/cs/uk/0/penguin_homepage/images/0311/panel_03.3_bg.jpg" height="81" alt="" border="0" /></a>

The highlighted part above shows that the image height is specified but not the width. This is an easy fix and just needs a small tweak as shown below.

<a href="http://itunes.apple.com/gb/app/iclarkson/id406162322?mt=8" alt="iClarkson" title="iClarkson" target="_blank"><img src="http://www.penguin.co.uk/static/cs/uk/0/penguin_homepage/images/0311/panel_03.3_bg.jpg" width="67" height="81" alt="" border="0" /></a>

Can you taste that beer?

Obviously, this is a really simple example, and to do all the images on a site the size of Penguin’s would take a long time, but it does show these things should just be done right at the start.

One more thing to note is not to resize images on the fly. If, for example, we wanted a large version of Mr Clarkson’s face (don’t know why), it’s not best practice to simply scale up the numbers in the highlighted section above. Instead, use image-editing software to adjust the image to the size you want and then save that version.

Combine Images into CSS Sprites

Let me start by explaining when sprites are useful, as this will help the explanation later seem easier. Social media buttons are a good example of this. Lots of social media buttons are animated, meaning when you hover over them they do something, they might light up, move, get bigger, etc. This is mainly done to let you know you can interact with the object.

Whatever they do, this is achieved by having two separate images. One shows when you are not pointing at it, then another image shows when you hover over it. Think of it like the little flip animations you used to make as a kid:

By changing quickly between all the images in the flip book, it gives the appearance of character movement, and the same thing happens with lots of hover features. The trouble with this is that it uses lots of images (if you’ve ever drawn one of these books, you know how much of a pain it is drawing 50 images that are pretty much identical). Well, to put it simply, browsers can’t be bothered requesting all these pictures from different places, either. Fifty images means fifty URLs that the browser needs to go to and pull that image from, and that takes time!

The whole purpose of sprites is based around the 80/20 rule of optimisation. Apparently the majority of time spent rendering a page is not down to downloading the image. The main cause of slow rendering is excessive HTTP requests. In other words, stop referencing so many places to pull images from. Sprites solve this problem because instead of having 10 images with 10 separate locations, you combine those images into one big image using a sprite, and then just reference the part of the image which you want to show at that particular location or time.

So sprites are created by ripping all the pages out of the flip book and sticking them to one big sheet of paper in an organised order. This then becomes one big page instead of 50 individual ones. Now the browser only needs to go to one URL to get all the images.

So how does this actually work? Well, you tell the browser which part of the bigger image to show by referencing an area of pixels. A really basic example would be changing the colour of a square when you hovered over it. Look at the three images below.

If you wanted to make this change from red to blue without using sprites, you would need to request two different URLs. You would tell the browser that by default, it should show the image “http://www.mydomain.com/red-image”, then when someone hovers over it, show the image “http://www.mydomain.com/blue-image”. This causes two HTTP requests.

If, however, you want to use sprites, you’d create the bigger rectangle image, which is the two smaller ones stuck together with a one-pixel gap. You would then tell the browser to grab the image URL “http://www.mydomain.com/new-bigger-image” but only show pixels “0 to 50” x 50 (the red part) as a default. But when someone hovers over the image, you would tell the browser to show pixels “52 to 101” x 50 (the blue part). This means that while the total number of pixels is only slightly increased, the requests have been reduced from 2 to 1. Obviously, this is a very simplistic example, but if you do this across a whole site with lots of images, it can make a considerable difference. Check out one of Amazon’s sprites for example:

Doing this can be difficult, but thankfully the tool which Google suggests, SpriteMe, is really good at talking you through the process. Sprites can seem counter-intuitive because logic tells you that making big images slows pages down, but based on reducing the number of requests from say 10 to 1, the benefits of the reduced calls outweigh any increases in the image size.
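Here is a minimal sketch of the red/blue example in CSS, assuming the combined image is 101 x 50 pixels with a one-pixel gap between the two squares (the URL and class name are made up):

    .swatch {
        width: 50px;
        height: 50px;
        background-image: url('http://www.mydomain.com/new-bigger-image.png');
        background-position: 0 0; /* show the red half by default */
    }
    .swatch:hover {
        background-position: -51px 0; /* slide left past the gap to show the blue half */
    }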

Leveraging Browser Caching

Thank God for something that was easy to understand. Learning most of this has been a challenge, so when I saw this it made me smile. While easy to understand, it’s not too easy to do, so it’s mega geeks only for this one, I’m afraid. As previously mentioned, making lots of requests to external sources – whether it’s images, CSS, or JavaScript – takes time, and if they can be reduced or avoided, it can only speed your site up.

Browser caching is great for doing this, and I have to say it is one of the quickest wins I’ve seen so far. Essentially, leveraging browser caching is a cross between giving your browser a better memory and a camera. If there was no browser cache, then every time you went to a website, you would need to download everything again. Thankfully, that’s not the case. There are ways to make your browser’s memory last longer. Most sites have a lot of content that either never changes or very rarely changes. It therefore doesn’t make sense to keep making your browser download the same stuff time and time again. Instead, if you know what items on your site are not going to change for, say, a year, you can tell browsers to remember things the way they are now until a year’s time.

This means that for the next year, instead of downloading everything on every visit, those items are stored locally in the browser cache after the first visit, which allows the browser to load the page much more quickly.

This isn’t suitable for all sites, of course. E-commerce sites, for example, have a lot of changing products. So if you are going to be updating the product range regularly, it’s perhaps not worth your while to set the browser cache to a year, though certainly setting it to a month can help. This is most likely why the Penguin site does so badly in this area: the site is updated quite regularly with new books and special offers.

To actually do this, you need to use the .htaccess file. Unless you know what you’re doing, I would recommend getting a developer to do this. There is a post here on how to do it, but read the comments at the bottom as some people had some issues with this method. I could write the code out, but I’d just be repeating what’s on that post.
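That said, purely to give a flavour, the rules typically look something like this minimal sketch using Apache’s mod_expires (assuming that module is enabled on your server; as above, get a developer to sanity-check it):

    # Tell browsers how long they may cache each content type
    <IfModule mod_expires.c>
        ExpiresActive On
        ExpiresByType image/jpeg "access plus 1 year"
        ExpiresByType image/png "access plus 1 year"
        ExpiresByType text/css "access plus 1 month"
        ExpiresByType application/javascript "access plus 1 month"
    </IfModule>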

Combine External JavaScript

The theory of combining JavaScript is the same as using sprites. The point is to reduce the number of calls that the browser needs to make to external sources. In this case, it’s JS files rather than images, but the idea is the same. Rather than calling lots of different JS files, let’s just put all the JS code together into one file and reference the correct part for the job.

Points to note – sometimes JS needs to be executed in a certain order, so don’t just throw it all together willy-nilly. Look at the page you are optimising, and take a note of the order and location of the JS files. This should be used as the order in which to paste the code into the new document. So let’s see an example. If we look at the Penguin site again, one of the recommendations is:

So first, we had better make sure this is the order in which they are loaded. To do this, view the source (Ctrl+U) then find (Ctrl+F) and search for “gettopup”.

By searching I have found the order of the JS is slightly different:

I also can’t seem to find the file that ends in “/jquery/all.js”. But for the purposes of this example, let’s assume it’s a perfect world. We could then create a new document in a text editor and call it something like “newjsdocument.js”. Then we would paste the JS code (in the correct order) into that document, save it and re-upload it. Now, instead of referencing three or four separate documents, the page references just the one.
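In other words, the change in the HTML is simply this (the file paths are illustrative):

    <!-- Before: three separate HTTP requests -->
    <script type="text/javascript" src="/js/gettopup.js"></script>
    <script type="text/javascript" src="/js/jquery/all.js"></script>
    <script type="text/javascript" src="/js/other.js"></script>

    <!-- After: one combined file, pasted together in the same order -->
    <script type="text/javascript" src="/js/newjsdocument.js"></script>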

Additional notes to consider

  1. Always make copies of your JS code before you go mixing it up.
  2. If you use JS resources that are constantly changing, this may not be applicable.
  3. Many times there are good reasons for having separate JS files, none of which I understand, but I’m told they are good enough reasons.
  4. Additional savings can also be made by minifying the new big JS document (just about to explain what this means).

Minifying CSS

Minifying in general is good practice. When websites are written using CSS, the actual CSS document can be pretty large. Depending on how fancy the site styling is, there can be thousands of lines of code. Unlike people, browsers don’t need text to be spaced out nicely and formatted in an easy-to-read and user-friendly manner. If the code is correct, it can all be jammed together by removing unnecessary spacing and comments. To use this blog post as an example, how much space do you think would be saved if I didn’t use spaces, line breaks, etc.? The answer is lots. When it comes to code, Space = Speed.

According to the Firebug speed test, sites could save 20% on load times by minifying CSS files. That’s pretty cool for something that’s really easy to do. So how do you do it? Well, thankfully some clever people have made quite a lot of tools that are really easy to use and do the work for you. All you need to do is paste your code into them, hit the Compress button, save the output in a new document and upload it.
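To make that concrete, here is a trivial before-and-after (any decent compressor will produce something along these lines):

    /* Before: readable, nicely spaced */
    p em {
        color: blue;
    }
    h1 em {
        color: blue;
    }

    /* After: the same rules, minified */
    p em,h1 em{color:blue}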

A word of warning: if you minify CSS and then want to change anything in the CSS file later, it can be very hard to find the correct parts you need. Always keep a nice, easy-to-edit version before the compression is done. That way, if you ever need to make any other changes, all you need to do is use the saved version and make a new compressed copy! A good tool for doing this is http://www.csscompressor.com/.

Enable Compression

Enabling compression is one of my favourite optimisation tips, not least because it’s one of the easiest to understand, though admittedly it’s not the easiest to implement unless you know what you’re doing. Enabling compression pretty much works the same as regular compression on your computer. If you have lots of files to email someone, you could attach them one by one and clog up your poor mate’s email inbox, or you could put them all in a folder, zip them up and send it as one small file.

In web design this can be done by the servers. It should be noted that if the user doesn’t have this enabled on their browser or is using a really old browser, this won’t work. To be honest, though, most modern browsers today do support it, so I wouldn’t worry so much.

So if it’s done at server side, what do we need to actually do to speed our site up? We need to tell the server to send the compressed version if the user’s browser supports it. This is done in the .htaccess file again. I would strongly advise against going anywhere near your .htaccess file unless you really know your stuff, as it’s easily the fastest way to ruin your site. In the interest of not adding another 1,000 words to the blog post, I won’t go into the hairy details, but enabling compression is definitely Dev territory. This post covers it in more detail if you want to know more.
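Purely as a sketch, gzip compression on Apache is typically switched on with mod_deflate rules along these lines (assuming the module is available; again, one for your developer):

    # Compress text-based responses before sending them to the browser
    <IfModule mod_deflate.c>
        AddOutputFilterByType DEFLATE text/html text/css application/javascript
    </IfModule>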

There are definitely big gains to be had by enabling compression, so if you have a developer, it should be top of your to-do list. The Firebug plug-in for our Penguin example shows that a 426.7 KiB (75%) reduction in transfer size could be achieved by enabling compression, and that would be a huge win.

Minimize DNS Lookups

Although they are separate items on the list, I thought I would talk about minimising DNS lookups and parallelising downloads across hostnames together because there are conflicting arguments to each.

Let me explain minimising DNS lookups first. DNS stands for Domain Name System, and it works like a phone book. When you tell a browser to go to www.mydomain.com, the browser essentially uses a kind of phone book to look up your domain. Beside your domain will be an IP address, which gives the browser the location of the files on that domain. Looking up those numbers takes time, so the more sites you need to look up, the longer it takes. Doesn’t that sound much simpler than ‘minimise DNS lookups’? #DummiesFTW!

So when the speed tip says “minimise the number of DNS lookups”, it essentially means try to limit the number of different websites you list. So if your website requests information from four different URLs like:

That would be a total of three DNS lookups. Why not four? Because http://twitter.com/ and http://twitter.com/#!/CraigBradford have the same DNS. Anything on the same domain has the same DNS. Easy, right?

So when would you request information from URLs? If you want to pull in style sheets, JS, social API data, those are all DNS lookups. So how do we minimise the number of DNS lookups? Well, the recommendations I’ve read are to essentially do either of the following:

  1. Anything that’s on a sub-domain, change to a directory. In other words, if you reference “something.mywebsite.com”, change it to www.mywebsite.com/something. This is confusing because it’s still on the same domain, so it shouldn’t be an extra DNS lookup, right? Wrong. Browsers treat these as separate lookups, even though they belong to the same domain.
  2. If you are pulling in stuff from several websites, anything that can be put onto the same domain should be. This is applicable if you have lots of images being pulled from different sources. If you could stick them all on one site (one DNS lookup), like Flickr for example (*wink wink*), you would offload the image hosting from your own server and reduce the number of DNS lookups at the same time.
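To make recommendation 1 concrete, here is a hypothetical before-and-after using a made-up image file; the only thing that changes is where the file lives:

    <!-- Before: the sub-domain costs the browser an extra DNS lookup -->
    <img src="http://something.mywebsite.com/logo.png" alt="Logo">

    <!-- After: the same file served from a directory on the main domain -->
    <img src="http://www.mywebsite.com/something/logo.png" alt="Logo">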

The following example is completely unrealistic, but it makes the idea easy to understand. Let's say you have an e-commerce site with 1,000 images on it, but because you're slightly cheap and dodgy, you just use other people's images from a thousand different sites. That means that when loading your huge page, the browser needs to do 1,000 DNS lookups. If, however, you set up a Flickr account and hosted all your images there, you would only have one DNS lookup, as every picture would be Flickr.com/image1, Flickr.com/image2, and so on.

So if that’s the case, why doesn’t everyone just put all images on Flickr? Well, that brings me to parallelising downloads across hostnames. As you probably guessed from the name, it has something to do with doing things in parallel.

Parallelise Downloads Across Hostnames

When your browser loads a page, it looks at all the files it needs to download and all the places it needs to get them from. For example, let's say your page lists:

  • 20 Facebook profile pages
  • 20 Twitter profile pages
  • 20 YouTube videos
  • 20 images from Flickr

At this point, the browser does some quick sums and says: I have 80 files to download, but there are four places I can get them from, so I need to get 20 from each! Now comes the catch. Most browsers only allow two connections to any one host at a time. So to download its 20 files from a host, the browser fetches two at a time and puts the rest in a queue, leaving four queues of 20. From this it's clear to see why hosting 1,000 images on Flickr wouldn't be a good idea: it would create one enormous queue.

This is where the idea of parallelising downloads across hostnames comes in. I think I'll compare this to toilets at music festivals…

If there are 80 people waiting for the toilet and there is only one toilet, and these festival-goers don't mind going two at a time, it would take 40 toilet sessions to clear the queue. If there are four toilets, and again people don't mind sharing, it would take only 10 sessions. So it's easy to see the advantage of having more than one toilet (host). Common sense really, once you get past all the jargon. This is where browsers treating a sub-domain as a separate DNS lookup can actually be useful: by hosting items at "something.mydomain.com" instead of "mydomain.com/something", you have given the browser an extra toilet to shorten the queue. Just in case I didn't explain the metaphor clearly: the queue of people represents the time it takes to download the page, so you could think of each person as a second if you like.
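In code, giving the browser an extra toilet just means serving some of your files from a different hostname. This is a rough, hypothetical sketch; the img1 and img2 sub-domains are invented for illustration and would need to be set up on your server first:

    <!-- Everything queues behind two connections to a single host -->
    <img src="http://www.mydomain.com/photo1.jpg" alt="Photo 1">
    <img src="http://www.mydomain.com/photo2.jpg" alt="Photo 2">

    <!-- Split across two hostnames: up to four connections at once -->
    <img src="http://img1.mydomain.com/photo1.jpg" alt="Photo 1">
    <img src="http://img2.mydomain.com/photo2.jpg" alt="Photo 2">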

I hope you can see the conflict here; I have literally just contradicted my previous recommendation. Here is a breakdown of the pros and cons of each.

Minimize DNS Lookups

  • Advantage – Speeds up the page by cutting the time the browser spends looking up where files live.
  • Disadvantage – Concentrates the load on fewer hosts, so if there are loads of files, queues build up, which can actually slow your page down even more.

Parallelise Downloads Across Hostnames

  • Advantage – Reduces the queue at each file source and spreads the bandwidth load.
  • Disadvantage – Increases the number of DNS lookups.

This is a balancing act, and it comes down to the number of files on any given page. Doing either one of these to an extreme will likely slow your site down, as it will have a negative effect on the other. To use another extreme example, if you had only four files, it wouldn't make sense to create two subdomains; assuming those files are not massive, you would stand to gain very little. Where this becomes really useful, I think, is when it comes to using JavaScript and ordering the way your page loads.

“A lot of site speed comes down to setting priorities.”

A page fully loading and the user perceiving the page as fully loaded are two different things. If you have a lot of images, for example, big gaps all over the page are really obvious to the user. It might therefore be worthwhile putting the images above the fold in a short queue, so that the first thing the user sees on landing at least looks like a fully loaded page; the rest can then load below the fold while they read. From my research, a lot of page speed comes down to setting priorities.
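As a sketch of what that prioritising can look like in practice (the defer class and data-src attribute here are my own inventions, not a standard), you can load above-the-fold images normally and swap in the below-the-fold ones only once the page has finished loading:

    <!-- Above the fold: loads immediately -->
    <img src="hero.jpg" alt="Main product shot">

    <!-- Below the fold: tiny placeholder now, the real image after load -->
    <img class="defer" src="blank.gif" data-src="gallery1.jpg" alt="Gallery">

    <script>
    // Once the visible page has loaded, fetch the deferred images
    window.addEventListener("load", function () {
        var images = document.querySelectorAll("img.defer");
        for (var i = 0; i < images.length; i++) {
            images[i].src = images[i].getAttribute("data-src");
        }
    });
    </script>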

If I had to pick one to focus on, I would recommend parallelising downloads, as you stand to gain the most if you do it correctly.

Conclusion

I started this series in the hope of making speed optimisation a realistic and achievable goal for almost anyone with little web or code expertise. The reality is that much of it should really be left to the clever guys. Undoubtedly some of you will finish reading this and feel that a lot of it is still unachievable (myself included), but I hope there are enough quick wins that you feel you can achieve something, and that you have at least learned which areas you will need help with. Knowing about the tasks above also lets you delegate them to developers without being told it can't be done because it would take two years to complete. So if nothing else, you can at least outsource from a knowledgeable position.

So what is the take away from site speed?

1- Optimising for site speed should not be a priority. Unless you are a big site, the ROI will be pretty low. Only have developers working on it if they have nothing else to do.

2- A lot of the tweaks that the software recommends will not be suitable for every site. Often things are the way they are for a reason, and it's best to leave them alone. What you gain in one place could have a negative effect elsewhere.

3- New sites should get this stuff right at the start. It’s easier to do it correctly the first time than go back and pick out tiny pieces of code.

4- Always make copies of all code before making changes. This makes rolling back much easier if something breaks.

5- Speed is a project, not a task, so plan to make the changes over time and do them in order of gains. Also keep in mind the consequences of any changes.

6- Lastly, my little bit of wisdom for the day would be, if you want to learn something, teach it to others; it’s the best way to learn.

I hope this post has made a very techy subject actionable for some people. The plan was to have a third part, but I really don't feel it's necessary. If you have a big site, the examples in Part 1 speak for themselves. For big sites, there are big wins; for smaller sites, put it on your "to do someday" list. Thanks for reading, and feel free to shoot me questions and comments; I'll help where possible. You can also reach Craig on Twitter: @CraigBradford
