Are you as fed up with battery life claims as I am?
In the quest to make lighter and slimmer devices there’s a trend for manufacturers to trim the battery when releasing newer models of phones or tablets. They all do it, but it just so happens that I have a couple of iPads on the desk in front of me that I can use to demonstrate what happens when batteries get smaller. This is something that has come up recently in various online conversations so I thought I’d write a few words on the subject, but please don’t read this as an attack on Apple. What I’m about to write could apply to just about any tech manufacturer.
The two iPads on my desk are an iPad Air 2 (currently top of Apple’s ten-ish inch range), and an iPad 4th Generation, which is two generations back. They both have the same screen size, the same memory capacity, the same connectivity options, and both have exactly the same apps installed. They are even signed in to the same Apple account, and have the same email accounts and notifications enabled. To all intents and purposes one is an exact mirror of the other.
I do exactly the same things with them too: Watch a bit of Netflix, iPlayer or Amazon Prime, or stream other content from the Plex server running on my NAS; check emails, Twitter and Slack; play Scrabble with my parents or play Threes by myself; flick a few Angry Birds; check the news and weather; set stuff to record on my Sky+ box, etc. Each iPad probably gets a similar amount of use, too. So why two? Well, one is upstairs and one downstairs. Yes, I really am that lazy!
Oh, and before I continue, I should say that for the tasks I’ve listed above I don't notice any performance difference between the two machines. I realise that the Air 2 has a much faster CPU and graphics system, but I think it’s wasted on people like me. If your game playing includes the latest high-resolution driving games or immersive first-person shooters then you might spot a difference. Or if you use your tablet for rendering tasks or re-encoding audio (why would you?) then again you might appreciate the newer, faster hardware. But in my own real world usage – watching streamed video, checking web pages, playing simple games and using productivity apps – I can't notice any difference at all. None. Nada. Zilch.
So what else is different? Let's start with some specifications: both iPads have the same size screen, and the same number of pixels. In fact, the screens are pretty much indistinguishable. But the newer iPad has a smaller bezel and is somewhat thinner. You wouldn't think so by looking at it, but the older iPad is almost 1cm thick. The rounded edges are a brilliant visual/psychological trick used by product designers to make things look thinner. But if we ignore those for a moment (just to make the maths a bit easier), the iPad 4th Generation clocks in at 421 cubic centimetres. Do the same calculation for the newer iPad Air 2 and you get 248cc – a reduction of over 40% in volume.
The new one is lighter too, at 444g (the 437g in the image on this page is for the Wi-Fi only model); the older model is 662g. So where have the volume and weight savings been made? Well, some of it is obviously down to improved packaging. You can tell that from the relative density of the two models: 1.57 grams per cubic centimetre in the old model, 1.79 in the newer one. Perhaps some of the electronic components are a bit smaller too. But dig deeper into the specs and by far the biggest change is the battery.
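If you fancy checking my sums, here they are as a few lines of Python – just a quick sketch of my own, using the dimensions from Apple's published specs and treating each iPad as a simple cuboid (ignoring those rounded edges, as above):

    # Volume and density sums for the two iPads.
    # Dimensions in mm, from Apple's published specs.
    ipads = {
        "iPad 4th Generation": {"dims_mm": (241.2, 185.7, 9.4), "mass_g": 662},
        "iPad Air 2":          {"dims_mm": (240.0, 169.5, 6.1), "mass_g": 444},
    }

    for name, spec in ipads.items():
        w, h, d = spec["dims_mm"]
        volume_cc = (w * h * d) / 1000          # mm^3 -> cubic centimetres
        density = spec["mass_g"] / volume_cc    # grams per cc
        print(f"{name}: {volume_cc:.0f}cc, {density:.2f}g/cc")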
Apple doesn't publish the battery capacity of its portable devices, but various third party websites such as ifixit.com pull the things to pieces (technically known as a 'teardown'), and they found that the 4th generation iPad contains a 3.7V 42.5Wh battery. It's actually this Wh (Watt hours) figure that's important when comparing batteries, as that's the total energy available. Confusingly, most batteries are advertised using Ah (Ampere hours, or more usually Amp hours) specifications. In this case, as we're working with simple DC, Volts times Amps equals Watts, so we get 11.48Ah, which for marketing purposes is usually expressed as 11,480mAh. Eleven thousand sounds better than eleven!
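The conversion is trivial, but here it is anyway as a couple of lines of Python, in case you want to try it on other batteries:

    # Wh = V x Ah for simple DC, so Ah = Wh / V (and mAh = Ah x 1000).
    def wh_to_mah(watt_hours, volts):
        return watt_hours / volts * 1000

    print(round(wh_to_mah(42.5, 3.7)))  # iPad 4th gen: ~11486 mAh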
If we compare this with the iPad Air 2, the battery capacity has dropped from 42.5Wh to 27.62Wh. That's quite a drop, isn't it? In fact it didn't happen in one step – the intervening iPad Air (the first one) had a 32.9Wh battery, making the reduction more gradual. So with every generation the battery has been getting smaller, allowing the tablet to get smaller, thinner, and lighter.
And yet for each of these models a shiny grey man has stood on a stage at a ‘keynote’ event in California and told us that the battery life hasn't been affected. "Still the same great ten hour battery life!" is the claim that's usually made, and if you've ever been to one of these events, or watched it streamed over the Internet, you'll know that the claim is always met with an embarrassing cacophony of whoops and air-punching from the audience.
So do these claims stack up? Let's start with benchmarks. PC Pro uses a looping video test to measure battery life, as indeed do many other printed and online tech journals. It seems to be the standard way to do things. PC Pro of course does it better than most, by making sure that the screens are calibrated to exactly the same level of brightness, and setting the tablets to 'flight' mode. And in these benchmarks both iPads came within a gnat's whisker of each other, at between 12 and 13 hours. So maybe Apple's claims are true, then? Maybe they've somehow managed to optimise the hardware and software of the tablet to squeeze more performance from a smaller battery?
Well, obviously yes they have. If you are watching a looped video. But I have to tell you that in my real world usage the newer iPad lasts around 30% less time than the older one before needing a trip to the charger. And let’s not forget that the older one is, well, ‘older’. It has been recharged many times, so its battery has probably started to fade slightly. Yet it still has very much more stamina than the newer, lighter, smaller model.
I really don’t think anyone should be surprised by this. There is no magic that can make smaller batteries last longer than bigger ones. And if there’s optimisation available in software it’ll apply to older models as well as newer ones, as they both get OS updates.
As I said at the start, this really isn’t a rant directed at Apple, as all of the manufacturers do the same thing. The tech companies seem to think we want ever thinner devices (I’m not entirely convinced that we do), and as a result we see battery life suffering. I just wish they’d be more honest about it, rather than providing special optimisation to make sure that their new models with smaller batteries sail through typical benchmark tests without problems. I can’t help but be reminded of car emissions…
Tuesday, 2 August 2016
Bluetooth audio without the delay
I often get people asking me questions about Bluetooth headphones and speakers.
Typically they'll say something such as "My new headphones sound great, but there’s a noticeable processing delay, which means I see ‘lip sync’ issues – when people talk their words don’t quite match up with their mouths. It’s very frustrating and makes watching TV quite difficult" or "Can you recommend a wireless speaker which doesn’t delay the sound?". People seem just as interested in reducing that delay as they are in the overall sound quality.
Pretty much any digital signal is going to suffer this effect to a greater or lesser extent. As those of you who’ve worked in networking or electronics will know, this delay is known as ‘latency’. It’s caused by a number of factors including the original analogue to digital conversion, buffering, compression and other signal processing, assembling the data into packets and transmitting it, and then pretty much the same again at the other end, in reverse order.
If you’ll allow me a slight diversion for a moment, the biggest digital latency problem that I can think of is the Greenwich Time Signal, also known as ‘the pips’, when played out across DAB radio. They are often wrong by several seconds, meaning that as a ‘time signal’ they are pretty much useless. You can hear this delay yourself if you place a DAB radio next to an FM one, and compare the output on the same station. What you might not realise, though, is that for a few years now most FM stations have had a delay too, due to the way that the signal is delivered to the transmitters using NICAM encoding. FM will typically have about a one second delay, which is wrong, but probably just about usable for most people. It would work in an “OK chaps, let’s synchronise watches” scenario. DAB, though, can be around 6-8 seconds out – the delay is variable, depending on the brand and age (and thus processing speed) of your DAB receiver. Any such delay obviously makes the time signal pretty useless. I once made an official suggestion to the BBC that they add a ‘warble’ to the time signal on DAB (and indeed internet streams) so that listeners would know that the signal isn’t accurate, but this fell on deaf ears.
Anyway, back to wireless headphones and speakers. As those questions suggest, most of these seem to be Bluetooth driven these days. In fact, I just did a quick search for wireless headphones on the Currys website: at the time of writing it lists 88 products, of which 84 use Bluetooth. I’m not going to get into arguments here about the audio quality of Bluetooth audio – especially as my cloth ears seem to max out at around 192kbps VBR MP3 these days – I really can’t detect any improvement at higher bitrates, or when compression is removed. So for me at least, quality wise, Bluetooth based headphones or speakers can sound plenty good enough, although I can appreciate the better sound quality of some models over others. Although my ears are poor I still hate tinny, or over-boomy, sound. But that’s more about construction, speaker size, etc. than the transmission medium or compression used. For me it’s all about balancing good quality sound and convenience/portability. With headphones in particular, some can feel very uncomfortable when used for extended periods.
Let’s look again at audio latency: with Bluetooth speakers or headphones the delay will typically be in the 100-200ms range, although some products have a delay of a whopping half a second. But if we take a typical UK video source running at 50Hz, each frame lasts 20ms. Incidentally, it’s a common misconception that TV is captured at 25Hz because of interlacing. That may have been the case some years ago, but these days many of us watch 1080p (or better) sources, which aren’t interlaced and which have a proper 50Hz or even 60Hz frame rate. Hardly anything is filmed for TV at 25Hz these days, except when people are aiming for a deliberate ‘archive’ look.
So at 20ms per frame, audio which is delayed by 200ms will be ten frames out. That’s very noticeable.
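If you want to play with other figures, the arithmetic is simple enough – here's a little Python sketch:

    # How many video frames a given audio latency represents.
    def frames_out(latency_ms, frame_rate_hz=50):
        frame_ms = 1000 / frame_rate_hz   # 20ms per frame at 50Hz
        return latency_ms / frame_ms

    print(frames_out(200))  # typical Bluetooth: 10 frames adrift
    print(frames_out(500))  # the worst products: 25 frames!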
There have been various studies into how bad an audio delay needs to be before it causes so-called ‘lip sync’ problems, but one of the most respected studies is from the American ‘Advanced Television Systems Committee’ and it recommends an absolute maximum of 45ms – see http://www.webcitation.org/60UbU5Ziv. An earlier study by the European Broadcasting Union had suggested that 125ms was acceptable. The point at which the delay becomes unnoticeable appears to vary from person to person – there’s no fixed point, so all of these studies try to adopt a sensible “most of the population” recommendation.
The problem with these studies is that they allow broadcasters to work within these tolerances, and so in many cases the material you are watching will already be pushing at the limits. So even if your own equipment is also within these limits, the compound effect of the two delays added together can push things into the detectable range. This means you need to do everything you can to keep your own equipment’s latency as low as possible.
So what can you do? Well, for starters there’s equipment out there which doesn’t use Bluetooth for audio. In fact it doesn’t use digital processing at all. I mentioned above that 84 of the 88 headphones on sale at Currys use Bluetooth – well, that means there are four that don’t. I’m pretty sure they all use 868MHz analogue RF channels, which means they have essentially zero delay.
When it comes to wireless speakers there are also a few that use analogue RF channels, but they tend to be at the lower end of the market, and are pretty dire. More expensive wireless speaker systems usually have ‘better than Bluetooth’ latency. For example, with Sonos speakers it’s around 70ms. That’s still enough to sometimes notice lip sync problems, but it’s better than most.
So what’s the answer? Well, the best one I’ve found so far is Bluetooth. But not normal laggy old Bluetooth. What many people don’t realise is that Bluetooth Audio isn’t always the same. For starters there are a number of different ‘profiles’ which can be used to transmit audio. In the early days these were mainly designed for telephony applications, but then along came A2DP or Advanced Audio Distribution Profile which offered much better sound quality. By default, A2DP uses a codec called SBC, or Low Complexity Subband Coding (no, I can’t work out how you get from this to SBC either). In fact there are several different versions of SBC, but like most things Bluetooth the transmitter and receiver do an initial handshake and then choose the highest standard acceptable to both.
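You can think of that handshake as working something like the little sketch below. To be clear, this is my own illustrative Python, not real Bluetooth stack code – actual A2DP negotiation happens through a capability exchange, and the preference order shown is just an example:

    # Illustrative codec selection: both ends advertise what they
    # support, and the best mutually-supported codec wins.
    PREFERENCE = ["aptX LL", "aptX", "AAC", "SBC"]  # best first (example order)

    def choose_codec(source_caps, sink_caps):
        mutual = set(source_caps) & set(sink_caps)
        for codec in PREFERENCE:
            if codec in mutual:
                return codec
        return "SBC"  # SBC is mandatory in A2DP, so it's always the fallback

    print(choose_codec(["aptX", "SBC"], ["SBC"]))                # -> SBC
    print(choose_codec(["aptX LL", "SBC"], ["aptX LL", "SBC"]))  # -> aptX LL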
But the spec allows for other codecs to be used too, and this is where we can start to shave a shedload of time from the standard Bluetooth latency. In particular, there’s a commercially licenced codec called aptX Low Latency which comes from a company called CSR. Using aptX LL (as it is usually called), the latency is reduced to 32ms. That’s just about the best you are going to get for wireless digital headphones or speakers, without stumping up for products at the megabucks end of the market, using custom audio protocols.
The aptX LL codec has to be specially licensed by product manufacturers, so you won’t find it available on a massive range of kit. In fact there’s a fairly up-to-date list on CSR’s website: http://www.aptx.com/products-low-latency/browse/categories. Oh, and please don’t confuse aptX LL and normal aptX. The latter is all about getting sound quality from Bluetooth, but it doesn’t address the latency issue in the same way that the LL version does.
In terms of speakers, I really love Denon’s Envaya range. There’s a ‘mini’ version, and I reckon that in many respects it actually sounds better than its bigger and more expensive sibling (although bear in mind what I said about my hearing). You can pick up the Envaya Mini for around £70-£80, and that has to be something of a bargain for such a quality product. Mine is probably one of my most used gadgets, and I don’t only use it for low latency applications. It also regularly plays music streamed from my phones and tablets.
Bear in mind that if you want to banish lip sync problems, aptX LL needs to be supported at both ends. This means that if you’re going to use one of these speakers or headphones for listening to an audio-visual source such as TV, you’ll also need a transmitter which supports aptX LL. As you’ll see from the CSR website there are quite a few of these, but I’m a huge fan of the Avantree Saturn Pro. It’s a small puck-like device – one will easily fit into the palm of your hand. Powered by a rechargeable battery, it’ll go around 10 hours between charges. Or if your telly has a USB socket you can use that to keep the device permanently powered. It’s important to get the Saturn Pro – the non-Pro version only supports the normal latency version of aptX.
The great thing about the Saturn Pro is that it’ll work as either a transmitter or a receiver (there’s a little switch on the side). And that’s a really good thing, because if you take a look at the range of headphones with built-in aptX LL on the CSR website you’ll see that they are all quite high-end, with price tags to match. But with the Saturn Pro running in receiver mode you can simply plug in your favourite wired headphones. If you need them, you can get a pack with two Saturn Pros, ready paired. The packs are usually about 20% cheaper than buying the devices individually, which is always nice.
Monday, 14 March 2016
Why won't my Kindle connect to my Wi-Fi?
Here's a Kindle tip, the seeds of which were sown by a friend who works in the technology department at Tesco. It seems that although the supermarket used to sell a lot of Kindles (the 'proper' ones with e-ink displays, not the silly tablet Kindles), they also got a lot of them returned, because people had trouble connecting Kindles to their Wi-Fi. With mobile connected Kindles this was less of a problem, of course, but it was a real killer with Wi-Fi only devices. And it still seems to apply to some devices, so this tip remains relevant.
Search the Kindle help pages on Amazon and you won’t find the answer, strangely. But it’s really very simple (and very stupid). The Kindle was designed in the US, and so it only uses the US Wi-Fi channels 1 to 11. If your home or office wireless network is set to use channel 12 or 13 (or 14 if you’re being really naughty) your Kindle simply won’t see the network. This despite it being a UK supplied Kindle, with UK settings. The answer is to change the channel on your Wi-Fi router to be in the 1 to 11 range. It’s really not ideal, and frankly I’d class this as a bug. Somehow, though, I doubt Amazon is either reading this, or cares.
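For the curious, the 2.4GHz channel numbers map on to centre frequencies in a simple way (channels 1-13 sit 5MHz apart starting at 2412MHz, with the Japan-only channel 14 an outlier at 2484MHz). Here's a quick Python sketch showing which channels a US-region device like the Kindle will actually scan:

    # 2.4GHz Wi-Fi channels and their centre frequencies.
    def centre_mhz(channel):
        return 2484 if channel == 14 else 2407 + 5 * channel

    for ch in range(1, 15):
        note = "visible to the Kindle" if ch <= 11 else "INVISIBLE to the Kindle"
        print(f"channel {ch:2}: {centre_mhz(ch)}MHz - {note}")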
I don’t want to alarm you - a tale of caution.
Mobile technology allows us to carry round astonishing levels of computing power and communication ability, and use them wherever and whenever we want. Yet there’s one item from the office environment that doesn’t yet have a viable mobile equivalent, and that’s the printer. At this point I know some of you will be asking “What about the paperless office?”, but I’m sure that by now most of us realise that that’s simply never going to happen. It was first promised decades ago, and yet we still often need to print out various things. Have you ever bought something on an e-commerce site and not been presented with the final “we suggest you print this page”?
Most business printers are networked these days. And many employees will have a VPN connection back to their office. So the easy solution is that while out and about you simply print to your office (or indeed home) printer. It’s a simple enough solution, and usually works very well.
However, there is a potential gotcha. Although this remote printing works well during the day, I’ve seen several cases reported where someone has printed something out at night and it has set off the office burglar alarm. I’m not sure whether it’s single sheets appearing in the printer tray that causes this, or one of those instances where a large document causes a paper tray to fill and start scattering pages all over the floor.
Either way, there are two solutions: either site your printer in a location which isn’t ‘seen’ by the alarm sensors, or else get your alarm company to swap the PIRs for ‘pet friendly’ alternatives. They might tell you that you’re bonkers for wanting pet-proof sensors in an office, but I’ve discovered that these sensors are also A4 paper proof!
Facebook scams - listen to Granny
Those of you that are on Facebook will no doubt have seen your friends and family posting all kinds of stupid things. Quite often it’ll be a silly competition such as “Name a band that doesn’t have an A in their name – it’s harder than you think”.
Of course it isn’t hard at all – there are millions of bands without the letter A in their name, but what these competitions generate is a lot of activity. They’ll often generate hundreds of thousands of comments. Instead of the question you might see a photo, with instructions like “Click on the photo, post a comment, see what happens”. Of course, nothing happens. Another variation involves liking a page and sharing a photo, with the promise that you might win something valuable. Quite often it’ll be a photo of iPhones or iPads, but I’ve also seen a trend towards expensive shoes and handbags. Sometimes the name of the account running the competition will be something that looks quite genuine too.
I’m sure that most people probably realise that it might be a scam, but think “What the heck, it isn’t doing any harm, and there’s always the slightest chance that I might actually win something”.
Actually, in both of these instances you are doing harm. For starters you’re sharing this rubbish with your friends, so you’re likely to be luring them into it too, but mainly because you’re helping scammers. The business model behind these competitions is that they are looking to get as many likes or comments as possible, and that’s all. There are no prizes. You’re never going to win an iPhone, or a pair of Ugg boots.
What all of these scams are about is getting as many comments and, in particular, as many likes as possible. Facebook uses an algorithm called EdgeRank for its news feed optimisation. A bit like Google with its PageRank, EdgeRank is used to prioritise how often things appear in people’s Facebook feeds. Pages with lots of likes and comments get a much higher EdgeRank, which is why so much of this guff shows up in your news feed!
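Facebook has never published the exact formula, but EdgeRank is commonly described as a sum, over every interaction (or 'edge'), of affinity times weight times time decay. Here's a purely illustrative Python sketch – the weights and decay figures are made up by me, since the real values aren't public:

    # EdgeRank-style scoring: each edge is (type, affinity, age_days).
    WEIGHTS = {"like": 1.0, "comment": 4.0, "share": 6.0}  # illustrative

    def edge_rank(edges, daily_decay=0.5):
        return sum(affinity * WEIGHTS[kind] * daily_decay ** age_days
                   for kind, affinity, age_days in edges)

    # Thousands of fresh likes on a scam post swamp one old genuine comment.
    print(edge_rank([("like", 0.2, 0)] * 5000))  # -> 1000.0
    print(edge_rank([("comment", 0.9, 10)]))     # -> ~0.0035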
But where’s the business model? Well, a page with hundreds of thousands of likes is a very valuable commodity. Lots of brands, large and small, are starting to venture into social media. The first thing they learn is that it can take months or even years to build up an online community. As a result, many are prepared to buy a pre-built community from a scammer (although of course, being novices, they are probably unaware of the scam element).
Someone wanting a quick win (it might even be another scammer!) buys the page and they’ve instantly got a huge following, lots of likes and comments, and a long established edge rank capable of pushing out updates to hundreds of thousands of Facebook users. It’s a marketing manager’s dream!
To be fair to Facebook, it has tried to tighten this whole area up slightly – it used to be that the scammer would change the name of the page when they sold it, to reflect the new owner. A while back Facebook changed the rules so that you couldn’t change the name of a page after it had received more than 200 likes. But as I mentioned above, the scammers get around this by using fairly official looking names in the first place.
I must admit I tend to get annoyed when I see friends and family taking part in these scams – especially those that really ought to know better. The other day I even saw a friend with a senior position in IT security sharing a ‘Win an iPad’ photo. But I guess that until someone has explained what the scam is, and how it works, it’s hard to see what problems it might cause.
A good rule of thumb is to look for terms and conditions – any genuine competition will have them, but these scams usually don’t. Also look for things like spelling mistakes and bad grammar – you’ll often find these in the scam postings, whereas any genuine competition from a large brand will have been through several stages of proof reading.
The best advice is to listen to granny. She may not be on Facebook, but your gran’s old “if it looks too good to be true it probably is” adage is as true today as it has ever been, and is especially important in the social media world.
Scorched screens
Paul Ockenden revisits a screen damage problem from the past.
I just glanced up at the calendar on my office wall and it said 2016. Then I glanced down at the phone on my desk and suddenly it was the 1980s all over again. Why? Has this column suddenly turned into an episode of the BBC's Ashes to Ashes? No, the reason for my mental journey back to the ‘80s was screen burn. Anyone old enough to remember the big and bulky CRT monitors we had back then (especially the pre-colour green screen VDUs) will probably recall the phenomenon of screen burn, or burn-in as the Americans liked to call it. It was when the constant display of a particular thing (a pattern, a logo, an icon, or even a command prompt) damaged the phosphor used to coat the surface of the screen, causing a ‘ghost’ of the image to be permanently displayed.
I’m sure everyone is aware of how the efficiency of a fluorescent light tube will fade over time – well, exactly the same is true of cathode ray tubes. The efficiency of the phosphor coating behind the glass at the front of the screen decreases with usage, and if a particular part of the screen is constantly lit-up the image is permanently ‘burned’ into this light emitting part of the screen. I remember almost crying the first time this happened to one of my very expensive first generation Sony SVGA monitors.
It’s an effect that many of us have now, thankfully, forgotten about because modern LCD displays aren’t subject to the same effect (although, strangely, some monitors and TVs do still include the various tricks used to combat screen burn in their circuitry or firmware, particularly detecting an image with a constant part – such as an on-screen logo – and then shifting the image a pixel or two in a random direction now and again). I've no idea why LCD screens still feel the need to do this.
So screen burn is a thing of the past then? Unfortunately not. If you look at the phones and tablets on sale in the high street – actually who am I kidding? Nobody buys tech in the high street any more. If you look at the phones and tablets on sale in one of the big out-of-town technology sheds, or perhaps your local global-mega-hyper-mart, and if you look carefully at the specifications listed on the shelf tickets, you’ll notice that the screens come in two basic flavours, IPS (standing for In-Plane Switching) and AMOLED (the acronym breakdown of which we'll come to in a moment). IPS uses a backlight behind a variation on the traditional LCD panel; AMOLED on the other hand is a light emitting panel of organic LEDs. There are various flavours of both technologies, such as Samsung’s Super PLS (a type of IPS), and Super AMOLED Plus, but the underlying technology is pretty much the same in all cases.
In terms of currently popular phones, Samsung tends to use AMOLED panels on most of its phones, and it's becoming increasingly popular elsewhere. Apple uses IPS screens, although it is sometimes criticised for using an ‘older’ technology in its devices. But is this criticism fair?
Both types of screens have their pros and cons. IPS displays have better colour accuracy, and in particular are capable of showing whiter whites. On the downside, though, they can be quite power hungry. AMOLED screens are much kinder on the battery, and can be thinner because there’s no backlight required. The blacks are better with AMOLED screens too, because the pixels are actually turned off, rather than black pixels being formed by trying to obscure the backlight. The screens aren’t as sharp though, and can be harder to read in bright sunlight. To my mind, though, the biggest problem with AMOLED displays is that they bring back that long-forgotten problem of screen burn.
The problem is the O in the AMOLED acronym, which stands for Active-Matrix Organic Light-Emitting Diode. The organic compounds used in AMOLED displays – substances such as polyphenylene vinylene (more commonly known as PPV) or polyfluorene (PFO) polymers or co-polymers – can all degrade with use. This is caused by a couple of factors: firstly, the chemistry involved in the electroluminescence process is at least partly irreversible, so just like a battery the devices will degrade as they are used. Secondly, the organic materials tend to crystallise, an effect which can be exacerbated at higher temperatures. Something to remember next time your phone gets warm while you are playing a game or watching a video.
There are two main types of AMOLED displays: some with traditional RGB stripe layouts like you'll find on an LCD monitor (using three subpixels per pixel), others having a PenTile layout which uses two subpixels per pixel, alternating red-green and blue-green. Because of this structure PenTile screens have twice as many green subpixels, and fewer red and blue subpixels. It’s the blue subpixels which degrade most quickly, so as a result PenTile displays are a bit less susceptible to screen burn than other AMOLED displays, but they are still vulnerable.
Incidentally, PenTile is a patented matrix layout, owned by Samsung, although some other manufacturers have licensed it. Knock-off clones don't seem to be much of a problem, although that's perhaps not surprising given the highly litigious nature of the mobile devices marketplace.
So how does this degradation affect a typical smartphone or tablet user? Well, if you leave your AMOLED screen on when charging, for example (it’s one of the options available in the Android settings screen), after a few weeks you’ll find things like the icons on your home screen and the Android soft keys burned into your display. In normal day to day use this burn might not be noticeable – at least, not until it starts to get really bad. But if you’re looking at a screen with a blank white background – using one of the minimum chrome word processors for example – you’ll notice some yellow/brown marks on the screen.
It’s not only leaving the screen on while charging – things like car cradles, docking stands, and even SatNav applications can all cause the same problem.
It’s very annoying, especially when you’ve paid hundreds of pounds for the latest top of the range phone, and yet it’s a problem that hardly ever gets talked about. When was the last time you saw a phone review where it mentioned the possibility of screen burn? When was the last time you saw phone packaging or a user manual which warned that the particular screen technology used in the device made it susceptible to screen burn problems? Probably never. I think it’s an issue that needs much greater publicity – consumers need to be told about the relative fragility of AMOLED displays, and that such phones and tablets shouldn’t be left with their screens on for long periods of time. So, readers of the Real World, please spread the word!
Healing the burn
There are various apps available that claim to fix displays which have screen burn problems – I just found at least a dozen across the various Android app stores. These work by flicking through a range of solid colours across the whole of the display. You're supposed to leave them running for hours or even days. I've tried several, with minimal success. I suspect they are simply wearing the rest of the screen out too.
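In fact the basic idea is simple enough that you can knock one up yourself. Here's a minimal sketch in Python/tkinter – my own toy version of what those apps do, not code from any of them:

    # Minimal 'screen fixer': cycle solid colours across the whole
    # display. Press Escape to stop. Leave running at your own risk!
    import tkinter as tk

    COLOURS = ["red", "green", "blue", "white", "black"]
    INTERVAL_MS = 1000  # how long each colour is shown

    root = tk.Tk()
    root.attributes("-fullscreen", True)
    root.bind("<Escape>", lambda e: root.destroy())

    def cycle(i=0):
        root.configure(background=COLOURS[i % len(COLOURS)])
        root.after(INTERVAL_MS, cycle, i + 1)

    cycle()
    root.mainloop()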
Radio Soup
Paul Ockenden explores a world of wireless signals.
Listen very carefully. Can you hear that noise? Can you hear the radio? No, I don’t mean the FM radio booming from the car driving past. Nor do I mean the mediocre sound of DAB wafting from the kitchen. I’m talking about all of the other radio signals buzzing around your head.
Of course you can’t hear them – not if you’re mentally stable, anyway – I like to assume that we don’t have any readers from the tinfoil hat brigade. Then again, you can’t hear ‘normal’ radio without some kind of receiver either. The right apparatus allows you to watch and listen to broadcast stations, and exactly the same is true for all of the other wireless signals in the air – with the right kind of kit you can start to see and hear them.
In order of increasing frequency, the electromagnetic spectrum goes Radio, Microwaves, Infra-Red, Visible light, Ultra-violet, X-rays, Gamma rays. My old physics teacher taught me a good way to remember this: Rabbits Mate In Very Unusual eXpensive Gardens. Well, I say good way, whenever I try to remember this I’m never sure whether it’s ‘very unusual expensive gardens’ or ‘very expensive unusual gardens’. Perhaps I’ve spent too much time visiting National Trust properties.
It’s the radio bit that we’re really interested in, and that’s generally thought of as stretching from 3kHz up to 300GHz, although the ITU (International Telecommunication Union – the UN agency responsible for information and communication technologies) splits the space into 12 bands stretching all the way up to 3THz (or 3,000GHz). Each band is an extra zero wide (so 3kHz-30kHz, 300MHz-3GHz, etc.), which keeps things nice and simple.
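That 'extra zero' pattern means band n simply runs from 0.3×10^n Hz to 3×10^n Hz, so you can generate the whole table in a few lines of Python:

    # The 12 ITU bands: band n spans 0.3 x 10^n Hz to 3 x 10^n Hz.
    NAMES = ["ELF", "SLF", "ULF", "VLF", "LF", "MF",
             "HF", "VHF", "UHF", "SHF", "EHF", "THF"]

    def pretty(hz):
        for unit, scale in [("THz", 1e12), ("GHz", 1e9), ("MHz", 1e6), ("kHz", 1e3)]:
            if hz >= scale:
                return f"{hz / scale:g}{unit}"
        return f"{hz:g}Hz"

    for n, name in enumerate(NAMES, start=1):
        print(f"band {n:2} ({name}): {pretty(0.3 * 10**n)} - {pretty(3 * 10**n)}")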
The first three ITU defined bands – ELF, SLF and ULF (extremely, super and ultra low frequency) – can be pretty much ignored, as they are mainly generated by natural phenomena such as lightning and earthquakes. ELF has been used for submarine communications, because the signal penetrates a fair distance through salt water. It can take hours to send a simple message (we’ll see why in a moment), but it gets delivered to boats operating hundreds of metres below the surface. The logistics are very complex though – the wavelength will typically be around a tenth of the circumference of the planet!
Obviously nobody is going to build an antenna that big (or even a ¼ wave dipole), so instead these systems use parts of the earth itself as an antenna. Huge poles are sunk tens of miles apart in areas of low ground conductivity, so the current penetrates deep into the earth. It’s really mind boggling engineering, and only the US and Russians have ever built such systems (Britain once planned its own system in Scotland, but it was abandoned). Oh, and because the transmitters are so huge it’s a one-way system – there’s no way that submarines can transmit back.
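The sums behind that mind-boggling wavelength claim are straightforward: wavelength is just the speed of light divided by frequency. Taking 76Hz, the frequency reportedly used by the US Navy's ELF system:

    # Wavelength = speed of light / frequency.
    C = 299_792_458              # speed of light in m/s
    EARTH_CIRCUMFERENCE_KM = 40_075

    def wavelength_km(freq_hz):
        return C / freq_hz / 1000

    lam = wavelength_km(76)      # 76Hz: the US Navy's reported ELF frequency
    print(f"{lam:.0f}km - about 1/{EARTH_CIRCUMFERENCE_KM / lam:.0f} "
          f"of the Earth's circumference")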
The first band that you might think of as ‘radio’ is VLF (band 4, very low frequency, 3-30kHz), but its frequency is still too low for voice based communications, because the carrier always has to be at a higher frequency than any signal you want it to carry. That’s true whatever kind of modulation you use (amplitude modulation, AM, and frequency modulation, FM, being the most common, although there are others too), and whether we’re dealing with analogue or digital signals – although of course it’s possible to bend the rule slightly by compressing digital data before transmission. Because of this rule, though, VLF is only really suitable for slow, low bandwidth data signals.
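To see that carrier rule in action, here's a toy amplitude modulation example in Python – the frequencies are picked purely for illustration:

    # Toy AM: s(t) = (1 + m * message(t)) * cos(2*pi*fc*t).
    # The carrier (fc) must wiggle faster than anything in the message (fm).
    import numpy as np

    fc, fm, m = 100.0, 5.0, 0.5           # carrier Hz, message Hz, mod depth
    t = np.linspace(0, 1, 10_000)         # one second, finely sampled
    message = np.sin(2 * np.pi * fm * t)
    am = (1 + m * message) * np.cos(2 * np.pi * fc * t)

    # The envelope of 'am' traces the message. Swap fc and fm round and
    # the message can no longer be recovered from the envelope.
    print(am.max(), am.min())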
Next we find LF (band 5, low frequency, 30-300kHz) whose main use is for aircraft beacon signals and weather systems, although you’ll also find good old long wave radio (familiar to those who like cricket or church services, neither of which are particular favourites of mine) sitting at the top end of the band. Remember, low frequency and long wavelength go together – as one number goes down the other goes up. Just visualise kids creating standing waves in a skipping rope – as they wiggle their hands faster (higher frequency) an additional wave is introduced, so the distance between the peaks is reduced (shorter wavelength).
The MF (ITU band 6, medium frequency, 300kHz-3MHz) band comes next. Its main use is for medium wave radio (does anyone still listen to MW?). MF also contains the 160m amateur radio band, and there are a few navigation and global distress system uses too. Next up is HF (ITU band 7, high frequency, 3-30MHz), and this is what many people think of as shortwave radio. You’ll find both broadcast radio stations and amateurs using the band, as well as military uses and aircraft to ground communication. Also, because of the way HF propagates (it reflects – or, more accurately, refracts – off the ionosphere and bounces back to earth) the band is also used for over-the-horizon radar systems. Although the crude resolution of these systems makes them useless for targeting, they are still used (despite all of our modern satellite wizardry) for defence early warning systems.
After HF comes VHF (ITU band 8, very high frequency, 30-300MHz). You’ll find the FM radio band here, alongside amateur radio bands, air traffic control and instrument landing systems. And of course we used to have TV in this band too, but that moved in the 1980s freeing up the frequencies now used by our woefully inadequate DAB radio system. Actually, DAB appearing at the top end of the VHF band is important, as it shows that we’re now getting to the part of the spectrum which is most useful for data communication. The so-called ‘digital sweetspot’.
A major part of that sweetspot is the UHF (ITU band 9, ultra high frequency, 300MHz-3GHz) band, and it’s here that you’ll find TV broadcasts (now fully migrated to digital, of course), mobile phone signals (GSM, 3G and most of the 4G flavours), good old fashioned wi-fi, the TETRA trunked radio system used by the emergency services, DECT cordless phones, Bluetooth, wireless sensors for things like weather stations and energy monitors, and a few amateur radio bands. We’re starting to get into the microwave spectrum at the top end of this band. It’s a very crowded space, but as you can see, most of it is digital signals these days, which makes it so much easier to pack more stuff into the available bandwidth. These are the radio frequencies that usually concern the things I write about in this column.
But anyway, onwards and upwards – we might as well be complete. Next comes the SHF (ITU band 10, super high frequency, 3-30GHz) band. You’ll find 5GHz wi-fi here, and satellite TV downlink signals too. Almost all modern radar systems use SHF, and a massive chunk (almost a third) of the band will be used by wireless USB, as it becomes widespread. The band is great for very directional short range data, and recent developments in microwave integrated circuits mean that the signal processing can happen directly in silicon, rather than a processed signal having to be mixed with a high frequency carrier. Where UHF is the band for ‘now’, I expect that SHF will very much be the band of the future, with more and more of our data signalling moving into this space.
The last but one of the official ITU bands, and the last really usable one, is EHF (band 11, extremely high frequency, 30-300GHz). The wavelengths in this band are between one and ten millimetres, and the signals suffer extreme attenuation in the atmosphere, so the band isn’t generally considered suitable for long range communication. There are some holes in this attenuation though – the problem is caused because we’re starting to hit the resonant frequencies of particular molecules. Oxygen, for example, has a huge absorption peak at around 60GHz. Despite that, the upcoming Wi-Fi standard 802.11ad is actually designed to work at 60GHz, because at LAN scale distances the oxygen absorption is less of an issue.
In fact, the attenuation is a benefit because it means that 60GHz can only be used for short distance links, and so we don’t have to worry about interference – at least, not with terrestrial applications. The same frequencies can be re-used nearby. As a result some countries allow unlicensed use of 60GHz.
Move slightly away from the oxygen absorption peak and the attenuation quickly drops off. These frequencies are starting to be deployed for very high bandwidth communication links. Because the frequency is so high it’s possible to pack in much more data than you could with a longer wavelength carrier.
Those famous airport scanners that see through your clothes also work in the EHF band, but more worrying than that is a reported use of this band as a weapon. The US is alleged to have a weapon which fires a directional beam of 3mm radiation at high power. This is reported to cause an extremely painful burning sensation, as if the target were on fire, and yet no physical damage is caused. I used to work in the defence industry, but ‘defence’ is really a euphemism – it’s really offence, and I find stuff like this American weapon very offensive. No physical damage maybe, but just imagine the long term psychological damage if you’d been subjected to it. Sorry, rant over!
Finally we come to THF (ITU band 12, tremendously high frequency, 300GHz-3THz). We’re almost getting into the light spectrum here – just above THF sits infra-red (remember Rabbits Mate In…). THF is used mainly for medical imaging, and although there has been a ‘proof of concept’ experiment to transmit data in this band, any real world application will be decades away, if not longer.