Recently I have had cause for complaint with our Samsung LE40M86BD LCD Television. It had been working just fine, and then all of a sudden it developed a power-cycling problem. The symptom is as follows:
From power-on, the TV works perfectly for around 10 minutes.
After a while the TV switches itself to standby, waits a few seconds, and then switches itself back on.
Once this starts happening the cycle repeats itself every 30 seconds.
I decided to take a look, and that’s what this blog post is about!
A Peek Inside
I took the back off, and it never ceases to amaze me how little there is inside modern TVs. They are far more complicated than older TVs of course, but all the technology is packed into densely populated embedded systems.
In this photo you can see the two main parts of the Television. Near the centre is an off-white coloured circuit board; that’s the main Power Supply. To the right of the Power Supply is a similarly sized green circuit board. This board is the heart of the Television. It’s basically a custom computer!
Straight away I noticed something suspicious on the Power Supply board; nasty looking electrolytic capacitors! Let’s check them out:
The Power Supply Board
Here’s a photo of the power supply, on the bench. Now, any time you see electrolytic capacitors mounted right next to a heatsink as they are here, you simply have to be suspicious of them, especially in older equipment. That big old heatsink pumps heat into those capacitors day in, day out. And if there’s one thing electrolytic capacitors don’t respond very well to, it’s long term heating.
Two of these capacitors, highlighted in the image above, are showing the classic signs of electrolyte degradation. The tops of the cans are bulging at the seams.
Sometimes this type of capacitor will also leak electrolyte, which can be very bad news indeed. In this case, it’s just the classic bulging.
At this point I decided to replace all of the electrolytic capacitors in the local area; a capacitor can sometimes be bad without displaying any obvious physical signs, and all of them will have been subjected to the heat pumped out by the nearby components.
After this I was quite hopeful of a quick and easy repair. But my hopes were dashed when I discovered that the TV was still power-cycling after a few minutes of use.
So, what to do? Well, I decided to be a bit more scientific about it from now on. I got my ‘scope out and checked each of the power supply rails generated by the PSU board. I discovered two things:
The Power Supply rails were now rock-solid. They probably weren’t before I changed those nasty capacitors, but they definitely were now.
I could run the Power Supply into a load, away from the rest of the TV, and it never power-cycled.
So… the original fault was not on the Power Supply board then.
As a point of interest, I discovered an input control pin on the Power Supply called “ON/OFF”, which is driven from the main system. I decided to take a capture of it and I discovered that my estimate of ~30 seconds power-cycling was almost spot on:
You can see here that the TV stays on for 28.7 seconds, then switches OFF, and immediately back ON. The cycle repeats with exactly the same period over and over again.
So, this got me thinking. The fault is surely on the main circuit board, as this appears to be deliberately instructing the Power Supply to turn OFF at regular intervals. Let’s take a look!
The Embedded System
So, at first glance, there isn’t really much for me to get my teeth into here. There was next to no information about this board on the internet. I found a schematic, but it was more block-diagram level than anything else.
The photo above shows the board with the screening can removed, revealing the microprocessor underneath. I took the screening can off because I noticed a bunch of SMD electrolytic capacitors and I wondered if they had been getting a little hot under the collar over the years.
An inspection of all the SMD electrolytic capacitors didn’t reveal anything suspicious; no bulging or evidence of leakage at all.
However… that isn’t particularly definitive. Let’s see what happens if I try to measure the ESR (Equivalent Series Resistance) on some of these capacitors!
So, the ‘scope capture above shows the voltage drop across C1104 when stimulated with a 100kHz square wave at 1V peak-peak (50Ω output impedance). C1104 is a 100uF capacitor, so the voltage drop at this frequency on a healthy capacitor should be close to zero. What do we see instead? 286mV!
If you do the math (treating the capacitor as the bottom leg of a potential divider fed through the generator's 50Ω output impedance), this comes out at roughly 20Ω of ESR. A horrendously bad capacitor!
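As a sketch of that arithmetic (this assumes the 1V figure is the generator's open-circuit amplitude; with other generator amplitude conventions the absolute number shifts, but it still lands orders of magnitude above what a healthy 100uF part should show):

```python
import math

# ESR estimate from the potential-divider measurement (a sketch; the 1 V
# drive is assumed to be the generator's open-circuit amplitude).
R1 = 50.0      # generator output impedance, ohms
VS = 1.0       # open-circuit drive amplitude, volts
VOUT = 0.286   # amplitude measured across the capacitor, volts

# Voltage divider: VOUT = VS * R2 / (R1 + R2)  ->  solve for R2
r2 = R1 * VOUT / (VS - VOUT)
print(f"Apparent capacitor impedance ~= {r2:.1f} ohms")

# For comparison, the reactance a healthy 100 uF part presents at 100 kHz:
xc = 1.0 / (2 * math.pi * 100e3 * 100e-6)
print(f"Ideal 100 uF reactance at 100 kHz ~= {xc * 1000:.2f} milliohms")
```

Tens of ohms against an expected sixteen milliohms: the capacitor is doing almost nothing.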
At this point I went all around the board measuring ESR on the SMD capacitors. I was able to measure the majority of them in-circuit, and I found a whole bunch of bad caps. I replaced them all.
After this, the TV is finally working properly. The fault has not re-appeared in over a week.
Last Christmas I bought a Velleman MK170 Christmas Star project for my Wife to build. She often shows interest in my various electronics projects and I thought it would be a good way for her to build something that she could show off to friends and family, whilst at the same time teaching her a thing or two about electronics, components, and soldering.
After the build was completed we connected it to a 9V D.C. supply, as instructed by the documentation, and…
I was quite disappointed! The project was definitely functional, but the display was extremely dim and underwhelming. At first I was convinced we’d done something wrong during the assembly, but nope; everything was as it should be!
The camera flatters the result. To the naked eye the brightness of the LEDs is completely unsatisfactory.
I decided to revisit the project for this year. I downloaded the schematic from the Velleman website, and took a look to see what was going on.
Looking at the schematic (click the image for a larger version) you can see that we basically have three sets of LEDs, each of which is split into sub-groups of series and parallel LEDs.
Now, one thing that jumps out straight away is the value of the series resistors; they’re HUGE!
Take, for example, the series chain consisting of LED1, LED2, LED3, LED4. I measured one of these individual LEDs and they don’t even begin to conduct until ~1.7V. So to work out the current through these resistors we have:
(9V – (1.7V * 4)) / 2200R = 1mA!
I looked up the datasheet for the LEDs and I found that the typical forward current should be around 10mA. Practical tests showed that these LEDs conduct 10mA at around 2V forward voltage.
So, to work out more appropriate resistor values:
Four Series LEDs
For the chains with four series LEDs, the forward voltage drop will be 2 * 4 = 8V.
This means the drop across the current limiting resistor will be 9 – 8 = 1V. In order for 10mA to flow, we’ll need a 100R resistor.
100R!! The 2k2 resistors that Velleman fitted are not even in the ball-park!
Two Series LEDs
The same process can be used to determine the appropriate value for the chains with two series LEDs. The drop across the resistor ends up being 5V, so you need around 500R for the current limiting resistance.
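Both chain types boil down to the same three lines of arithmetic, using the measured operating point of ~2V at 10mA:

```python
# Series-resistor sizing for the LED chains, using the measured operating
# point of ~2 V forward drop per LED at the target 10 mA.
V_SUPPLY = 9.0    # supply voltage, volts
V_F = 2.0         # per-LED forward voltage at 10 mA, volts
I_TARGET = 0.010  # target LED current, amps

def series_resistor(n_leds: int) -> float:
    """Resistor needed to drop the leftover voltage at the target current."""
    v_drop = V_SUPPLY - n_leds * V_F
    return v_drop / I_TARGET

print(series_resistor(4))  # 100.0 ohms for the four-LED chains
print(series_resistor(2))  # 500.0 ohms for the two-LED chains
```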
I really don't know what Velleman were thinking with this project. Their choice of current limiting resistor values is way off. I can only imagine that their intention was to reduce the current down to the absolute minimum, so that the project could be powered by a standard PP3 battery for relatively long periods of time.
That’s all very well, but the result is a project with totally unsatisfactory LED brightness.
I changed all the resistor values on ours, and now we are able to show off the project along with all our other Christmas decorations.
The current consumption on my unit, after my modifications, is 120mA when all LEDs are lit. Not all the LEDs are lit all of the time, but I guess it would be reasonable to assume >90mA average current consumption.
A standard 9V PP3 battery is going to be drained very quickly at this current consumption. I would estimate 1-2 hours of use before the battery goes flat!
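As a rough sanity check on that estimate (the PP3 capacity figures here are my own ballpark assumptions, not measurements; alkaline 9V blocks are rated around 500-600mAh at low drain but deliver far less at a ~100mA load):

```python
# Rough runtime estimate for an alkaline PP3 at the modified current draw.
# Capacity figures are assumptions: ~550 mAh is a typical low-drain rating,
# and alkaline 9 V cells deliver much less at a ~100 mA load.
I_AVG = 0.090           # amps, average draw estimated above
CAP_LOW_DRAIN = 0.550   # amp-hours, nominal low-drain rating (assumed)
CAP_HIGH_DRAIN = 0.150  # amp-hours, effective at ~100 mA (assumed)

print(f"ideal runtime:     {CAP_LOW_DRAIN / I_AVG:.1f} h")
print(f"realistic runtime: {CAP_HIGH_DRAIN / I_AVG:.1f} h")
```

Even the ideal figure is only a few evenings of use, and the high-drain reality is closer to the 1-2 hours quoted above.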
That’s probably why Velleman used such high resistor values. But my take on this is that it was a poor design decision to build the project around a PP3 battery in the first place. These batteries have very low capacity, so they were on to a loser from the beginning.
We are powering ours from a 9V DC adaptor, which is the only realistic way forwards for a project like this.
Recently I was asked to produce a circuit to create a variable ‘dim’ control for an existing LED based exhibition.
I decided to go with a simple 555-timer based design which provides control of the pulse width of an output, whilst keeping the oscillation frequency fixed.
The circuit is shown below. It’s an unusual design, because the output is taken from pin 7. This ‘output’ is not capable of driving current (at least not without affecting the circuit behaviour) so you have to be careful with the connections you make to it.
The circuit works because of some very simple rules:
When pin 7 is released (pulled high), the output pin (3) is also high.
When pin 7 is pulled low, the output pin (3) is also low.
Pin 7 will be pulled low when the trigger/threshold pin rises above 2/3 of VCC.
Pin 7 will be released when the trigger/threshold pin falls below 1/3 of VCC.
When the circuit is first powered on, pin 7 is released because C2 is discharged and the trigger/threshold pin sits below 1/3 of VCC. Pin 3 is therefore high, and C2 begins to charge via D1. When C2 reaches about 2/3 of VCC the discharge transistor pulls pin 7 low, and pin 3 goes low with it. Now C2 discharges through D2 until its voltage falls below 1/3 of VCC and then the cycle repeats as an oscillation.
The resistance in the charge and discharge cycles is controlled by RV1. When the resistance is increased for the D1 side of RV1 the resistance falls by the same amount on the D2 side of RV1. This changes the relative ‘speed’ of the charge/discharge cycle, which has the practical effect of pulse width adjustment.
Since the total resistance of RV1 (for the total charge/discharge cycle) is constant, the frequency of the output is stable and is determined by the values for RV1 and C2.
The frequency of the output is governed by a simple formula:
FREQUENCY = 1.44 / (RV1 * C2)
The power switch is a PMOS FET. It controls the switching of a 12V feed onto the existing LED array at the exhibition. RV1 is adjusted to provide the desired brightness for the display, and LED D3 provides an indication of the level of ‘dimming’ that the circuit is currently providing.
The circuit oscillates at 100Hz, so there is no perceived flicker.
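The constant-frequency, variable-duty behaviour can be sketched numerically. The post doesn't give the RV1 and C2 values, so the ones below are assumptions chosen to land near the stated 100Hz:

```python
# Duty-cycle / frequency sketch for the diode-steered 555 dimmer.
# RV1 and C2 are assumed example values (the real board's may differ).
RV1 = 100e3  # total potentiometer resistance, ohms (assumed)
C2 = 150e-9  # timing capacitor, farads (assumed)

def timings(wiper: float):
    """wiper in 0..1 splits RV1 between the charge (D1) and discharge (D2) paths."""
    ra = wiper * RV1        # charge-path resistance via D1
    rb = (1 - wiper) * RV1  # discharge-path resistance via D2
    t_high = 0.693 * ra * C2
    t_low = 0.693 * rb * C2
    period = t_high + t_low  # = 0.693 * RV1 * C2, independent of wiper position
    return 1 / period, t_high / period

for w in (0.1, 0.5, 0.9):
    f, duty = timings(w)
    print(f"wiper={w:.1f}: {f:.0f} Hz, duty={duty:.0%}")
```

Moving the wiper changes only the duty cycle; the frequency stays fixed because the total charge-plus-discharge resistance is always the full RV1.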
Freerouting was a web application created and maintained by Alfons Wirtz at his website, here. It allowed us to import a design file from kicad and then auto-route the project based on some design preferences. It was basically a free auto-router for kicad. Hence the name!
For “various reasons” the author has decided to drop the project, and it is no longer available as a web application at his site. However, he has been kind enough to open-source the project and he encourages users to run it as a local application.
There are some wikibooks instructions for installation of the Freerouting application here. I recently attempted to follow these instructions, but there are some stumbling blocks which prevented me from getting a working application running. I believe these same stumbling blocks will face many other people; at least those who decide to install it on a recent Ubuntu variant.
This blog post will detail all of the steps necessary to run a working local copy of Freerouting. My instructions are based on those at Wikibooks, but with some extra steps to get around the stumbling blocks.
You can skip this part if you already have installed a recent version of:
Otherwise, read on!
Installation of Git
Use your package manager, as follows:
sudo apt-get install git
Installation of Java JDK & Netbeans
This post used to direct you to install the JDK and Netbeans packages separately, but all of a sudden this method stopped working. When I queried it with the Netbeans team they refused to admit any problem, but suggested the “combined package” instead. Well, I don’t agree there isn’t a problem but the combined package does work so let’s do that instead!
First you need to locate the JDK & Netbeans combined package. At the time of writing this post, you can get it here.
Once you’ve downloaded it, you need to make sure that you have permissions to execute it. To do that, ‘cd’ to the directory where you downloaded the installer, and then type the following command, substituting for your version of netbeans if it’s different from mine:
chmod u+x jdk-8u111-nb-8_2-linux-x64.sh
Then execute the installer (again substituting your version if it differs):
./jdk-8u111-nb-8_2-linux-x64.sh
You should see a screen something like this:
Walk through the installer dialogue to completion.
Installation of Freerouting Dependencies
The freerouting application depends on a couple of things to work. It requires jh.jar and netx.jar. To get those, install the following:
Now you need to download freerouting. It’s available on github, so you just need to ‘cd’ to a directory where you want to download the project, and then enter the following command:
git clone https://github.com/nikropht/FreeRouting
Loading FreeRouting into NetBeans, Compiling and Running
Now you need to start netbeans (should be under ‘Development’ in the application launcher) and import the FreeRouting project.
With netbeans open, select File -> New Project or use the keyboard shortcut CTRL-SHIFT-N.
In the Categories window, select ‘Java’. In the Projects window, select ‘Java Project with Existing Sources’.
Give your project a name. e.g. FreeRouter, and choose a path. e.g. $HOME/programming/netbeans_projects/
In the Source Package Folders area, select Add Folder and browse to the place where you downloaded the FreeRouting sourcecode from git.
Select Finish. Netbeans will create the project.
Select File->Project Properties.
In the categories area, select Libraries. Then, with the compile tab displayed in the area on the right, select ‘Add JAR/Folder’. Browse to and choose /usr/share/java/jh.jar.
This next part is not explained in the wikibooks instructions I linked to at the beginning of the post. But it is absolutely necessary in order for FreeRouting to work. While still in the Libraries area, Select ‘Add JAR/Folder’ again. Now browse to /usr/share/icedtea-web/netx.jar.
Now, contrary to the instructions in the wikibooks link, we are NOT going to use Web Start:
While still in project properties (File->Project Properties), select from the categories area, underneath Application, ‘Web Start’. Make sure this check box is NOT enabled. Then Select ‘Run’ from the categories area and make sure the Configuration pull down menu is set to <default config>
Select OK to exit Project Properties.
Now from the netbeans main menu, choose Run and select “clean and build project”. If all has gone well, it will conclude with “BUILD SUCCESSFUL”. Now you can run the application: select the green triangle or press F6 to run the project. You should see the following:
Now you just need to learn how to use Freerouter! I got it routing a PCB pretty quickly without instructions so I think it is pretty intuitive to run. However, it may not be so intuitive to choose all the correct options for your PCB. That’s for another post, maybe.
A few years ago I created a little Valentine's Day project, shown in the video at the end of this post. It's basically just a multivibrator circuit with two groups of LEDs, arranged as an inner and outer heart:
The full schematic is shown here, click on it for a full enlarged display:
Finally, if you want to download all the project information, you can get it here. This project is considered to be in the public domain.
I’ve thought about doing a new version actually, with micro-control of the LEDs for brightness fade in/out etc.
In the tinkering cave this week we have a Panasonic AE700 HD Projector. These units are pretty old now, but they come highly recommended in many online reviews and the HDMI support means they remain a useful home cinema option.
The unit was first presented to me by a work colleague who complained that the unit would not illuminate. It powers on just fine, but returns to standby after about 30 seconds without displaying any picture. My colleague had already completed some diagnosis of his own and had determined, correctly as it would turn out, that the power supply is not generating a 15V rail – which is required for the lamp circuitry to work.
With the cover removed, it can be seen that this unit is a bit of a beast! It's packed with a mixture of high-tech electronics, complicated optics, and extensive cooling components.

Dismantling is slightly tricky; you have to remove the top sensor board first (the one with the buttons on it) and then you need to very carefully remove the dark grey cowling. The power supply sits underneath, which is why the cowling must be removed for further diagnosis to take place. There is a wiring loom which connects the PSU to the main board, and this will need to be disconnected temporarily (at the main board end) because it feeds through an opening in the cowling. You can connect it back after the cowling has been removed.

With the cowling removed, you can now take a look at the PSU. "Let the dog see the rabbit", as they say.
The first rule of electronics diagnosis is "Thou shalt inspect!" In this case there were no signs of explosion, trauma, or electronic stress of any kind to be found on the board. So it's time to get the schematic out. Fortunately, the service manual for this unit was 'freely' available on the interweb. You can download my local copy of the service manual here. You might see a warning about "untrusted connection" – that's because I can't justify the expense of an SSL certificate. But I assure you there are no harmful files in my repository!
The schematic for the power supply is a little bit limited; it shows some discrete components but other circuitry is presented in block diagram form. Still, there’s enough information to start a line of diagnosis. The circuit I am interested in is shown to the left; click on it to open up a larger copy.
The second rule of electronics diagnosis is: "Thou shalt measure voltages!" The circuit diagram shows P3 as the main connector – this is the wiring loom that had to be disconnected to remove the cowling. With the wiring loom connected back in, I measured the following voltages on P3:
[table id=1 /]
Nothing to be too concerned about there. Here’s the voltage measurements for P2:
[table id=2 /]
As can be seen – no 15V! Looking at the circuit, we have an output from the photoisolator which drives transistor Q107. The transistor is switching a voltage (which I measured to be 18V) to IC102 which is a 15V regulator.
One thing that I found really suspicious about this is that we have a tiny transistor (only a small signal device, as it turned out) driving a much larger linear regulator. Going by the circuit we are looking at, the small transistor has to pass the same current as the linear regulator. I don't know what current the linear regulator is supposed to supply, but it definitely has a much larger current carrying capability than the transistor which feeds it!!! It stinks of poor design choices to me. And Q107 has now become the prime suspect for this failure.
And, sure enough, a quick measure of transistor Q107 revealed that it had indeed succumbed to its (inevitable?) destruction. 18V in, 0V out.
To remedy this, I replaced the component with one I had in my junk bin. The choice of transistor is not too critical here; it’s just switching a voltage. So I chose a ‘beefier’ device, which will happily support the current carrying requirements of the linear regulator – hopefully for the rest of this projector’s life.
A brief triumph – and a gift!
The projector worked after this, and I handed it back to its owner. Unfortunately it was returned to me a few weeks later with the same complaint. I was told that I could have the projector to play with, and keep if I could get it working! 🙂
I re-visited the power supply expecting a repeat problem, but everything checked out. Since the 15V supplies a H.T. power supply circuit for the lamp, and the symptom was that the lamp doesn’t illuminate, I began to wonder if the H.T. power supply needed some attention. Unfortunately this supply is cocooned inside metal shielding which does not seem to want to come apart very easily. As I was poking around with my screwdriver, I noticed that the projector had suddenly begun to illuminate, but she only fired up for a few seconds before there was an arcing sound and the projector switched back off. At the time I had been poking around a flat-cable signal loom which comes from the H.T. power supply and connects to the main board. I noticed that this loom was routed inside the H.T. leads for the lamp! This is almost certainly where the arcing had occurred!
I re-routed the small signal loom so that it gives the H.T. leads a wide-berth, and tucked it into the scart PCB. I imagine that the cable had become disturbed through repeated dismantling of the unit and had found itself tangled in with the H.T. leads somehow. You know cables, right? They tangle themselves up when you’re not looking!
After this I switched on the projector and voila! We have illumination. I took this opportunity to play Karate Kid: Classic.
Result! One working projector, saved from the scrap-heap.
I noticed, recently, that KiCAD footprint libraries now carry the .pretty extension, and that these files cannot be read by the latest stable release of KiCAD. In order to make use of these libraries it is necessary to install a recent build of KiCAD, which has undergone a significant overhaul of its footprint library support.
As a new Ubuntu user, it was not clear to me how one would go about installing a daily build of KiCAD. The instructions on the KiCAD website state:
Old stable should be in the official Ubuntu repo. Daily builds are available in js-reynaud’s PPA.
But how can you go about adding this PPA? And, once the PPA has been added, how do you then use it to install a new-build of KiCAD?
Here are the steps you need to take.
Adding the PPA
First you need to add the PPA. You are going to run a command that will add a new PPA for KiCAD to your linux sources list, so that apt-get will use it to install future versions of KiCAD. To add the PPA, open a terminal and type:
sudo add-apt-repository ppa:js-reynaud/ppa-kicad
Follow the on-screen instructions, and your result should look something like:
Install daily build
Once the PPA has been added, you first need to update apt so that it knows about the new versions of KiCAD in your PPA. To do that, type:
sudo apt-get update
You will see a lot of output in the console. Don’t worry; the apt service is just busy hitting all of your sources and updating itself so that it knows about all the latest software. When it gets to the PPA you just added, it’ll update its knowledge of available KiCAD versions.
Once apt has updated, installing the daily build is then as simple as typing the following into your terminal:
sudo apt-get install kicad
Follow the instructions, agreeing where necessary, and then you should be up and running with a new version of KiCAD.
I’ve owned the JVC TH-S5 home theatre system for many years. It could be as much as 8 or 9 years. In all that time it’s been a reliable machine, although to be fair it’s also had quite an easy life; I don’t watch much TV (what self-respecting Engineer and tinkerer has time for that?) and even when I do I rarely give the system a run for its money.
About 6 months ago the system developed a problem. When started from cold the base unit (i.e. the DVD player and system controller) would fail to power on. I also noticed that if I listened very carefully I could hear a [tick-tick-tick-tick] sound coming from the unit. I recognised this straight away – it's the switch mode PSU trying (and failing) to start. I was about to take the unit away to my tinkering cave for some diagnosis but I further noticed that the [tick-tick-tick] sound would gather pace, getting faster and faster, until eventually it was possible to bring the unit to life after a power-cycle.
The unit would work perfectly from that point onwards, provided that it was left connected to the mains.
Of course, I always knew this was a temporary solution. This problem was not going to go away. It was going to deteriorate for sure, and eventually I would be faced with a completely dead system. And sure enough, 6 months down the line, that’s what I’m faced with now!
First things first, I needed to get the cover off for an inspection. With an old system like this I already had some preconceptions about what I thought I'd find. Bad electrolytic capacitors were absolute top of my list for this kind of symptom. Dry joints were a close second.
As can be seen here, the power supply is self-contained on the right hand side of the unit. This is where the focus of the attention should be. An initial inspection yielded a disappointing result. With bad electrolytics at the height of my suspicion, I was hoping to spot one or two displaying the classic physical symptoms of bulging or weeping. This would be a sure sign of trouble, and usually an easy fix. The capacitors all looked physically healthy, though. Time to whip the board out of there for a closer look.
Looking more closely at the PSU board itself, there were no obvious (physical) causes for concern. I didn’t spot any dry joints on the underneath of the board either. The focus of this investigation should be the primary side of the PSU because the symptom is that the PSU completely fails to start. The primary side is marked clearly on the board and consists of everything to the left of the yellow-banded transformer. The transformer bridges the electrical gap between primary and secondary sides of the PSU. It’s an electrical gap because there is actually no direct electrical connection between primary and secondary sides. The two sides are said to be electrically ‘isolated’. This is important because we have dangerous high voltage mains A.C. on one side of the transformer, and then low voltage rectified D.C. on the other side of the transformer. Never the twain shall meet!!!
Anyway, back to the fault diagnosis. Since I am concentrating for now on the primary side of the PSU, and I have electrolytic capacitors as #1 on my suspicion list, it makes sense to have a look for some. There’s only two of these on the primary side of the board; there’s the big fat one, which is a reservoir capacitor (and certainly not the cause of my trouble) and then there’s a small skinny one next to the chopper transistor heatsink. The chopper transistor drives the transformer. It switches high voltage, high frequency A.C. and dissipates significant power. If you look at the surrounding board, it’s darkened brown from the heat that is generated by these components. This is where all the electrical stress is to be found on this circuit.
Since the skinny electrolytic capacitor is mounted close to the chopper transistor heatsink, it will have been subject to the heat that has been pumped out of the high energy part of the circuitry over the last 8 years or so. Electrolytic capacitors do not respond well to heat. Often they bulge and weep, effectively holding their hands up to say “I’m faulty!!!”. Other times they violently explode. And other times still, they just die quietly without any fuss or obvious signs of defect. They’re a bit like vampires in this regard; many different dying behaviours.
So the fact that it's not showing signs of death doesn't rule it out of my suspicion. I'm going to whip that little guy out of there and subject it to some electrical tests which will reveal once and for all whether it's healthy or not.
Capacitor ESR Test
One thing you can measure on an electrolytic capacitor, which generally gives a good indication of its health, is its 'ESR'. This stands for "equivalent series resistance". An ideal capacitor would have properties of capacitance only, with no ESR and no inductance. But it's impossible to manufacture the ideal capacitor. In practice all capacitors have some small amount of ESR and inductance. Electrolytic capacitors have relatively high ESR compared to some other types of capacitor, but when healthy the ESR is still pretty low. When unhealthy, however, the ESR increases significantly. We then end up with a very poorly performing capacitor indeed, and that's when it begins to prevent SMPS power supplies from starting.
The test I am going to perform here is going to tell me approximately what this capacitor’s ESR measurement is. I don’t actually own an ESR instrument, so I’m going to have to measure it in another way, using SCIENCE. 🙂
Here’s the circuit which shows how I will complete the measurement. Click on it for a larger version.
So what I’ve got here is as follows:
A signal generator which I will use to apply a measurement stimulus to the capacitor under test. I will be applying a 100KHz square wave @1V amplitude. At 100KHz most electrolytic capacitors appear as close to a short circuit, provided that their ESR is low. This particular device has a capacitance of 39μF. If you do the math, the impedance that it should present to a 100KHz signal is 1 / (2π × 100KHz × 39μF) ≈ 0.04Ω.
The 100KHz signal is injected via a known output impedance of 50Ω. This is the output impedance of the signal generator. I then measure the resulting signal that appears across the capacitor under test with an oscilloscope. The 50Ω signal generator and electrolytic capacitor effectively form a 'potential divider'. The capacitor becomes 'R2' in the equation: Vout = Vs × R2 / (R1 + R2)
where Vout is the voltage measured across the capacitor with the oscilloscope, Vs is the supply voltage (1V in this case), R2 is the capacitor's impedance, and R1 is the signal generator output impedance.
If the capacitor's ESR is low, then at 100KHz it will only present about 0.04Ω of impedance. If you plug that into the formula above, you'll see that we should only expect a minuscule voltage at 100KHz.
If, however, the capacitor's ESR is high, then the impedance it presents will be far, far greater than 0.04Ω, and in that case we would see a fraction of the 1V square wave voltage being developed across it. From this measurement we can solve the equation for R2, effectively providing us with a measurement of the capacitor's ESR.
First let me do some measurements without the bad capacitor fitted, so that I can show you what to expect to see in a good scenario.
Measurement with no capacitor fitted.
With no capacitor fitted at all, the resistance of the capacitor (R2 in the potential divider equation) is effectively infinite, so the voltage dropped across it will be the open-circuit voltage. Basically the off-load output of the signal generator. We should therefore expect to see the full 1V, 100KHz output as shown below:
Measurement with brand new (good) electrolytic capacitor fitted:
Here’s what happens when you insert a good electrolytic capacitor into the circuit. It presents close to 0Ω resistance at 100KHz, so the fraction of voltage that appears at the oscilloscope is also close to zero:
If you look closely here you can just about see a square wave appearing on the flat-line. If I zoomed into this I could calculate the capacitor’s ESR, which would be small. But it does have some ESR.
There are also some rather large spikes. This is because the step change from 0V to 1V is relatively high speed, and this signal is reflecting back down my cabling as a result. Suffice to say I don’t need to concern myself with the spikes – they are interesting, but they are not related at all to this measurement.
Measurement with capacitor removed from system
Here is what happens if I put the capacitor I removed from the system in the circuit. You can see that a very large square wave is being developed across it! Almost 500mV, in fact. I could do the math here, but I really don’t need to. If you see a square wave of this sort of amplitude being developed at 100KHz, then it’s clear that the capacitor has excessive ESR and is no longer performing very well. As a quick mental approximation, we’re seeing a fraction of the 1V square wave being developed which is almost equal to 0.5. So this means that the impedance of the capacitor must be close to 50Ω, since a potential divider with the same value resistance for R1 and R2 would give us an output of 0.5Vs (or 500mV in this case).
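That mental approximation can be checked against the potential divider formula, using the 1V drive and 50Ω source impedance of the test setup:

```python
# Check of the mental approximation: ~500 mV of the 1 V drive appearing
# across the capacitor implies its impedance roughly equals the 50 ohm source.
R1 = 50.0    # generator output impedance, ohms
VS = 1.0     # open-circuit drive amplitude, volts
VOUT = 0.5   # amplitude measured across the capacitor, volts

# Rearranged potential divider: R2 = R1 * Vout / (Vs - Vout)
r2 = R1 * VOUT / (VS - VOUT)
print(r2)  # 50.0 ohms -- vastly above the ~0.04 ohm a healthy 39 uF presents at 100 kHz
```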
Replace the capacitor!
With this in mind it’s definitely time to replace the capacitor. I didn’t have any 39μF capacitors in my home stash, but I had a 47μF so I decided to try that. It did in fact work fine, and my system is now up and running again.
If you suffer a power supply problem with an aged piece of electronics, electrolytic capacitors are a good place to start. They can fail on the primary or the secondary side, but will show different fault symptoms in the product depending on their function in the circuit. Sometimes it is plainly obvious which ones are faulty; they’ve either exploded, or they’re bulging at the top, or leaking fluid. Other times, as I’ve just shown here, they look okay physically but are hiding a nasty internal decline in performance. In that case you can use this method to gauge whether they require replacement or not. You can actually do this test in circuit, but it’s not recommended; it’s always better to remove them first if possible.
In KiCAD, a problem occurs when you try to create a new schematic symbol for a custom library and you happen to give it the same name as another part that exists in a different library.
For example, let’s say you decide to create a part called “MAX3232”, and there happens to be another part in a different library already called “MAX3232”. This can happen for a number of reasons; you may not have noticed that this part already existed in the standard library set, or you may have seen it but decided that you’d rather make your own. This happens to me quite often – for example, I have a bit of a personal standard for the way I like ICs to look on my schematic. I like my ICs to have a thicker border than on normal components, and I like them to be filled in yellow. Additionally, I often have preferences about the pin layouts. It is unlikely that an existing part in a different library will satisfy all of my personal preferences, so usually I’ll cook my own.
When you first set up the custom part, KiCAD will create your part for you and add it to your custom library using the name you chose at the beginning. If this name is the same as an existing part in another library, KiCAD doesn’t complain – it adds it to your custom library as requested and initially there is no obvious conflict.
The trouble starts when you come to add the component to your schematic. You select the “Place a component” button, choose your custom library, and select the part you just created. KiCAD highlights the part and shows a preview of it, which matches the custom symbol that you intend to place. So all looks well. However, when you actually try to place the part you’ve selected something unexpected can occur. It places the part without complaint, but sometimes it will place the existing part from the standard library instead of the custom part you intended!
Clearly there is a conflict here that KiCAD hasn’t warned us about. When you actually try to place the part, you’d think that KiCAD would dive straight into the appropriate library and pick out the part you actually selected. But this is not what happens. Instead, KiCAD searches through its entire set of libraries and picks the first part whose name matches the one you selected. If your custom library is scanned first, it picks the part you intended. If the other library is scanned first, it picks the existing part that you didn’t want!
I haven’t managed to find a completely satisfactory solution to this problem. But it is possible to get close to a satisfactory solution using one of two optional workarounds that I will describe here.
#1 The nomenclature workaround
The first workaround, and the easiest (though neither is difficult), is to simply be careful when you’re choosing names for your custom parts. For example, I could decide to suffix all of my custom part names with “_BJH” for the rest of eternity. Then, when I want to create a custom MAX3232, I would create it with the name “MAX3232_BJH” instead. It’s unlikely that another part in some different library is going to match this name, so I can feel pretty confident that no conflict will ever occur and the custom part I intend to use will be placed on my schematic every time.
An alternative that I’ve seen suggested on other websites is to append your library name to each of your parts. So if your custom library was called “Devices_BJH”, then you’d call your part “MAX3232_DevicesBJH”.
A problem with the nomenclature workaround
This works fine, but when you place your part it ends up being titled “MAX3232_BJH”. There are two problems with this. One is that it’s just plain messy; who in their right mind would want all their component names appended with workaround text? The second problem is that somebody else, reading your schematic, may be fooled into thinking that the appended text has some other importance – for example, they might think the “BJH” is a specific variant of the part which they need to pay attention to. So clearly we will want to do something about this.
Fortunately it’s quite easy to solve this problem, with one exception which I will talk about in a minute. If the part you’ve created is a regular schematic part (resistor, IC, diode, custom component, etc) then you can select the component on your schematic, select “edit”, (or hover over the component and press e on your keyboard) and then you will be presented with the following dialogue box:
As you can see, all you have to do in order to change the way the title looks for your custom component is change the Value. This allows you to call your part “MAX3232” while KiCAD still refers to it by the conflict-free name that you chose earlier. The annoying thing with this workaround is that you have to repeat this process for each part you add to your schematic, and you leave yourself open to the possibility of accidentally naming multiples of the same part differently. For example you might end up with two MAX3232 parts on your circuit, one with a Value of “MAX3232” and the other with a value of “max3232”. A trivial issue, yes, but one that would annoy the heck out of me!
There is at least one exception to this workaround which I have discovered. The other day I decided that I wanted to create my own power symbols. KiCAD already has a suite of power symbols to choose from, but I find that the supply rail symbols (the ones which are a pin with a circle on top like that shown to the left) are too small for my liking. Power supply rails are quite important connections, and I think they should be displayed more prominently. I also have issues with the legacy “VCC, VDD, VSS” power rail nomenclature, which is a rant for another day.
Having created my new power rails, and taken care to suffix them all with “_BJH”, I thought I would be able to use the edit dialogue box in the same way as for my other components to change the Value field so it was displayed, for example, as “+3V” instead of “+3V_BJH”. Unfortunately, if you try this for yourself, you will discover that the Value field is greyed out for power pins. You can’t change it! To solve this problem we must move on to workaround #2, which is now my preferred method for all my custom symbols (at least until the KiCAD team fix the naming conflict problem).
#2 The Custom Field Workaround
Since we can’t change the Value field on power pins, we need some other way of changing how KiCAD displays the name of our custom parts. Thankfully, we can do it quite easily using custom fields. You have two options here. The first option is to apply a custom field manually to each custom part after placing it on your schematic. The second option is to generate the custom field when you’re actually creating or editing the custom symbol in the library editor. The second option makes the most sense by far, because this way you can set up exactly how you want your symbol’s title to be displayed, position it just-so, and then it will be automatically repeated in exactly the same way for every single part you lay down with no need to go about maintaining it retrospectively. The second option also shields you from the horror of accidentally naming your parts in slightly different ways (capitals here, dashes there, etc). For these reasons I am only going to talk about the second option, but the principle is exactly the same anyway so you can pick your poison for yourself.
Generating custom fields in the library editor
Assuming you know how to edit components in the library editor (if you don’t then I guess your problem starts at how to create symbols, not how to tweak their names), launch the library editor and open up your custom component.
Along the top toolbar you will find a T-shaped button for editing custom fields. Select this button and then you will be presented with the fields dialogue box shown below.
Now, at the moment KiCAD is using the Value field – which on power pins you can’t edit – to display a title for your component on the schematic. So the first thing you want to do is select this field and change its visibility setting so that the “show” checkbox is no longer checked. This will stop it displaying on your schematics.
The next thing you want to do is create your custom field. Select the “Add Field” button, and then set the Field Value to something sensible that denotes your component in some pedantic way of your choosing. Set the text size, its style, and set the position. Once you’re happy, make certain that you set its visibility checkbox to “show” so that it’ll display on your schematic. Then you can exit the dialogue box.
That’s it! You’re done. Save your component back to the library, and update it. You’ll probably also need to close the schematic editor and open it back up again. From now on, you can place your custom component without any nasty surprises and it will display with a sensible name of your choosing.
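For reference, here’s roughly what the finished entry looks like inside a legacy-format .lib file (the exact syntax varies between KiCAD versions, and the positions and sizes shown here are placeholders). The “I” flag on the F1 (Value) line marks it invisible, while the custom F4 field carries the visible “V” flag and the title you actually want displayed:

```
DEF +3V_BJH #PWR 0 0 N N 1 F P
F0 "#PWR" 0 0 30 H I C CNN
F1 "+3V_BJH" 0 0 30 H I C CNN
F4 "+3V" 0 60 50 H V C CNN "DisplayName"
```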
The StorCenter ix2-200 is a RAID network drive supplied by iomega. I have used the 2TB version for about two years now to keep secure (backed up) copies of my precious data. Any data I write to the device is mirrored on its paired 2TB drive inside the unit, so one drive can fail and I’ll still keep my data.
Recently I’ve had cause for complaint with this unit’s default network setup routine. When you switch the device on it goes through a boot-routine which involves setting up the network address and subnet. If possible it does this via dhcp so if you’ve got it connected to your router it’ll be assigned an appropriate IP and will be instantly visible on the network.
The problems start when, for whatever reason, the device is not able to obtain network settings via dhcp. In that case it assigns itself an address in the range 169.254.x.x with subnet 255.255.0.0, which means the network drive could end up with any one of 65536 possible IP addresses. How is one supposed to know which IP address it’s assigned itself?
I had two choices. Set my computer to scan all of the 65536 possible IP addresses until it finds an active one. Or, take the unit apart and see what hardware hacking can be done. The former is probably quicker, but the latter is more fun. Hence, this hardware hacking blog was born.
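Just to underline the scale of the brute-force option, Python’s standard ipaddress module makes it easy to see what a full scan of the link-local block would involve:

```python
import ipaddress

# The link-local block the NAS falls back to when DHCP fails.
net = ipaddress.ip_network("169.254.0.0/16")
print(net.num_addresses)  # 65536

def candidate_addresses(network=net):
    """Yield every usable host address in the block -- 65534 of them,
    once the network and broadcast addresses are excluded."""
    for ip in network.hosts():
        yield str(ip)
```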
With the unit apart, I found a conspicuous looking pin header called JP1. A few pokes around with my ‘scope revealed what looked like microprocessor level (3.3V) RS232 comms on one of the pins.
Completing the hack…
The next task was to try and see if I could view these signals on a PC. The main problem here is the fact that the data output is 3.3V logic levels (basically it’s the raw output from a microprocessor) and the RS232 input to a PC is +/-12V standard RS232 logic levels. It’s easily solved though: you just need to get yourself an RS232 level-shifter chip such as a MAX3232, rig up a circuit as per my schematic shown below, and then connect it to JP1 (as shown on the schematic) according to the pinout in the photo.
I only had an SMT version of the MAX3232 part in my junk bin so I soldered it onto some proto-board with the 0.1uF capacitors tacked on top and then I wired it up to JP1 as shown in the photo below.
Viewing the data on the PC.
In order to view the data on a PC you simply need to put everything back together, connect the ends of your cables to a DB9 connector as shown in the schematic, and then connect the DB9 connector to your PC’s serial port via a standard 9-way serial cable. Then fire up a terminal (I recommend PuTTY) and enter the following settings:
Once you’ve entered the settings, select connect, and power on the NAS. If all goes well some boot-time debug data should start spitting out on the terminal. Something like that shown below:
After 2-3 minutes you should be presented with a login prompt. If you want to gain root access to the NAS over your PC terminal simply log in with the following credentials:
That’s it – you’re in with root privileges. You can now enter the standard Linux commands and change whatever you wish. My main reason for going to all this trouble (apart from enjoying hardware hacking) was to find out the boot-time network settings it was assigning itself. Once I knew those I was able to gain access via the standard PC based web interface and change the settings to suit my home network.
I hope you enjoyed! Here is a quick video of the entire boot process and logging in:
The digital multimeter is the most widely used test instrument in the electronics industry. It is the standard tool for electronics Technicians and it’s usually the first test/diagnosis tool that a newcomer to electronics will purchase.
Despite this, multimeter capabilities and especially the concepts of multimeter accuracy are often misunderstood or ignored. I have worked in the electronics trade for 14 years and it has been my experience that surprisingly few people actually understand (or care about) their multimeter specifications. In particular, I have discovered that a large number of Technicians and even Engineers are ‘blissfully’ ignorant of their instrument’s capabilities and the implications for the measurements they make.
If you don’t know and understand your instrument specifications, how can you choose the right tool for the job? And, more importantly, how will you know when you’re using the wrong tool for the job?!
Digital Multimeter Specifications Explained
Modern digital multimeter accuracy specifications are actually quite easy to understand once you become familiar with all the jargon. It is important that you fully understand what is meant by counts, digits, and the effects they have on instrument resolution and accuracy. In terms of resolution and accuracy, there is an important distinction to be made here as well – in my experience lots of people get them confused.
In this tutorial we’ll tackle counts and digits first, and this will allow us to very easily interpret the accuracy specifications afterwards.
Digits, Counts and Resolution
When we talk about resolution we’re talking about the smallest possible change that the instrument can detect. This means we’re looking at the least significant digit. The resolution at any given time is the amount that a single count of the least significant digit is worth. So, for example, if the display is showing us ‘4.0005‘ volts, then one count of the least significant digit is worth 100µV (0.0001V). This means that the instrument’s resolution for that particular measurement is 100µV. The resolution will change depending on what range you select, but for the most accurate results you should always use the lowest possible range, which gives maximum resolution. I’ll show you why this is important for accuracy (accuracy is a different concept) later.
My Fluke 28II multimeter is a twenty-thousand count, 4½ digit instrument. This refers to my instrument’s resolution, but what does it mean? Well, the counts and digits are effectively two ways of saying the same thing, but both terminologies are in common use so it’s good to have a handle on both. I’ll tell you my personal preference and offer justification for it later. In this section let’s deal with the counts first.
To start with, it should be noted that the practical count figure is almost always one count less than the naming convention we use to refer to it. For example, in my case (for a Fluke 28II), the practical resolution of my instrument is 19,999 counts. That is what the instrument is actually capable of. However, when we refer to the counts by name we call this “twenty-thousand count”, and this is purely because a round number is easier to say! What we mean in practice is one less than that. The instrument specifications will usually quote you the practical counts as an actual figure, so with a well written specification there should be no ambiguity:
Fluke 28II Resolution Specifications
The implications in terms of multimeter resolution are that the Fluke 28II is capable of displaying a maximum of 19999 on its screen. A point to note here is that the most significant digit can ONLY be a 0 or a 1. It can of course move a decimal point to indicate different orders of magnitude. So if we’re measuring <2V, the instrument can display up to 1.9999V. What happens when we try to measure voltages higher than this? Well, the instrument has to abandon the most significant digit because it can’t display a ‘2’. This has the following consequences:
In the case of a 1.9999V measurement the least significant digit being displayed is worth 100µV per count (0.0001V), and therefore the instrument has 100µV resolution up to 1.9999V. Once we enter the 2V realm the instrument has to sacrifice some resolution because the most significant digit cannot display a ‘2’. Therefore in order to display 2V it has to shift the displayed measurement to the right, and the current least significant digit gets bumped off the end of the display in the process (i.e. we lose it).
The displayed voltage would be 2.000V, and the least significant digit is now worth 1mV per count. It’ll then maintain this 1mV resolution all the way up to 19.999V, after which it’ll be forced to drop a least significant digit again and the resolution will become 10mV per count.
You can see, then, that once you know your instrument’s maximum number of counts you can use this information to determine what the maximum resolution will be for any measured voltage. The resolution will decrease in discrete steps as the measured voltage increases. The point that the steps occur and their effect on the resolution are determined by the maximum number of counts.
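That relationship is mechanical enough to express in a few lines. Here’s an illustrative sketch (my own formulation, assuming the instrument always auto-ranges to the best available range):

```python
def lsd_weight(max_counts, reading):
    """Volts per count of the least significant digit for an instrument
    limited to `max_counts` (e.g. 19999), on its best range for `reading`."""
    weight = 1e-6  # start well below any practical handheld resolution
    # Grow the per-count weight until the reading fits on the display.
    while reading / weight > max_counts + 0.5:  # +0.5 guards float rounding
        weight *= 10
    return weight

print(lsd_weight(19999, 1.9999))  # 0.0001 -> 100µV per count
print(lsd_weight(19999, 2.000))   # 0.001  -> 1mV per count
print(lsd_weight(19999, 20.0))    # 0.01   -> 10mV per count
```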
So how does all this relate in terms of digits? Very simple. The multimeter is a 4½ digit instrument because it is capable of displaying four full digits (0-9) plus one half digit. The most significant digit is called a half digit in this case because it is only capable of displaying 0 or 1.
Some instruments are capable of displaying higher numbers in their most significant digit. Commonly you will see a ¾ digit quoted, and this usually refers to a digit that can display up to and including a numeric value of 3. So, for example, a 4¾ digit multimeter could display up to 39999 on its display. This would be called a “forty-thousand-count” instrument, and it is an improvement over the 19999 count display because it can go further into its range before it has to compromise its resolution by dropping a least significant digit.
There is a caveat here though – although a ¾ digit typically refers to a digit capable of displaying values between 0 and 3, this is not a safe assumption and in fact it can mean any digit up to 6. This means that there is some ambiguity surrounding the use of fractional digits to define resolution.
Counts And Digits Are Equivalent And Interchangeable
Counts and digits effectively mean the same thing. A twenty-thousand-count instrument is capable of displaying practical values of up to 19999 which is four full digits plus one half digit = 4½ digit.
Due to the uncertainty of meaning surrounding fractional (in particular ¾) digits, it is my opinion that the use of counts to define resolution is preferable because it accurately defines the instrument’s capabilities and leaves no room for ambiguity.
The Display is not the limiting factor!
Before I leave my explanation of multimeter counts, digits and resolution, I want to clear up a common misconception. Some might reasonably question why the instrument manufacturer would choose to hamper themselves with a most significant digit that can only display a 0 or a 1. Would it not be easier to have a full digit there as well, thereby avoiding the complications and maintaining better resolution for more of the range?
Well, the answer is that the display is not the limiting factor here. The display itself is almost certainly quite capable of indicating numerals from 0-9. The limiting factor is the measurement circuitry in the instrument itself. All instruments obviously have a finite resolution, and it is this limiting factor that causes the instrument manufacturer to be tied to a smaller MSD.
The Meterman 37XR, for example, has a ten-thousand-count display (actual counts 9999). The ten-thousand counts refer to the resolution capabilities of the instrument itself (the lower the number, the less resolution the instrument provides), and in this case the consequence for the display is that it can indicate up to 9999 + decimal point. So in this case the most significant digit really can display 0-9, and there is no fractional digit there to complicate matters. But we only have 4 digits of displayable resolution across the range. We don’t have access to an extra ½ digit or ¾ digit at all, so we never get to exploit the extra resolution that a part-digit would provide. A part-digit that offers an order of magnitude better resolution for part of the measurement range is better than no digit at all.
Multimeter Accuracy Specifications
Now that we fully understand the meaning behind counts, digits and resolution, we can quite easily interpret a digital multimeter’s accuracy specs.
What does ‘accuracy’ mean?
The accuracy of a measurement refers to how closely it reflects the true value of the property being measured. Whenever you measure something in real life, the measurement you take is always an approximation of the actual property itself, and therefore there’ll be some uncertainty involved. Today’s digital multimeters are very accurate instruments – the uncertainty in their measurements is extremely low – but there will always be some uncertainty in the measurement.
What will the error be? Well, it’s impossible to quantify the error exactly. If you think about it, if we could determine the exact magnitude of the measurement error then we’d just correct for it in software and then we’d have no error at all! That’s why we refer to it as “uncertainty” instead of “error”.
In practice all we can really do is provide a figure of uncertainty about the measurement which gives us a range for which the measurement can potentially be in error. The multimeter specifications give us these limits, and they’re called the accuracy specifications.
So we have dispelled the jargon, and this makes our life easy. Let’s now look at some practical accuracy specifications and determine what they mean. Staying with the Fluke 28II, let’s have a look at its accuracy specifications for the VDC range:
Fluke 28II DC Specifications
As you can see, the Fluke 28II’s DC voltage range is quoted as being accurate to “±0.05% of the reading +1”. The ‘+1’ refers to an additional uncertainty in terms of ‘numbers of counts’. Some manufacturers refer to this uncertainty as ‘numbers of digits’, but they both mean exactly the same thing – it’s basically the number of counts in the least significant digit. In this case we’re only talking about one count of uncertainty but some instruments suffer more than that. I prefer the former terminology (counts) because it sounds less confusing! Notice that the +1 count is contained within the ± bracket so the actual uncertainty in terms of counts is plus or minus 1 count. The easiest way to understand what this means in terms of measurement uncertainty is to take an example.
Example: Measurement uncertainty for a known 1.8000V source with the Fluke 28II.
Let’s imagine we decide to measure a voltage reference whose true voltage is known to be 1.8000V. If we measure this with the Fluke 28II using the most appropriate range (more on this later!) we can expect that the instrument’s measurement uncertainty will be:
This means we should expect a measurement of somewhere between 1.7991V and 1.8009V. However, this isn’t all of the uncertainty we can expect to see on the display because we also have an additional uncertainty (which is due to ADC errors, offsets, noise etc) of ±1 count, and this gets added on to the least significant digit being displayed. So, adding that to the measurement uncertainty we get 1.7990V to 1.8010V. We should expect to see a measurement on the display that is somewhere between these two values. Easy! Let’s have a look at what this means for an instrument with slightly lower resolution and accuracy specifications:
Example: Measurement error for a known 1.8000V source with the Meterman 37XR
Let’s try this same task with the Meterman 37XR. The specifications for the VDC range are:
ACCURACY: ±(0.1% Reading + 5 digits)
RESOLUTION: It’s a 4 digit instrument (no partial digits) which is 9999 count so our maximum resolution when the most appropriate range is used for this particular measurement will be 0.001V = 1mV.
Using all this information, the uncertainty in our measurement will be:
This means we should expect a measurement somewhere between 1.798V and 1.802V. But then we have the additional uncertainty of 5 counts on top. Not only is there a greater uncertainty of counts to add in this case, but now they’re more meaningful too because the least significant digit is more significant than it was for the same measurement with the Fluke 28II – the 37XR has less resolution. The 5 counts get added to the 1mV column, whereas the Fluke’s ±1 count uncertainty only got added to the 100μV column!
This gives us an overall expectation of a displayed reading on the 37XR of somewhere between 1.793V and 1.807V. You can see how an instrument with lower accuracy and lower resolution can start to make a difference.
Always use the most appropriate range!
There’s a consequence to all this that we haven’t talked about, and it refers mainly to the count (or digit) errors quoted in the specifications. You must always use the most appropriate (highest resolution) range for the property being measured. If you don’t, the resulting measurement errors can end up being quite large because the count uncertainties carry more weight. Let’s say we do the same experiment with the 37XR, but this time we use the 1000V range to take the measurement. The displayed measurement will then be somewhere around 1.8V – we’ll be wasting the other two digits that are set to take tens and hundreds units, because there are no tens or hundreds to measure! We’ll still end up with the same measurement uncertainty in this case (it’s still ±0.1% Reading + 5 digits), but the 0.1% uncertainty is too small to be registered on such a low resolution display. The counts, however, do register because they always affect the least significant digit being displayed – which in this case is the 100mV digit (the 8). So the actual reading displayed could be between 1.3V and 2.3V! That’s a total error of ±28%, which, as I’m sure you’ll agree, is completely unacceptable. So watch out for that, and always make sure you make use of the best possible measurement range!
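All three worked examples above can be reproduced with one short sketch. Note the rounding to the display resolution: that is what turns the raw uncertainty bounds into the readings you would actually see. The figures are the ones from the text; the function itself is just my formulation of the ±(% reading + counts) arithmetic:

```python
def display_bounds(true_v, pct, counts, lsd):
    """Worst-case displayed reading for accuracy ±(pct% of reading
    + `counts` counts), on a range whose LSD is worth `lsd` volts."""
    u = true_v * pct / 100 + counts * lsd  # total uncertainty
    lo = round((true_v - u) / lsd) * lsd   # round to display resolution
    hi = round((true_v + u) / lsd) * lsd
    return lo, hi

lo, hi = display_bounds(1.8000, 0.05, 1, 0.0001)
print(f"Fluke 28II:          {lo:.4f}V to {hi:.4f}V")  # 1.7990V to 1.8010V
lo, hi = display_bounds(1.8000, 0.1, 5, 0.001)
print(f"37XR, best range:    {lo:.3f}V to {hi:.3f}V")  # 1.793V to 1.807V
lo, hi = display_bounds(1.8000, 0.1, 5, 0.1)
print(f"37XR, 1000V range:   {lo:.1f}V to {hi:.1f}V")  # 1.3V to 2.3V
```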
That’s all folks!
So there you have it, digital multimeter specifications explained. It’s really quite simple once you get down to it. The topic is a little bit more hard work for analogue instruments – I’ll tackle that little minefield in a separate tutorial.
I’ve experienced a problem recently whilst measuring high voltages with a 1000:1 high voltage probe and an Agilent (U1253B)/Fluke (28II) multimeter. The problem is that the two meters don’t agree with each other!
The Fluke (my meter of choice) returns voltage readings within expectation all the way up to 12kV (12V as displayed on the instrument). But the Agilent meter only seems to agree with the Fluke up to about 3kV, after which it starts to drop off. By the time we get to 12kV the Agilent is reporting a voltage that is more than 1000V less than expectation. I tried another Agilent U1253B and experienced the same drop off. What is going on here?
Know your input impedance
So I started to think about input impedance. The high voltage probe is designed to work with a 10MΩ impedance. Both these high-spec handhelds will be 10MΩ, right? That’s standard for handhelds these days. Is this a fair assumption?
It turns out, no it isn’t!!!
Firstly, RTFM. Both the Agilent and Fluke claim 10MΩ input impedance for the D.C. voltage range in their manuals. However, the Agilent has a fancy dual display mode whereby you can measure two different properties (say, A.C. and D.C. voltage) simultaneously. In this mode each display presents a 10MΩ impedance in parallel with the other, so you end up with an effective input impedance of 5MΩ in total.
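That 5MΩ figure is exactly what you’d expect from two 10MΩ front ends sharing the same pair of input terminals:

```python
def parallel(*resistances):
    """Equivalent resistance of resistors connected in parallel."""
    return 1 / sum(1 / r for r in resistances)

# Two 10MΩ measurement front ends across the same inputs:
print(f"{parallel(10e6, 10e6) / 1e6:.1f} MΩ")  # 5.0 MΩ
```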
…but I wasn’t using the dual display mode, so I should expect 10MΩ, right? Well, that’s what the manual says. But let’s measure it!
Measure the Agilent’s Single Display Input Impedance Using the Fluke
Firstly we connect the Fluke up to the Agilent and take a resistance measurement of its inputs. We should expect to see 10MΩ, and sure enough we do:
Measuring the Agilent’s single display input impedance using the Fluke.
Measure the Agilent’s Dual Display Input Impedance Using the Fluke
Next we set the Agilent to dual display mode and take the measurement again. We should see 5MΩ, right? Yes! So far so good…
Measuring the Agilent’s dual display input impedance using the Fluke.
So far we seem to be doing well. The Agilent’s input impedance is as expected.
But there’s a problem…
There’s more than one way to measure the Agilent’s input impedance. We can use an insulation tester. The difference is that the Fluke is applying a constant current and then using the measured voltage drop to calculate resistance, whereas an insulation tester does it the other way around – it applies a constant voltage and, I presume, uses a measured current to calculate the resistance. It’s six of one and half a dozen of the other – both types of measurement should agree with each other. But do they? Let’s find out:
Measure the Agilent’s Input Impedance Using the Insulation Tester
Here we connect the Agilent up to the Insulation Tester. I tried it at various test voltages and they all agreed with each other, but there’s a surprise in store – the insulation tester reports 5MΩ input impedance for the Agilent’s voltage measurement range. And this measurement is reported regardless of whether the single or dual display mode is used!
Measuring the Agilent’s Single Display Input Impedance Using the Insulation Tester
What is going on here? Why is the insulation tester reporting 5MΩ input impedance for the Agilent’s single display mode? And could this explain my measurement problems with the high voltage probe? I think it could! But in that case, what can we say about the Fluke’s input impedance? Let’s measure it, first with the Agilent and then with the insulation tester:
Measure the Fluke’s Voltage Range Input Impedance Using the Agilent
Okay so we connect the Fluke up to the Agilent and measure its input impedance using the Agilent’s resistance range. We get 10MΩ as expected:
Measure the Fluke’s voltage range input impedance using the Agilent.
Measure the Fluke’s Voltage Range Input Impedance Using the Insulation Tester
Now we measure the Fluke’s input impedance using the insulation tester. We should get 10MΩ:
Indeed we do get 10MΩ. So, to summarise:
The Agilent’s single display input impedance measures 10MΩ using the Fluke’s resistance measurement, but the insulation tester says it’s only 5MΩ – and this is regardless of the display mode – both the single and dual modes look like 5MΩ to the insulation tester.
The Fluke, on the other hand, looks like a 10MΩ impedance to both the Agilent multimeter and the insulation tester. This is what you would expect.
I tried the measurements again with another Agilent U1253B and I experienced the same thing. I also experienced the same voltage measurement problems when using the high voltage probe. This rules out a faulty instrument.
So what is going on?
This is a very good question! Why does the Agilent look like a 5MΩ impedance to the insulation tester? Why is it not 10MΩ as stated in the manual? And why is there a discrepancy between the insulation tester measurement and the multimeter measurement? This discrepancy isn’t seen when we measure the Fluke.
This input impedance problem provides an explanation for the voltage measurement errors I’ve experienced. The high voltage probe I’m using is designed to work with a 10MΩ multimeter, so a lower impedance instrument is going to present a problem. This is what I’ve experienced in practice. The Fluke, on the other hand, works with the high voltage probe with no problems at all.
Misleading Impedance Specifications
Here’s a copy of the input impedance specifications from the Agilent U1253B user manual:
As you can see, they are quoting 10MΩ for each VDC measurement range, from 5V to 1000V. However, there’s a problem with this! Refer to note 3 in the fine print below the table. That’s right – the input impedance actually varies with input voltage! So, even though they quote 10MΩ input impedance, it’s actually only 10MΩ for input voltages between -2V and +3V! Outside of that it’s only 5MΩ.
To put that in perspective, -2V to +3V is less than 0.3% of the instrument’s total range. So for 99.7% of its range, the impedance is only 5MΩ. Despite this, they somehow think it’s informative to quote the input impedance as being 10MΩ. That’s a bit bizarre.
Anyway, this fact explains why the insulation tester and the multimeter disagreed over the input impedance. The multimeter’s constant current stimulus yields a voltage that is <1.5V, so it comes in on the 10MΩ impedance zone. The insulation tester’s minimum voltage stimulus of +50V is well into the 5MΩ impedance zone.
Also, the fact that the instrument has 5MΩ impedance above +3V explains why it starts disagreeing with my Fluke after about 3kV. The high voltage probe is designed to work with a 10MΩ impedance, so as soon as the Agilent’s impedance changes over to 5MΩ, erroneous measurements are returned.
The Moral of the Story Is:
Never assume your instrument’s input impedance! It’s not necessarily 10MΩ! And, in the case of Agilent, even if the manual quotes 10MΩ make sure you read the fine print, because they might be misleading you!
Anyone who has ever worked in the electronics trade will almost certainly have been asked to repair consumer electronics products for friends, family and even random neighbours. How do you deal with these requests? Do you politely decline or do you end up getting sucked in?
Rookies usually get sucked in. I’ve been there, done that, and got the T-Shirt. But give yourself a few years and you’ll soon learn that it can be a huge “trap for young players” (as EEVBLOG is fond of saying), and that once you’ve fallen into the trap it can be very difficult to get out!
I’m older and wiser now, so usually I’ll politely decline a request for this sort of work. Occasionally I’ll agree to do the odd thing for close friends and family, but even then I only tend to agree if I feel confident that the symptom is indicative of a quick/easy and permanent solution. If there’s any kind of uncertainty involved, or if it’s someone I don’t know, forget it – I’ll avoid it like the plague. Why? Well, let’s have a look at it!
My main reasons for declining this sort of work are as follows:
Once you agree to repair something for someone, suddenly everyone in the neighbourhood will want you to provide a similar service for them too – and once you’ve agreed to do it for one person, it becomes difficult to say no to anyone else! How can you justify saying no to Mr. Jones at number 4 when you previously said yes to Mr. Edmunds at number 3? You’re almost obligated to become the local repair guy, and from then onward your spare time will be constantly eroded by other people’s problems.
At least you can make a few bob for yourself on the side though, right? No! That neatly leads me on to my next gripe…
Nobody ever wants to pay you money for the work. They think your technical expertise in this area is worthless, and in one way (as explained further on in this point) they are right!
It doesn’t matter that you may have spent 4 hours tracking down a problem, and that the only reason you can do it at all is because you spent years (decades) honing your electronics skills – they’ll still want it done for free. Or, at least, for a very small amount of money.
In fairness, this kind of attitude has mainly been fostered by cheap consumer electronics products from countries like China. Your diagnosis/repair work might be worth £80 an hour in terms of your expertise, but why are they going to pay that when they can just get a brand new one from the local supermarket for <£100? Cheap goods from developing countries have decimated the monetary value of a Technician’s work. Products have become more complex and hence more difficult to diagnose, but the amount that consumers are willing to pay for their repair has fallen to a pittance.
Naturally Mr. Jones won’t want to add £100 to his next TESCO shopping bill though – he’ll just want you to fix the one he’s got for free.
Once you’ve placed your hands on someone’s product, you instantly inherit any future problems it may present. If Mr. Jones bought you a pint of beer in return for restoring power to his television last month, then he’ll bring it back to you when the colour goes down on it and assume that the two problems are linked. “The colour was fine before you started fiddling with it”, he’ll say. “It must have been something you did!”.
Naturally, he’ll not only expect you to fix his colour problem as well but he’ll also expect you to do it for free.
Sometimes you’ll even be blamed for totally unrelated things like, for example, poor reception. You fix their dead television for a measly tenner (even though it’s not even worth 15 minutes of your time, and the job took you three hours!) but then they’ll call you back 3 months later because the picture on ITV4 is breaking up. “It didn’t do that before you took the TV apart, Mr. Hoskins!” and before you know it you’ll be up on their roof fixing an antenna or adjusting their satellite dish.
Sometimes, if you don’t have your wits about you, amateur diagnosis work (and if it’s done in your spare time as an aside to a related but different professional trade, it is amateur – even if your skills are not) can even end up costing you money. Faults (especially in the digital domain) can be very difficult to diagnose, and are littered with “gotchas” that you can inadvertently stumble into. Fault symptoms will often lead you up the garden path.
It can seem, for example, like a Microprocessor is to blame for your problem when in fact the firmware is simply hanging up because some other component is upsetting it. But if you go ahead and order a replacement Microprocessor you’d better be damned sure it’s going to solve the problem, because Mr. Jones isn’t going to want to pay for it if it later transpires that the Micro wasn’t the root cause after all!
If you work in the diagnosis trade (which, these days, hardly anyone does) problems like these are easily navigated – you can swap components between like products to see if the problem moves with them, and then be more confident of your diagnosis before you spend any money (even though at the very least it’ll certainly cost you more unpaid time). But if, for example, you’re a design Engineer who, by the very nature of your work, happens to also possess some skills from the fault diagnosis trade, you won’t have this luxury. When Mr. Jones presents his faulty product to you it’ll almost certainly be the first time you’ve ever seen one. If you’re lucky (provided Mr. Jones never brings it back) it’ll be the last time you ever see one! So the moral of this particular point is: if you order parts for someone’s faulty product, make your disclaimer clear before you do so, otherwise the cost could be coming out of your pocket, not theirs!
The final reason I prefer to decline this sort of work is that it just isn’t my specialty. Yes, I probably could track down the fault on someone’s TV, or laptop, or PVR. Given enough time and sufficient motivation I could probably fix any electronic product. But unless you actually work in the trade, diagnosing these things day-in day-out, it’s always going to cost you more time and money than it’s actually worth. For a start you’ll hardly ever have schematics for the products, and that means you’ll have to reverse engineer them before you even start diagnosing the problem. You won’t have spare parts hanging around so you won’t be able to follow hunches by swapping bits out, and finally you’ll never have the opportunity to reap the rewards of a hard-earned diagnosis. What do I mean by this last point? Well, no Technician wants to see a one-time fault. Obscure one-time faults do happen occasionally, but usually (if you work in the trade) it’ll be something you or one of your colleagues have seen before. So you invest the time in a diagnosis once, and then you apply it instantly to any future occurrence of the problem. In that way, you start to make money on your investment. If a product costs you six hours of diagnosis time, most of which will end up being unbilled time, then you hope that it’ll pay you back when you see the problem again in the future.
When you do these things as a side job, you typically only ever see the problem once, even if it’s a relatively common problem for that particular product. So each time you complete a diagnosis you invest significant time, but never see its return. Even if people were still willing to pay good money for diagnosis work (which they’re not), it would hardly be worth it for someone who just does odd bits on the side.
So that’s why I will almost certainly decline any request to fix someone’s consumer electronics product for them.
What about you? Are you a design/development engineer who has been asked to fix other people’s stuff? Do you agree to it or do you decline? What stories can you tell?!
I completed this design a number of years ago when I was still learning to program. In fact, this was my first “proper” project that I directed my new-found programming skills towards. My programming skills have improved by many orders of magnitude since, and with that in mind I was a bit ambivalent about whether to put this project up on my site. I only like to showcase my best work, and although this really was my best work back in 2009, it’s nowhere near up to my current standards.
That said, I constantly find myself receiving emails from people who want to have a copy of the code and/or PCB design files. It seems I’m not the only one who is intrigued by IR communication methods.
I have therefore decided that I will publish this old work on my site for the benefit of others who are also learning. The information is a literal copy of the original article I wrote for a very old version of my website, back in the days when I used to hand code the HTML. I have simply copied and pasted the information into this post.
The code for the LCD and the RC5 decoding is mixed into one C file, although obviously they are separated into their own relevant functions. These should ideally be made into their own libraries, particularly the LCD code. I have in fact used the LCD code in other projects and during that time I have made huge improvements to it and I’ve made it into my own library. The LCD code provided in this project is basically a skeleton solution, suitable for this particular application but not a lot else.
In practice I found this version of the LCD code to be a bit hit-and-miss. It works fine with the LCDs stated in the BOM, but when I tried it with other LCDs it sometimes worked and sometimes didn’t – so bear that in mind.
The RC5 decode solution provided here analyses the RC5 data stream in real time, bit-by-bit, as it is received by the IR receiver. There is some error detection that throws out the stream if it is not deemed to conform to expected RC5 standards, but you might want to have a play with this. In practice I found it to be reliable with a number of different RC5 remotes.
A better solution to this problem is to do a post-analysis on the received data. That is the technique being employed by version 2 of my RC5 decoder, which is currently in development as a kind of side project that I work on every now and then.
The IR receiver listed in the BOM is very important because it filters out ambient noise. The code relies upon that. If you try to use a simple IR LED to detect an RC5 stream you will experience noise problems because the code is not designed to account for it.
RC5 Decoder V2 is in development!
It should be noted that there is an RC5 Decoder V2 in development. The new version will receive remote control data streams and provide post-analysis of the data. This is an improvement because it will allow the project to decode other protocols as well, simply by adding new libraries. Also, the LCD code has been significantly improved and broken out into a separate library. The V2 version, when released, will be published on this site.
RC5 article from old version of brianhoskins.uk follows
What follows is an article I wrote for the first version of my website that details the RC5 decoder development, provides explanation, and also provides all of the design files you’ll need to repeat my work.
What is an RC5 Decoder?
If you’re a computer geek, you might think I’m claiming to have built a project that decrypts the RC5 block cipher. I wish that were the case, and indeed if it were I’d certainly be earning a lot more money than I am now! But alas, this project refers to a much simpler idea – the RC5 protocol for wireless (infra-red) communication. If you’ve used a television remote control before, then it’s quite possible that you’ve used one that transmits RC5 coded commands. In that case your television will be acting as an RC5 decoder; receiving the infra-red data stream from your remote control, decoding it and acting upon the command it has received.
The project described here can receive an RC5 coded infra-red data stream, decode it, and display the address, command and toggle data on an alpha-numeric LCD.
What is the point of this project? Well for me it was an educational experience. I had been busy learning assembly and C for PIC Microcontrollers and, whilst I had already written a number of experimental programs to help with my education, this was my first attempt at a complete microprocessor based project. Thus, this project served no other purpose than academic value. In practice, though, you could use this project in any application that requires remote control of equipment via infra-red communication. I have used my decoder firmware to display information about the commands that were received, but the project software could be easily modified to act upon the commands that are received and control practically anything you want with it. If you do create something useful/interesting with this project, please let me know!
The heart of this project is based upon receiving and decoding an RC5 data stream. So, I guess the first question to ask is: “what does an RC5 data stream look like?” A second one might be: “how does it work?” and a final one would be: “how can I decode it?”. These are all questions I had to ask of myself before I could come up with a working design, and hence this section shall answer those questions.
What does RC5 look like?
Firstly, to find out what an RC5 data-stream looks like, you can employ one of two techniques. Either you can try to find some written reference to an RC5 data-stream, or you can try to measure one. I did both. For written references, google is your friend. You will find that I am not the first Electronics hobbyist to try to decode an RC5 data-stream – far from it in fact. Plenty of other Engineers/hobbyists have done this before! That said, I haven’t yet found anyone else who has designed a solution using the C language so if C is your poison you’ve probably come to the right place.
If you do take the time to trawl google results you’ll probably do well to find a diagram better than this one:
This diagram was found at davshomepage and I think it is a good rendition of the RC5 protocol. This particular diagram is showing the following information:
Start Bits: 11 (these are always logic 1)
It also clearly shows the timing for a single RC5 encoded bit, as well as the timing for one full transmission in its entirety. If the logic levels look a bit weird to you, it’s because they’re Bi-Phase (Manchester) encoded. We’ll get to that later.
So that’s pretty much answered the first question, “what does RC5 look like”, but just for completeness I’m going to show you what it looks like if you try to measure it. In order to do this experiment, I purchased an IR receiver device sensitive to 36kHz (because RC5 uses 36kHz modulation) with a demodulator built inside. You can receive the stream using a simple photodiode if you want, but then you’re going to have to demodulate it and you’ll need to concern yourself with other problems such as gain control and noise reduction etc. You can buy very small modules that have all of these features built in, so my opinion is why burden yourself with the hassle? Don’t reinvent the wheel, life is too short! The little module I used was a TSOP2236 which you can buy readily from Farnell.
All you need to do to start experimenting with one of these devices is to connect it up to a power supply and you’re good to go. If you read the datasheet it’ll start telling you about using pull-up resistors and connecting some capacitive filters etc, which is all very relevant and important to include in your final design but for the purpose of just playing around and experimenting you don’t need any of that stuff – power connections are all you need to do.
So, with some power connections made (I’d show you a picture but what’s the point – it really is that simple) we’re ready to receive an RC5 data stream. All we need now is an RC5 generator. For this I used an old Philips Universal Remote Control set up for TV and VCR modes. Philips designed the RC5 protocol, so most of their older equipment uses it. I also experimented with a universal remote application for Windows CE and that worked fine as well. You need to be a little bit careful here because your IR receiver package will receive your datastream regardless of the protocol – it won’t care whether it’s RC5 or not. It’s easy to recognise an RC5 protocol though, just compare it to the diagram on the previous page, paying particular attention to the start-bits and the number of bits in total. The data stream should also be Manchester Coded (or bi-phase coded, same thing). What’s Manchester Coding? We’ll get to that in the next section.
So, with the RC5 generator to hand and my IR module connected up to a power supply, I just need to press a button on the remote control and the IR module should receive the infra-red light, demodulate the data stream, and output a Manchester coded packet of data on its output pin. To see this, we need to connect the output pin to a Digital Storage Oscilloscope. If you don’t have one of these then you’re not going to be able to do the experiment yourself, but you don’t really need to unless you’re particularly interested – just check out my results on the next page!
This is what an RC5 Data Stream looks like if you measure it. I captured this shot using the Single Sequence Acquisition mode on my Tektronix DSO. To generate the RC5 I used an old Universal Remote Control set to VCR mode. I happen to know from previous experiments that a VCR remote is assigned Address number 5, so if you decode this stream (I’ll show you how next) you’ll find that the address is indeed 05. Also, I pressed number “8” on the Remote and, again from previous experience, I have found that the numbers 0 – 9 on the keypad are encoded as Data 00 – 09 respectively.
San Bergmans has compiled a small list of some RC5 codes on his website here. The list is by no means exhaustive, but it gives you the general idea. In addition to San’s listed codes, there are also codes for satellite receivers, CD players, and lots of other stuff. I have found through my studies that the RC5 protocol is not fully populated (i.e. not all of the possible addresses / commands have been used) so it is possible to make up your own little address and command codes to control your own equipment, without fear of interference with other commercial products.
San’s site is also a good reference if you’re hoping to learn more about the RC5 protocol or indeed any of the other Remote Control protocols that exist!
This shot is merely a close-up of the data on the previous page, using the zoom function on the instrument. The zoom function is quite handy for seeing the fine details of your acquisition, and I used it here to zoom in on one single bit of the Manchester encoded data.
The RC5 protocol specifies that one single bit should be 889µs long but in practice, with my Remote Control, I found that it actually measured 812µs (see the cursor readings on the right hand side of the image), which is approximately 10% less than the protocol specifies. This has implications for the design of an RC5 Decoder, because it means that the system will need to measure the length of a single bit of the encoded data, to account for inaccuracies in the RC5 generator. Thanks to Manchester Encoding, this is actually quite easy. The whole point of Manchester Encoding is that it is self-clocking. This is what I’m going to talk about next, and in the process of talking about Manchester Encoding it’ll become clear how the RC5 data can be decoded.
How do I decode RC5?
Now, to answer that question it’ll be necessary to talk about Manchester Encoding. Once the details of Manchester Encoding are understood, it will then be quite a simple task to analyse an RC5 stream and see how it might be decoded. We’ll do this later using the stream I captured on previous pages, and see if we do indeed arrive at Address: 05, Data: 08. Once all of this has been understood, we are then presented with the problem of how to accomplish the decoding task using an Electronic solution. I came up with a software based solution which I shall present later on!
Before I get into Manchester Encoding, I want to illustrate the problem that arises when your data stream is not Manchester Encoded.
Usually, digital data communication requires at least two separate lines – clock and data. The clock line is required for synchronisation purposes, so that the receiver knows when to sample the incoming data. For example, suppose I were to transmit an 8-bit data stream comprised of the data “10001100”. This data stream might look something like that illustrated below:
Now, it’s not immediately obvious that the data above is equal to 10001100, is it? That’s because you don’t know where the data starts or where it ends, or even how long a single bit is. It would seem that the first logic 1 is a single bit, but how do you know? It could be 1, 2, 3, or any number of bits long. And, if we accept for the moment that it is 1 bit, then is that a logic 0 at the start? Or does the data start with the logic 1? And where does it end – how many logic 0’s are there on the end of that stream?
To make matters worse, what if I were to transmit 00000000? That’s a perfectly valid data stream, but without any kind of synchronisation technique (a clock) your only option for data recovery would be to agree a communications speed up-front with the transmitter. In fact, some communications protocols do exactly that! RS232, for example, relies upon us both agreeing – transmitter and receiver – that we’ll talk at a certain BAUD rate before we can exchange any data. After we’ve agreed the BAUD rate, we each have to trust the other’s internal clocks to be stable and accurate, so that 9600 BAUD means the same thing to me as it does to you. Communication in this manner works for small data packets.
In regular wired data communication busses, a clock line is normally included so that you know where to sample the line in order to recover the correct data. Data is either sampled on the rising edge or falling edge of the clock, depending on how the system has been designed. To illustrate a rising edge system, see the diagram overleaf which shows how the above signal could be sampled and recovered using a clock signal.
Recovering Data with a clock
The following diagram shows how the previous data is recovered using a clock signal:
With the addition of the clock signal it is suddenly easy to reconstruct the data. Data is only sampled on the rising edge of the clock line, and in this manner the original data (10001100) is correctly reconstructed.
Manchester Encoding – A Self-Clocking Communications System
In order to get around the problems caused by long strings of logic 1’s or logic 0’s, where there is no clock to aid data recovery, a clever person came up with Manchester (also called bi-phase) coding. I’m not actually sure who that clever person was – answers on an email!
In Manchester Coding, logic 1’s and logic 0’s are no longer represented by simple high and low voltages. Instead, they are represented by a rise in logic level and a fall in logic level respectively. This is illustrated below:
Note that in some systems this is the other way around – a logic 0 can be represented by the low to high transition and the logic 1 by the high to low. It depends how the system has been designed.
To see how this method affects the previous transmission data 10001100, the data below has been generated using Manchester Code logic levels:
If you trace through the above diagram, you should be able to see that the original logic 1’s and logic 0’s are represented by low-high and high-low transitions. If you’ve understood this correctly, then a little further thought should convince you that the problematic data stream described earlier (00000000) would now be transmitted as a long string of high-low transitions under the Manchester Coding scheme. The clock signal is easily extracted from a manchester coded transmission, even when there are long periods of logic 1’s or 0’s! Also, the start point and stop point of the transmission is more easily determined.
For these reasons, a Manchester Coded data stream is often referred to as “self clocking”. No separate clock line is required under this scheme.
The other day I stumbled across an old Nokia mobile phone sitting helplessly in the drawer. It’s not really that old, but already it’s pretty useless in terms of what we expect from a mobile phone in this day and age. What should I do with it? Throw it away?
No! What self-respecting electronics nerd would throw away a piece of kit with all those useful bits and pieces in it? Surely we can make use of some of those parts? What can we do with that GLCD, for example?
I took the phone to bits and robbed the GLCD from it to see what I could come up with.
Find the info!
First of all, I need to find a pin-out for that LCD. I found this really useful website that details the pinouts for quite a number of Nokia LCDs. The pinout I was interested in was this one:
Next I needed to find some information about the controller chip on the LCD. I found from the website that the LCD uses a SED1565 controller, so I downloaded a datasheet for the controller from here.
Write some code!
I connected up the LCD using the wiring diagram above and I strapped the SPI bus of the connector to the SPI bus of a PIC micro. Now it’s time to write some code! If anyone can show me a decent code snippets plugin for WordPress that would be great. All the methods I’ve tried suck eggs. Here’s a link to the source code instead. Open it in Notepad++.
After much testing and hackery, I managed to communicate with the GLCD. Here are the results!
For a copy of the C code, see my repository on github: https://github.com/bh4017/nokia7110-glcd/