Low cost BMS

How do you store and manage your electricity?
User avatar
weber
Site Admin
Posts: 2623
Joined: Fri, 23 Jan 2009, 17:27
Real Name: Dave Keenan
Location: Brisbane
Contact:

Low cost BMS

Post by weber » Mon, 21 Nov 2011, 23:11

Thanks Coulomb, but you're forgetting we will now have two strings of 114 cells talking to two BMS masters.

The 14 byte packets are infrequent "luxury info" and only contain bytes with the hi bits clear (ASCII text). The character after the number is no longer really a units symbol, but tells what command generated the number.

The most important feedback comes more frequently as a single byte with its hi bit set. It could be inserted in the middle of a packet without damaging it. It carries 3 bits of data and 4 bits of error checking. The data is what we call the "badness" level. It represents the level of distress of the most distressed cell in the string, on a scale of 0 to 7, whether that distress is due to overvoltage, undervoltage, overtemperature, excessive link voltage or lack of received comms.

For example the badness level goes up by 1 for every 50 mV over 3.60 V, for every 100 mV below 2.50 V and for every 2 K over 45°C.
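In code, the voltage part of that badness mapping and the single-byte format might look like the following sketch. The thresholds are from the post; the rounding direction, the bit layout and the parity bit are my assumptions, since the thread only says 3 data bits plus 4 check bits with the data duplicated:

```c
#include <stdint.h>

/* Badness from cell voltage in millivolts: +1 per 50 mV over 3600 mV,
   +1 per 100 mV below 2500 mV, clamped to 0..7.  Rounding is assumed;
   the real firmware also folds in temperature, link voltage and comms
   timeouts. */
static uint8_t badness_from_mv(int mv)
{
    int b = 0;
    if (mv > 3600)      b = (mv - 3600) / 50;
    else if (mv < 2500) b = (2500 - mv) / 100;
    return (uint8_t)(b > 7 ? 7 : b);
}

/* One badness byte: hi bit set, 3 data bits, 4 check bits.  The exact
   layout is not given in the thread; here the data sits in bits 6..4,
   a duplicate copy in bits 3..1, and bit 0 carries even parity. */
static uint8_t badness_encode(uint8_t b)
{
    uint8_t byte = (uint8_t)(0x80u | ((b & 7u) << 4) | ((b & 7u) << 1));
    uint8_t p = byte;
    p ^= p >> 4; p ^= p >> 2; p ^= p >> 1;   /* parity of all set bits */
    return (uint8_t)(byte | (p & 1u));
}

static int badness_decode(uint8_t byte)      /* -1 if the check fails */
{
    uint8_t data = (byte >> 4) & 7u, dup = (byte >> 1) & 7u;
    if (!(byte & 0x80u) || data != dup) return -1;
    return data;
}
```

A corrupted byte is very likely to fail the duplicate-bits comparison, so a receiver can simply drop it and wait for the next one.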
One of the fathers of MeXy the electric MX-5, along with Coulomb and Newton (Jeff Owen).

User avatar
weber
Site Admin
Posts: 2623
Joined: Fri, 23 Jan 2009, 17:27
Real Name: Dave Keenan
Location: Brisbane
Contact:

Low cost BMS

Post by weber » Mon, 21 Nov 2011, 23:22

I don't have any info on terminal post diameter. We have the same cells as you. But you could scale them off photographs by putting some calipers up to your computer screen. EVWorks have datasheets with photos.

Nevilleh
Senior Member
Posts: 773
Joined: Thu, 15 Jan 2009, 18:09
Real Name: Neville Harlick
Location: Tauranga NZ

Low cost BMS

Post by Nevilleh » Tue, 22 Nov 2011, 11:44

coulomb wrote:
Nevilleh wrote: Weber, what Baud rate are you using with your bms?

I can answer that: we're using 9600 bps. Any slower, and it would take a very long time to propagate messages throughout our looong string of 228 cells. Our packet sizes are a lot larger than yours, too, by the sounds of it. For example, a voltage response might be
\012: 3324 V
which is about 14 characters with a newline and CRC byte. The slosh is to make the line a comment as far as other cells are concerned. The 012 is a 3-digit cell ID; 3324 is the voltage (in millivolts), V is the unit, and there are spaces for readability (I guess we could lose those spaces). I suspect your equivalent is about 2 bytes.


I just wondered if you were running faster than me (I'm using 9600 too) - I know you sent me a lot of info, but I couldn't quickly look it up. I could slow mine down as I am sending MUCH less data than you are.
eg For a voltage poll, the controller sends command 01 terminated with FF. The first module inserts a single byte into the string and passes it on to the next and so on until the last module inserts its byte and sends it on to the controller. The byte position in the string is the cell address. Thus, for my 45 cells, the string is 45 bytes of data, plus the 01 and FF. With 228 cells, that would be only 230 bytes or about 230 ms at 9600 bps. I don't do any CRC'ing or indeed any error checking of the string at all. A bit risky you might say, but I am polling twice per second, so bad data is quickly over-written.
The temperature poll is the same, except the command is 03. The temperature data are still only 1 byte per cell.
The 02 command is "change shunt state" where the next byte is the desired shunt state. ie the controller sends a string of bytes after the 02, one for each cell. A cell actions the first byte after the command, removes it from the string and sends the rest on.
You can see that for my 45 cells, a string has a max length of only 47 bytes and takes but 47 ms at 9600 bps. The cell modules do not buffer the string (except for the UART buffer), they pass on the data immediately, only inserting or removing a byte as it goes, so there is very little extra delay.
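The insert-and-forward scheme can be simulated in a few lines of C. This is my sketch, not Neville's firmware: each module copies the frame through, slotting its own data byte in just ahead of the 0xFF terminator, so byte position ends up equal to cell position in the chain.

```c
#include <stdint.h>
#include <stddef.h>

#define CMD_VOLTS 0x01u   /* voltage poll command */
#define TERM      0xFFu   /* string terminator    */

/* One module's pass: copy the incoming frame to `out`, inserting our
   `data` byte immediately before the terminator.  Returns the new
   frame length.  (A real module streams this byte-by-byte through its
   UART rather than buffering the whole frame.) */
static size_t module_forward(const uint8_t *in, size_t len,
                             uint8_t data, uint8_t *out)
{
    size_t o = 0;
    for (size_t i = 0; i < len; i++) {
        if (in[i] == TERM)
            out[o++] = data;   /* our byte goes in ahead of the 0xFF */
        out[o++] = in[i];
    }
    return o;
}
```

Chaining three such modules onto the controller's two-byte poll yields a five-byte string with the data bytes in chain order.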
Since shunts are only switched on or off when charging or balancing, the risk of data corruption is virtually zero, so the lack of error checking is not a concern there.
And so far, it is working pretty well, I just have to pin down the cause of the occasional glitch in the control unit. It doesn't appear to be caused by data stream errors, but I'm not certain as yet.
Edit:
It wouldn't be too difficult to add a CRC-16 error check to the string, but that would slow it down enormously as the modules would have to buffer the entire incoming data stream and not send it on until the crc check was passed.
Extra edit:
BTW, what algorithm are you using for your 1 byte CRC?
Last edited by Nevilleh on Tue, 22 Nov 2011, 02:50, edited 1 time in total.

User avatar
weber
Site Admin
Posts: 2623
Joined: Fri, 23 Jan 2009, 17:27
Real Name: Dave Keenan
Location: Brisbane
Contact:

Low cost BMS

Post by weber » Tue, 22 Nov 2011, 17:57

Our "CRC" is only a simple byte-wide XOR checksum in the case of interpreter packets and a duplication of the 3 data bits in the case of badness bytes. [Edit: Proper CRCs don't require any buffering either. They too can be computed on-the-fly.]
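To illustrate weber's bracketed point: both the byte-wide XOR checksum and a proper CRC can be folded in one byte at a time, with no message buffer. The CRC-8 polynomial here (0x07) is just a common example for the sketch, not anything this BMS uses:

```c
#include <stdint.h>
#include <stddef.h>

/* Fold one received byte into a running CRC-8 (poly x^8+x^2+x+1).
   Called per byte as it arrives -- no buffering required. */
static uint8_t crc8_update(uint8_t crc, uint8_t byte)
{
    crc ^= byte;
    for (int i = 0; i < 8; i++)
        crc = (uint8_t)((crc & 0x80u) ? (uint8_t)(crc << 1) ^ 0x07u
                                      : (uint8_t)(crc << 1));
    return crc;
}

/* The byte-wide XOR checksum actually used on the interpreter channel:
   equally incremental, just weaker at catching multi-bit errors. */
static uint8_t xor_sum(const uint8_t *p, size_t n)
{
    uint8_t s = 0;
    while (n--) s ^= *p++;
    return s;
}
```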

A big difference in our philosophies is that Coulomb and I put a lot of intelligence into the cell-top units, while you have minimal-intelligence (I was going to say "dumb") cell-top units and a smart master. Smart cell-top units (BMUs) mean that (a) they can continue to function autonomously if there is a break in the comms chain (and tell you where the break is), and (b) they can compress the important ("badness") data into a single byte, including error checking, so throughput is maximised.

Like you, we do not buffer more than a single byte, passing each one on immediately, so latency is approx 1 ms times the number of BMUs in the chain (114 ms in our case). But we effectively have two separate comms channels operating over a single pair of wires: the badness channel and the interpreter channel. These are distinguished by the high bit of each byte.

The badness channel (hi bit set) is binary and single-byte. It provides the high-priority information that will be used to automatically back off the drive or regen current when any cell becomes distressed ("bad") for any reason. The master does not need to poll for this information although it can do so if it wants it more often. The first BMU sends its badness regularly, and subsequent BMUs either pass it on unchanged or substitute their own badness if it is higher. The packet size does not grow as it passes along the chain but remains a single byte (including internal error check and implied packet termination).

Any BMU that doesn't receive a badness byte within a certain time will take over the job of the master/first-BMU and start regularly sending its own badness. It will also send a message in the interpreter channel giving its ID and telling of the break in comms (Tritium_James' idea).
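The relay-or-substitute rule and the comms-break takeover can be sketched as below; the struct, the timeout value and the names are illustrative, not taken from the actual firmware:

```c
#include <stdint.h>

/* Per-BMU badness-channel state: relay the higher of the incoming
   level and our own, and if nothing has been heard for too long, take
   over as the regular sender. */
typedef struct {
    uint8_t  own_badness;   /* this cell's current 0..7 level        */
    uint32_t ms_since_rx;   /* time since a badness byte last arrived */
} bmu_state;

#define BADNESS_TIMEOUT_MS 1000u   /* assumed value */

/* Called when a badness byte arrives: return the level to forward. */
static uint8_t on_badness_rx(bmu_state *s, uint8_t incoming)
{
    s->ms_since_rx = 0;
    return incoming > s->own_badness ? incoming : s->own_badness;
}

/* Called periodically: returns 1 if this BMU should start originating
   its own badness byte because the chain upstream has gone quiet. */
static int should_take_over(const bmu_state *s)
{
    return s->ms_since_rx > BADNESS_TIMEOUT_MS;
}
```

Note the forwarded packet never grows: whatever each BMU does, exactly one byte goes out for each byte in.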

The interpreter channel (hi bit clear) is textual and packetised. Packets are terminated by a carriage return which is immediately preceded by the XOR checksum byte. If the checksum would otherwise be a control character, a space is stuffed before the checksum. The interpreter channel is purely for communicating with slow humans. For convenience when debugging, there is a command to turn off checksumming (which must of course be sent with the correct checksum), and another to turn off badness-sending, so humans can just type at it from a dumb terminal program on a netbook. But in normal operation a master unit will be polling slowly on this channel to update a display for the humans.
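Framing a packet for that channel might look like this sketch. The space-stuffing works because XORing 0x20 into a sub-0x20 checksum sets bit 5 and lifts it out of the control-character range; buffer handling and names here are mine:

```c
#include <stdint.h>
#include <stddef.h>

/* Frame an interpreter-channel packet: payload, then an XOR checksum
   byte, then a carriage return.  If the checksum would be a control
   character, a space is stuffed first; the space becomes part of the
   checksummed text, toggling bit 5 of the running sum. */
static size_t frame_packet(const char *payload, uint8_t *out)
{
    size_t  n = 0;
    uint8_t sum = 0;
    for (const char *p = payload; *p; p++) {
        out[n++] = (uint8_t)*p;
        sum ^= (uint8_t)*p;
    }
    if (sum < 0x20u) {        /* checksum would be a control character */
        out[n++] = ' ';
        sum ^= 0x20u;         /* now guaranteed >= 0x20 */
    }
    out[n++] = sum;
    out[n++] = '\r';
    return n;
}
```

For the "23sq" command used later in the thread, the XOR of the four payload bytes happens to be 0x03, a control character, so a space gets stuffed and the checksum becomes 0x23.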

Every BMU controls its own bypass resistors and LEDs for bypass, error and activity (yellow, red and blue). There is even an option for every BMU to raise its own audible alarm via an onboard 25 cent piezo disk. Every BMU keeps a record of its worst badness and the measurement that caused it, including what type of measurement it was, i.e. cell voltage, link voltage, temperature or comms. These can be interrogated at any time after a drive, and then reset. So the failure of a master, or the comms wires or optic fibres, or individual BMUs, has as little impact as possible on the system as a whole, within the limitations of unidirectional daisy-chained comms.
Last edited by weber on Tue, 22 Nov 2011, 07:13, edited 1 time in total.

Nevilleh
Senior Member
Posts: 773
Joined: Thu, 15 Jan 2009, 18:09
Real Name: Neville Harlick
Location: Tauranga NZ

Low cost BMS

Post by Nevilleh » Wed, 23 Nov 2011, 12:19

weber wrote: Our "CRC" is only a simple byte-wide XOR checksum in the case of interpreter packets and a duplication of the 3 data bits in the case of badness bytes. [Edit: Proper CRCs don't require any buffering either. They too can be computed on-the-fly.]


Thanks for that explanation of your system.
Sure, CRCs are computed on-the-fly, but I cannot forward a message until its CRC has passed and that means buffering the entire message. Hence a growing latency with each cell module.
Presently I use the full-duplex capability of the UART to send the incoming data stream on byte by byte as it is received. I think you use a software UART which would not do that, but with only 1 byte to pass on the latency is small, so it doesn't matter.
Your "badness" bits cut down your comms overhead quite nicely, not a bad way to go at all. How do you do the addressing though?
I like Tritium-James' idea for detecting where a break in the daisy chain has occurred and I might implement that myself. Also thinking of changing my comms line driver to a complementary pair (emitter followers) to reduce the impedance for both high and low levels. You already do that with your RS485 driver, I think.
I am quite puzzled by my control unit glitches at present. I tried powering it from a separate 12 V battery and that didn't make any difference, which rules out the power supply filtering. So it must be caused by comms glitches. I've added code to reset the UART if it gets a fault condition and that hasn't done anything either. Presently adding code to try and detect various things to see if anything gives me a clue. The strange thing is that the watchdog isn't doing a reset (it would tell me if it did) so the code is still running, but if I simply repower the control unit it works fine until the next time. I first thought it was related to the car speed, but it seems not as I can't make it happen at will. I've done over 60 km since the balancing and the control unit has glitched up 6 times. It usually happens when I'm not looking at it, of course.
On a slightly different note, does anyone know why the AEVA system doesn't send me emails on topics I am supposedly watching?

Edit: I put a .01 uF cap across the opto-coupler output on my control unit this morning - effectively across the 2k2 pull-up resistor and went for a drive. First of all, the comms still worked! Second, I did 10 kms without a glitch. So it would appear to be better and go some way towards confirming that it is motor noise getting into my control unit via the comms interface that is the cause.
Last edited by Nevilleh on Wed, 23 Nov 2011, 04:21, edited 1 time in total.

User avatar
weber
Site Admin
Posts: 2623
Joined: Fri, 23 Jan 2009, 17:27
Real Name: Dave Keenan
Location: Brisbane
Contact:

Low cost BMS

Post by weber » Wed, 23 Nov 2011, 19:29

Nevilleh wrote:Thanks for that explanation of your system.
Sure, CRCs are computed on-the-fly, but I cannot forward a message until its CRC has passed and that means buffering the entire message. Hence a growing latency with each cell module.
Our BMUs just pass each interpreter-channel byte on as soon as it arrives, without worrying whether it constitutes a packet with a valid checksum. But every BMU will buffer the whole packet before _interpreting_ it, and will only interpret it if it has a valid checksum. The buffering-before-interpreting also allows backspace-editing of packets, for when they are being typed.
Presently I use the full-duplex capability of the UART to send the incoming data stream on byte by byte as it is received. I think you use a software UART which would not do that, but with only 1 byte to pass on the latency is small, so it doesn't matter.
We use timer hardware and interrupts for our software UART, so we can send and receive at the same time (full-duplex). No gaps between bytes. We don't even wait for a complete stop bit on receive before we start sending. We could forward them bit-by-bit, but the badness bytes need to be buffered to see if a higher badness should be substituted, and we want to allow for using a hardware UART in future.

The exception to the above is the software-only no-interrupts UART built into our awesome bootstrap loader (BSL). This BSL includes hardware initialisation, password, checksum and watchdog checking, gives feedback via the activity LED, and fits into 256 bytes of protected flash, along with calibration values for clock speed, voltage and temperature, and the BMU's ID and BSL version number. Sorry, couldn't help bragging about that. That software UART is only half-duplex. But it only comes into play if there's a watchdog timeout or if a valid BSL password is received by the interrupt-driven software UART.
Your "badness" bits cut down your comms overhead quite nicely, not a bad way to go at all. How do you do the addressing though?
Not sure what you are asking here. There is no addressing associated with the badness channel. The idea is to reproduce, as nearly as is possible with serial comms, the functionality of the single-wire daisy-chain of an analog BMS like Rod Dilkes'.

We share Rod Dilkes' philosophy that the way to get maximum reliability is to start with the simplest circuit that gives some protection and layer additional functionality on top of that in such a way that if the higher layers fail, the lower ones keep working. That's how evolution designs things. Trouble is, this is very hard to engineer at reasonable cost, so we compromise.

Our BMUs interpret any byte with its high bit set (and four valid check bits) as a command to either pass it on, or substitute their own badness byte if it is badder. The master can therefore interpret any such byte as telling it the maximum badness of the whole string at that point in time. It has no idea which cell or cells are having a problem, but at least it gets 3 bits worth of badness info instead of the single bit you get from an "analog" system, and it gets them as quickly as possible. This lets it start the backing-off process more gradually before things get really bad.

Remember that each cell logs its worst badness and the reason for it, so we can find out at leisure who was having the problem and why. At that later time we can address each cell in turn via the interpreter channel (using the "q" command for "query worst badness").

The interpreter is stack-based and reverse-polish (like a HP calculator) and the "s" command (for "select") means "If your ID is not the same as the number on the top of the stack then stop interpreting this packet". We have a special program that we bootstrap-load into each string of BMUs the first time it is wired up, that contains a command that assigns consecutive IDs to all the BMUs in the string, starting from a given value. The ID is stored in protected flash. Then we bootstrap-load the usual monitoring software.

The interpreter converts digit characters 0-9A-F to binary and accumulates them according to the input number base that is in force at the time (default is decimal, a preceding "$" means hex). The result is pushed to the stack as soon as a non-digit is received. A space character is a no-op and can be used to separate numbers.

So the packet "23sq"<chksum><cr>, or simply "23sq"<cr> if checksumming has been turned off, will cause only BMU #23 to respond with its worst badness and the reason for that badness.
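A toy version of that interpreter, just enough to run the "23sq" example. The 'q' response is recorded as a flag rather than transmitted, and everything beyond what the post describes is invented:

```c
/* Toy stack-based RPN interpreter: digits accumulate in the current
   base and are pushed on the first non-digit; space is a no-op; '$'
   switches to hex input; 's' stops interpreting unless the top of
   stack matches this BMU's ID; 'q' records that a query ran. */
typedef struct {
    int id;
    int stack[8], sp;
    int base, acc, have_digit;
    int queried;                 /* set when 'q' executes for us */
} interp;

static int digit_val(char c, int base)
{
    int v = -1;
    if (c >= '0' && c <= '9')      v = c - '0';
    else if (c >= 'A' && c <= 'F') v = c - 'A' + 10;
    return (v >= 0 && v < base) ? v : -1;
}

static void interpret(interp *it, const char *pkt)
{
    it->sp = 0; it->base = 10; it->acc = 0; it->have_digit = 0;
    for (const char *p = pkt; *p; p++) {
        int v = digit_val(*p, it->base);
        if (v >= 0) { it->acc = it->acc * it->base + v; it->have_digit = 1; continue; }
        if (it->have_digit) {                 /* non-digit: push the number */
            it->stack[it->sp++] = it->acc;
            it->acc = 0; it->have_digit = 0;
        }
        switch (*p) {
        case ' ': break;                      /* no-op                  */
        case '$': it->base = 16; break;       /* hex input base         */
        case 's':                             /* select by ID           */
            if (it->sp == 0 || it->stack[--it->sp] != it->id) return;
            break;
        case 'q': it->queried = 1; break;     /* query worst badness    */
        }
    }
}
```

Every BMU in the string runs the same packet; only the one whose ID matches survives the 's' and goes on to execute 'q'.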
Also thinking of changing my comms line driver to a complementary pair (emitter followers) to reduce the impedance for both high and low levels. You already do that with your RS485 driver, I think.
No. We just use two microcontroller outputs in a differential drive, with series 150 R resistors. But we're thinking of getting rid of the resistors completely. [Edit: The micro outputs are complementary pairs (CMOS).]
Edit: I put a .01 uF cap across the opto-coupler output on my control unit this morning - effectively across the 2k2 pull-up resistor and went for a drive. First of all, the comms still worked! Second, I did 10 kms without a glitch. So it would appear to be better and go some way towards confirming that it is motor noise getting into my control unit via the comms interface that is the cause.

Good work! I hope this holds up. Unfortunately this could not have been our problem since our master is connected to and from the string of BMUs by optic fibre. Only BMUs within the same box talk to each other on wires or PCB tracks (optocoupled).
Last edited by weber on Wed, 23 Nov 2011, 16:54, edited 1 time in total.

User avatar
Johny
Senior Member
Posts: 3729
Joined: Mon, 23 Jun 2008, 16:26
Real Name: John Wright
Location: Melbourne
Contact:

Low cost BMS

Post by Johny » Wed, 23 Nov 2011, 20:47

weber wrote:No. We just use two microcontroller outputs in a differential drive, with series 150 R resistors. But we're thinking of getting rid of the resistors completely.
My advice - for what it's worth - is DON'T. The auto environment is normally as spiky as an echidna and EVs would be way worse.
I have blown up a BSP350 in my VFD (a FET-based, purpose-built output device with every kind of protection) just by having its output running in the loom to the rear of the car (which incidentally has the EV200 "coil" wire alongside it). Resistors to limit µs-wide high-current spikes are essential. I'm now including series resistors on all I/O that connects to silicon that isn't already opto-isolated.

Nevilleh
Senior Member
Posts: 773
Joined: Thu, 15 Jan 2009, 18:09
Real Name: Neville Harlick
Location: Tauranga NZ

Low cost BMS

Post by Nevilleh » Wed, 23 Nov 2011, 20:50

No, I could see you didn't use addresses for the badness, but I wondered how you did it when you wanted some data. All is clear now.
Anyway, I have fixed my problem and the controller is no longer glitched up by a bad data byte. Same problem as before in that I - very cleverly - used 0xFF as the string terminator and wrote into the data array until that character was received. But if the terminator was corrupted, the thing wrote past the end of the array. "You WILL do bounds checking if you use arrays" (in C). So on the odd occasion the FF was corrupted, data overwrote RAM in the controller.
I did wonder, because the cell modules all kept working dutifully sending data, but the controller went off with the fairies. Now works all the time and motor noise merely causes the odd bad byte. Considering I am polling twice per second and only get a bad byte every now and again, ie I can drive for 10 minutes and not see one, that's not too bad. And a single bad byte is not a problem as it is overwritten 1/2 second later. Because I get so much data so often, I don't "alarm" unless I get a few consecutive bytes ie over a couple of seconds.
Improvements I can make:
1. add parity (9th bit) to my data. Pretty similar to your XOR in terms of error detection and it means I can still transmit immediately if parity OK. Not as good as a CRC on the whole string, but better than nothing and MUCH faster. It will detect any odd number of bit errors, though not an even number.
2. do a moving average on the data so one wildly out of whack byte doesn't have a big effect.
3. stick a .01 cap across the optos!
I think that'll make it pretty good. The odd corrupt byte won't matter at all.
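The bounds-checking lesson from the earlier bug can be captured in a few lines: never trust the 0xFF terminator to arrive, and stop at the end of the array regardless. A sketch, with illustrative names and a drop-excess policy of my choosing:

```c
#include <stdint.h>
#include <stddef.h>

#define NUM_CELLS 45
#define TERM      0xFFu

typedef struct {
    uint8_t data[NUM_CELLS];
    size_t  n;
    int     complete;           /* saw the 0xFF terminator */
} frame;

static void frame_reset(frame *f) { f->n = 0; f->complete = 0; }

/* Feed one received byte; returns 1 when a complete frame is ready.
   Bytes beyond the expected cell count are dropped, so a corrupted
   terminator can no longer overrun RAM. */
static int frame_rx(frame *f, uint8_t byte)
{
    if (byte == TERM) { f->complete = 1; return 1; }
    if (f->n < NUM_CELLS)
        f->data[f->n++] = byte;
    return 0;
}
```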

Drove the car around some more, got to 96.6 kms and low cell was a bit over 2.8V so I drove home. Low cell was 2.85V at rest and the highest was 2.92V. I switched on the headlights and anything else I could find and got the dc-dc converter drawing 3A from the traction battery, turned the balancing on as well for another 1/2A or so and got the low cell down to 2.80V which left the high one still at 2.90V. Turned off all the loads except the balancer and left it going. It should bring all cells down to 2.80V this time and hopefully not take more than a few hours. Be interesting to see what the charger does this time.

User avatar
weber
Site Admin
Posts: 2623
Joined: Fri, 23 Jan 2009, 17:27
Real Name: Dave Keenan
Location: Brisbane
Contact:

Low cost BMS

Post by weber » Thu, 24 Nov 2011, 04:20

Johny wrote:
weber wrote:... we're thinking of getting rid of the resistors completely.
My advice - for what it's worth - is DON'T. The auto environment is normally as spiky as an echidna and EVs would be way worse ...
Good point. The reason I want to get rid of them is to get close to 20 mA into the optocoupler LED, instead of the 4 mA we're putting in now. Or putting it another way, to lower the driving impedance from 380 R to 80 R. The idea being to make the comms more noise immune.

The line only runs from one cell to the next, usually as parallel PCB tracks. The longest run is at the end of a row where it hops to the next row in twisted pair, maybe 200 mm. Probably should be shielded in both cases.

Might add some ceramic caps on the micro outputs for protection. Or a dual TVS?

User avatar
Johny
Senior Member
Posts: 3729
Joined: Mon, 23 Jun 2008, 16:26
Real Name: John Wright
Location: Melbourne
Contact:

Low cost BMS

Post by Johny » Thu, 24 Nov 2011, 14:30

weber wrote:Good point. The reason I want to get rid of them is to get close to 20 mA into the optocoupler LED, instead of the 4 mA we're putting in now.
If you are not getting any/many errors now, why increase the drive current? Your biggest problem is common mode noise, which you have dealt with by using differential drive to the optos.
Adding caps directly on the uC outputs just stresses them and won't provide much in the way of RF or noise protection. Better to RC integrate on the receiving side after the opto.

I think you have already done all the right things so if you log/count errored packets when the whole thing is running full pack voltage and don't see any errors normally - leave as is. (IMO)

Nevilleh
Senior Member
Posts: 773
Joined: Thu, 15 Jan 2009, 18:09
Real Name: Neville Harlick
Location: Tauranga NZ

Low cost BMS

Post by Nevilleh » Sat, 26 Nov 2011, 12:44

Balancing only took 3 1/2 hours this time and brought all cells to 2.80V. That's got to be a better result.
Charging took 121.6 Ah (my charger is limited to 15 A due to only 230 V single-phase power) and a bit over 8 hours. The highest cell was at 3.62 V and the lowest at 3.39 V but they came back to within 30 mV of each other with a short drive, less than a km or so.
Damn BMS is still glitching up now and again, in spite of my best efforts! I don't see why bad data should cause it to stop receiving, but it does. I'll have to set it up on the bench and try feeding it some noise along with normal data, then I'll be able to analyse stuff a bit better. It's a bit hard programming the thing, putting it in the car, driving for 1/2 hour while trying to watch the display and not running into anything!

User avatar
coulomb
Site Admin
Posts: 3766
Joined: Thu, 22 Jan 2009, 20:32
Real Name: Mike Van Emmerik
Location: Brisbane
Contact:

Low cost BMS

Post by coulomb » Sat, 26 Nov 2011, 14:15

Nevilleh wrote: The highest cell was at 3.62V and the lowest at 3.39V but they came back to within 30 mV of each other with a short drive, less than a km or so.

Remember that when you are bottom balancing, unless you have cells with exactly matched capacities, they aren't going to look balanced at the top (they'll have "ragged top"). They'll come to within 30 mV of each other with a short drive mainly because they'll have moved away from the high voltage end of the discharge curve to the much flatter middle part.

Of course, if your pack was quite unbalanced to start with, then two cycles of bottom balancing will probably make them appear more balanced at the top, simply by reducing the worst of the imbalances. But once you get the cells well balanced at the bottom, you can expect them to stay somewhat "ragged" at the top, due to differences in cell capacities. If there were no significant cell capacity differences, the whole idea of bottom balancing would not be worth doing; balancing at the top would imply balance in the middle, at the bottom, and everywhere.

It's sad to hear that you're still having the BMS glitches, although it's perversely heartening to us that we're not the only ones with comms glitches (if that's what it is, both for you and for us). I think we were close to finding some errors in our software yesterday, when one of the chips on our Driver Controls programmer let some smoke out. Another hundred+ dollars and another week delay while we get a replacement. Sigh.
Nissan Leaf 2012 with new battery May 2019.
5650 W solar, 2xPIP-4048MS inverters, 16 kWh battery.
1.4 kW solar with 1.2 kW Latronics inverter and FIT.
160 W solar, 2.5 kWh 24 V battery for lights.
Patching PIP-4048/5048 inverter-chargers.

Nevilleh
Senior Member
Posts: 773
Joined: Thu, 15 Jan 2009, 18:09
Real Name: Neville Harlick
Location: Tauranga NZ

Low cost BMS

Post by Nevilleh » Sat, 26 Nov 2011, 15:22

Yes, I don't expect that the voltages when charged until the first one reaches 3.6V will get much less "ragged" than they are now. I do hope that now I am no longer thrashing the poor things as hard as I was before I had the full BMS connected up, they will last forever.

The BMS has been frustrating me a lot, because when it is working properly it is wonderful! Gives me instant cell voltage changes and is quite fascinating to watch to the point where it becomes dangerous. I need another driver!

One thing that has puzzled me is that when it glitches up, simply cycling the power to the controller causes instant recovery, so it appears that all my cell modules are working away without any problems.

I now have the control unit on the bench with a computer generating the data stream that I would receive from the battery and it is already glitching up when I add the temperature poll. Hopefully, the bug in my controller software will now become a bit easier to find.

I'm not much of a programmer, been a hardware engineer all my life, but I am ever hopeful.

Sorry to hear your programmer went up in smoke. I can merely sympathise and secretly chuckle that my PICkit 3 programmer for the PIC chips only cost $30...

User avatar
weber
Site Admin
Posts: 2623
Joined: Fri, 23 Jan 2009, 17:27
Real Name: Dave Keenan
Location: Brisbane
Contact:

Low cost BMS

Post by weber » Sat, 26 Nov 2011, 23:15

Nevilleh wrote:But if the terminator was corrupted, the thing wrote past the end of the array. "You WILL do bounds checking if you use arrays" (in C). So on the odd occasion the FF was corrupted and data overwrote ram in the controller.
We'd never make that mistake, would we, Coulomb?
Johny wrote:If you are not getting any/many errors now why increase the drive current?
Ah, but we are getting too many errors. We are having a similar problem to Neville, although the cause(s) could be quite different.
Your biggest problem is common mode noise which you have dealt with by using differential drive to the optos.
I'm not sure I understand why differential drive should make any difference to common mode noise rejection at the opto. Tritium_James suggested it, and it seemed like a good idea at the time, but it seems to me that it's the fact of using opto-isolation that's the important thing. How could it matter (for common mode noise rejection) whether the twisted pair that's driving the opto's LED originates as two outputs, or an output and GND, of the driving BMU? I may be missing something. Is it about where the stray capacitances are or something? Please help me out here.

It seems to me that the only advantage of differential drive here is that when the LED is off it is reverse biased, so more noise voltage required to turn it on. But it gives no improvement in the noise margin when the LED is on.

The main reason we're glad we did differential comms outputs is that with the addition of a cap and diode it lets us do voltage-doubled drive of a visible-red fibreoptic emitter. The lower-voltage IR emitters suffer too much loss through the low-cost plastic fibre. This way the red emitter can keep working from a cell that's as low as 1.5 volts.

And it's such a buzz, after decades of working with copper wires, to be able to just look down the "wire" to see if there's a signal. And you can do this thing where you waggle the end of the fibre rapidly back and forth in front of your face to see how much data's there -- the "poor man's oscilloscope"? At 9600 b/s I can't quite resolve individual bits that way, but I can see individual bytes when they don't immediately follow each other.
Adding caps directly on the uC outputs just stresses them and won't provide much in the way of RF or noise protection. Better to RC integrate on the receiving side after the opto.
OK. But it does seem like we might want some filtering immediately before the opto LED, as a bright spike might make the phototransistor very slow to turn off -- there being such an asymmetry in that regard, with phototransistors (PTs) that don't have base pulldown resistors (using 4 pin optos).

Neville has full cell voltage (say 3.3 V) for his supply and uses 330 R in series with his opto-LED, a 2k2 pullup on his opto-PT, and a 10 nF cap at the micro input (on the master at least). We have a 2.5 V regulated supply (until a cell falls below that), 300 R in series with our LED, 1k5 pullup on our PT, and no cap.

I've successfully bench-tested ours with no series resistor for the LED, just the 80 R MOSFET on-resistances of the two micro outputs (40 R each), a 390 R pullup on the PT, and 27 nF at the micro input. That's a time constant of 11 us or 1/10th of a bit time, or in the frequency domain it's a -3dB frequency of about 15 kHz which is about the 3rd harmonic of the fastest square wave at 9600 b/s (i.e. 4.8 kHz). The opto is a low cost TCMT1106.
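The arithmetic behind those figures, for anyone checking along: τ = RC and f(-3 dB) = 1/(2πRC), so 390 R into 27 nF gives a time constant of about 11 µs and a corner near 15 kHz.

```c
#define PI 3.14159265358979

/* RC low-pass corner: time constant and -3 dB frequency. */
static double rc_tau(double r_ohms, double c_farads)
{
    return r_ohms * c_farads;
}

static double rc_f3db(double r_ohms, double c_farads)
{
    return 1.0 / (2.0 * PI * r_ohms * c_farads);
}
```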

Given that Neville's time constant is double ours, we can probably stand to have the same time-constant again at the LED. Say 120 nF with the 80 R. And maybe the right distribution of capacitance between LED and PT will equalise the rise and fall times at the micro input. I'll give it a try.


Nevilleh
Senior Member
Posts: 773
Joined: Thu, 15 Jan 2009, 18:09
Real Name: Neville Harlick
Location: Tauranga NZ

Low cost BMS

Post by Nevilleh » Sun, 27 Nov 2011, 12:30

I'll look forward to seeing your results with great interest.
I have found another bug in my software and hope to have yet another iteration ready to test shortly, except that it is Sunday and I am off to church (actually a wine tasting) this afternoon.
I'm also going to try screening the long wires running from my last cell to the controller by wrapping a bit of Al foil around them to see if that reduces the error rate. Pity I can't hook a 'scope up to the thing while driving! That would not be a problem with Weber/Coulomb's system as you use optics for that link. I could make up a fibre-optic link for that bit without too much trouble (excuse dreadful, unintentional pun) and it might be worth a try.
Trouble is, I'm not sure where the interference is coming from, emi or picked up via the power supplies to the cell modules.

User avatar
coulomb
Site Admin
Posts: 3766
Joined: Thu, 22 Jan 2009, 20:32
Real Name: Mike Van Emmerik
Location: Brisbane
Contact:

Low cost BMS

Post by coulomb » Sun, 27 Nov 2011, 14:32

Nevilleh wrote: ... it is Sunday and I am off to church (actually a wine tasting) this afternoon.

Oh, wow, does St Peter at the Pearly Gates fall for that one? "Oh, yes, St Peter; I went to church every Sunday (hic!)"
Nevilleh wrote: Pity I can't hook a 'scope up to the thing while driving!
If your CRO is a not-too-bulky DSO, you actually could, with a small mains inverter plugged into a cigarette-lighter outlet. The trouble would be triggering it in some way without crashing the car. You might be able to set it up with a very slow sweep and just press the single-trigger button before some event (like when you floor it up a hill). After the event, find somewhere to pull over, examine the capture, and if necessary repeat the test a few times.
Nevilleh wrote: Trouble is, I'm not sure where the interference is coming from, EMI or picked up via the power supplies to the cell modules.
Right. So the DSO on the 12 V line, or on the end of the long data line, might at least tell you that, so you could focus your efforts on the right problem.

If you don't have a DSO yet, Christmas is coming... maybe a hint to the wife?

[ Edit: DSOs are dead useful; they're really a whole new level past the ordinary CRO, because you can capture some event, then examine it in great detail afterwards. You usually just can't do that with a CRO; you have to know what you're looking for, and have to be lucky enough to capture the event properly. ]
Last edited by coulomb on Sun, 27 Nov 2011, 03:34, edited 1 time in total.
Nissan Leaf 2012 with new battery May 2019.
5650 W solar, 2xPIP-4048MS inverters, 16 kWh battery.
1.4 kW solar with 1.2 kW Latronics inverter and FIT.
160 W solar, 2.5 kWh 24 V battery for lights.
Patching PIP-4048/5048 inverter-chargers.

Tritium_James
Senior Member
Posts: 683
Joined: Wed, 04 Mar 2009, 17:15
Real Name: James Kennedy
Contact:

Low cost BMS

Post by Tritium_James » Sun, 27 Nov 2011, 16:38

weber wrote:It seems to me that the only advantage of differential drive here is that when the LED is off it is reverse biased, so more noise voltage required to turn it on. But it gives no improvement in the noise margin when the LED is on.

Yep, that's it! It can help a lot when you have rapid dV/dt transitions between one side of the opto and the other, as the capacitance across the isolation barrier can actually turn on the LED in some situations - usually in motor drives, where the phase voltage is banging around as fast as the main silicon can move it. There are a couple of app notes from the various opto manufacturers on it - Agilent have a decent one, I seem to recall. Probably worth a read for you guys, actually.
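A rough feel for why those dV/dt transients matter: the barrier capacitance passes a displacement current i = C·dV/dt, and if that current path runs through the LED it can falsely turn the opto on. The capacitance and LED threshold below are illustrative guesses, not values from any particular opto's datasheet:

```python
# Sketch of the common-mode transient problem Tritium_James describes.
# C_BARRIER and I_LED_ON are assumed round numbers for illustration only.

C_BARRIER = 1e-12        # F, assumed ~1 pF input-to-output capacitance
I_LED_ON = 5e-3          # A, assumed current that could start to turn the LED on

def displacement_current(c_farads, dv_dt):
    """Current forced through the barrier capacitance by a dV/dt step."""
    return c_farads * dv_dt

for dv_dt in (1e9, 1e10):  # 1 kV/us and 10 kV/us phase-voltage transients
    i = displacement_current(C_BARRIER, dv_dt)
    verdict = "could falsely trigger" if i > I_LED_ON else "probably OK"
    print(f"dV/dt = {dv_dt / 1e9:.0f} kV/us -> i = {i * 1e3:.1f} mA, {verdict}")
```

Even a single picofarad passes 10 mA at 10 kV/µs, which is why differential drive (reverse-biasing the LED when off) helps in motor-drive environments.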

Nevilleh
Senior Member
Posts: 773
Joined: Thu, 15 Jan 2009, 18:09
Real Name: Neville Harlick
Location: Tauranga NZ

Low cost BMS

Post by Nevilleh » Sun, 27 Nov 2011, 16:54

Whoopdy-Doo, happiness and joy!
Just did 12.6 km in the car with my latest software revision and NO glitches. The thing worked perfectly the whole way: 50 zones, 80 zones, 100 zones. It did log 10 framing errors, and those cause the UART receiver to be reset, but that's all. I don't know how many bad data bytes might have occurred, as I don't have any error checking (i.e. no parity) other than framing and over-run errors, but the displayed voltages all looked pretty good the whole time.
Since it does a poll twice per second, the data is refreshed so often that if it did display a silly value, you'd have to be looking at it right then to see it.
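For readers unfamiliar with the term: a framing error just means the UART didn't see a valid stop bit where it expected one. Neville's actual firmware isn't shown in this thread; the sketch below is only a generic illustration of how an 8N1 frame is decoded and when the error is flagged:

```python
def decode_frame(samples):
    """Decode one 8N1 UART frame from 10 bit-centre samples
    (start bit, 8 data bits LSB-first, stop bit).
    Returns (byte, framing_error). A framing error means the stop
    bit was not high -- the condition that prompts a UART reset."""
    if len(samples) != 10 or samples[0] != 0:
        return None, True            # no valid start bit
    byte = 0
    for i, bit in enumerate(samples[1:9]):
        byte |= bit << i             # data bits arrive LSB first
    framing_error = samples[9] != 1  # stop bit must be high
    return byte, framing_error

# 'A' = 0x41 = 0b01000001, sent LSB first, with a good stop bit:
print(decode_frame([0, 1, 0, 0, 0, 0, 0, 1, 0, 1]))  # (65, False)
# Same data byte, but the stop bit got corrupted low:
print(decode_frame([0, 1, 0, 0, 0, 0, 0, 1, 0, 0]))  # (65, True)
```

Note the second case still yields a plausible-looking byte, which is why, without parity, some corrupted bytes will slip through undetected even when framing errors are counted.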
I do have a DSO, so I suppose I could buy a small inverter to power it and enlist the help of someone to drive the car while I drive the gear, but the urgency has dropped right off.
Now I'm off to the winetasting!

Nevilleh
Senior Member
Posts: 773
Joined: Thu, 15 Jan 2009, 18:09
Real Name: Neville Harlick
Location: Tauranga NZ

Low cost BMS

Post by Nevilleh » Mon, 28 Nov 2011, 11:50

coulomb wrote:   
Oh, wow, does St Peter at the Pearly Gates fall for that one? "Oh, yes, St Peter; I went to church every Sunday (hic!)"


Apparently he is quite amenable if you have a bottle tucked under your arm as you approach!!

Normally I only buy NZ whites and Australian reds, but yesterday I tasted a couple of really excellent reds grown in the Hawkes Bay region of NZ. Might have to get rid of my prejudices.

Very happy with the BMS now; I think (hope) I have fixed the last software bug and it is doing as expected. I'll do some more testing and error logging this week - I put some software in to count framing and over-run errors - and see how it goes.

Nevilleh
Senior Member
Posts: 773
Joined: Thu, 15 Jan 2009, 18:09
Real Name: Neville Harlick
Location: Tauranga NZ

Low cost BMS

Post by Nevilleh » Fri, 20 Jan 2012, 12:12

Just realised it's been a month and a half since my last post on here! The BMS has been operating quite happily all that time and has proven very useful for managing the battery. The error logging I put in shows framing and over-run errors now and again, but they are of no consequence, as the UART is simply reset each time and the vast amount of good data simply swamps the odd error.
The cells achieve balance quite quickly now, usually within an hour or so, which is very different from the first time I did it - days!
I also have the cut-off output connected to the charger, so charging stops as soon as one cell gets to 3.60 V, and this means I can just leave it plugged in and charging, knowing that no cell will be overcharged.
I find that by running everything this way and re-charging as soon as any cell drops below 2.8 V - an arbitrary limit, but one that I hope keeps DoD to around 80% - I am getting a consistent range of 100-105 km on a single charge. All in all, the car is pretty good: goes well, nice to drive and cheap to run! My watt-hour meter shows about 16 units of electricity per charge, which is about $3.20, and that takes me 100 km. The original petrol engine would've used about 8 litres of fuel at around $2.10 per litre, i.e. $16.80, so I save $13.60 per 100 km. The conversion cost me about $20,000, so I'll get my money back in 147,000 km!
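Those running-cost numbers check out. The sketch below redoes the arithmetic; the $0.20/kWh tariff is inferred from "16 units ... about $3.20" rather than stated directly:

```python
# Sanity check of the payback figures in the post above.
# Assumed electricity tariff: $0.20 per kWh (inferred, not stated).

kwh_per_charge = 16
electricity_cost = kwh_per_charge * 0.20      # $ per charge (~100 km)
petrol_cost = 8 * 2.10                        # 8 L at $2.10/L per 100 km
saving_per_100km = petrol_cost - electricity_cost
payback_km = 20000 / saving_per_100km * 100   # $20,000 conversion cost

print(f"electric ${electricity_cost:.2f} vs petrol ${petrol_cost:.2f} per 100 km")
print(f"saving ${saving_per_100km:.2f}/100 km -> payback after {payback_km:,.0f} km")
```

That gives a saving of $13.60 per 100 km and a payback distance of about 147,059 km, matching the post's rounded 147,000 km.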
Anyway, the BMS is now a reliable and useful item, and if anyone would like to make one of their own, all the information needed can be downloaded here:
http://ecomodder.com/wiki/index.php/Ope ... .28Rev1.29

I do have some pcbs available for purchase as well.
Last edited by Nevilleh on Fri, 20 Jan 2012, 01:16, edited 1 time in total.

User avatar
weber
Site Admin
Posts: 2623
Joined: Fri, 23 Jan 2009, 17:27
Real Name: Dave Keenan
Location: Brisbane
Contact:

Low cost BMS

Post by weber » Sat, 21 Jan 2012, 04:01

That's good news. It seems our BMS was working all along too. It was the sniffer that was introducing the noise!
Nevilleh wrote:The cells achieve balance quite quickly now, usually within an hour or so which is very different to the first time I did it - days!
I also have the cut-off output connected to the charger, so charging stops as soon as one cell gets to 3.60V and this means I can just leave it plugged in and charging knowing that no cell will be overcharged.
I find that running everything this way and re-charging as soon as any cell drops below 2.8V - an arbitrary limit, but one that I hope allows me to keep dod to around 80% - I am getting a consistent range of 100 - 105 kms on a single charge.

I take it this is still bottom balancing? How do you arrange it so you're always near home (or other charge point) when the first cell drops below 2.8 V?

What makes you think 2.8 V is near 80% DoD? Your balancer surely discharges at less than 0.1 C and the Sky Energy curves show that at low rates of discharge 2.8 V is more like 98% DoD.
One of the fathers of MeXy the electric MX-5, along with Coulomb and Newton (Jeff Owen).

Nevilleh
Senior Member
Posts: 773
Joined: Thu, 15 Jan 2009, 18:09
Real Name: Neville Harlick
Location: Tauranga NZ

Low cost BMS

Post by Nevilleh » Sat, 21 Jan 2012, 12:40

Yes, still bottom balancing.
The 2.8 V figure is what my low-voltage alarm is set for, and that is under full load. If a cell gets this low while accelerating, it's time to go home! Note that I seldom see more than about 400 A on the ammeter. Each cell should be supplying 130 of those, something like 3C, so I expect a voltage drop of about 0.4 V.

With no load, the voltage goes back up close to 3.2 V, and that is - sort of - in and around the 37 Ah mark, which is about 80% of the 46 Ah that I am using as the full capacity. With that sort of capacity margin still left, I can drive quite a way and still keep the volts above 3.0 if I keep acceleration to a minimum. So 2.8 V for a low-voltage alarm is perhaps not as arbitrary as I made out.
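A back-of-envelope check on those figures: 130 A per cell with about 0.4 V of sag implies an effective cell resistance of roughly 3 mΩ, a plausible ballpark for a large-format LiFePO4 cell. (Only the resistance is derived here; the current, sag and capacity come from the posts above.)

```python
# Implied cell resistance and C-rate from the figures in the post.

cell_current = 130          # A per cell under hard acceleration
sag = 0.4                   # V, drop from the ~3.2 V resting voltage
capacity_ah = 46            # Ah, the capacity Neville is using as "full"

r_internal = sag / cell_current      # effective resistance, Ohm's law
c_rate = cell_current / capacity_ah  # discharge rate in multiples of C

print(f"implied resistance ~{r_internal * 1e3:.1f} mOhm at {c_rate:.1f}C")
```

The C-rate comes out at about 2.8, consistent with "something like 3C".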

It's quite difficult to use a DoD of exactly 80%, and I figure between 80 and 90 is as good as I can do. I know from bitter experience that if I go to the 100% mark I can get about 130 km, and 100 km is nearly 80% of that.

I think the only way to really manage the DoD is to have an accurate amp-hour meter installed. I do have such a thing, but haven't wired it up as yet.

Most of my trips are 10-25 km, as I only use it around town, and pretty much all of that is in 50 or 60 zones, so the amount of power used depends more on how heavy-footed you are than anything else.

I reset the trip meter every time I do a charge, so I know pretty much how far I can go. I don't do balancing on every charge, and when I do decide to do it, I drive around the block a few times to get the cells down pretty close to the balancing voltage. This is pretty close to a 100% discharge, and I'm not sure how good that is for the things - probably not very! It might be better if I set the balancing voltage up a bit higher, but I haven't got around to that yet.

I have yet to try top balancing, so I have no opinion on that.

I have learnt that with an EV you really have to manage your trips carefully and plan ahead. You can't just jump in and turn the key: if you have forgotten to fill up, it's a good few hours before you can use the thing.
It's going to be a long, long time before EVs become as convenient as ICE vehicles, if ever.
I'd say that, now the novelty of doing it has worn off, I wouldn't bother again unless it was to build something designed specially for electric drive, with very low weight, a super-efficient motor/controller and better batteries than are currently available - and that would just be to explore the limits of the technology.


User avatar
bladecar
Senior Member
Posts: 445
Joined: Tue, 05 Jul 2011, 16:32
Location: Brisbane

Low cost BMS

Post by bladecar » Sat, 21 Jan 2012, 13:36

Hi Nevilleh,

It's good to read owners' opinions on the usability of the EVs they own.

My Vectrix scooter with the NiMH batteries is a pain: it can be showing 50% on the meter and then suddenly crash to two bars. I'm expecting that the Li batteries will be much better.

The cars I'm buying should take care of some of the uncertainty you have described regarding remaining range (from the point of view of the battery-balancing setup), though you did say you're confident of just how far you can go.

I guess it's like owning a Harley Sportster with its tiny fuel tank. That range limit can sometimes overshadow the happy feelings.

I misuse my Vectrixes because I do feel 'superior' while riding along amongst the petrol/diesel cars. (Yes, I know it's silly to pretend such a thing, but it's fun.) I felt privileged riding past the petrol station yesterday and observing E10 at $1.49.9 and Premium at $1.60.9. The price is supposed to drop again in the months ahead but, like Superman, its fate is up, up and away.

p.s. I too, buy local wine, though the reds can come from WA, which is not quite local.

Nevilleh
Senior Member
Posts: 773
Joined: Thu, 15 Jan 2009, 18:09
Real Name: Neville Harlick
Location: Tauranga NZ

Low cost BMS

Post by Nevilleh » Sat, 21 Jan 2012, 15:04

Make no mistake, I do enjoy driving the BMW, and I feel a certain degree of smugness as I whizz by a petrol station! But you simply can't live with an EV as your only means of transport; you have to have an ICE car as well.
I have a couple of electric scooters, an EVT and an E-Max, and they were/are a bit of fun on a nice day and really good for getting around town, as long as you don't have too far to go. You can park on the footpath (illegally) and no one seems to care.
So it's horses for courses, and the electric car won't take over the world until it can do 600 km on a charge and you can recharge it in 5 minutes, including paying. And it has to be cheap to buy!
The doomsayers all predict the time when liquid fuels disappear, but that is not in the foreseeable future, as the oil companies just keep on finding more of the stuff (e.g. shale extraction) - all it does is cost more.

I've thoroughly enjoyed doing the conversion, though, and it's hard to put a price on that. I did think about doing another one with an AC motor and the like, but when I look at the amount of work (look at Coulomb and Weber's project!) and the cost, and I see that the vehicle produced isn't really much better - if any - than mine, I think it's not worth doing. I can see a small improvement in efficiency by going down that path, but in practical terms there's no difference between a range of 100 km and a range of 150 km; they're equally inconvenient.

User avatar
bladecar
Senior Member
Posts: 445
Joined: Tue, 05 Jul 2011, 16:32
Location: Brisbane

Low cost BMS

Post by bladecar » Sat, 21 Jan 2012, 16:25

Hi Nevilleh,

I have electric scooters, an electric-assisted pushbike, and the Blade cars that are due when ready, but I also bought a second-hand Prius which is due to get 14.4 kWh of Li batteries, with a promise of absolutely minimal fuel requirements. Oh, and an electric mower, but we have a small yard (and solar hot water as well, and rechargeable batteries).

The Prius is my plan for going out and going away. The PHEV will be my answer to the situation that you rightly describe.

Time will tell me how right I was to take this path.

Our pre-bought panels now make so much more sense.

When we bought the panels (it was now or never), they said that it would cost double to have a battery system - one that stood alone and was not part of the grid. I never realised until then that when the grid goes down (a blown transformer, for example), our inverter shuts itself down until the grid comes back up. The grid will disable our free power.

On buying the Blade cars, and realising that there weren't no lead-acid batteries in them :), I started to think about such a battery pack for our panels. Finally, it dawned on me that my combined battery capacity will be approx. 58 kWh.

Can anyone give me their thoughts on what the house wiring would need to look like in order to isolate from the grid under such circumstances and simply run off our plugged-in battery cars (Prius included)?

I.e., I realised I didn't need to buy a battery pack for the panels in an attempt to go stand-alone; it is arriving with the cars.

Post Reply