Wire recommendation from panel
-
I'm not trying to pawn off any reasons. I do have an open mind. None of you have tried to explain the 101 electronics; you've only flamed me for what I have shared. And the mod asked to stop, so I stopped. Maybe you should do as the mod asked as well.
-
Can we cut the BS here? As Mike said, this needs to stop.
ncs55, I respect your vast field experience (and I'm sure there's a lot I can learn from you there), but understand you are dealing with some full-on electrical engineers who know their shi!t. Quit pawning off your reasons on the grounds that we don't know the "internals of software design of inverters", etc. Most of this is basic electronics 101, V=IR, etc. I believe you sincerely want to help and are not a snake-oil salesman, but it is also you who needs to have an open mind and learn.
If that's not enough, I'll take you up on your rhetoric: please give the names of the contacts who tell you otherwise and we'll contact them and plow forward.
-
Sadly, NCS55, the mfg's have been BS'ing you. To measure resistance in an active wire, you need a 4-lead tester or a dedicated circuit, which is not likely present in any MPPT controller. As has been said before, if an inverter is rated at X watts and the input range is 300-500VDC, and you feed it 302V, it is supposed to perform as well as it does at 500V; otherwise they are fudging the specs to capture more market share.
Now it's true that a 302VDC 4kW circuit will have more amps flowing than a 495VDC 4kW circuit, and the internal components are SUPPOSED to be sized to handle this. Whether they actually are is a whole different story: it's often the case that mfg's will over-spec stuff and deal with the 1% warranty claims for the failures, because they still make more money than they would by losing market share.
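For reference, here is that comparison worked out as a few lines of Python (a rough sketch using the 4 kW figure from above; the 302 V and 495 V endpoints are the ones quoted):

```python
# Rough check of the input-current difference across an MPPT input window
# for the same 4 kW of array power (figures from the example above).

P = 4000.0  # array power, watts

for v in (302.0, 495.0):
    i = P / v  # DC input current at that string voltage
    print(f"{P:.0f} W at {v:.0f} V -> {i:.1f} A")

# 4000 W at 302 V -> 13.2 A
# 4000 W at 495 V -> 8.1 A
# The low end of the window carries roughly 60% more current, so the input
# stage has to be sized for that worst case.
```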
And solar MPPT gear works no differently than any other electronic circuit at the same voltages and wattages. There's nothing magic about it, other than that it will light your house on fire if designed wrong. UL lab testing is supposed to weed out the fire hazards, but not poor design failures.
You have been allowed lots of space to spin your tale as you know it, but I'm (as an electronics engineer and moderator) saying it's going to stop.
A full 1/3 of my design work was high-voltage spacecraft DC bus stability electronics, and nothing like what you describe is feasible, even with an 8-hour re-education.
-
Most of the time, when these burned boards are discovered on the DC input side of the inverter, the techs start asking me about the wire size, array current and array voltage directly upstream. 99.9% of the time they relate the bad board to, and I quote, "out of range voltage drop," although sometimes it is a grounding (or lack of grounding) issue. So when I ask them what the optimal voltage drop is to keep this from happening, they always tell me 1% or less for the longest life of their product.
In a 4kW inverter running a 400VDC input (10A), an extra 1% voltage drop (4V) in the lead-in wire [2% drop total] would cause a whopping extra 0.1 amp. If 0.1 amp can burn an inverter up, it's a failed design.
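Here is that arithmetic spelled out as a short Python sketch (same 4 kW / 400 V figures as above):

```python
# The "extra 0.1 amp" arithmetic: the same 4 kW delivered with an extra
# 1% of wire drop (4 V) ahead of the 400 VDC input.

P = 4000.0         # inverter input power, watts
V_nom = 400.0      # string voltage at the terminals, volts

V_low = V_nom - 0.01 * V_nom   # an extra 1% drop leaves 396 V
I_nom = P / V_nom              # 10.00 A
I_low = P / V_low              # ~10.10 A

print(f"extra current from a 1% drop: {I_low - I_nom:.2f} A")  # ~0.10 A
```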
-
Most of the time, when these burned boards are discovered on the DC input side of the inverter, the techs start asking me about the wire size, array current and array voltage directly upstream. 99.9% of the time they relate the bad board to, and I quote, "out of range voltage drop," although sometimes it is a grounding (or lack of grounding) issue. So when I ask them what the optimal voltage drop is to keep this from happening, they always tell me 1% or less for the longest life of their product.
That's the explanation you're going with for why you were wildly optimistic about the benefits?
-
I am afraid it's you who doesn't appear to understand this enough.
An inverter cannot know what the voltage drop is within the wire that connects it to the modules.
It knows what the voltage-current curve characteristics are.
And adjusts to the maximum power point.
All it can measure is at the terminals connecting it to the modules; there is no way for it to know how much power is being dissipated in the wire between the modules and the inverter.
And because that's the only place it can measure, it will not know if there is a voltage drop on the DC side that is outside the "optimal range". There's simply no way for it to know whether the curve it observes is the result of a 2% vs. 3% voltage drop in the wires or of the system using Mitsubishi modules instead of LG modules.
So if a 1% vs. 3% voltage drop on the DC side impacts an inverter's longevity, please explain what mechanism it is that damages the inverter's components. Claiming it's too complex for us to understand isn't going to carry much weight with people who have electrical engineering degrees and work in related fields.
Decreasing the voltage drop to provide higher production MAY be worthwhile economically.
However, whoever is footing the bill should do the actual calculations, because someone may try to claim more benefit than there really is (e.g., claiming it's a year's worth of production over a 25-year lifetime when really it is 1/4 of a year's production).
The production claim: I only made that claim to get someone to do the math and actually show the percentage. It was an exercise to show the value of planning for this at the beginning, and that the benefits outweigh the costs. You were the only one smart enough to take the calc and do the math. My customers already know the benefits, as they are mostly referrals from other satisfied customers.
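For anyone who wants to run that exercise themselves, a minimal sketch follows; the annual production, lifetime and energy price in it are illustrative placeholders, not numbers from this thread:

```python
# Rough value of shaving one percentage point off the DC wire loss over a
# system's life. Every input here is an illustrative placeholder.

annual_kwh = 6000.0    # assumed annual array production, kWh
loss_reduction = 0.01  # e.g. going from ~3% to ~2% voltage drop recovers ~1% of energy
years = 25             # assumed system lifetime
price_per_kwh = 0.15   # assumed value of the energy, $/kWh

extra_kwh = annual_kwh * loss_reduction * years
print(f"extra energy: {extra_kwh:.0f} kWh (~${extra_kwh * price_per_kwh:.0f})")
print(f"equivalent to {loss_reduction * years:.2f} of one year's production")
# 0.01 x 25 = 0.25 -- about a quarter of a year's production over 25 years,
# not a full year's worth. Weigh that against the cost of the heavier wire.
```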
-
Exactly my earlier point. A string inverter has valid input from 300VDC to 500VDC (generally); it's such a wide range that a difference of 1 or 2% wire loss is meaningless, lost in the noise. Heck, my old inverter accepted 280V to 540V, and the MPPT was valid from 320 to 480VDC, which was a much tighter band to dial the array configuration into.
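As a rough sanity check on "lost in the noise", here is the comparison in a few lines of Python, using the 320-480 VDC window quoted above:

```python
# How big is a 1-2% wire drop compared to the MPPT tracking window quoted above?

V_string = 400.0                     # nominal string voltage, volts
mppt_low, mppt_high = 320.0, 480.0   # MPPT window of the old inverter mentioned above
window = mppt_high - mppt_low        # 160 V of usable tracking range

for drop in (0.01, 0.02):
    drop_v = V_string * drop
    print(f"{drop:.0%} drop = {drop_v:.0f} V ({drop_v / window:.1%} of the {window:.0f} V window)")

# 1% drop = 4 V (2.5% of the 160 V window)
# 2% drop = 8 V (5.0% of the 160 V window)
```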
And circuits are circuits, whether in space or on the ground; the components follow the same basic rules. They are either designed to survive reasonable conditions, or not.
-
It is not far-fetched if you understand how the software and the components on the boards react to voltage drop that is outside the optimal designed range. Please read what I originally said, which was that I am seeing this cause premature inverter failure. It has nothing to do with how many modules are in the string, as long as the strings meet the design criteria of the inverter. You do not see it because you are not trained in internal componentry, how to fix the failures, or even what causes them. These designs are not mine either, and I have no issues with my designs, or their production, or how long they last. I do not understand why you guys cannot understand this simple principle. I'm tired of the closed minds in here. I see a lot of posts where others relate their experiences and the self-proclaimed experts shoot them down with basic 101 knowledge. Pay a few thousand dollars, get some training on this subject, and then come back and discuss it. You obviously do not know what you are even commenting on, so maybe you should not comment.
An inverter cannot know what the voltage drop is within the wire that connects it to the modules.
It knows what the voltage-current curve characteristics are.
And adjusts to the maximum power point.
All it can measure is at the terminals connecting it to the modules; there is no way for it to know how much power is being dissipated in the wire between the modules and the inverter.
And because that's the only place it can measure, it will not know if there is a voltage drop on the DC side that is outside the "optimal range". There's simply no way for it to know whether the curve it observes is the result of a 2% vs. 3% voltage drop in the wires or of the system using Mitsubishi modules instead of LG modules.
So if a 1% vs. 3% voltage drop on the DC side impacts an inverter's longevity, please explain what mechanism it is that damages the inverter's components. Claiming it's too complex for us to understand isn't going to carry much weight with people who have electrical engineering degrees and work in related fields.
Decreasing the voltage drop to provide higher production MAY be worthwhile economically.
However, whoever is footing the bill should do the actual calculations, because someone may try to claim more benefit than there really is (e.g., claiming it's a year's worth of production over a 25-year lifetime when really it is 1/4 of a year's production).
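To make the "it can only see its own terminals" point concrete, below is a minimal perturb-and-observe sketch in Python. It is a generic textbook loop with a made-up toy I-V curve, not any manufacturer's firmware; the point is simply that wire resistance upstream never appears as an input anywhere:

```python
# Minimal perturb-and-observe MPPT loop (a generic textbook sketch, not any
# vendor's firmware). Its entire view of the world is the voltage and current
# it measures at its own DC terminals.

def toy_terminal_current(v):
    """Toy stand-in for whatever the controller measures at its terminals.
    Any upstream wire resistance is already baked into this curve."""
    return max(0.0, 12.0 - 0.025 * v)   # crude linear I-V, illustration only

def mppt_step(v_target, state, step=2.0):
    """One P&O iteration: nudge the voltage setpoint toward higher power."""
    v = v_target                      # assume the stage settles at the setpoint
    p = v * toy_terminal_current(v)   # power seen at the terminals
    if p < state["last_power"]:
        state["direction"] *= -1      # the last move hurt us, so reverse
    state["last_power"] = p
    return v_target + state["direction"] * step

state = {"last_power": 0.0, "direction": +1}
v_target = 380.0
for _ in range(100):
    v_target = mppt_step(v_target, state)

print(f"settled near {v_target:.0f} V")  # oscillates around the 240 V MPP of the toy curve

# Nothing in this loop can tell whether the curve it is climbing was shaped by
# 2% vs. 3% of wire resistance, a different module brand, or a bit of soiling;
# it all just shows up as the same terminal V-I curve.
```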
-
To ncs55
And I do thank you for contributing your knowledge and experience concerning what you do for a living. It is always welcome to get good feedback from those in the field. You are also correct that most failures are not due to the manufacturer's design or equipment but to how the end user takes shortcuts to get things to work.
But through my experience performing root cause analysis, I have found a number of equipment issues caused by poor manufacturing and a lack of good quality control.
Sometimes a manufacturer is under the gun to deliver equipment per the purchase order documentation. When a delay happens, they might take shortcuts to make sure the delivery is on time and the equipment works as designed. But sometimes they miss the mark and a piece of crap gets out of their control. Not often, but it only takes a few times to get a black eye.
Case in point were some of the early versions of micro inverters (manufacturer name withheld to protect the guilty) that quickly failed due to heat issues.
-
Maybe the manufacturers' components do have problems, but I do not think so. Most inverters are very rugged. 99.9% of the time, inverter failure is due to improper design and/or installation techniques, i.e. improper voltages, voltage drops, improper grounding, etc. I have not been able to go further in explaining the other contributing causes we see in the field that lead to this failure, because I have had to defend myself from these experts.
I live off grid, and have for a long time, but that makes me no expert on the subject. We have been able to keep lead acid batteries healthy and in the field longer than most others in this area, who settle for a 5-7 year lifespan consistently. When we are asked how this is possible and how we achieve it, we are either flamed or called liars and have to prove what we have already openly and willingly shared. I see the same type of closed-minded thinking in this forum.
As far as what I have shared on this subject, I have simply shared what I am seeing, how it gets fixed, and what was determined as the cause of failure. Concerning this subject specifically, the people who do not understand what I am trying to say, or think I am wrong, obviously do not understand how an inverter processes the energy coming into it, or how the internal circuits work and are affected long term when the parameters are not at optimal conditions. Looking at the previous replies shows me that what I said was never even understood.
I understand about manufacturers and issues with products, as I choose to share my field data to help them improve their products, and in return I gain knowledge from them about their products that most people will never get. It is called collaboration. Head-to-head battles are a long and hard road.
In the end, I see that when someone in here tries to share what is not common knowledge, or experiences from the field, with this collective, they had better be prepared to be called names and be flamed.
-
To ncs55
You have to understand that Mike has been living off grid for a long time and uses LiFe battery technology. He has first hand experience with the batteries and a number of charge controllers.
You seem to have experience with repairing inverters and may have seen the same issue come up, caused by overheating of the internal components. Certainly a high voltage drop will contribute to heat in the wires and electronics, but I will tell you that if components are failing because someone used a wire that causes 1 to 2% more voltage drop, then I would say the manufacturer of that inverter is not building their equipment sturdy enough for the general public.
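As a rough illustration of where those extra watts end up, here is a short sketch reusing the 4 kW / 400 V / 10 A figures from earlier in the thread:

```python
# Where the power from a DC voltage drop actually goes: it is dissipated as
# heat along the wire run itself, while the inverter just sees a slightly
# lower terminal voltage. Figures reuse the 4 kW / 400 V / 10 A example
# from earlier in the thread.

I = 10.0    # string current, amps
V = 400.0   # nominal string voltage, volts

for drop in (0.01, 0.02):
    v_drop = V * drop        # volts lost along the run
    p_wire = I * v_drop      # watts turned into heat, spread over the wire's length
    print(f"{drop:.0%} drop: {v_drop:.0f} V, {p_wire:.0f} W dissipated in the wiring")

# 1% drop: 4 V, 40 W dissipated in the wiring
# 2% drop: 8 V, 80 W dissipated in the wiring
# Spread over a long copper run, the extra 40 W is trivial; if one extra
# percent of drop were killing boards, that would point back at the inverter.
```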
I have had a number of head to head battles with some small companies like Allen Bradley, Siemens, Westinghouse, ABB, Fluke, etc., where I found component issues that had nothing to do with the user but with the manufacturer. While those companies did not immediately agree with me, I got enough of their attention for them to review my comments and make some software and hardware modifications to their equipment.
Now I will get off my soapbox and get back to enjoying this forum and the knowledge I get from it.
-
It is not far-fetched if you understand how the software and the components on the boards react to voltage drop that is outside the optimal designed range. Please read what I originally said, which was that I am seeing this cause premature inverter failure. It has nothing to do with how many modules are in the string, as long as the strings meet the design criteria of the inverter. You do not see it because you are not trained in internal componentry, how to fix the failures, or even what causes them. These designs are not mine either, and I have no issues with my designs, or their production, or how long they last. I do not understand why you guys cannot understand this simple principle. I'm tired of the closed minds in here. I see a lot of posts where others relate their experiences and the self-proclaimed experts shoot them down with basic 101 knowledge. Pay a few thousand dollars, get some training on this subject, and then come back and discuss it. You obviously do not know what you are even commenting on, so maybe you should not comment.