Mirroring/intercepting SunPower Monitoring Traffic?

  • #46
    Originally posted by astroboy View Post
    anyone who can build either one of those things probably also has the technical chops to just write the script from scratch...
    And yet it would be easy for you to check the script in to github.... maybe someone else would contribute polish to address those other issues. Give it a try, and be sure to credit that other project!

    Comment


    • #47
      Originally posted by astroboy View Post
      the sunpower monitoring outage a couple of weekends ago was interesting - the supervisor seemed to keep retransmitting a whole load of data points for some number of hours, and then it gave up. when the monitoring came back on line it eventually transmitted everything that was pending, but it seems to have taken many hours of successful "live" transmissions for it to decide to replay the failed messages.
      Indeed! Since my script _is_ stateful, and I serve up my own page with a graph, my stuff kept going despite the sunpower outage, just based on the data that the supervisor kept trying to send. Combine that with the finer resolution of the differential data (there's some heavy rounding going on on the sunpower site!), and I'm quite happy with the results!

      Originally posted by astroboy View Post
      my script is once again stateful; without checking the latest data point already submitted to PVOutputs, i pretty quickly overran the API request limit while the supervisor was sending the same stuff over and over again. also the lack of packet reassembly in Net::PcapUtils kind of sucks; with the big bursts of data the packets are fragmented. for whatever reason, under normal circumstances, if the supervisor sends a long packet, the 130 messages come early enough in the packet that they are not split by a fragmentation boundary. however when the supervisor was backed up there were lots of 130 messages in big, fragmented packets and so i missed some of them. Net::Pcap is kind of a pain in the butt so i'll probably leave well enough alone, it seems to work well enough.
      I have two scripts: the first simply handles the Pcap stuff and streams just the supervisor->sunpower traffic, extracted as text, to stdout; the second acts on that data. The second script spawns the first, and since my workflow requires promiscuous mode on the network interface, which requires sudo, this has the added benefit that only the Pcap script needs to run under sudo. The decoupling means the first script doesn't incur buffer overflows in the Pcap kernel buffer (which I've made big enough anyway that that would be very unlikely to happen!), and the second script only has to deal with a stream of text lines, which is easy.
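
      For illustration, the split looks roughly like this (just a sketch; the script names and the handler below are made up, not my actual code):

      Code:
      #!/usr/bin/perl
      # processor.pl (illustrative name) -- spawn the sniffer under sudo and read
      # its text output line by line; only the sniffer ever needs root for
      # promiscuous mode, this script stays unprivileged.
      use strict;
      use warnings;

      open(my $sniff, '-|', 'sudo', 'perl', 'sniffer.pl')
          or die "can't spawn sniffer: $!";

      while (my $line = <$sniff>) {
          chomp $line;
          # each line is one supervisor->sunpower message, already extracted as text
          handle_message($line);
      }
      close $sniff;

      sub handle_message {
          my ($msg) = @_;
          print "got: $msg\n";   # placeholder for the real processing
      }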

      Also, it turns out my net (and thus consumption) numbers were right after all (and they track closely with the sunpower numbers, albeit without the massive rounding errors of the sunpower system); the flaw was in my logic that summarizes each day's cumulative numbers. So the data was correct, just my processing sucked... I can fix that, I'm sure...

      Thanks for the response, and for all your help working through this (and the revelation that the wonky 102 messages really are most likely simply a checksum). Much obliged!

      Comment


      • #48
        Originally posted by astroboy View Post
        given all that, anyone who can build either one of those things probably also has the technical chops to just write the script from scratch...
        I agree. The logic of the script is the easier part; setting up the system needed to capture the packets and deal with the data is, while not exactly difficult, certainly more involved, and it's not something that can be captured on github...

        Comment


        • #49
          Originally posted by robillard View Post

          I agree. The logic of the script is the easier part; setting up the system needed to capture the packets and deal with the data is, while not exactly difficult, certainly more involved, and it's not something that can be captured on github...
          Also, this thread serves as a pretty decent write-up, both of the approach to take for scripting and of at least one approach to the hardware setup.

          Comment


          • #50
            Originally posted by DanKegel View Post
            And yet it would be easy for you to check the script in to github.... maybe someone else would contribute polish to address those other issues. Give it a try, and be sure to credit that other project!
            I think you're missing the point... The script will be naturally tailored to the workflow required by the hardware; mine certainly is...

            Comment


            • #51
              Originally posted by robillard View Post
              I think you're missing the point... The script will be naturally tailored to the workflow required by the hardware; mine certainly is...
              That's ok. You can start off with a hardware-specific script. Someone else can generalize it. The point is just to get a proof-of-concept out there.

              Comment


              • #52
                Originally posted by J.P.M.

                I wouldn't get upset about Dan missing the point. As I recall, he doesn't claim to know much but he does claim to like science, if I correctly interpret the sense of what he once wrote on his handle, take him at his word and, IMO only, as he seems to have demonstrated many times.
                I've no idea how you got "upset" from the tone of my post; I was simply stating that posting a script that is specific to the hardware setup isn't going to be helpful in this case...

                Comment


                • #53
                  Originally posted by robillard View Post
                  I've no idea how you got "upset" from the tone of my post; I was simply stating that posting a script that is specific to the hardware setup isn't going to be helpful in this case...
                  He didn't really mean you were upset. He was just stating his opinion that I miss the point frequently.

                  In this case, I was making a different point -- sharing code is a good idea; even if the code isn't going to be useful as-is, it can still be illuminating. Yes, people could reinvent it, but being able to look at your script while they do might save them some time.

                  Comment


                  • #54
                    Originally posted by DanKegel View Post
                    That's ok. You can start off with a hardware-specific script. Someone else can generalize it. The point is just to get a proof-of-concept out there.
                    I appreciate that, but I really cannot post my script, as it is wrapped up in 4 different workflows all tied into the same script, and has my systemid and other stuff coded into it that I'm not willing to share (nor sanitize). I am happy to help others write a script, and share the knowledge I've gained in this process, but I cannot post my script anywhere.

                    Once you've got the stream of sniffed network data, picking out the 130 and 140 entries is really easy; it's just a regex away:

                    Code:
                    if (/^(1[34]0)\t(20[0-9]{12})\t[0-9]+\t[^\s]+\t[^\s]*\t(-?[0-9]*\.[0-9]*)\t/) {
                        my $whichmsg    = $1;  # either 130 (for production) or 140 (for net metering)
                        my $utcdatetime = $2;  # the date/time in UTC; use DateTime to convert to local time
                        my $currvalue   = $3;  # the data value (lifetime production if 130, lifetime net if 140)
                    }
                    
                    # then, per interval:
                    #   production  = (current production value) - (previous production value)
                    #   net         = (current net value) - (previous net value)
                    #   consumption = production + net
                    then you're off to the races.
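
                    To make that concrete, here's a rough stateful sketch (illustrative only, not my actual script). It assumes the sniffer feeds one tab-separated message per line on stdin, and that the 14-digit timestamp is YYYYMMDDHHMMSS in UTC:

                    Code:
                    #!/usr/bin/perl
                    # Sketch: keep the last lifetime readings and emit per-interval deltas.
                    use strict;
                    use warnings;
                    use DateTime;
                    
                    my (%prev, %curr);   # lifetime values keyed by message type (130/140)
                    
                    while (<STDIN>) {
                        next unless /^(1[34]0)\t(20[0-9]{12})\t[0-9]+\t[^\s]+\t[^\s]*\t(-?[0-9]*\.[0-9]*)\t/;
                        my ($whichmsg, $utc, $value) = ($1, $2, $3);
                        $curr{$whichmsg} = $value;
                    
                        # convert the UTC timestamp to local time
                        my ($y, $mo, $d, $h, $mi, $s) =
                            $utc =~ /^(\d{4})(\d{2})(\d{2})(\d{2})(\d{2})(\d{2})$/;
                        my $dt = DateTime->new(year => $y, month => $mo, day => $d,
                                               hour => $h, minute => $mi, second => $s,
                                               time_zone => 'UTC');
                        $dt->set_time_zone('local');
                    
                        # once both readings for this cycle are in, compute the deltas
                        if (exists $curr{130} && exists $curr{140}) {
                            if (exists $prev{130} && exists $prev{140}) {
                                my $production  = $curr{130} - $prev{130};
                                my $net         = $curr{140} - $prev{140};
                                my $consumption = $production + $net;
                                printf "%s production=%.3f net=%.3f consumption=%.3f\n",
                                       $dt->strftime('%Y-%m-%d %H:%M'), $production, $net, $consumption;
                            }
                            %prev = %curr;   # current lifetime readings become the new baseline
                            %curr = ();
                        }
                    }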

                    If I do not receive a 130 (production) message before the next set of 130/140 messages and the datetime indicates that we're outside daylight hours (by a conservative margin), my script assumes 0 production. Otherwise, I wait a couple of cycles to see if one comes in, and if it still hasn't, I then assume zero production (which does mean that my latency varies over time, based on the data that gets produced). A missing 140 (net) message is, for me currently, a "die" condition, because if I get a 130 but no 140, something is wrong...
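
                    The missing-message handling boils down to something like this (purely illustrative; the daylight window and the cycle count below are placeholders, not my real values):

                    Code:
                    # Sketch of the missing-130 policy; $local_dt is a DateTime in local time.
                    sub production_when_130_missing {
                        my ($local_dt, $cycles_waited) = @_;
                    
                        # outside a conservative daylight window, zero production is a safe bet
                        return 0 if $local_dt->hour < 6 || $local_dt->hour >= 20;
                    
                        # during daylight, wait a couple of cycles before giving up
                        return undef if $cycles_waited < 2;   # undef = keep waiting
                        return 0;                             # still nothing: assume zero
                    }
                    
                    # a 130 without a matching 140 means something is wrong, so bail out:
                    # die "got a 130 (production) but no 140 (net) message";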

                    That's pretty much all there is to it.
                    Last edited by robillard; 04-17-2016, 12:04 AM.

                    Comment


                    • DanKegel commented:
                      Thanks for explaining. I'd be happy to sanitize the script for you, if you like (assuming you're willing to trust a stranger); I'd keep it confidential and just send you back the sanitized script. My site's http://kegel.com if you want to check me out.

                  • #55
                    Originally posted by robillard View Post
                    I appreciate that, but I really cannot post my script, as it is wrapped up in 4 different workflows all tied into the same script(...)
                    For the curious (in case there are any), those 4 workflows are:
                    a) spawn the sniffer, and read and store the sniffed data
                    b) convert sniffed data to intermediate format for local storage (for use by web server)
                    c) download sunpower data from site and convert to intermediate format for local storage (for use by web server, alternate view for comparison purposes)
                    d) upload to PVOutput.org from locally-stored intermediate storage format

                    Yes, this is way more decomposed than it needs to be, but decoupling these functions has allowed me to mix and match over time, and provides a certain amount of redundancy (for example, when the sunpower site went down, or when PVOutput went down, it did not perturb the other functions).

                    So again, it's not really in a shareable form, and I don't have the time to put it into such a form. I truly am sorry, and I would be more than happy to help someone else with their own script.

                    Comment


                    • #56
                      Originally posted by robillard View Post

                      I've no idea how you got "upset" from the tone of my post; I was simply stating that posting a script that is specific to the hardware setup isn't going to be helpful in this case...
                      Sorry, guess I should have used the sarcastic script.

                      Comment


                      • #57
                        Originally posted by robillard View Post
                        I would be more than happy to help someone else with their own script.
                        I filed https://github.com/jbuehl/solaredge/issues/10 to relay your generous offer. Thanks!

                        Comment


                        • #58
                          actually, i take it back; my script is written in perl, so i never looked at that project, which seems to be written in python. it was enphase-output.pl from which i took the ~5 lines of code that post to pvoutput, which i think is linked from pvoutput.org somewhere.

                          anyway my script is not really based on anyone's script... it's cobbled together from example code showing how to write a Net::PcapUtils filter and packet-processing function. the core of it though is simply a regular expression similar to the one posted above: pick apart message 130 (only, since i do not have power monitoring in my system), convert the UTC time in the message to local time, set the cumulative flag, and post the lifetime energy reported in message 130.
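
                          for reference, the pvoutput side is basically just one http call to the addstatus endpoint. a rough sketch (the api key, system id, and variable names here are placeholders, not from my actual script):

                          Code:
                          #!/usr/bin/perl
                          # sketch: post one cumulative lifetime-energy reading to pvoutput.
                          use strict;
                          use warnings;
                          use LWP::UserAgent;
                          
                          my $apikey   = 'YOUR_API_KEY';     # placeholder
                          my $systemid = 'YOUR_SYSTEM_ID';   # placeholder
                          
                          # $local_dt is a DateTime already converted from the message's UTC time;
                          # $lifetime_wh is the lifetime energy from message 130, in Wh.
                          sub post_to_pvoutput {
                              my ($local_dt, $lifetime_wh) = @_;
                          
                              my $ua  = LWP::UserAgent->new(timeout => 30);
                              my $res = $ua->post(
                                  'https://pvoutput.org/service/r2/addstatus.jsp',
                                  {
                                      d  => $local_dt->strftime('%Y%m%d'),   # date
                                      t  => $local_dt->strftime('%H:%M'),    # time
                                      v1 => $lifetime_wh,                    # energy generation
                                      c1 => 1,                               # v1 is a lifetime (cumulative) value
                                  },
                                  'X-Pvoutput-Apikey'   => $apikey,
                                  'X-Pvoutput-SystemId' => $systemid,
                              );
                              warn "pvoutput post failed: " . $res->status_line unless $res->is_success;
                          }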

                          Comment


                          • #59
                            Originally posted by DanKegel View Post

                            I filed https://github.com/jbuehl/solaredge/issues/10 to relay your generous offer. Thanks!
                            In that link you state that my script is based on that Python work for cross posting to PVOutput. It's not. Even remotely...

                            I've never seen the script you mentioned. I don't like Python as a scripting language and avoid it. I wrote my PVOutput cross-posting code from scratch, based on the documentation of the restful api from PVOutput. And I wrote it in Perl.

                             Furthermore, as the github page owner states, my work was all about reverse-engineering the SunPower proprietary application-level protocol, and (as I've said repeatedly) it is not easily integrated into the PVOutput posting process.

                             Also, my offer to help anyone who wants to write a script was just that: an offer to provide advice based on my findings, not an offer to become a github contributor.

                            I'll repeat my offer: if someone needs help writing a script to parse SunPower supervisor traffic for collection of monitoring data, I'm more than happy to provide some insight from my experience (and the humongous help provided by astroboy, without which I would have still been stuck).

                            But the whole point of my documenting this so well in this thread was to make it fairly easy for others to replicate what I (we) did, so I'm thinking that reading this thread will suffice.

                            Comment


                            • #60
                              Originally posted by robillard View Post

                              In that link you state that my script is based on that Python work for cross posting to PVOutput. It's not. Even remotely...
                              robillard - it's a misunderstanding; DanKegel was actually responding to my message and apparently confusing the two of us... and i had erroneously stated that i had taken the http submit code from that python project for my script, when in fact i pulled it from enphase-output.pl.

                              Last edited by astroboy; 04-20-2016, 09:14 PM.

                              Comment
