Mirroring/intercepting SunPower Monitoring Traffic?


  • I had drawn conclusions similar to intipower's findings. (I wish I had seen this thread first before having to discover it on my own!) A couple of additional/similar findings are below. I'm wondering if my supervisor is a bit different than yours based on what I'll highlight below.

    First, this is what mine looks like. Note the two LAN ports on the top right. Is that how all of yours are too?


    [image: supervisor unit, with two LAN ports at the top right]


    Assuming so, try this. Hook up the unused network port on your supervisor (one is probably "wired" to your LAN one way or another) to an AJAX-capable browser device (think: laptop with a GUI... more on that AJAX requirement below). You could throw a hub/switch in the middle, but it really isn't necessary. Now you can:

    1)
    Visit your Supervisor's main page. If you don't know the IP of your supervisor, visit:
    http://sunpowerconsole.net

    That'll take you to the supervisor's main local page on your LAN, and indeed the page that SunPower uses to discover/setup devices when the technicians are out.

    2)
    The supervisor will bridge your internet connection. Assuming your supervisor has Internet access (i.e. it is reporting data to the SunPower servers), whatever device is hooked into the second port of the supervisor will now have the same LAN/WAN access. Try it out: visit https://www.solarpaneltalk.com while in that configuration. (Be sure to turn off wifi so you know you aren't accessing through wifi!)

    3)
    As mentioned earlier, there are some key URLs in the above setup. I.e.:

    http://sunpowerconsole.net/cgi-bin/d...and=DeviceList (which spits out a list of known devices and their serial numbers, but NOT in JSON format as mentioned by other members; it comes back as HTML. I'm wondering if this is because I've got a different model and/or software/firmware version on my supervisor?)

    example output:


    [image: example DeviceList output]

    or another nifty URL:

    http://sunpowerconsole.net/cgi-bin/d...ber=########## (where one replaces the ########## with one of the serial numbers from the above URL. Here again, the information is presented in HTML format and not JSON format.)

    example output:


    [image: example device detail output]

    The problem I'm having is that one cannot simply wget/curl/lynx/etc. these URLs, because they serve an AJAX "Loading..." screen first. It is pretty fast, but if one tries to, say, 'lynx' one of those URLs, you'll end up with:


    [image: lynx stuck on the "Loading..." screen]

    Hence my mentioning "something with an AJAX-capable browser" above.

    So my question: do any of you see that 'Loading...' on yours? I'm looking at some sort of PhantomJS (or other headless browser) solution to retrieve the data from my CLI-only Raspberry Pi and store/present/alert on it how I'd like. Too bad; it was THIS close to being simple:

    ->||<-

    Am I overlooking anything, or does that seem to be the best approach?
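
    In case it helps anyone headed down the same road, here's a minimal sketch of the headless-browser idea in Python, using Selenium to drive headless Chromium (PhantomJS would play the same role). The wait time is a guess, and the URL placeholder needs the full DeviceList path from above:

        # Minimal sketch: render the AJAX page headlessly, then scrape the result.
        # Assumes selenium is installed and chromedriver is on the PATH.
        import time
        from selenium import webdriver
        from selenium.webdriver.chrome.options import Options

        # Substitute the full DeviceList path from your own supervisor here.
        DEVICE_LIST_URL = "http://sunpowerconsole.net/cgi-bin/..."

        opts = Options()
        opts.add_argument("--headless")          # no display needed on a CLI-only Pi
        driver = webdriver.Chrome(options=opts)
        try:
            driver.get(DEVICE_LIST_URL)
            time.sleep(2)                # crude: give the "Loading..." AJAX time to finish
            print(driver.page_source)    # rendered HTML, ready to parse/store/alert on
        finally:
            driver.quit()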

    Comment


    • Test - I just typed out this long reply yesterday and it went to 'Unapproved'. Is there a moderation process that may result in my post being seen, or is it gone?

      Comment


      • Originally posted by boing View Post
        Test - I just typed out this long reply yesterday and it went to 'Unapproved'. Is there a moderation process that may result in my post being seen, or is it gone?
        Your post had web links, which sends it to the moderators for review before it gets approved. Which it is now.

        Comment


        • Ah, thank you SunEagle!

          Comment


          • Boing, I don't think I can speak for everyone on this thread, but you do not have a PVS5, which looks like this: https://fccid.io/document.php?id=2725912. And I'm using the snooping method, which intercepts the real-time pushes, so I can't speak to your AJAX problem either. Good luck!

            Comment


            • Originally posted by JJNorcal View Post
              DanKegel, just now seeing this. I must have missed a solarpaneltalk email indicating thread update. Sorry.

              I just now changed "View and edit files in this project" to "Everyone with access" and enabled "Allow users to request access". Hopefully you can find a way in now
              Much better, gitlab.com/JJNorcal/SpPvoConnector now shows me your files. Thanks!

              Comment


              • Originally posted by DanKegel View Post
                Much better, gitlab.com/JJNorcal/SpPvoConnector now shows me your files. Thanks!
                I thought they finally banned you. Well I will report you for posting links again.

                MSEE, PE

                Comment


                • Quick update on my project.

                  Over time, I worked through what appeared to be unreliable/inconsistent behavior from the perspective of the PVS5. Occasionally a 130 packet went missing. Occasionally a 130 packet was truncated.

                  I made "improvements" to fill in missing data points based on lifetime energy supplied in each packet.

                  Then I had a day where an entire morning was missing. Reviewing the saved tcpdump file, it appeared that all the 130 packets were truncated (tcpdump prematurely ends the packet with "[!http]"). So I started a parallel tcpdump to write packets to a file instead of parsing and piping to node, and on reviewing the resulting file with Wireshark, none of the packets were actually truncated. There is an apparent problem rendering SP packets with tcpdump (both -v and -A). My best guess is that tcpdump is attempting to honor the Content-Length header and is incorrectly counting the tabs as up to 8 characters each, so the current length of the valid data fields can lead to a different truncation point.

                  So I switched to tshark, which is essentially the command-line counterpart of Wireshark. It can output each parsed packet en masse, which makes subsequent parsing in node easier. So far, tshark appears to be a reliable sniffer/parser, though the command-line syntax proved somewhat challenging for obtaining the body of the HTTP POST.
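
                  A sketch of the general shape of such an invocation, shown driven from a script for illustration (the interface name and display filter are placeholders, not my exact command):

                      # Sketch: have tshark print only the body of each HTTP POST,
                      # line-buffered so a downstream parser sees records promptly.
                      import subprocess

                      cmd = [
                          "tshark", "-i", "eth0", "-l",
                          "-Y", 'http.request.method == "POST"',   # only the monitor's POSTs
                          "-T", "fields", "-e", "http.file_data",  # emit the raw POST body
                      ]
                      with subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True) as proc:
                          for line in proc.stdout:
                              if line.strip():
                                  print(line.strip())  # hand each record to your own parser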

                  The one caveat to that statement of reliability is that the node app appeared to stop on its own after switching to tshark. This has occurred once so far, 90 minutes after I started it up for the first time fully debugged. It is a significant caveat given that this never happened with tcpdump. Linux logs showed signs that the user session hosting node was terminated, but I was unable to determine a cause. One difference is that tcpdump needs to be launched with admin privileges whereas tshark does not, but I have not been able to reproduce the anomaly despite exiting the ssh session, putting the computer to sleep, forcing network anomalies, etc.

                  I will continue to monitor and report back after a week or so.

                  I have not pushed the changes to gitlab yet. It will require changes to instructions, and I want to get more bake time before committing.

                  Comment


                  • Originally posted by JJNorcal View Post
                    I have not pushed the changes to gitlab yet. It will require changes to instructions, and I want to get more bake time before committing.
                    Sounds promising!

                    (FWIW, around here, standard practice is to push works in progress to an alternate branch or personal fork. That way other developers can review if they like, and plus if your workstation explodes or you forget which of a dozen places you were working, your source is safe in the branch.)

                    Comment


                    • Pushed tshark solution to new testing branch.

                      Comment


                      • Originally posted by JJNorcal View Post

                        Then I had a day where an entire morning was missing. Reviewing the saved tcpdump file, it appeared that all the 130 packets were truncated (tcpdump prematurely ends the packet with "[!http]"). So I started a parallel tcpdump to write packets to a file instead of parsing and piping to node, and on reviewing the resulting file with Wireshark, none of the packets were actually truncated. There is an apparent problem rendering SP packets with tcpdump (both -v and -A). My best guess is that tcpdump is attempting to honor the Content-Length header and is incorrectly counting the tabs as up to 8 characters each, so the current length of the valid data fields can lead to a different truncation point.
                        I see this from time to time as well with my snooping code, which uses Perl and the Pcap module. I'm pretty sure the problem is that if the monitor gets backed up (meaning SunPower is not acknowledging the inverter data), then the datagrams being sent grow and grow until they exceed the MTU of the network, which is generally somewhere around 1500 bytes. At that point you need software that can reassemble fragmented IP packets. It's possible that tshark is simply handling this transparently for you. I don't know if tcpdump itself can reassemble IP fragments.
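
                        If that's the cause, one untested thought: since these posts ride over HTTP/TCP, the splitting may actually be TCP segmentation rather than IP fragmentation, and tshark has preferences to force reassembly of both. A sketch, with placeholder interface and filter:

                            # Sketch: ask tshark to reassemble multi-segment HTTP bodies
                            # before printing, so oversized posts come out whole.
                            import subprocess

                            subprocess.run([
                                "tshark", "-i", "eth0", "-l",
                                "-o", "tcp.desegment_tcp_streams:true",  # reassemble TCP segments
                                "-o", "http.desegment_body:true",        # reassemble HTTP bodies
                                "-Y", 'http.request.method == "POST"',
                                "-T", "fields", "-e", "http.file_data",
                            ])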



                        Comment


                        • Tshark-based sniffing has performed very well. The app stoppage has not recurred, and I have had perfect PVO reports for a week.

                          There was one malformed 130 packet, which coincided with the first 130 leading into the zeros that start a new day. An extra field led to lifetime energy being reported instead of an instantaneous zero power. PVO rejected the datapoint as "too big". I wouldn't have noticed the event had I not replayed saved packets on Windows while playing with Visual Studio node debugging. So: one apparently bad packet in a week, found while experimenting and confirmed by subsequently reviewing the live log file.

                          Comment


                          • I've seen a handful of malformed 130s now. They occur maybe once or twice a week, always coinciding with the first 130 of a new day.

                            As an example, these two 130s were transmitted in the same packet:

                            130\t20170611124000\t1913171660\tSMA-SB-7700TL-US-22\t\t\t\t\t\t\t\t0\t16.7\t59.9999\t0\n
                            130\t20170611124500\t1913171660\tSMA-SB-7700TL-US-22\t\t11752.6205\t0\t\t\t0\t297.1212\t0\t16.6875\t59.9906\t0\n


                            The bogus 130 line contains 8 non-blank fields, whereas good 130s contain 12.

                            The missing fields have resulted in my app sending the frequency (59.9999) above as instantaneous power, with PVO rejecting it as too big.

                            I decided to block all 130s that don't contain 12 fields.
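
                            In outline the filter is just a field count; here it is as a Python sketch (the names are illustrative, and the actual app is node):

                                # Sketch: accept only 130 records carrying exactly 12 non-blank
                                # tab-separated fields, per the good/bad examples above.
                                def is_good_130(line: str) -> bool:
                                    fields = line.rstrip("\n").split("\t")
                                    non_blank = [f for f in fields if f.strip()]
                                    return bool(fields) and fields[0] == "130" and len(non_blank) == 12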

                            Questions: (1) Has anybody else received any malformed packets? (2) Does anybody know how to determine if there is a PVS firmware update available?

                            Comment


                            • My equipment is completely different, but my vendor sends out silly packets once a day, too. I figure they aren't affected by the crud, so they don't care.

                              Comment


                              • Here is what my packet output looks like (with some 130 lines snipped out):

                                100 SPMS 10 ZT170285000441C0308 20170619003543
                                120 20170619003500 ZT170285000441C0308 0 360 0 1 0 1035361 0.03 24348 12496

                                130 20170619003000 414051707015080 AC_Module_Type_C 28.6732 0.0579 244.6653 0.3435 0.0605 54.061 1.1044 44.75 59.988 0
                                130 20170619003000 414051707015260 AC_Module_Type_C 28.6014 0.0555 245.3928 0.4019 0.058 53.8978 1.0619 47 59.9862 0
                                130 20170619003000 414051707014786 AC_Module_Type_C 28.9052 0.1654 244.9078 0.7069 0.172 53.8727 3.1672 56.625 59.9862 0
                                ...
                                141 20170619003000 PVS5M562239c PVS5M0400c 1 5.8942 5.9624 121.9598 122.7836 -0.662 -0.2178 392.38
                                140 20170619003000 PVS5M562239c PVS5M0400c 100 -211.52 -0.8799 0.8759 1.4509 -0.606 59.969 0
                                140 20170619003000 PVS5M562239p PVS5M0400p 50 0 0 0 0 1 59.969 0
                                102 AngZdD3DD7u2WG/LDEnV

                                Using parts of the code developed by Eric Hampshire, while learning Python, and comparing this data with the old SunPower monitoring system, I think the 130 line maps like this:

                                0) 130 -- reporting type
                                1) 20170618195000 -- date/time
                                2) 414051708000625 -- serial number
                                3) AC_Module_Type_C -- description

                                4) 27.8296 -- total lifetime energy in kWh

                                AC Power(5) = Voltage (6) * Current (7)

                                5) 0.3109 -- average AC power in kW (currently picking up from sun AC, max .360 kW)
                                6) 248.4238 -- average AC voltage in V
                                7) 1.2782 -- average AC current in A

                                DC Power(8) = Voltage (9) * Current (10)

                                8) 0.3245 -- average DC power MPPT 1 in kW (currently picking up from the sun)
                                9) 52.6426 -- average DC voltage MPPT 1 in V
                                10) 6.142 -- average DC current MPPT 1 in A

                                11) 65.125 -- panel temperature in C
                                12) 60.006 -- average operating frequency in Hz
                                13) 0
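
                                Put as a parser sketch, the mapping above looks like this (the field names are my own labels, and the mapping itself is still a best guess):

                                    # Best-guess mapping of the 130 line; the names are provisional.
                                    FIELDS_130 = [
                                        "record_type",   # 0: always 130
                                        "timestamp",     # 1: date/time
                                        "serial",        # 2: serial number
                                        "description",   # 3: e.g. AC_Module_Type_C
                                        "lifetime_kwh",  # 4: total lifetime energy
                                        "ac_power_kw",   # 5: = field 6 * field 7
                                        "ac_voltage_v",  # 6
                                        "ac_current_a",  # 7
                                        "dc_power_kw",   # 8: = field 9 * field 10 (MPPT 1)
                                        "dc_voltage_v",  # 9
                                        "dc_current_a",  # 10
                                        "panel_temp_c",  # 11
                                        "frequency_hz",  # 12
                                        "unknown",       # 13: always 0 so far
                                    ]

                                    def parse_130(line: str) -> dict:
                                        # Tab-separated on the wire; the dump above prints space-separated.
                                        values = line.rstrip("\n").split("\t")
                                        return dict(zip(FIELDS_130, values))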

                                While I think I have the 130 line figured out, does anyone know how to read the 140 line and why there are two of them in my output:

                                140 20170619003000 PVS5M562239c PVS5M0400c 100 -211.52 -0.8799 0.8759 1.4509 -0.606 59.969 0
                                140 20170619003000 PVS5M562239p PVS5M0400p 50 0 0 0 0 1 59.969 0

                                Thanks.
                                Last edited by apara; 06-19-2017, 08:05 AM.

                                Comment
