**** BEGIN LOGGING AT Tue Apr 21 02:59:58 2015
Apr 21 12:43:35 ds2: /names
Apr 21 12:43:48 well... not that
Apr 21 16:21:05 vvu, ping
Apr 21 16:47:41 alexanderhiam: ping
Apr 21 17:29:09 geekswine: pong
Apr 21 17:31:55 the adc library which I showed, was that a bit convincing ??
Apr 21 17:32:02 alexanderhiam: was busy with projects and end sem starting in 2 days :(
Apr 21 17:33:58 went through the datasheet ..
Apr 21 17:34:23 it's what I was looking for
Apr 21 17:34:58 the other libraries ??
Apr 21 17:37:53 alexanderhiam: so I should start looking for other libraries ?? right ??
Apr 21 17:38:04 like making an overview of the library??
Apr 21 21:56:06 alexanderhiam nerdboy: I've updated the proposal with the things we discussed
Apr 21 21:58:29 I edited the proposal body with as much info as possible, without going too deep into the technical specifics, which I guess would be better sorted out via feedback from you guys as the project goes on, especially since there are multiple ways to do some of the parts
Apr 21 21:59:32 I did my best to point out the parts with multiple possible ways of solving them, and pointed out the ones I think are the best bets, based on what I've read/heard/tested so far
Apr 21 22:03:18 the timetable got the biggest overhaul, since including the kernel driver and changes in how the processing should be done called for it
Apr 21 22:04:13 comments on whether it looks realistic would be appreciated
Apr 21 22:05:45 /names?
Apr 21 22:05:51 Also, there's quite a good chunk of extras/things to be done in the proposal that I've researched and are important, but didn't include in the timetable because I don't think it makes sense to force too much into the GSoC timespan
Apr 21 22:06:22 ds2: ha! yeah sorry for that. I wanted to see who's online, but forgot to notice I started writing a reply to you
Apr 21 22:06:31 * neemo had a brainfart
Apr 21 22:06:34 oh
Apr 21 22:07:12 I'm gonna write that response now, if you don't mind. Need a bit more input on the PRU, to see if I'm thinking this project through correctly
Apr 21 22:07:44 'k
Apr 21 22:08:05 you only have 2K of instructions to play with...
Apr 21 22:08:12 any insight you can give is highly appreciated
Apr 21 22:08:31 what algs are you planning to use?
Apr 21 22:08:33 Isn't it 8k data & 8k instructions
Apr 21 22:08:42 just a sec I'll dig up the link to the paper
Apr 21 22:08:43 8kBYTE
Apr 21 22:08:52 instructions are 32-bit, or 4 bytes
Apr 21 22:09:01 gotcha
Apr 21 22:09:07 so 2k instructions
Apr 21 22:09:17 gimme a sec
Apr 21 22:09:20 'k
Apr 21 22:10:37 ds2: So since the PRU is limited
Apr 21 22:10:51 I was counting on doing the data correction only on the PRU
Apr 21 22:11:13 and calculating the correction parameters (periodically, not real-time) on the ARM core
Apr 21 22:11:23 what do you mean by data correction?
Apr 21 22:25:53 of course my vpn crashed, in the least convenient moment
Apr 21 22:26:15 ds2: please copy-paste anything I missed :/
Apr 21 22:29:34 ds2> what do you mean by data correction?
Apr 21 22:29:52 ah, so it didn't send anything
Apr 21 22:30:00 the correction equations are mentioned in a couple of papers
Apr 21 22:30:00 06:06 < neemo> eg. here http://exploration.engin.umich.edu/blog/wp-content/uploads/2012/08/springmann_mag_calibration_4S.pdf
Apr 21 22:30:19 page 7
Apr 21 22:31:15 eqs 12 to 14, they're the more complex variety of the error compensation to be done
Apr 21 22:31:28 I should implement something like that in the PRUs
Apr 21 22:32:07 but I'm counting on calculating the correction parameters on the ARM core, since it's a parameter optimization via cost function minimization, which I'm guessing is waaay out of the PRU's league
Apr 21 22:32:44 what are you correcting?
Apr 21 22:32:52 ds2: Is something like that doable on the PRUs? am I thinking this correctly
Apr 21 22:33:08 I'm correcting magnetometer data from the IMU sensors
Apr 21 22:33:15 correcting for what?
Apr 21 22:33:15 MPU9250 in my case
Apr 21 22:33:38 correcting for static and dynamic sources of magnetic interference
Apr 21 22:33:52 dynamic sources? please elaborate
Apr 21 22:33:55 both from issues with the magnetometer and the extra sources onboard a cubesat
Apr 21 22:33:59 static sounds like calibration
Apr 21 22:34:42 static sources would be constant sources of magnetic interference, think large blocks of metallic/magnetic materials which make up a CubeSat
Apr 21 22:35:06 what are you calling dynamic?
Apr 21 22:35:49 dynamic sources would be magnetic interference induced in wires going from solar panels and similar on/off stuff
Apr 21 22:36:44 can you explain in simple terms how you plan to correct for the dynamic stuff? (keep in mind, I am not an alg guy)
Apr 21 22:37:25 dynamic, as in time-variant sources of magnetic interference, based on some additional measurements/information
Apr 21 22:37:52 eg. in that paper, the example is current telemetry for the solar panels on board the cubesat they're working with
Apr 21 22:38:54 they use that to model, or expect, additional interference when the panels are active, and can use the information from the telemetry to compensate the readouts from the magnetometer
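To make the telemetry-based compensation above concrete, here is a minimal numpy sketch of subtracting a static bias plus a current-dependent "dynamic" bias. The single solar-panel term, the parameter values, and all names are illustrative assumptions, not the exact equations 12-14 from the Springmann paper (which also model scale factors and multiple per-source coefficients).

```python
import numpy as np

# Compensation parameters estimated beforehand on the ARM core
# (made-up values for illustration):
b0 = np.array([12.0, -3.5, 7.2])      # static (hard-iron) bias, uT
c_panel = np.array([0.8, 0.1, -0.4])  # added bias per amp of panel current, uT/A

def compensate(raw_uT, panel_current_A):
    """Remove the static bias and the bias induced by the panel current."""
    return raw_uT - b0 - c_panel * panel_current_A

sample = np.array([30.0, -10.0, 45.0])
print(compensate(sample, 0.0))   # panels off
print(compensate(sample, 2.0))   # panels drawing 2 A: extra term subtracted
```

Per sample this is only a few subtractions and multiplies per axis, which is the kind of work meant to fit in the PRU's 2K-instruction budget; estimating b0 and c_panel is the heavier, non-real-time job planned for the ARM core.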
Apr 21 22:43:15 explain
Apr 21 22:43:44 I don't see how having a paper on hand is enough to implement something if you cannot explain what it is about
Apr 21 22:44:25 so, for example, you want to position your CubeSat, and use the magnetometer readouts for fine positioning data, you know the panels are working, and compensate for that, you also know the rotation wheel will be active, and must compensate for that interference as well, based on the information on what state your cubesat is in (and on extra telemetry inputs) you can get a much better precision from the readouts of your magnetometer (by processing the input data through the PRUs before using it for positioning)
Apr 21 22:45:27 that's fine on a high level - how exactly does dynamic compensation work?
Apr 21 22:45:46 is it a learning machine? a filter? voodoo? random number generator?
Apr 21 22:46:01 and the goal in this project is to offload that real-time processing of magnetometer data to the PRU, to free up the ARM core for more interesting stuff.
Apr 21 22:46:15 voodoo gives the best results
Apr 21 22:46:53 define "real time processing of magnetometer data"
Apr 21 22:47:21 that's what the algorithm achieves, how does it do that?
Apr 21 22:47:33 alexanderhiam: yes but with what proportions of chicken blood to the various herbs? :D
Apr 21 22:47:49 well, the compensation (depending on what you're compensating for) is just applying the compensation parameters you've calculated beforehand for that specific state of your cubesat to the raw data you're getting from the magnetometer sensor
Apr 21 22:47:54 voodoo is best
Apr 21 22:48:11 but gimme a sec, I'm slow on the keyboard this morning
Apr 21 22:48:13 :)
Apr 21 22:48:46 voodoo would be nice, but unfortunately the compensation parameters we get from some number crunching on the ARM core
Apr 21 22:49:24 so there's a training step beforehand? or does it learn as it goes?
Apr 21 22:49:28 specifically, we use a least-squares cost function minimization to find the parameters that best fit our expected data points
Apr 21 22:49:31 What kind of number crunching?
Apr 21 22:49:36 beforehand, should be training
Apr 21 22:49:44 * neemo still writing :)
Apr 21 22:50:01 so what are you training with? launch 23509493128490280923849023480923 sats and each one iteratively gets better?
Apr 21 22:52:49 the training is done with a batch optimization algorithm on magnetometer data measured by the cubesat (a bunch of measurements) and is optimized against the expected magnetic field estimate in that region of space around earth (gimme a sec to dig out what it's called)
Apr 21 22:53:43 "For calibration with on-orbit sensor data, we use the magnitude of the International Geomagnetic Reference Field (IGRF),"
Apr 21 22:53:52 * neemo a lot of pdf digging
Apr 21 22:55:12 ok. in pseudo code describe what needs to be done?
Apr 21 22:55:30 On the PRU?
Apr 21 22:55:43 the whole algorithm
Apr 21 22:55:44 or the whole thing?
Apr 21 22:55:54 start with the training
Apr 21 22:56:12 ok
Apr 21 22:56:25 if I had a satellite with 3 reaction wheels controlled by 16-bit PWM, what would the training consist of and what would the resulting data look like?
Apr 21 22:57:14 training
Apr 21 22:59:03 1. make the cubesat rotate freely to get as much diversity in measurement angles for all three spatial dimensions of the magnetometer data (x, y, z)
Apr 21 23:00:40 In a constant magnetic field, you expect a batch of such data to map to a sphere for a perfect sensor, with a radius equal to the magnitude of the measured field
Apr 21 23:01:59 a non-perfect sensor (with interferences static and dynamic) will not have its measurements mapped to a perfect sphere, more a distorted ellipsoid
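As an illustration of the sphere-vs-ellipsoid picture just described, a small sketch under a simplified model (per-axis scale plus a hard-iron offset; the full calibration in the paper also handles axis misalignment and the time-varying terms): distorted readings fall on an ellipsoid, and the inverse correction maps them back onto a sphere whose radius is the expected field magnitude.

```python
import numpy as np

def correct(raw, scale, offset):
    """Map ellipsoid-distorted readings back toward the ideal sphere."""
    return (raw - offset) / scale

# Synthetic check: points on a sphere of |B| = 45 uT (a typical near-Earth
# field magnitude), distorted with an assumed scale/offset, then undone.
rng = np.random.default_rng(0)
v = rng.normal(size=(1000, 3))
true_B = 45.0 * v / np.linalg.norm(v, axis=1, keepdims=True)

scale = np.array([1.10, 0.95, 1.02])   # assumed per-axis scale errors
offset = np.array([8.0, -2.0, 5.0])    # assumed hard-iron offset, uT
raw = true_B * scale + offset          # "measured" ellipsoid

corrected = correct(raw, scale, offset)
print(np.ptp(np.linalg.norm(raw, axis=1)))        # large spread in |raw|
print(np.ptp(np.linalg.norm(corrected, axis=1)))  # ~0: back on the sphere
```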
Apr 21 23:02:03 quick interjection - how do you know there is sufficient diversity? the thing is moving and depending on where it is, the amount of ferromagnetic material would change
Apr 21 23:04:07 what do you mean by "the amount of ferromagnetic material would change"? As in a piece (which is not predicted to be an active source at some point, eg. a solar panel) that was not previously magnetized would get magnetized?
Apr 21 23:04:38 eg. not a solar panel, or a subsystem which you'll control **
Apr 21 23:04:38 as in certain materials can focus the magnetic field
Apr 21 23:05:17 since the source of the field we are looking at is based on the molten core... a mountain of the material can create a non-uniform field
Apr 21 23:06:05 and since the planet is rotating and the sat is not steady-state relative to the planet, this would appear as a dynamic error, right?
Apr 21 23:06:17 ah, you mean the earth's field itself as being non-uniform
Apr 21 23:06:18 ?
Apr 21 23:06:42 I thought you meant stuff onboard the cubesat
Apr 21 23:06:45 sort of
Apr 21 23:06:47 so
Apr 21 23:06:55 no, from the planet
Apr 21 23:07:00 say a mound of ore
Apr 21 23:07:07 non-uniform earth magnetic field, you can't do anything about that
Apr 21 23:07:11 s/mound/mountain/
Apr 21 23:07:19 at least not in this algorithm
Apr 21 23:07:27 okay
Apr 21 23:07:35 go on
Apr 21 23:07:53 you've got the measurements and you can probably spot such a change in them, but the calibration step assumes a uniform predicted magnetic field to work
Apr 21 23:08:19 so it has to be done in a period of magnetic stability (no crazy sun stuff) and you use the IGRF as the reference for your calibration
Apr 21 23:09:11 you can obviously do the calibration again, if you think something messed it up. but the calibration step for the compensation parameters is based on the IGRF magnitude prediction
Apr 21 23:09:18 so anyway
Apr 21 23:09:21 training
Apr 21 23:09:24 1. get a bunch of data
Apr 21 23:09:31 based on the assumptions above
Apr 21 23:10:45 the difference between your ideal sensor (ideal sphere mapping based on data) and your all-interfered one (some kind of ellipsoid) is the error
Apr 21 23:11:39 2. use that data to do cost function optimization (ML essentially) to get the correction parameters out of it
Apr 21 23:12:14 3. push the correction parameters to the PRU and make the PRU correct incoming raw magnetometer data for more precise mag data
Apr 21 23:12:58 4. use that for all the other useful stuff you're doing (positioning for a camera, magnetic field measurements for weather and other cool stuff)
Apr 21 23:13:01 ..
Apr 21 23:13:03 so it's just a multiplier for each axis?
Apr 21 23:13:07 6. profit?
Apr 21 23:13:15 well, yeah
Apr 21 23:13:35 divisor, and subtraction for the most basic stuff
Apr 21 23:13:53 so the complex part is the training
Apr 21 23:14:10 there is more stuff in the equations depending on how many time-varying things you want to compensate for
Apr 21 23:14:15 so in other words, you are generating linear transforms on the ARM side and doing the linear transforms on the PRU?
Apr 21 23:14:22 eg. panels, transmitter etc etc
Apr 21 23:14:55 ds2: that was the idea yes
Apr 21 23:15:00 and every time you enable/disable something you alter the linear transformation
Apr 21 23:15:40 alexanderhiam: well, you could keep the transformation enabled constantly (and set the params for the unused sources to 0) and just run that
Apr 21 23:15:42 are you doing anything else with the gyro/accel?
Apr 21 23:15:46 but that would be a waste I think
Apr 21 23:16:06 the gyro/accel are not involved in mag. compensation
Apr 21 23:16:25 I mean are you doing gyro/accel compensation
Apr 21 23:16:38 I think I found some algorithms that could use them (not sure, read too many of the papers), but the ones I'm focusing on based on the project description don't use them
Apr 21 23:17:14 what's your background again?
Apr 21 23:17:48 as in education? EE
Apr 21 23:19:17 what do you anticipate as the hard part for the ARM side code?
Apr 21 23:19:20 I'm doing magnetic compensation, based on the NASA papers, they don't mention any gyro/accel compensation.
Apr 21 23:19:38 Am I missing something on that part?
Apr 21 23:20:16 ARM code, probably getting it to converge in as little time as possible
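A sketch of what step 2's "cost function optimization" could look like on the ARM side, assuming the simplified per-axis scale-plus-offset model from the earlier sketch and SciPy's least-squares solver: each residual is the difference between a corrected sample's magnitude and the IGRF magnitude, which is the attitude-independent criterion quoted from the paper above. The parameter set is deliberately minimal, not the paper's full formulation.

```python
import numpy as np
from scipy.optimize import least_squares

IGRF_MAG = 45.0   # assumed local field magnitude from the IGRF model, uT

def residuals(params, raw):
    """One residual per sample: |corrected sample| minus the IGRF magnitude."""
    scale, offset = params[:3], params[3:]
    corrected = (raw - offset) / scale
    return np.linalg.norm(corrected, axis=1) - IGRF_MAG

def fit_params(raw):
    """Batch least-squares fit of per-axis scale and offset (6 parameters)."""
    x0 = np.concatenate([np.ones(3), np.zeros(3)])   # start from "no correction"
    res = least_squares(residuals, x0, args=(raw,))
    return res.x[:3], res.x[3:]                      # scale, offset

# Synthetic batch standing in for step 1's rotation data:
rng = np.random.default_rng(1)
v = rng.normal(size=(500, 3))
true_B = IGRF_MAG * v / np.linalg.norm(v, axis=1, keepdims=True)
raw = true_B * np.array([1.10, 0.95, 1.02]) + np.array([8.0, -2.0, 5.0])

scale, offset = fit_params(raw)
print(scale, offset)   # should roughly recover [1.10, 0.95, 1.02] and [8, -2, 5]
```

The fitted parameters are what step 3 would push down to the PRU; how quickly a fit like this converges is the ARM-side concern mentioned just above.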
Apr 21 23:20:20 are you familiar with the accuracy of MEMS sensors?
Apr 21 23:20:37 no, how much development time, not run time
Apr 21 23:20:56 for the algorithms, I wouldn't say a lot
Apr 21 23:21:08 the kernel driver part is the one that I'm least experienced with
Apr 21 23:21:35 writing kernel drivers is a well documented process, google hello world kernel driver
Apr 21 23:21:36 and that will probably take most of my time, both in research (which I'm doing now) and in coding/testing
Apr 21 23:21:55 I'm concerned with the training
Apr 21 23:22:12 I know it's well documented, it's just the part I have the least experience with
Apr 21 23:22:12 I am concerned with your time allocation
Apr 21 23:22:24 I have seen this stuff done and it took more than a summer
Apr 21 23:23:08 for the training, aren't a lot of optimization libraries and ML stuff readily available, as in it's ill advised to go and develop something like that on your own
Apr 21 23:23:45 that's the impression I got from the ML course on Coursera, I'm no expert by a long shot though
Apr 21 23:24:00 ds2: which parts?
Apr 21 23:24:06 I think the training part of the algorithm is at risk of snowballing. I'm not talking about libraries, I'm talking about the actual implementation
Apr 21 23:24:16 all of it
Apr 21 23:25:19 alexanderhiam: weeks 3-5 are meant for the training algorithms (probably should have named it better though)
Apr 21 23:26:10 say I built a cubesat, and I want to use this software. What do I do? Do I run a provided training program? How does it know what devices to train with? Is it an API I use to write the training code myself?
Apr 21 23:27:18 can I just leave it sitting on my bench during the training? Or do I need to build some sort of elaborate hardware rig?
Apr 21 23:28:43 when the whole project is done (not just my phase), there should be an API, you would call the training algorithm from it to train it for each mode of the satellite
Apr 21 23:30:05 and unfortunately, you would have to have a hardware rig to train it (eg a 3-axis non-magnetic rotational rig) to be able to spin the cubesat and simulate the spinning in space on the ground
Apr 21 23:30:41 at least that's how they did it with the RAX satellite they used for developing the algorithms in the NASA papers
Apr 21 23:31:39 you could probably get away with it if you have a precalibrated, standardized CubeSat which has already been tested for the correction parameters beforehand (or a cubesat similar in design and equipment)
Apr 21 23:32:39 but I'm quite sure people will want to add new measurement equipment and other tools to their cubesats, because, why send 10 identical CubeSats into space
Apr 21 23:33:26 also, not sure if differences between individual mag. sensors would make it mandatory anyway, but I'd wager yes
Apr 21 23:34:18 how much better is it to do it this way compared to a simple rotate and average each axis to generate a static linear transform?
Apr 21 23:35:59 right, if you went with that instead it would be a lot more straightforward to implement the training, and that could be built on down the road to cover the full rotation
Apr 21 23:36:50 some error correction > no error correction, assuming it's not totally wrong
Apr 21 23:37:47 there is no training in the machine learning sense
Apr 21 23:39:20 I'm not sure if I'm thinking this straight, but how would averaging remove interference from sources generated onboard the cubesat
Apr 21 23:39:39 the sensor will rotate with the sat, and will stay in the same relation to it
Apr 21 23:40:13 plus the rotation wheels would give their contribution to the error as well (if not compensation for them)
Apr 21 23:40:26 compensation = compensating
Apr 21 23:41:09 assuming there are panels and it's in the sun, panel currents also influence the error
Apr 21 23:41:11 you measure and find the min/max of the field per axis as you rotate the sat
Apr 21 23:41:27 average them. use that as an offset for that axis. repeat for the other 2
Apr 21 23:41:37 it is the most basic cal you can do
Apr 21 23:42:31 if you just subtract the readings you get with everything off then you don't need to rotate it at all
Apr 21 23:42:58 unless you want to compensate for differences between the axes
Apr 21 23:43:09 you need to rotate it about an axis
Apr 21 23:43:29 aka the figure-8 pattern some phones tell you to do
Apr 21 23:44:11 oh right, to calibrate it
Apr 21 23:45:34 will this achieve the same error compensation?
Apr 21 23:45:36 but to get the effect of a reaction wheel on each axis you could just record the field with it off and with it on then subtract the former from the latter. No need to be rotating it
Apr 21 23:46:19 I mean, wasn't the point of the project to isolate the errors in mag. data from the cubesat's sources of mag. interference
Apr 21 23:47:06 and that's essentially all the active components, depending on the working mode of the sat: power system, panels, rotation wheels, transmitter
Apr 21 23:47:30 that is a brain dead way of doing it... question is - how much better will more complex algs do
Apr 21 23:47:41 not saying drop everything and switch to it
Apr 21 23:48:05 there are issues with that simple alg and it fails around certain ferromagnetic materials
Apr 21 23:48:33 fair enough ^^
Apr 21 23:48:57 that simple alg being the figure-8 method?
Apr 21 23:49:07 you are studying to be an EE so you probably have heard of things like Newton's method of approximating square roots, right?
Apr 21 23:49:37 yes I did
Apr 21 23:49:50 alexanderhiam: I'd call it averaging to generate an offset. figure-8 is just a clever way of rotating on all axes. On some other devices (i.e. garmin), you can do the same thing by rotating it on separate axes
Apr 21 23:50:36 neemo: using that as an analogy... this brain dead method is like the first term in that... question is - are the more complex methods adding enough terms or refinement to justify the work
Apr 21 23:51:16 makes sense
Apr 21 23:52:35 and I don't know the answer to that. I can point out that people at NASA think it's worthwhile to be developing for cubesats, but that's akin to saying Jake told me ...
Apr 21 23:52:48 yes
Apr 21 23:52:54 which is not a very logical argument
Apr 21 23:53:07 so, testing?
Apr 21 23:53:31 I mean, the simpler method is obviously better because it's easier on the end user
Apr 21 23:53:52 Jake, who has a history of pouring millions of dollars into years of research then shelving it...
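For comparison, a sketch of the simple baseline described above: rotate the sat and take the per-axis midpoint of min and max as a static offset, and characterize each switchable device (e.g. a reaction wheel) by differencing readings taken with it off and on. Function names and data shapes are made up for illustration.

```python
import numpy as np

def offset_from_rotation(samples):
    """Per-axis offset: midpoint of the min and max seen while rotating the sat."""
    return (samples.min(axis=0) + samples.max(axis=0)) / 2.0

def device_bias(samples_off, samples_on):
    """Average field shift caused by switching a single device on."""
    return samples_on.mean(axis=0) - samples_off.mean(axis=0)

def correct(raw, offset, active_biases):
    """Subtract the static offset and the bias of every currently active device."""
    return raw - offset - sum(active_biases, np.zeros(3))

# usage sketch (arrays are (N, 3) batches of raw samples):
#   offset = offset_from_rotation(rotation_batch)
#   wheel_x = device_bias(batch_wheel_off, batch_wheel_on)
#   clean = correct(sample, offset, [wheel_x])
```

In the Newton's-method analogy above, this is the "first term"; the open question in the conversation is how much the full least-squares model improves on it.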
Apr 21 23:53:52 but the question is again, how precise do you need it to be and who the end user is
Apr 21 23:53:57 the math for it is not my area but my understanding is it can be deduced by proper analysis (sorry, dunno enough to help there)
Apr 21 23:55:03 alexanderhiam: Jake did it again (sigh) :)
Apr 21 23:55:11 the tricky thing here is the sats are moving
Apr 21 23:55:31 a question that needs to be answered is - what sample rate is needed for it to be meaningful
Apr 21 23:55:50 ds2: and yeah, I mean we should compare the simpler and the more complicated method (analytically or empirically)
Apr 21 23:56:29 I'm not in a position to do it analytically either, I can try and research it though, but not sure where to start
Apr 21 23:57:02 also, it depends on the application I guess
Apr 21 23:58:08 the NASA papers, from what I've gathered, are mostly concerned with getting as precise readouts on cubesats as possible. Mostly with the intent of using the mag. data for space weather measurements and models
Apr 21 23:58:50 essentially to replace a part of the precise and huge (and very expensive) sats they use for that sort of thing now with a heap of cheap-ish cubesats
Apr 21 23:59:43 but for proper positioning of the cubesat for something like, say, orbital imaging, I'd guess the sample rate would have to be high
Apr 22 00:00:45 I may be guesstimating here, since I haven't had any aerospace education (yet)
Apr 22 00:07:00 alexanderhiam ds2 nerdboy: So anyway, what concerns you the most with the timeline/proposal?
Apr 22 00:07:14 the ML training algorithm?
**** ENDING LOGGING AT Wed Apr 22 02:59:58 2015