Nothing wrong with the questions as far as I can see!
1. What software did you use to edit the video together? And how long does it take to put together a video like that?
I used the latest beta of Microsoft Live Movie Maker. I have mixed views of it: on the one hand it is great and lets you do some pretty slick things very easily; on the other hand it drove me crazy by crashing a lot, so I spent much longer on the edit than I really needed to. Part of this was down to me. I used my Canon T2i/550D SLR as the camera, and it records full HD H.264 in .mov format. That format is notoriously hard for editing programs to work with (do a few searches on the various Canon forums). Many experts recommend that you transcode the files into a more edit-friendly format first. There are some formats designed especially for this, and I think the high-end editors transcode into those in the background to make editing easier. But I didn't try doing that! As a result I'm still trying to find a good editor...
2. For the modulation algorithm, is it basically taking a reading, turning off the laser, then taking another reading. If the bright spot is present only when the laser is on, it counts it as a hit? But if a bright spot is there both when the laser is on and also off, then it ignores it?
That is pretty much it. It is slightly more complex in that you get multiple reflections from the beacon as the light scans over it, and you also get a fair bit of mains ripple on top (particularly when the beacon is far away). So I used a number of reads during the on and off periods and performed peak detection on the result. I also ran a FIR filter over the results to merge multiple reflections together and again selected the peak...
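For anyone who wants to experiment, here is a rough sketch of that on/off read in leJOS. It is not the exact code from my robot: it assumes the laser is wired in place of the light sensor's floodlight LED so that setFloodlight() switches it, and the port, sample counts, threshold and FIR length are just placeholder values.
[code]
import lejos.nxt.LightSensor;
import lejos.nxt.SensorPort;

public class ModulatedLaserRead {
    // Placeholder values - tune for your own sensor and room.
    private static final int SAMPLES = 8;      // reads per on/off phase
    private static final int THRESHOLD = 40;   // minimum on-minus-off difference for a hit

    private final LightSensor sensor = new LightSensor(SensorPort.S1);

    // One modulated measurement: peak with the laser on minus peak with it off.
    // Ambient light and mains ripple appear in both phases and largely cancel.
    public int readBeacon() {
        int on = peakReading(true);
        int off = peakReading(false);
        int diff = on - off;
        return (diff >= THRESHOLD) ? diff : -1;   // -1 means no hit
    }

    // Take several readings with the laser on or off and keep the peak.
    private int peakReading(boolean laserOn) {
        sensor.setFloodlight(laserOn);
        int peak = 0;
        for (int i = 0; i < SAMPLES; i++) {
            int v = sensor.readNormalizedValue();
            if (v > peak) peak = v;
        }
        return peak;
    }

    // Simple FIR (moving average) over one scan so that adjacent reflections
    // from the same beacon merge into a single peak that can then be selected.
    public static int[] smooth(int[] scan, int taps) {
        int[] out = new int[scan.length];
        for (int i = 0; i < scan.length; i++) {
            int sum = 0, n = 0;
            for (int j = i - taps / 2; j <= i + taps / 2; j++) {
                if (j >= 0 && j < scan.length) {
                    sum += scan[j];
                    n++;
                }
            }
            out[i] = sum / n;
        }
        return out;
    }
}
[/code]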
3. Is there a laser sensor like yours on the market? If not, is it easy to build? It would be cool to have a laser sensor/actuator class in leJOS. We could include some assembly instructions in the API if it isn't too complex.
I'm not aware of a commercial version of the sensor. I don't have a step-by-step guide to building it, but this thread: http://forums.nxtasy.org/index.php?show ... 5&hl=laser
describes one that is very similar. The main difference was my use of a laser pointer rather than a laser module. The only tricky bits are:
1. Opening the Lego sensor case.
2. Extracting the guts of the laser pointer and knowing which bits of the board can safely be removed (a good reason to use the laser module).
3. Soldering the wires to the pcb.
4. Getting everything back in the case!
4. Are there limitations to the positional detection based on the orientation of your three reflectors? I assume this works a little like GPS. Since the sensor can't detect unique reflectors (ID 1, 2, or 3), I assume all three must be up against a wall. Do you think you get 100% room coverage, or are there blind spots and distance limitations for this setup?
The beacons can actually be in any known position, although there are some robot/beacon configurations for which the algorithm does not work. You need to be able to identify which beacon is which (which you can possibly do by using their relative positions). Take a look at this paper for lots of details: http://repositorium.sdum.uminho.pt/bits ... 003328.pdf
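To get a feel for what the algorithm has to solve, here is a brute-force sketch (definitely not the analytic method from the paper, and the beacon coordinates and bearings are made-up numbers). The robot only measures the bearings to the beacons relative to its own unknown heading, so it is the differences between the bearings, matched to the right beacons, that pin down its position; a plain grid search stands in for the proper maths:
[code]
public class BeaconGridSearch {

    // Wrap an angle into (-pi, pi].
    static double wrap(double a) {
        while (a <= -Math.PI) a += 2 * Math.PI;
        while (a > Math.PI) a -= 2 * Math.PI;
        return a;
    }

    // Squared error between the bearing differences predicted from (x, y)
    // and the measured bearing differences.
    static double error(double x, double y, double[][] b, double[] bearing) {
        double p1 = Math.atan2(b[0][1] - y, b[0][0] - x);
        double p2 = Math.atan2(b[1][1] - y, b[1][0] - x);
        double p3 = Math.atan2(b[2][1] - y, b[2][0] - x);
        double e1 = wrap((p2 - p1) - (bearing[1] - bearing[0]));
        double e2 = wrap((p3 - p2) - (bearing[2] - bearing[1]));
        return e1 * e1 + e2 * e2;
    }

    public static void main(String[] args) {
        // Three beacons in a hypothetical 3 m x 3 m arena (coordinates in mm).
        double[][] beacons = { {0, 0}, {3000, 0}, {1500, 3000} };
        // Bearings as a robot at roughly (1500, 1000) with a 30 degree heading would see them.
        double[] bearings = { Math.toRadians(-176.3), Math.toRadians(-63.7), Math.toRadians(60.0) };

        double bestX = 0, bestY = 0, bestE = Double.MAX_VALUE;
        for (double x = 100; x <= 2900; x += 10) {
            for (double y = 100; y <= 2900; y += 10) {
                double e = error(x, y, beacons, bearings);
                if (e < bestE) { bestE = e; bestX = x; bestY = y; }
            }
        }
        // Should report a position close to (1500, 1000).
        System.out.println("Estimated position: " + bestX + ", " + bestY);
    }
}
[/code]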
5. What is your algorithm for correcting the position after it does a move and then checks position?
I used the BasicNavigator (the name may have changed by now) as the basis of my code. To correct the position I simply set the current pose to the pose returned by the scan at the end of each move sequence. I used a simple set of goTo commands to perform the moves...
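Roughly, each move sequence looks like the sketch below. It is written against the newer Navigator/PoseProvider API rather than BasicNavigator, the scanForPose() method is a dummy standing in for the laser scan and triangulation, and the wheel dimensions and waypoints are just placeholders:
[code]
import lejos.nxt.Motor;
import lejos.robotics.localization.PoseProvider;
import lejos.robotics.navigation.DifferentialPilot;
import lejos.robotics.navigation.Navigator;
import lejos.robotics.navigation.Pose;

public class CorrectedMoves {

    // Dummy: in the real robot this would spin the sensor head, locate the
    // three beacons and triangulate a pose from the bearings.
    static Pose scanForPose() {
        return new Pose(0f, 0f, 0f);
    }

    public static void main(String[] args) {
        // Wheel diameter and track width in the same units as the waypoints (mm here).
        DifferentialPilot pilot = new DifferentialPilot(56, 112, Motor.A, Motor.C);
        Navigator nav = new Navigator(pilot);
        PoseProvider odometry = nav.getPoseProvider();

        float[][] waypoints = { {500, 0}, {500, 500}, {0, 500}, {0, 0} };

        for (float[] wp : waypoints) {
            nav.goTo(wp[0], wp[1]);
            nav.waitForStop();

            // Correction step: trust the beacon scan over the odometry and
            // simply overwrite the current pose with the scanned one.
            odometry.setPose(scanForPose());
        }
    }
}
[/code]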
All the best