NEW PRODUCT: TSL1401-DB Linescan Imaging Sensor


Comments

  • Phil Pilgrim (PhiPi) Posts: 21,129
    edited February 2012
    Here's what I would do:
    • Project the laser onto a ground glass in front of the TSL1401 module, or onto an opaque screen.
    • Replace the glass lens in the TSL1401 module with a vertical slit -- like a pinhole lens, but tall and skinny.
    This will give you a Gaussian profile, whose centroid you can interpolate, regardless of the projected spot's vertical position. I can fabricate a slit lens for you on my laser cutter if you like. One thing you need to be aware of is that you're replacing a fussy vertical alignment with a fussy rotational orientation, whether you use my idea or a cylindrical lens. If the lens isn't perfectly orthogonal to the sensor, a vertical misalignment of the laser spot will register as a horizontal offset. It's probably not as fussy, though.
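    The subpixel centroid interpolation Phil describes can be sketched in Python. This is purely an illustration, not code from the thread: the Gaussian profile and its parameters are made up, and `gaussian_profile` stands in for whatever the slit lens actually projects.

```python
# Illustrative model: locate the centroid of a Gaussian spot on a
# 128-pixel linescan array with subpixel precision. All numbers here
# are hypothetical.
import math

def gaussian_profile(center, sigma=4.0, npix=128, peak=200.0):
    """Simulate the 1-D intensity profile a slit lens might project."""
    return [peak * math.exp(-0.5 * ((i - center) / sigma) ** 2)
            for i in range(npix)]

def centroid(pixels):
    """Intensity-weighted centroid; interpolates between pixel centers."""
    total = sum(pixels)
    return sum(i * p for i, p in enumerate(pixels)) / total

profile = gaussian_profile(center=63.7)
print(round(centroid(profile), 2))  # -> 63.7, despite 1-pixel sampling
```

    Because the centroid is an intensity-weighted average over many pixels, it recovers the spot position to a small fraction of a pixel, which is why the spot's position along the slit doesn't matter.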

    -Phil
    “Perfection is achieved not when there is nothing more to add, but when there is nothing left to take away.” -Antoine de Saint-Exupéry
  • kbellve Posts: 5
    edited February 2012
    Just so we agree on the terms:

    Vertical = the same axis as the linear pixel array.
    Horizontal = orthogonal to the linear pixel array.

    It is the centroid of the vertical position that we are measuring. The spot is Gaussian, and the linear array measures only its central vertical strip, if perfectly aligned. It is the up-and-down movement of the centroid that we want to know, not the left-and-right horizontal movement.

    It is the horizontal alignment of the laser that makes measuring the vertical position tricky. The laser could drift to the left or right of the linear array, which would put us on the edge of the Gaussian profile, or off it entirely, reducing the signal-to-noise ratio.

    A cylindrical lens, mounted along the same axis as the vertical pixels, would compress the horizontal profile of the laser beam into a vertical slit beam, which can be projected onto the linear array. This means the 4mm wide laser beam can hit the cylindrical lens and be projected down to ~60µm.

    The alignment between the cylindrical lens and the linear array would be critical, but it is only 1/4" away. Now, the return beam can hit a much wider lens...

    Cheers!
  • Phil Pilgrim (PhiPi) Posts: 21,129
    edited February 2012
    Okay, I see: everything is rotated 90 degrees from what I assumed.

    I think I'd be more inclined to use something that spreads the beam out perpendicular to the linescan axis than trying to concentrate it. I've got some plastic holographic diffuser material here that creates a nice line from a spot beam. It diffuses the beam 40° on one axis and 0.2° on the perpendicular axis. That would create a line across the sensor. A small rod lens would do the same, albeit with a smaller "capture" area.

    My original idea of projecting the beam onto a screen or ground glass and imaging it onto the sensor with a slit lens would also work.

    Just out of curiosity, do you really need the 12mm lens holder, or are you working in a dark enough environment that ambient light is not a factor?

    -Phil
  • kbellve Posts: 5
    edited February 2012
    Phil Pilgrim (PhiPi) wrote:
    I think I'd be more inclined to use something that spreads the beam out perpendicular to the linescan axis than trying to concentrate it. I've got some plastic holographic diffuser material here that creates a nice line from a spot beam.

    Yes, that would work, but you would still need the same laser power. Concentrating the beam means we could spec a lower-power laser. Still, it matters more to us to solve the alignment problem than to reduce the laser power.

    I have to look into plastic holographic diffusers...

    Phil Pilgrim (PhiPi) wrote:
    Just out of curiosity, do you really need the 12mm lens holder, or are you working in a dark enough environment that ambient light is not a factor?

    Well, light is always a factor: it raises the background signal. We are testing with a 660nm laser now, but will move to an 808nm laser. We use a long-pass filter to eliminate light below the laser wavelength. We are not currently using the 12mm lens holder, but thought we could put a cylindrical lens inside it.

    Cheers
  • Phil Pilgrim (PhiPi) Posts: 21,129
    edited February 2012
    Can you modulate the laser on and off? That way you can nearly eliminate the effect of ambient light by interleaving samples with the laser on and with it off, then subtracting the off-samples from the on-samples.
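    The interleaved on/off scheme can be sketched in a few lines of Python; the scan values below are invented for illustration.

```python
# Sketch of ambient-light rejection by modulating the laser:
# take one scan with the laser on, one with it off, and subtract.
def ambient_rejected(on_scan, off_scan):
    """Subtract the laser-off scan from the laser-on scan, pixel by pixel.
    Clamping at zero keeps noise from producing negative pixels."""
    return [max(on - off, 0) for on, off in zip(on_scan, off_scan)]

off = [40, 42, 41, 39]            # ambient light only (laser off)
on  = [41, 120, 118, 40]          # ambient plus the laser spot
print(ambient_rejected(on, off))  # -> [1, 78, 77, 1]
```

    The subtraction leaves only the laser's contribution, so a steady ambient background drops out of the measurement.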

    -Phil
  • robot08 Posts: 4
    edited February 2012
    Hey Phil,

    I just bought this sensor and am hoping to use it to tell whether a light bulb is on from about 8 feet away. It is a 50 W halogen bulb. Do you have any ideas on how I can accomplish this? I am also using an AVR microcontroller to communicate with it. Do I need some other kind of adapter to connect to it? I would really appreciate the help; thanks in advance.
  • Phil Pilgrim (PhiPi) Posts: 21,129
    edited February 2012
    robot08,

    Welcome to the forum!

    What you propose should work. Just aim the camera at the bulb, and there should be enough difference in the response with it on vs. with it off to make an unequivocal determination. As far as interfacing with your AVR -- I take it you're not using the MoBo -- just follow the timing specs in the TSL1401R datasheet. For ease of interfacing, consider using Parallax's DB-Expander adapter board.

    -Phil
  • robot08 Posts: 4
    edited February 2012
    Phil,
    Thank you, I appreciate the response. The microcontroller I am using is an Orangutan SVP, which uses the ATmega1284P. I just ordered the DB-Expander. I am kind of new to all of this, so sorry for all the questions. Can you walk me through some of this just to get started? I am not familiar with exactly how it works. Do I only need SI, CLK, and AO? Can you give me a basic summary of how to use SI and CLK, and what exactly I have to do to interface with the sensor?
  • Phil Pilgrim (PhiPi) Posts: 21,129
    edited February 2012
    Please refer to the timing diagram in the TSL1401R datasheet, available at http://www.taosinc.com. The only signals required are SI, CLK, and AO. When SI is clocked in, it starts a new exposure and also begins the readout of the previous exposure. Thereafter, each clock presents a new voltage on AO, which represents the amount of light received by the associated pixel. After 128 clocks, AO is tristated, after which you can stop clocking, if you want, until the next SI. The exposure time is the time between SI pulses. That's really all there is to it.
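    The readout sequence can be modeled as a small state machine. The Python class below is a toy model for illustration only; it ignores analog levels, exposure timing, and the datasheet's setup/hold requirements.

```python
# Toy model of the TSL1401 readout: an SI pulse latches the exposure
# into a 128-stage shift register, each CLK then presents one pixel
# value on AO, and after 128 clocks AO is tristated.
class TSL1401Model:
    NPIX = 128

    def __init__(self, scene):
        self.scene = scene            # light level on each pixel
        self.shift_reg = [0] * self.NPIX
        self.ptr = self.NPIX          # past the end: AO tristated

    def pulse_si(self):
        """Latch the current exposure and begin its readout."""
        self.shift_reg = list(self.scene)
        self.ptr = 0

    def clock(self):
        """One CLK cycle: return the AO value, or None once tristated."""
        if self.ptr >= self.NPIX:
            return None
        value = self.shift_reg[self.ptr]
        self.ptr += 1
        return value

sensor = TSL1401Model(scene=list(range(128)))
sensor.pulse_si()
pixels = [sensor.clock() for _ in range(128)]
print(pixels[:3], sensor.clock())  # -> [0, 1, 2] None
```

    On real hardware the exposure time is set by the interval between SI pulses, which the model above does not capture.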

    -Phil
  • robot08 Posts: 4
    edited February 2012
    OK, thanks a lot, Phil. I appreciate the help. I am going to give it a try and will get back if I have any more questions.
  • robot08 Posts: 4
    edited March 2012
    Hey Phil, I understand you can change the lens on the camera, right? I will be checking whether the light bulb is on from 5 to 8 feet away. The camera will be mounted on a servo that rotates it to look for the bulb. The camera is mounted at the same height as the bulb, so I don't really need it to see 5 to 8 feet high; about a foot or so would work. I want to find out if the bulb is on as quickly as possible, so do you think the lens should be replaced with a different one? If so, which one would you recommend?
  • Phil Pilgrim (PhiPi) Posts: 21,129
    edited March 2012
    A longer focal-length lens would narrow the field of view. For example, a 16mm lens would give you a 4 ft. FOV at 8 feet distance. But, unless the bulb is really tiny, I don't think you will need a new lens. With the current lens and an 8-foot distance, each pixel will see 3/4 inch of the overall height. So a bulb whose radiant surface is 1.5 inches or more in diameter should be detectable. The thing to do is just to try it and see.
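    The arithmetic behind those numbers follows from similar triangles, given the TSL1401's 128-pixel, roughly 8 mm long array (128 x 63.5 µm). The ~8 mm stock focal length used below is inferred from the 3/4-inch-per-pixel figure, not stated in the thread.

```python
# Field-of-view check for a linescan sensor: FOV / distance = array / focal.
ARRAY_MM = 128 * 0.0635    # ~8.13 mm active length (128 pixels at 63.5 um)

def fov_ft(distance_ft, focal_mm):
    """Field of view at a given distance, by similar triangles."""
    return distance_ft * ARRAY_MM / focal_mm

def inches_per_pixel(distance_ft, focal_mm):
    return fov_ft(distance_ft, focal_mm) * 12 / 128

print(round(fov_ft(8, 16), 1))           # -> 4.1 (about 4 ft with a 16 mm lens)
print(round(inches_per_pixel(8, 8), 2))  # -> 0.76 (about 3/4 in/pixel, ~8 mm lens)
```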

    -Phil
  • kbellve Posts: 5
    edited February 2013
    I want to thank Parallax for offering the TSL1401-DB.


    It helped me build the first prototype of a device I call pgFocus.


    It is described in more detail at http://valelab.ucsf.edu/~MM/MMwiki/index.php/PgFocus (documentation is still a work in progress).


    In short, pgFocus is designed to stabilize focus on a microscope by using a reflected laser beam.


    The problem was focus drift over time. TIRF allows imaging of a narrow 200 nm region above the glass/water interface, so it doesn't take much drift to ruin TIRF imaging. Commercial focus-correction devices don't integrate well with custom-designed microscopes. They are also very expensive, with names like "Perfect Focus" and "Definite Focus". I decided to build my own and call it pgFocus; pg = "Pretty Good". I find it funny :D


    I am a software guy, but I taught myself basic circuit design and I had my first pgFocus prototype working and integrated into our microscope within a few months.


    I have just released the latest pgFocus design (see https://github.com/kbellve/pgFocus). The current design is the culmination of the circuit design I have learned in the last year. I might release a "shield" version of pgFocus eventually.

    Parallax, SparkFun, and Adafruit are on the bottom documentation layer of the PCB. Without you and the other companies, I couldn't have done what I did as quickly as I did it.


    TIA
  • macrobeak Posts: 193
    edited March 2013
    Phil, do you think two TSL1401s set on x-y axes could be used to find the position of an object at the bottom of a pool? The object could have a bright LED to make it contrast with its surroundings.
  • Phil Pilgrim (PhiPi) Posts: 21,129
    edited March 2013
    macrobeak,

    With the lenses provided, no, because the object would have to lie on one of the axes to be seen. However, if you replace the lenses with slit lenses (like pinhole lenses, but longer in one dimension) at right angles to each sensor, the pinpoint source of light would be focused as a stripe across the one-dimensional sensor array. The LED would have to be VERY bright for this to work, though.
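    For illustration, here is how two orthogonal one-dimensional scans would combine into an (x, y) position. The scan values are hypothetical; each array sees the LED collapsed onto its own axis by its slit lens.

```python
# Crossed linescan localization: each sensor's slit lens collapses the
# scene onto one axis, so each 1-D centroid gives one coordinate.
def centroid(pixels):
    """Intensity-weighted centroid along one axis."""
    total = sum(pixels)
    return sum(i * p for i, p in enumerate(pixels)) / total

x_scan = [0, 0, 10, 80, 10, 0]   # horizontal array: LED's x position
y_scan = [0, 5, 90, 5, 0, 0]     # vertical array: LED's y position
print((centroid(x_scan), centroid(y_scan)))  # -> (3.0, 2.0)
```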

    -Phil
  • HylianSavior Posts: 6
    edited March 2013
    Hi,

    I am trying to read output from this sensor, but I seem to be doing something wrong. I can read all 128 analog values, it seems, but the output shape is strange. I have three pins hooked up -- SI, CLK, and AO -- and I'm bit-banging all of them. Could anyone point me in the right direction as to what I could be doing wrong? Thanks.

    Here is my code and a sample output:
    #pragma autostart //Start the program automatically on startup of the SuperPro
    #define SI_PIN DIGI_PIN0
    #define CLK_PIN DIGI_PIN1
    #define FLAG_PIN DIGI_PIN7
    #define ADC ADChannel0
    
    task main()
    {
        DigitalControl = SI_PIN | CLK_PIN | FLAG_PIN; //Set digital pins 0, 1, and 7 to outputs
        while(true)
        {
        int test = 1;
        LEDControl =  LED_RED;
          DigitalOut = 0;
            //Wait until NXT sets digi out pin 7 to 1 before starting a scan
            while((DigitalOut & FLAG_PIN) != FLAG_PIN) {}
            
            Timer1 = 1;
    
    
            //Toggle the SI pin to begin integration
            DigitalOut |= SI_PIN;
            DigitalOut |= CLK_PIN;
            DigitalOut ^= SI_PIN;
            DigitalOut ^= CLK_PIN;
            
            //Cycle junk data out of the shift register
            for(int i = 0; i < 128; i++)
            {
                DigitalOut |= CLK_PIN;
                DigitalOut ^= CLK_PIN;
            }
            
            while(Timer1 != 0) {}
            
            //Toggle SI pin to stop integration
            DigitalOut |= SI_PIN;
            DigitalOut ^= SI_PIN;
            
            //Read data out of the shift register and put them into the I2C shared memory addresses
            for(int *i = 0x20; i <= 0x3F; i++)
            {
                *i = 0;
                for(int j = 0; j < 4; j++)
                {
                     DigitalOut |= CLK_PIN;
                     Wait(3);
                    *i <<= 8;
                *i |= (ADC /4); //Throw out the 2 LSBs of the 10-bit ADC reading
                    DigitalOut ^= CLK_PIN;
                 }
            }
            
            //129th clock edge sends analog output of sensor to tri-state
            DigitalOut = CLK_PIN;
        }
    }
    

    Example output with graph:


    221 228 237 247 193 201 208 215 170 175
    180 186 148 152 157 163 128 133 138 143
    113 117 121 124 99 102 104 108 85 88
    91 96 74 78 81 84 66 69 70 72
    58 59 60 63 49 51 54 56 43 46
    47 48 39 40 40 41 33 33 35 37
    28 30 32 33 26 27 27 27 22 22
    23 24 18 19 21 22 16 18 19 18
    15 15 15 15 12 12 13 15 10 11
    12 13 10 11 11 10 09 08 08 08
    
  • Phil Pilgrim (PhiPi) Posts: 21,129
    edited March 2013
    I believe your problem is here:
            //Toggle SI pin to stop integration
            DigitalOut |= SI_PIN;
            DigitalOut ^= SI_PIN;
    
    The SI high needs to be clocked in, the same as you did before clocking out the junk data.

    -Phil
  • Dougworld Posts: 24
    edited March 2013
    Is there sample code for the SL1401 that runs on the Propeller Backpack board? Thanks.
  • Phil Pilgrim (PhiPi) Posts: 21,129
    edited March 2013
    Yes, attached.

    -Phil
  • Dougworld Posts: 24
    edited March 2013
    Thanks Phil. That code does not work on the Backpack. How do the pins on the chip map? For example, the demo source that you sent calls sio.start with pins 31 = RX and 30 = TX. But those pins are not correct for the Q44 chip on the Backpack carrier -- they're for the D40 chip instead. I tried changing the call to sio.start(29,28.... which should be the equivalent pins on the Q44, but no joy. I looked at the CON statements in tsl1401-db_demo.spin, and those values don't track the Q44 chip at all. I was able to plug the TSL1401 into the DB-Expander board, and I tried that setup on a BASIC Stamp HomeWork Board USB, which gave me some output with the appropriate demo code; but using the TSL1401 plugged directly into the Backpack board is what I'm after. Any ideas? I appreciate your help. 73, Doug, WD0UG
  • Phil Pilgrim (PhiPi) Posts: 21,129
    edited March 2013
    31 and 30 are port numbers, i.e. P31 and P30. They have nothing to do with which carrier is being used and are correct for the Backpack and any other Prop module that communicates with a PC. 'Same goes for the other pins listed in the CON section: it has nothing to do with which carrier is being used. P0-P31 are logical designators (i.e. port numbers), not physical pin numbers.

    -Phil
  • Dougworld Posts: 24
    edited March 2013
    Okay, but here's why I'm confused: according to the Propeller manual, on the D40 chip, pin 30 = XI and pin 31 = XO, while on the Q44 chip, pin 28 = XI and pin 29 = XO. Similarly P0 = pin 1 on the D40 and P0 = pin 41 on the Q44. So how does the code know one chip's pin map from another? Sending a signal to "port" P0 from software doesn't map it to the correct "pin," does it? The software for sio.start says "pin" not "port" so doesn't that mean that the software has to change when the chip/carrier arrangement changes? It seems that a different set of pin assignments should be included in the source code for Q44 vs. D40 target chips, right? Anyhow I can't get the Linescanner to work with the Backpack board. I appreciate your help.
  • Phil Pilgrim (PhiPi) Posts: 21,129
    edited March 2013
    Dougworld wrote:
    Sending a signal to "port" P0 from software doesn't map it to the correct "pin," does it?
    Yes, it does, regardless of which package is being used. I think you might be confused by the loose usage of the word "pin" in the program when "port" is what's meant.

    Anyway, what sort of problem are you having getting the TSL1401-DB to work with the Backpack module?

    -Phil
  • Dougworld Posts: 24
    edited March 2013
    The TSL does not work at all with the Backpack. As I said earlier, the same module connected to a DB-Expander and wired into a BASIC Stamp HomeWork Board USB does work, so I know that the TSL is functional. When I connect the TSL to the Backpack via the standard extension cable and download the program you sent (via a known-good Prop Plug), I can see the download work OK (LEDs blink on the Backpack), but then there is no output to the screen. I am running Propeller Tool v1.3.2 for the compile and download, and the Parallax Serial Terminal for the debug display. Nothing shows up. I've used the Serial Terminal successfully with a program running on a D40 chip, so that's why I don't think the pins are mapped "equivalently" from that setup to the Backpack. I've read what you said about ports and pins, and you're the guru, but I don't see how pin x on chip 1 can somehow map automatically to pin y on chip 2. Thanks for your time, Phil.
  • Dougworld Posts: 24
    edited March 2013
    Okay, I found MY PROBLEM!! Plus I gained understanding of how it's possible for the different chips to map logical ports. Thanks for your patience!
    P.S. Note to myself: set the baud rate the same for the Backpack and the Serial Terminal, moron!
  • Dougworld Posts: 24
    edited March 2013
    There seems to be a bug in tsl1401-DB_driver, in the subroutine do_getstats. The Serial Terminal displays 128 hex bytes of data output from the TSL, followed by 4 hex bytes: maxloc, minloc, maxpix, minpix. But none of the 4 values seems to be correct. Here is one scan of the TSL1401, followed by the 4 stats bytes:
    04 03 03 03 02 02 02 01 01 01 01 01 02 01 01 01
    01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01
    01 01 01 01 01 01 01 01 01 01 02 01 01 01 01 01
    01 01 01 01 01 01 01 01 01 02 01 01 02 01 02 02
    01 01 02 02 01 02 01 01 02 02 04 04 04 03 04 04
    03 04 04 04 03 04 04 04 04 04 03 04 04 04 04 04
    03 02 01 01 02 02 01 01 01 01 01 01 02 01 01 02
    01 01 01 02 02 01 01 02 01 01 02 01 02 01 02 02
    01 05 7D 00
    Based on that buffer, I don't see any logic behind any of those last 4 status values, do you?
    Thanks again.
  • Phil Pilgrim (PhiPi) Posts: 21,129
    edited March 2013
    The order of displayed status bytes is minpix, maxpix, minloc, maxloc. This is because the Prop is a little-endian machine.

    That said, I'm not yet sure why maxpix is 05 while the pixel at maxloc is 04, unless they're from different exposures.
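    For illustration, the byte ordering can be reproduced with Python's struct module. The packed layout below (maxloc in the most significant byte, minpix in the least) is an assumption about how the driver stores the stats, chosen to match the bytes shown above.

```python
# Little-endian byte order: a 32-bit word packed as maxloc:minloc:maxpix:minpix
# (MSB to LSB) is stored lowest byte first, so a byte-wise dump shows
# minpix, maxpix, minloc, maxloc. Values are the ones from the post above.
import struct

minpix, maxpix, minloc, maxloc = 0x01, 0x05, 0x7D, 0x00
word = (maxloc << 24) | (minloc << 16) | (maxpix << 8) | minpix
in_memory = struct.pack("<I", word)   # little-endian, as on the Propeller
print(in_memory.hex(" "))             # -> 01 05 7d 00
```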

    -Phil
  • Dougworld Posts: 24
    edited March 2013
    I agree that they're from different exposures. I've done a lot of testing now and I don't think that there is always a correlation between the status bytes and the data from the TSL. By observation, it seems like the status changes asynchronously with the data. So how does one sync those 2 different value sets in the TSL? Thanks Phil.
  • Phil Pilgrim (PhiPi) Posts: 21,129
    edited March 2013
    The status bytes do, in fact, accurately portray the scan from which they came. In the demo program, however, the screen is constantly being updated. And when you freeze it, such as to copy and paste, it might have been half-way through displaying a scan, with the status bytes still showing from the previous scan. To see what I mean, replace the line,
        sio.tx(1)
    

    with this:
        sio.tx(13)
        sio.tx(13)
    

    -Phil
  • Dougworld Posts: 24
    edited March 2013
    You are right again! Thanks Phil.