Tracking WhatsApp without coding

You can hide "Last Seen" in WhatsApp, but your Online status is still shown to strangers, and this can be used to monitor your sleeping pattern and can be correlated with another person's online activity. A lot of data can be mined from this simple one-bit signal: online or offline.






If you want to track WhatsApp with coding you may try WhatsSpy Public; the code repo is available on GitHub, published by the developer as a proof of concept. The repo admin now says it no longer works, but you can use it with some modifications to adapt to the changes in the communication protocol. Anyway, it is time-consuming and frustrating to update code I don't understand, so I tried my own method with minimal or no coding.

A screenshot from WhatsSpy Public, showing the online times of two different people, which may indicate an online meeting between both users.




The easy way is the WhatsApp web version: the UI shows whether a user is online or not (even for non-friends/strangers). So here you just need a way to log the text in a specific node of the HTML rendered on the screen. That is very easy with Chrome DevTools. Open DevTools (Ctrl+Shift+I) and find the class name of the element showing the online status with the selector tool. Then try the element in the console to check whether you got the right name.



So this is the right element. Now write a script to repeat this with logging. I have added a try/catch to suppress the error lines:


setInterval(function() {
    var dt = new Date();
    var time = dt.getHours() + ":" + dt.getMinutes() + ":" + dt.getSeconds();
    try {
        var lastSeen = document.getElementsByClassName("emojitext O90ur")[0].innerText;
        console.log(time + ' ' + lastSeen);
    } catch (err) {
        /* element not on screen this tick - ignore */
    }
}, 5000);



For WhatsApp Web it is necessary that the phone client is online too, and keeping a phone online all the time is not easy. Run WhatsApp in the Nox player (BlueStacks will not work). To connect the web client you need a camera in Nox; you can use a webcam or an Android remote camera app (DroidCam) for this. Once both clients are logged in, run the script there. When you want to store the collected data, right-click on the console and save it.

But if you still have time, try WhatsAPI or ChatAPI; you could extend that into a good revenue-making Android/iPhone app. All the best.


Imaging for research

We have been working on plant phenomics since 2014. Although we are not directly involved in the design of the systems, we have still learnt the things that matter in this type of imaging. Here I am going to list out a few terms used in imaging:


Lens characteristics:

Chromatic aberration (abbreviated CA; also called chromatic distortion and spherochromatism) is an effect resulting from dispersion, in which a lens fails to focus all colors to the same convergence point. It occurs because lenses have different refractive indices for different wavelengths of light; the refractive index of transparent materials decreases with increasing wavelength, to a degree unique to each material.

Spherical aberration is an optical effect observed in an optical device (lens, mirror, etc.) that occurs due to the increased refraction of light rays when they strike a lens or a reflection of light rays when they strike a mirror near its edge, in comparison with those that strike nearer the centre. It signifies a deviation of the device from the norm, i.e., it results in an imperfection of the produced image.

Defocus is the aberration in which an image is simply out of focus. This aberration is familiar to anyone who has used a camera, videocamera, microscope, telescope, or binoculars. Optically, defocus refers to a translation along the optical axis away from the plane or surface of best focus. In general, defocus reduces the sharpness and contrast of the image. What should be sharp, high-contrast edges in a scene become gradual transitions. Fine detail in the scene is blurred or even becomes invisible. Nearly all image-forming optical devices incorporate some form of focus adjustment to minimize defocus and maximize image quality.

The degree of image blurring for a given amount of focus shift depends inversely on the lens f-number. Low f-numbers, such as f/1.4 to f/2.8, are very sensitive to defocus and have very shallow depths of focus. High f-numbers, in the f/16 to f/32 range, are highly tolerant of defocus, and consequently have large depths of focus. The limiting case in f-number is the pinhole camera, operating at perhaps f/100 to f/1000, in which case all objects are in focus almost regardless of their distance from the pinhole aperture. The penalty for achieving this extreme depth of focus is very dim illumination at the imaging film or sensor, limited resolution due to diffraction, and very long exposure time, which introduces the potential for image degradation due to motion blur.

The amount of allowable defocus is related to the resolution of the imaging medium. A lower-resolution imaging chip or film is more tolerant of defocus and other aberrations. To take full advantage of a higher resolution medium, defocus and other aberrations must be minimized.

f-number (also focal ratio, f-ratio or f-stop) of an optical system such as a camera lens is the ratio of the system's focal length to the diameter of the entrance pupil. It is a dimensionless number that is a quantitative measure of lens speed, and an important concept in photography. It is the reciprocal of the relative aperture. The f-number is commonly indicated using a hooked f with the format f/N, where N is the f-number.

Diaphragm (an iris is a type of diaphragm) is a thin opaque structure with an opening (aperture) at its center. The role of the diaphragm is to stop the passage of light, except for the light passing through the aperture. Thus it is also called a stop (an aperture stop, if it limits the brightness of light reaching the focal plane, or a field stop or flare stop for other uses of diaphragms in lenses). The diaphragm is placed in the light path of a lens or objective, and the size of the aperture regulates the amount of light that passes through the lens. The centre of the diaphragm's aperture coincides with the optical axis of the lens system.

Lens speed refers to the maximum aperture diameter, or minimum f-number, of a photographic lens. A lens with a larger maximum aperture (that is, a smaller minimum f-number) is called a “fast lens” because it can achieve the same exposure with a faster shutter speed. Conversely, a smaller maximum aperture (larger minimum f-number) is “slow” because it delivers less light intensity and requires a slower (longer) shutter speed.

A fast lens speed is desirable in taking pictures in dim light, or with long telephoto lenses and for controlling depth of field and bokeh, especially in portrait photography,[1] and for sports photography and photojournalism.

Lenses may also be referred to as being “faster” or “slower” than one another; so an f/3.5 lens can be described as faster than an f/5.6.

Prime lens is either a photographic lens whose focal length is fixed, as opposed to a zoom lens, or it is the primary lens in a combination lens system.

Confusion can sometimes result due to the two meanings of the term if the context does not make the interpretation clear. Alternative terms primary focal length, fixed focal length, and FFL are sometimes used to avoid ambiguity.

For 35mm film and full frame digital cameras (in which the image area is 36 by 24 millimeters) prime lenses can be categorized by focal length as follows:

  • 14 to 21mm: Ultra-Wide — Because these lenses are usually used at very close subject distances the resulting perspective can provide a dramatic, often extreme image that can be used to selectively distort a scene’s natural proportions.
  • 24 to 35mm: Wide — these lenses capture a wider field of view than a standard lens. Because they tend to be used at shorter distances the resulting perspective can show some distortion.
  • 50 mm: Standard — with a focal length near the 43mm image diagonal.
  • 85 mm: Portrait — A short telephoto lens that allows a longer subject to camera distance, to produce pleasing perspective effects, while maintaining useful image framing.
  • 135 mm: Telephoto — these lenses are used by action and sports photographers to capture faraway objects.
  • 200 to 500 mm: Super Telephoto — these are specialized, bulky lenses for sports, action, and wildlife photography.

A zoom lens is a mechanical assembly of lens elements for which the focal length (and thus angle of view) can be varied, as opposed to a fixed focal length (FFL) lens (see prime lens).

A true zoom lens, also called a parfocal lens, is one that maintains focus when its focal length changes.[1] A lens that loses focus during zooming is more properly called a varifocal lens. Despite being marketed as zoom lenses, virtually all consumer lenses with variable focal lengths use varifocal design.

The convenience of variable focal length comes at the cost of complexity – and some compromises on image quality, weight, dimensions, aperture, autofocus performance, and cost. For example, all zoom lenses suffer from at least slight, if not considerable, loss of image resolution at their maximum aperture, especially at the extremes of their focal length range. This effect is evident in the corners of the image, when displayed in a large format or high resolution. The greater the range of focal length a zoom lens offers, the more exaggerated these compromises must become.

A varifocal lens is a camera lens with variable focal length in which focus changes as focal length (and magnification) changes, as compared to parfocal (“true”) zoom lens, which remains in focus as the lens zooms (focal length and magnification change). Many so-called “zoom” lenses, particularly in the case of fixed lens cameras, are actually varifocal lenses,[1] which give lens designers more flexibility in optical design trade-offs (focal length range, maximum aperture, size, weight, cost) than parfocal zoom. These are practical because of auto-focus, and because the camera processor can automatically adjust the lens to keep it in focus while changing focal length (“zooming”) making operation practically indistinguishable from a parfocal zoom.

A parfocal lens is a lens that stays in focus when magnification/focal length is changed. There is inevitably some amount of focus error, but small enough to be considered insignificant.

Zoom lenses used for moviemaking applications must have the parfocal ability in order to be of practical use. It is almost impossible to stay in correct focus (as done manually by the focus puller) while zooming.

A lens mount is an interface – mechanical and often also electrical – between a photographic camera body and a lens. It is confined to cameras where the body allows interchangeable lenses, most usually the rangefinder camera, single lens reflex type or any movie camera of 16 mm or higher gauge. Lens mounts are also used to connect optical components in instrumentation that may not involve a camera, such as the modular components used in optical laboratory prototyping which join via C-mount or T-mount elements. Read more at wikipedia

Bokeh  is the aesthetic quality of the blur produced in the out-of-focus parts of an image produced by a lens. Bokeh has been defined as “the way the lens renders out-of-focus points of light”. Differences in lens aberrations and aperture shape cause some lens designs to blur the image in a way that is pleasing to the eye, while others produce blurring that is unpleasant or distracting—”good” and “bad” bokeh, respectively. Bokeh occurs for parts of the scene that lie outside the depth of field. Photographers sometimes deliberately use a shallow focus technique to create images with prominent out-of-focus regions.

An aspheric lens or asphere is a lens whose surface profiles are not portions of a sphere or cylinder. In photography, a lens assembly that includes an aspheric element is often called an aspherical lens.

The asphere’s more complex surface profile can reduce or eliminate spherical aberration and also reduce other optical aberrations such as astigmatism, compared to a simple lens. A single aspheric lens can often replace a much more complex multi-lens system. The resulting device is smaller and lighter, and sometimes cheaper than the multi-lens design.[1] Aspheric elements are used in the design of multi-element wide-angle and fast normal lenses to reduce aberrations. They are also used in combination with reflective elements (catadioptric systems) such as the aspherical Schmidt corrector plate used in the Schmidt cameras and the Schmidt-Cassegrain telescopes. Small molded aspheres are often used for collimating diode lasers.

Aspheric lenses are also sometimes used for eyeglasses. Aspheric eyeglass lenses allow for crisper vision than standard “best form” lenses, mostly when looking in other directions than the lens optical center.

Metering mode refers to the way in which a camera determines the exposure. Cameras generally allow the user to select between spot, center-weighted average, or multi-zone metering modes. Various metering modes are provided to allow the user to select the most appropriate one for use in a variety of lighting conditions.

With spot metering, the camera will only measure a very small area of the scene (between 1% and 5% of the viewfinder area). By default this is the very centre of the scene. The user can select a different off-centre spot, or recompose by moving the camera after metering.

Depth of field (DOF), also called focus range or effective focus range, is the distance between the nearest and farthest objects in a scene that appear acceptably sharp in an image. Although a lens can precisely focus at only one distance at a time, the decrease in sharpness is gradual on each side of the focused distance, so that within the DOF, the unsharpness is imperceptible under normal viewing conditions.

In some cases, it may be desirable to have the entire image sharp, and a large DOF is appropriate. In other cases, a small DOF may be more effective, emphasizing the subject while de-emphasizing the foreground and background. In cinematography, a large DOF is often called deep focus, and a small DOF is often called shallow focus.

Several other factors, such as subject matter, movement, camera-to-subject distance, lens focal length, selected lens f-number, format size, and circle of confusion criteria also influence when a given defocus becomes noticeable. The combination of focal length, subject distance, and format size defines magnification at the film / sensor plane.

DOF is determined by subject magnification at the film / sensor plane and the selected lens aperture or f-number. For a given f-number, increasing the magnification, either by moving closer to the subject or using a lens of greater focal length, decreases the DOF; decreasing magnification increases DOF. For a given subject magnification, increasing the f-number (decreasing the aperture diameter) increases the DOF; decreasing f-number decreases DOF.

Hyperfocal distance is a distance beyond which all objects can be brought into an “acceptable” focus. As the hyperfocal distance is the focus distance giving the maximum depth of field, it is the most desirable distance to set the focus of a fixed-focus camera. The hyperfocal distance is entirely dependent upon what level of sharpness is considered to be acceptable.

Focus stacking (also known as focal plane merging and z-stacking[1] or focus blending) is a digital image processing technique which combines multiple images taken at different focus distances to give a resulting image with a greater depth of field (DOF) than any of the individual source images.[2][3] Focus stacking can be used in any situation where individual images have a very shallow depth of field; macro photography and optical microscopy are two typical examples. Focus stacking can also be useful in landscape photography.


U-blox GNSS Modules and RTK

In the last few years, the use of high-precision positioning systems has gained popularity in a variety of applications, e.g. construction, agriculture and GIS. High-precision GNSS systems with centimetre-level horizontal accuracy have helped precision agriculture by planting in accurate positions as per plan and by auto-guidance for tractors. Auto-guidance on big farms helps farmers save fertilisers and herbicides by preventing unnecessary reapplication over the same patch of field. The vertical accuracy of a GNSS service helps in land-levelling applications.

But how do they achieve centimetre-level accuracy? Do they use the same satellites? Do they use the same hardware architecture?

Yes, they use the same satellites and the same hardware, but the method of obtaining the coordinates is different. These are generally called correction methods. A normal GNSS receiver suffers from various errors:

  1. Satellite clock error
  2. Orbit shift / inaccuracy in satellite laser ranging
  3. Variable propagation delay due to ionospheric variations
  4. Noise from other sources
  5. Multipath error

A few of these errors are static over time and can easily be corrected with reference data of the final coordinates or of the PRN and azimuth angles. The oldest method uses this technique; it is called Differential GPS or DGPS (DGNSS in generic terms). Let's list out the correction techniques:

  1. DGPS
  2. PPP
  3. RTK

The most accurate correction method is RTK, which needs raw data from the GNSS module. Most GNSS/GPS modules with raw-data output are very costly and most of them are not available to individual buyers. But these u-blox NEO-series GPS chips are abundantly available on online shopping sites. The modules cost approx $15 or INR 1000/-, although none of them seems to be a genuine product. The first one I ordered online was a u-blox NEO-M8N which turned out to be a NEO-6M when received, so I returned the item. Next I purchased a NEO-6M from the local market (Chandni Chowk) which had a u-blox 7 version chip inside!! Again I ordered another one from Amazon (REES52 NEO-7); this was an actual NEO-6 (but I was expecting a NEO-7 inside).

Now, how did I enable raw data from both chips? The chips are not supposed to output raw data as per the manufacturer, and they even declare discussion of related things on their official forum to be off-limits. Fetching raw data from these chips is actually a user-level hack: they have some debug messages for checking the working of the GPS engine which spit out the raw data, and experimenters decoded these into the data they actually need.

As the names are very confusing, I first tried the wrong commands from the ubloxraw page; at last I got the correct ones, plus some commands sniffed from u-center with a serial sniffer. The u-blox 7 series chip uses TRK-TRKD5 + NAV-CLOCK + NAV-TIMEGPS to reconstruct the raw data, whereas the u-blox 6 version chip outputs the raw data directly as RXM-RAW.

For ublox 7 Proto 14.3 (TRK-TRKD5 + TRK-SFRBX + NAV-CLOCK + NAV-TIMEGPS parser)
Change baudrate
B5 62 06 00 14 00 01 00 00 00 D0 08 00 00 00 C2 01 00 01 00 01 00 00 00 00 00 B8 42
B5 62 06 01 03 00 03 0A 01 18 5D
B5 62 06 01 03 00 03 0F 01 1D 67
B5 62 06 01 03 00 01 22 01 2E 87
B5 62 06 01 03 00 01 30 01 3C A3
For ublox 6 7.03 (RXM-RAW+RXM-SFRB)
ubx only
b5 62 06 00 14 00 01 00 00 00 d0 08 00 00 80 25 00 00 07 00 01 00 00 00 00 00 a0 a9 b5 62 06 00 01 00 01 08 22
////////////////////////////////////Key Code//////////////////////////////////////////
b5 62 09 01 10 00 c8 16 00 00 00 00 00 00 97 69 21 00 00 00 02 10 2b 22
b5 62 09 01 10 00 0c 19 00 00 00 00 00 00 83 69 21 00 00 00 02 11 5f f0
!HEX b5 62 09 01 10 00 c8 16 00 00 00 00 00 00 97 69 21 00 00 00 02 10 2b 22
!HEX b5 62 09 01 10 00 0c 19 00 00 00 00 00 00 83 69 21 00 00 00 02 11 5f f0
////////////////////////////////////Key Code//////////////////////////////////////////
Enable ubx raw
    b5 62 06 01 03 00 02 10 01 1d 66
Enable ubx sfrbx
   XXXXX b5 62 06 01 03 00 02 13 01 20 6c
    b5 62 06 01 03 00 02 11 01 1e 68
For the RTKLIB startup command (.cmd) file
!HEX b5 62 06 00 14 00 01 00 00 00 d0 08 00 00 80 25 00 00 07 00 01 00 00 00 00 00 a0 a9 b5 62 06 00 01 00 01 08 22
!HEX b5 62 09 01 10 00 c8 16 00 00 00 00 00 00 97 69 21 00 00 00 02 10 2b 22
!HEX b5 62 09 01 10 00 0c 19 00 00 00 00 00 00 83 69 21 00 00 00 02 11 5f f0
!HEX b5 62 06 01 03 00 02 10 01 1d 66
!HEX b5 62 06 01 03 00 02 11 01 1e 68
!HEX b5 62 09 01 10 00 c8 16 00 00 00 00 00 00 97 69 21 00 00 00 02 10 2b 22
!HEX b5 62 09 01 10 00 0c 19 00 00 00 00 00 00 83 69 21 00 00 00 02 11 5f f0
!HEX b5 62 06 01 03 00 02 10 01 1d 66
!HEX b5 62 06 01 03 00 02 11 01 1e 68
Some screenshots from RTKNAVI with the NEO-7 and RTCM correction data from Colombo, Sri Lanka. I have also tried RTK with a NEO-6 as base directly connected to the laptop and a NEO-7 connected over Bluetooth, but the accuracy was not good enough.
Now I want to test with genuine NEO-M8P chips, but u-blox doesn't want to sell modules to individuals; they only want bulk buyers.

Electronic Design Abbreviations

ZVS: In a quasi-resonant zero-current/zero-voltage switch (ZCS/ZVS) “each switch cycle delivers a quantized ‘packet’ of energy to the converter output, and switch turn-on and turn-off occurs at zero current and voltage, resulting in an essentially lossless switch.”[32] Quasi-resonant switching, also known as valley switching, reduces EMI in the power supply.

TBU: A trade name of the Transient Blocking Unit (TBU™) electronic current limiter designed by Bourns. TBU technology is designed to block a transient through a current-disconnecting mechanism rather than diverting or shunting the surge to ground. This blocking technology virtually eliminates latency in the circuit protection design, which results in surge protection for sensitive electronic equipment within nanoseconds.

AFE: Analog Front End

AMR: Automatic meter reading

APD: Avalanche Photo Diodes

BIST: Built-in self-test.

Baseline: The electrical signal from a sensor when no measured variable is present. Often referred to as the output at the no-load condition.

Beyond-the-Rails: Beyond-the-Rails™ is Maxim's name for a feature of an IC that can process inputs and provide output voltages that exceed the supply rails. The feature is achieved through on-chip integration of the necessary supply rails.

Microsoft Serial BallPoint auto-install

I have been using AVR microcontrollers for the last year but never faced this problem. Yesterday I was trying to connect an Arduino Pro Mini with a USB-serial interface (supplied by Decagon, device name DecagonUCA, chip: Silabs CP2024). I have used this same device in the same configuration several times in the last 3-4 months.

The only change in the code yesterday was that I was sending some data continuously as soon as the chip powers up. The operating system (Win7) detected a new device: Microsoft Serial BallPoint. This creates two problems.

  1. The new device occupies the serial port you are using and keeps it busy, so you can't use it.
  2. The device interprets the sensor readings you are sending as mouse pointer positions, so it moves the pointer randomly (assuming your data looks random) and generates mouse clicks. Now you can't uninstall the device because your mouse pointer is no longer under your control.

I removed the USB-serial cable and searched for the name of the installed driver; Google showed a number of forum users complaining about this problem. Most of them were using USB GPS devices. I guess any device that continuously sends alphanumeric data gets mistaken for a serial ballpoint. Anyway, I wanted to know what the actual serial ballpoint protocol is.

———————————–Serial BallPoint Protocol Start——————————————————–

The old Microsoft serial mouse, while no longer in general use, can be employed to provide a low-cost input device, for example by coupling the internal mechanism to other moving objects. The serial protocol for the mouse is 1200 baud, 7 data bits, 1 stop bit, no parity. Every time the mouse changes state (moved or button pressed) a three-byte "packet" is sent to the serial interface. For reasons known only to the engineers, the data is arranged as follows; most notably, the two high-order bits of the x and y coordinates share the first byte with the button status.
              D6  D5  D4  D3  D2  D1  D0
1st byte       1  LB  RB  Y7  Y6  X7  X6
2nd byte       0  X5  X4  X3  X2  X1  X0
3rd byte       0  Y5  Y4  Y3  Y2  Y1  Y0
LB is the state of the left button, 1 = pressed, 0 = released.
RB is the state of the right button, 1 = pressed, 0 = released
X0-7 is movement of the mouse in the X direction since the last packet. Positive movement is toward the right.
Y0-7 is movement of the mouse in the Y direction since the last packet. Positive movement is back, toward the user.
Sample C code to decode the three bytes from the mouse passed in "s"; the button and position (x,y) are returned.

/* s should consist of 3 bytes from the mouse */
void DecodeMouse(unsigned char *s, int *button, int *x, int *y)
{
    *button = 'n'; /* No button - should only happen on an error */
    if ((s[0] & 0x20) != 0)
        *button = 'l';
    else if ((s[0] & 0x10) != 0)
        *button = 'r';
    *x = (s[0] & 0x03) * 64 + (s[1] & 0x3F);
    if (*x > 127)
        *x = *x - 256;
    *y = (s[0] & 0x0C) * 16 + (s[2] & 0x3F);
    if (*y > 127)
        *y = *y - 256;
}



———————————–Serial BallPoint Protocol End——————————————–

Solution 1 (NoSerialMice):

  1. To disable the detection of devices on COM ports in Windows NT/2000/XP: Go to: My Computer – Right Click – Properties | Advanced tab | Startup and Recovery – Settings | Edit. Make a backup copy of the Boot.ini file, (copy and paste in another location)
  2. Remove the hidden, system, and read-only attributes from the Boot.ini file. (Use Windows Explorer and right click on file, then properties.)
  3. Using a text editor (such as Notepad) open the Boot.ini file.(Double click on file, or right click and choose Notepad from the “Open With” option.)
  4. Add "/NoSerialMice" to the end of each entry in the [operating systems] section of Boot.ini
  5. Save Boot.ini and quit Notepad
  6. Shutdown and restart Windows

NOTE: The /NoSerialMice option is not case sensitive.

Solution 2: Disable the serial mouse

You can disable the serial mouse while it is connected, but in this type of problem the mouse pointer usually keeps jumping around randomly, so you can't reach Device Manager or disable the device.

AVR MkII for Arduino and back to AVR Studio

Drivers in desktop operating systems are not as simple as embedded drivers; sometimes it takes hours to solve driver issues, especially with USB debuggers.

A few days back I was using an Arduino for a low-power application, so I excluded all the supporting components like the programmer and the UART link and used an AVR ISP mkII for programming. But the Arduino software (open source) uses libusb (open source), whereas AVR Studio (proprietary) uses Jungo (proprietary) for the USB interface. This needs the mkII driver to be changed to libusb: the mkII becomes a libusb device, and in Device Manager it shows up under the libusb class.


When I wanted to revert to the old driver, I tried uninstalling and deleting the mkII (libusb) driver and then reinstalled AVR Studio. But nothing happened: no driver was found for the mkII.

Then I found a blog with a similar issue: they were using a Freescale debugger whose Jungo-based USB driver had been changed to libusb. The part I was missing was the Jungo driver install. In Device Manager there should be a "Jungo driver" component permanently in the list, whether or not any debugger is attached to the port. I found the location of the driver at "C:\Program Files (x86)\Atmel\AVR Tools\usb64" (mine is a 64-bit OS; it may be "usb" for x86).

Another trick for installing the driver: right-click the computer name in Device Manager and choose "Add legacy hardware". Select the option to manually find the driver, select all devices (don't select any specific class), then choose "Have Disk" and point to the folder containing windrvr6.sys; it will install the Jungo WinDriver. Now, if you have AVR Studio installed, plug in the mkII programmer and it will automatically install the mkII driver.


Driving a laser diode

These diodes are not like normal light-emitting diodes (LEDs); a laser diode will draw all the available power if not controlled, so it needs to be driven from a current-controlled DC source.

The laser diode (LD) module contains a built-in photodiode, which monitors the output light intensity and is used as feedback for the control circuit. The terminals from the laser diode head are:

  1. Laser Diode Cathode (LDC)
  2. Photo Diode Anode (PDA)
  3. Common positive (COM+)


This is a simple description of the cheap keychain lasers that are sold for 5 bucks or so. The laser diode head has three pins labelled LDC (Laser Diode Cathode), PDA (Photo Diode Anode) and COM+ (common positive terminal). Inside the laser diode head we find the laser diode itself and a photodiode, used to regulate the laser diode current with an external feedback loop.


A schematic from Maxim





Power Saving in 8 bit AVR

With the increasing popularity of battery-powered portable devices, microcontroller manufacturers started making their controllers more efficient for this specific purpose. For battery-powered devices the most important and common parameter is power consumption. There are applications like remote weather loggers which run for years on a set of AA batteries, which needs the average current consumption to be in nanoamps.

To accomplish these power consumption levels, manufacturers introduced a new feature under different names: Microchip's nanoWatt, Texas Instruments' MSP430 series, and from my favourite, Atmel, it's picoPower. These techniques may work differently but the objective is the same.

Atmel's 8-bit AVRs have the following six sleep modes:

  • Idle
  • ADC Noise Reduction
  • Power save
  • Power Down
  • StandBy
  • Extended Standby


sleep modes in 8 bit AVR


For the newer controllers with pin-change interrupts, the following table applies (pin change added as a wake-up source):

for the controllers with pin-change interrupt

Now, what about the coding?

There are built-in functions for all the related jobs in WinAVR, so you can use the same in AVR Studio or Arduino. There are a few Arduino libraries, e.g. JeeLib, built on the WinAVR functions, but I think the base WinAVR functions are best.

Just include <avr/power.h> and <avr/sleep.h>.

avr/sleep.h provides the macros set_sleep_mode(MODE), sleep_enable(), sleep_disable() and sleep_cpu():

#define sleep_enable() \
do { \
    _SLEEP_CONTROL_REG |= (uint8_t)_SLEEP_ENABLE_MASK; \
} while(0)
#define sleep_disable() \
do { \
    _SLEEP_CONTROL_REG &= (uint8_t)(~_SLEEP_ENABLE_MASK); \
} while(0)
#define sleep_cpu() \
do { \
    __asm__ __volatile__ ( "sleep" "\n\t" :: ); \
} while(0)


#define sleep_mode() \
do { \
sleep_enable(); \
sleep_cpu(); \
sleep_disable(); \
} while (0)


And avr/power.h provides functions for managing the peripherals: a function to enable and a function to disable each peripheral, plus power_all_disable() and power_all_enable() to do all of them at once.

In simple code:


#include <avr/sleep.h>

….code lines…

set_sleep_mode(SLEEP_MODE_PWR_DOWN);
sleep_mode();

These two lines above are enough to make a device sleep.

But before going to sleep you must enable an interrupt or some other method of waking up from sleep mode. Additionally, you can disable the brown-out detector at run time from the code (for the lowest power consumption), but this functionality is available only on some specific picoPower chips, although on any chip you can disable it outside the code by setting the fuse bits.

Example code

#include <avr/interrupt.h>
#include <avr/sleep.h>

set_sleep_mode(SLEEP_MODE_PWR_DOWN);
if (some_condition)
{
    attachInterrupt(0, pin2_isr, LOW);
    sleep_mode();      /* enable + sleep + disable */
    detachInterrupt(0);
}

However, the Arduino people found a bug in the above code. If you are planning to wake up through an interrupt, and in the normal case we detach the interrupt in the ISR, then if the interrupt is triggered before the system goes to sleep, the system will sleep without a wake-up hook. So they modified the sequence and added sleep_disable() in the ISR, which prevents the previous case (with sleep disabled, the SLEEP instruction acts as a NOP):

set_sleep_mode(SLEEP_MODE_PWR_DOWN);
if (some_condition)
{
    sleep_enable();
    attachInterrupt(0, pin2_isr, LOW);
    /* 0, 1, or many lines of code here */
    sleep_cpu();
    sleep_disable();
    /* wake up here */
}

void pin2_isr()
{
    sleep_disable();
    detachInterrupt(0);
    pin2_interrupt_flag = 1;
}

Interrupts on AVR

Interrupts are not the same across all hardware, nor across all compilers. Some hardware supports multi-level (nested) interrupts, and some chips use a single interrupt flag. On the Atmel architecture an interrupt works as follows:

1. When an interrupt is triggered, the processor finishes the pending instruction

2. It stops fetching further instructions

3. It clears the Global Interrupt Enable bit (this is why Arduino denies nested interrupts by default, though you can re-enable them yourself)

4. It pushes the PC (program counter) onto the stack

5. It jumps to the interrupt vector (the specific address where the corresponding interrupt handler is expected to be)

6. Next comes the ISR code. With a C compiler a lot of this is done for you; if you are using assembly you have to do all of it yourself:

Push the status register onto the stack, along with any other registers you use in this ISR block

Execute the actual ISR handler code, i.e. whatever you want to do when the specific interrupt occurs

When the actual task is done, roll back the things you changed, i.e. pop the stack back into the status register and the other registers you had pushed

7. The last assembly instruction is RETI, which restores the PC from the stack and re-enables the Global Interrupt bit

So in the whole process everything except steps 6 and 7 is automatic; the user need not worry about those steps.


ISR in C

Use variables with the volatile type qualifier; otherwise the compiler's optimiser may treat the variable as unchanging (assuming it is not modified outside main()) and optimise the accesses away.

  • Keep the ISR code as short as possible in execution time, not in length.
  • Don't activate or deactivate interrupts inside an ISR.
  • Don't call another function that uses interrupts.

Now, the above rules are for ideal people; you can break them if you know what the hardware and the compiler do. Suppose you have to do some of the banned things immediately after the interrupt is triggered. I have two methods: one is to stay ideal, the other is to break the rules.

A. If your main loop contains things that run only when an interrupt is triggered, and interrupts do not overlap, you can set a special flag to activate a part of the main code (the code you wanted to run in the ISR). The flag is set in the actual ISR and cleared in the main-loop part.
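Method A as a minimal sketch (INT0 is an assumed trigger here; the heavy work happens in the main loop with interrupts enabled):

```c
#include <avr/interrupt.h>
#include <stdint.h>

volatile uint8_t event_flag = 0;   /* volatile: shared between ISR and main */

ISR(INT0_vect)                     /* the real ISR: set the flag, nothing more */
{
    event_flag = 1;
}

int main(void)
{
    sei();                         /* enable global interrupts */
    for (;;) {
        if (event_flag) {
            event_flag = 0;        /* cleared in the main-loop part */
            /* ... the long "banned" work goes here, interrupts stay enabled ... */
        }
    }
}
```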


B. The next method is really complicated: you have to mess with both the hardware and the compiler. As you know, when an interrupt occurs the Global Interrupt Enable bit is cleared and the PC is pushed onto the stack by the hardware, while the status register and other registers are moved to the stack by the compiler. So if you want to accept another interrupt inside the ISR you have to set GIE again; if an interrupt then occurs it will again push the PC, but the compiler-generated part (saving and restoring the registers) is where it gets complex.

Some people implement multi-level interrupts in software. The main interrupt handler acts as a fast first-level handler which catches the interrupt event, stores the necessary data and exits the ISR, after which the secondary handler runs. Exiting the fast handler executes RETI, so all interrupts are resumed.


MSP430 with IAR

First of all, the question is: why IAR and not CCS or GCC? Here is my answer.

You are not learning all this just for fun; if you are, then you may go with GCC as well. But if you are learning both for fun and for a career, you should prefer the toolchain used by industry. TI's CCS and IAR EW cost about the same, so why not choose the one that supports multiple targets in the same IDE design (although IAR sells each of them individually)?

Download the installer from the IAR Systems page. You can install it in two different modes: one is the code-size-limited version, the other is the full version with a time-limited licence. Then click Project -> Create New Project; you can choose the language (C, C++ or assembly), and for C and C++ you can choose a template or a project with main().

Anyway, all these things are trivial for geeks like you. The real questions are the headers and the libraries: where are the standard functions and macros, and how does this C/C++ dialect differ from other C standards (although embedded C is always a non-standard C)?

Step 1 should be to download the family user guide: for example, if I am using an MSP430F149, I should read the MSP430x1xx Family User's Guide.

A default UI-based project adds io430.h, which in turn needs you to define your chip name in a define statement, i.e. #define MSP430F149.

But I prefer to include the chip-specific header instead of this chain, i.e. msp430f149.h, which carries the hardware description of that exact chip.
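A minimal sketch using the chip-specific header (stopping the watchdog is mandatory on the MSP430 or it will keep resetting the chip; the LED on P1.0 is just an assumed example):

```c
#include <msp430f149.h>          /* hardware description of this exact chip */

int main(void)
{
    WDTCTL = WDTPW | WDTHOLD;    /* stop the watchdog timer */
    P1DIR |= BIT0;               /* assumed: P1.0 as output, e.g. an LED */

    for (;;) {
        P1OUT ^= BIT0;           /* toggle the pin */
        __delay_cycles(100000);  /* crude busy-wait delay */
    }
}
```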

For delays you may use __delay_cycles(x), which gives a delay of x instruction cycles; one instruction cycle's duration = (1 / MCLK frequency in Hz) seconds.