Saturday, February 27, 2010

Body Area Network (BAN)


A Body Area Network is formally defined by IEEE 802.15 as "a communication standard optimized for low power devices and operation on, in or around the human body (but not limited to humans) to serve a variety of applications including medical, consumer electronics / personal entertainment and other" [IEEE 802.15]. In more common terms, a Body Area Network is a system of devices in close proximity to a person's body that cooperate for the benefit of the user. As IEEE notes, the most obvious application of a BAN is in the medical sector; however, there are also more recreational uses for BANs. This paper discusses the technologies surrounding BANs as well as several common applications, and at the end briefly examines the challenges associated with BANs and some solutions that are on the horizon.

BAN technology is still emerging, and as such it has a very short history. It arose as a natural byproduct of existing sensor network technology and biomedical engineering. Professor Guang-Zhong Yang was the first to formally define the phrase "Body Sensor Network" (BSN), with the publication of his book Body Sensor Networks in 2006. BSN technology represents the lower bound of power and bandwidth among the BAN use case scenarios; however, BAN technology is quite flexible and there are many potential uses for it beyond BSNs.

  Download :     Reports & Presentation (.zip)



Friday, February 26, 2010

iDEN (Integrated Digital Enhanced Network )


iDEN is a mobile telecommunications technology, developed by Motorola, which provides its users the benefits of a trunked radio and a cellular telephone. iDEN places more users in a given spectral space, compared to analog cellular and two-way radio systems, by using speech compression and time division multiple access (TDMA). Notably, iDEN is designed, and licensed, to operate on individual frequencies that may not be contiguous. iDEN operates on 25 kHz channels but occupies only 20 kHz in order to provide interference protection via guard bands. By comparison, TDMA Cellular (IS-54 and IS-136) is licensed in blocks of 30 kHz channels, but each emission occupies 40 kHz, and it is capable of serving the same number of subscribers per channel as iDEN. iDEN supports either three or six interconnect users (phone users) per channel, and either six or twelve dispatch users (push-to-talk users) per channel. Since there is no analog component of iDEN, mechanical duplexing in the handset is unnecessary, so Time Domain Duplexing is used instead, the same way that other digital-only technologies duplex their handsets. Also, like other digital-only technologies, hybrid or cavity duplexing is used at the base station (cell site).

iDEN technology is a highly innovative, cutting-edge system of technologies developed by Motorola to create an ideal, complete wireless communications system for today's fast-paced, busy lifestyle. Advanced capabilities bring together the features of dispatch radio, full-duplex telephone interconnect, short messaging service and data transmission.

iDEN technology offers you more than just a wireless phone; it's a complete Motorola communications system that you hold in your hand. It combines a speakerphone, voice command, phone book, voice mail, digital two-way radio, mobile Internet and e-mail, wireless modems, voice activation, and voice recordings so that you can virtually recreate your office on the road.

  Download :     Full Report (.doc)



Wednesday, February 24, 2010

Pixie Dust


In each of the past five years, hard drive capacities have doubled, keeping storage costs low and allowing technophiles and PC users to sock away more data. However, storage buffs believed the rate of growth could continue for only so long, and many asserted that the storage industry was about to hit the physical limit for higher capacities. But according to IBM, a new innovation will push back that limit. The company is the first to mass-produce computer hard disk drives using a revolutionary new type of magnetic coating that is eventually expected to quadruple the data density of current hard disk drive products -- a level previously thought to be impossible, but crucial to continue feeding the information-hungry Internet economy. For consumers, increased data density will help hasten the transition in home entertainment from passive analog technologies to interactive digital formats.

The key to IBM's new data storage breakthrough is a three atom-thick layer of the element ruthenium, a precious metal similar to platinum, sandwiched between two magnetic layers. That only a few atoms could have such a dramatic impact caused some IBM scientists to refer to
the ruthenium layer informally as "pixie dust". Known technically as "antiferromagnetically-coupled (AFC) media," the new multilayer coating is expected to permit hard disk drives to store 100 billion bits (gigabits) of data per square inch of disk area by 2003. Current hard drives can store 20 gigabits of data per square inch. IBM began shipping Travelstar hard drives in May 2001 that are capable of storing 25.7 gigabits per square inch. Drives shipped later in the year are expected to be capable of 33% greater density. In information technology, the term "pixie dust" is often used to refer to a technology that seemingly does the impossible. In the past decade, the data density for magnetic hard disk drives has increased at a phenomenal pace: doubling every 18 months and, since 1997, doubling every year, which is much faster than the vaunted Moore's Law for integrated circuits. It was assumed in the storage industry that the upper limit would soon be reached. The superparamagnetic effect has long been predicted to
appear when densities reached 20 to 40 gigabits per square inch - close to the data density of current products.

IBM discovered a means of adding AFC to its standard production methods so that the increased capacity costs little or nothing. The company, which plans to implement the process across its entire line of products, chose not to publicize the technology in advance. Many companies have focused research on the use of AFC in hard drives; a number of vendors, such as Seagate Technology and Fujitsu, are expected to follow IBM's lead. Prices of hard drives are unlikely to increase dramatically, because AFC increases the density and storage capacity without the addition of expensive disks, where data is stored, or of heads, which read data off the disks. AFC will also allow smaller drives to store more data and use less power, which could lead to smaller and quieter devices. Developed by IBM Research, this new magnetic medium uses multilayer interactions and is expected to permit longitudinal recording to achieve a future data density of 100 gigabits per square inch without suffering from the projected data loss due to thermal instabilities. This new medium will thus delay for several years the impact of superparamagnetism in limiting future areal density increases. It also requires few changes to other aspects of hard-disk-drive design, and will surely push back in time the industry's consideration of more complex techniques proposed for very high-density magnetic recording, such as perpendicular recording, patterned media or thermally assisted writing.

  Download :     Full Report (.pdf)



Proteomics


Proteomics is a relatively new field in biotechnology. It is basically the study of the proteome, the collective body of proteins made by a person's cells and tissues. Since it is proteins, and to a much lesser extent other types of biological molecules, that are directly involved in both normal and disease-associated biochemical processes, a more complete understanding of disease may be gained by looking directly at the proteins present within a diseased cell or tissue, and this is achieved through the study of the proteome. Proteomics requires 2-D electrophoresis equipment to separate the proteins, mass spectrometry to identify them, and X-ray crystallography to learn more about the structure and function of the proteins. This equipment is essential to the study of proteomics.

The exact definition of proteomics varies depending on whom you ask, but most scientists agree that it can be broken into three main activities: identifying all the proteins made in a given cell, tissue or organism; determining how these proteins join forces to form networks akin to electrical circuits; and outlining the precise three-dimensional structure of the proteins in an effort to find their Achilles' heels, that is, where drugs might turn their activity on or off. Though the task sounds straightforward, it is not as simple as it seems.

The critical pathway of proteome research includes:
  • Sample collection
  • Protein separation
  • Protein identification
  • Protein characterization
  • Bioinformatics
These are the major steps involved in proteome studies.

  Download :     Full Report (.doc)



Smart Pixel Arrays


High-speed smart pixel arrays (SPAs) hold great promise as an enabling technology for board-to-board interconnections in digital systems. SPAs may be considered an extension of a class of optoelectronic components that has existed for over a decade, that of optoelectronic integrated circuits (OEICs). The vast majority of development in OEICs has involved the integration of electronic receivers with optical detectors and electronic drivers with optical sources or modulators. In addition, very little of this development has involved more than a single optical channel. But OEICs have underpinned much of the advancement in serial fiber links. SPAs encompass an extension of these optoelectronic components into arrays in which each element of the array has a signal-processing capability. Thus, a SPA may be described as an array of optoelectronic circuits for which each circuit possesses the property of signal processing and, at a minimum, optical input or optical output (most SPAs will have both optical input and output). The name smart pixel combines two ideas: "pixel" is an image-processing term denoting a small part, or quantized fragment, of an image, while "smart" is borrowed from standard electronics and reflects the presence of logic circuits. Together they describe a myriad of devices. These smart pixels can be almost entirely optical in nature, perhaps using the non-linear optical properties of a material to manipulate optical data, or they can be mainly electronic, for instance a photoreceiver coupled with some electronic switching.

  Download :     Full Report (.pdf)



Intel MMX Technology


The Intel MMX™ technology comprises a set of extensions to the Intel architecture (IA) that are designed to greatly enhance the performance of advanced media and communications applications. These extensions (which include new registers, data types and instructions) are combined with the Single Instruction, Multiple Data (SIMD) execution model to accelerate the performance of applications such as motion video, combined graphics with video, image processing, audio synthesis, speech synthesis and compression, and 2D and 3D graphics, which typically use compute-intensive algorithms to accomplish their purpose. All existing software that does not make use of this technology will also run on the processor without modification. Presented below is an elementary treatment of this technology from a programmer's point of view.

The MMX™ register set consists of eight 64-bit registers. The MMX™ instructions access the MMX™ registers directly using the register names MM0 through MM7. These registers can only be used to perform calculations on the MMX™ data types; they can never be used to address memory. Addressing of MMX™ instruction operands in memory is handled by using the standard IA addressing modes (immediate, register mode, etc.) and the general-purpose registers.
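
To make the SIMD idea concrete, here is a minimal sketch in C, assuming a compiler that exposes the <mmintrin.h> MMX intrinsics (for example gcc -mmmx on x86); the sample values are purely illustrative, and the intrinsics map onto MMX instructions such as PADDW and EMMS.

    /* Minimal MMX sketch: add four packed 16-bit words with one SIMD operation.
     * Assumes a compiler providing the <mmintrin.h> MMX intrinsics. */
    #include <stdio.h>
    #include <mmintrin.h>

    union vec { __m64 m; short w[4]; };   /* view a 64-bit MMX value as 4 words */

    int main(void)
    {
        union vec a = { .w = { 1, 2, 3, 4 } };      /* illustrative data */
        union vec b = { .w = { 10, 20, 30, 40 } };
        union vec r;

        /* One packed add (PADDW) adds all four 16-bit pairs in parallel,
         * using the MM0-MM7 registers allocated by the compiler. */
        r.m = _mm_add_pi16(a.m, b.m);

        _mm_empty();   /* EMMS: clear MMX state before any x87 floating point */

        printf("%d %d %d %d\n", r.w[0], r.w[1], r.w[2], r.w[3]);   /* 11 22 33 44 */
        return 0;
    }

Note that the MMX registers are never used to address memory, exactly as described above; the operands come from ordinary C variables addressed through the standard IA modes.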

  Download :     Full Report (.pdf)



VISNAV


Nowadays there are several navigation systems for positioning objects, and several research efforts have been carried out in the field of six-degrees-of-freedom estimation for rendezvous and proximity operations. One such navigation system used for six-degrees-of-freedom position and attitude estimation is the VISion-based NAVigation (VISNAV) system. It is aimed at achieving better accuracies in six-degrees-of-freedom estimation using a simpler and more robust approach.

The VISNAV system uses a Position Sensitive Diode (PSD) sensor for 6-DOF estimation. The output current from the PSD sensor determines the azimuth and elevation of a light source with respect to the sensor. By having four or more light sources, called beacons, in the target frame at known positions, the six-degree-of-freedom data associated with the sensor are calculated. The beacon channel separation and demodulation are done on a fixed-point digital signal processor (DSP), the Texas Instruments TMS320C55x, using digital down-conversion, synchronous detection and multirate signal processing techniques. The demodulated sensor currents due to each beacon are communicated to a floating-point DSP, the Texas Instruments TMS320VC33, for the subsequent navigation solution by the use of collinearity equations.
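
For reference, the collinearity equations referred to above take the standard photogrammetric form shown below (textbook notation, not necessarily the exact symbols used in the report), where (X_i, Y_i, Z_i) is the known position of beacon i, (X_c, Y_c, Z_c) the unknown sensor position, r_{jk} the elements of the rotation matrix encoding the unknown attitude, f the effective focal length of the PSD optics, and (x_i, y_i) the measured image coordinates:

    x_i = -f \frac{r_{11}(X_i - X_c) + r_{12}(Y_i - Y_c) + r_{13}(Z_i - Z_c)}
                  {r_{31}(X_i - X_c) + r_{32}(Y_i - Y_c) + r_{33}(Z_i - Z_c)},
    \qquad
    y_i = -f \frac{r_{21}(X_i - X_c) + r_{22}(Y_i - Y_c) + r_{23}(Z_i - Z_c)}
                  {r_{31}(X_i - X_c) + r_{32}(Y_i - Y_c) + r_{33}(Z_i - Z_c)}

With four or more beacons this gives at least eight such equations, which can be solved (for example by iterative least squares) for the six unknowns of position and attitude.
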
Among other competing systems, a differential global positioning system (GPS) is limited to midrange accuracies and lower bandwidth, and requires complex infrastructure. Sensor systems based on differential GPS are also limited by geometric dilution of precision, multipath errors, receiver errors, etc. These limitations can be overcome by using the DSP-embedded VISNAV system.

  Download :     Full Report (.pdf)



Single photon emission computed tomography (SPECT)


Emission Computed Tomography is a technique whereby multiple cross-sectional images of tissue function can be produced, thus removing the effect of overlying and underlying activity. The technique of ECT is generally considered as two separate modalities. Single Photon Emission Computed Tomography involves the use of the single gamma ray emitted per nuclear disintegration. Positron Emission Tomography makes use of radioisotopes such as gallium-68, in which two gamma rays, each of 511 keV, are emitted simultaneously when a positron from a nuclear disintegration annihilates in tissue.

SPECT, the acronym of Single Photon Emission Computed Tomography, is a nuclear medicine technique that uses radiopharmaceuticals, a rotating camera and a computer to produce images which allow us to visualize functional information about a patient's specific organ or body system. SPECT images are functional in nature rather than purely anatomical, as in ultrasound, CT and MRI. SPECT, like PET, acquires information on the concentration of radionuclides introduced into the patient's body. SPECT dates from the early 1960s, when the idea of emission transverse-section tomography was introduced by D. E. Kuhl and R. Q. Edwards, prior to PET, X-ray CT or MRI. The first commercial single-photon ECT, or SPECT, imaging device was developed by Edwards and Kuhl, who produced tomographic images from emission data in 1963. Many research systems which became clinical standards were also developed in the 1980s.

  Download :     Full Report (.doc)



Quantum Dot Lasers


The infrastructure of the Information Age has to date relied upon advances in microelectronics to produce integrated circuits that continually become smaller, better, and less expensive. The emergence of photonics, where light rather than electricity is manipulated, is poised to further advance the Information Age. Central to the photonic revolution is the development of miniature light sources such as quantum dots (QDs). Today, quantum dot manufacturing has been established to serve new datacom and telecom markets. Recent progress in microcavity physics, new materials, and fabrication technologies has enabled a new generation of high-performance QDs. This presentation will review commercial QDs and their applications, as well as discuss recent research, including new device structures such as composite resonators and photonic crystals. Semiconductor lasers are key components in a host of widely used technological products, including compact disk players and laser printers, and they will play critical roles in optical communication schemes. The basis of laser operation depends on the creation of non-equilibrium populations of electrons and holes, and the coupling of electrons and holes to an optical field, which will stimulate radiative emission. Other benefits of quantum dot active layers include further reduction in threshold currents and an increase in differential gain, that is, more efficient laser operation.

Since the 1994 demonstration of a quantum dot (QD) semiconductor laser, the research progress in developing lasers based on QDs has been impressive. Because of their fundamentally different physics, which stems from zero-dimensional electronic states, QD lasers now surpass the established planar quantum well laser technology in several respects. These include their minimum threshold current density, the threshold dependence on temperature, and the range of wavelengths obtainable in given strained-layer material systems. Self-organized QDs are formed through strained-layer epitaxy. Once sufficient strain has accumulated, the growth front can spontaneously reorganize to form three-dimensional islands. The greater strain relief provided by the three-dimensionally structured crystal surface prevents the formation of dislocations. When covered with additional epitaxy, the coherently strained islands form the QDs that trap and isolate individual electron-hole pairs to create efficient light emitters.
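
As a rough illustration of why zero-dimensional confinement changes the physics (a textbook particle-in-a-box estimate, not a figure from the report): a carrier of effective mass m* confined in a box of side L has quantized energy levels

    E_{n_x n_y n_z} = \frac{\hbar^2 \pi^2}{2 m^* L^2} \left( n_x^2 + n_y^2 + n_z^2 \right),

so for dot sizes on the order of ten nanometres the level spacing can exceed the room-temperature thermal energy k_B T, giving the discrete, atom-like density of states that underlies the low threshold currents and weak temperature dependence mentioned above.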

  Download :     Full Report (.doc)



Web Spoofing


                          The Web is currently the pre-eminent medium for electronic service delivery to remote users. As a consequence, authentication of servers is more important than ever. Even sophisticated users base their decision whether or not to trust a site on browser cues—such as location bar information, SSL icons, SSL warnings, certificate information, response time, etc.
In the seminal work on web spoofing, Felten et al showed how a malicious server could forge some of these cues—but using approaches that are no longer reproducible. However, subsequent evolution of Web tools has not only patched security holes—it has also added new
technology to make pages more interactive and vivid. In this paper, we explore the feasibility of
web spoofing using this new technology—and we show how, in many cases, every one of the
above cues can be forged.

Nearly every aspect of social, government, and commercial activity is moving into electronic
settings. The World Wide Web is the de facto standard medium for these services. Inherent properties of the physical world make it sufficiently difficult to forge a convincing storefront or ATM that successful attacks create long-cited anecdotes. As a consequence, users of physical
services—stores, banks, newspapers—have developed a reasonably effective intuition of when to trust that a particular service offering is exactly what it appears to be. However, moving from
“bricks and mortar” to electronic introduces a fundamental new problem: bits are malleable.
Does this intuition still suffice for the new electronic world? When one clicks on a link that says “Click Here to go to TrustedStore.Com,” how does one know that’s where one has been taken?
Answering these questions requires examining how users make judgments about whether to trust a particular Web page for a particular service. Indeed, the issue of user trust judgment is largely overlooked; research addressing how to secure Web servers, how to secure the client-server connection, and how to secure client-side management risks being rendered moot if the final transmission of trust information to the human user is neglected.

  Download :     Full Report (.pdf)



Voice User Interface


In its most generic sense, a voice portal can be defined as "speech-enabled access to Web-based information". In other words, a voice portal provides telephone users with a natural-language interface to access and retrieve Web content. An Internet browser can provide Web access from a computer but not from a telephone; a voice portal provides that access from the telephone. The voice portal market is exploding, with enormous opportunities for service providers to grow business and revenues. Voice-based Internet access uses rapidly advancing speech recognition technology to give users anytime, anywhere communication and access, using the human voice, over an office, wireless, or home phone. Here we describe the various technology factors that are making voice portals the next big opportunity on the Web, as well as the various approaches service providers and developers of voice portal solutions can follow to maximize this exciting new market opportunity.

For a voice portal to function, one of the most important technologies to include is a good VUI (Voice User Interface). There has been a great deal of development in the field of interaction between the human voice and computer systems, and such systems are starting to be implemented in many other fields. Insurance, for example, has turned to interactive voice response (IVR) systems to provide telephonic customer self-service, reduce the load on call-center staff, and cut overall service costs. The promise is certainly there, but how well these systems perform (and, ultimately, whether customers leave the system satisfied or frustrated) depends in large part on the user interface. Many IVR applications use Touch-Tone interfaces, known as DTMF (dual-tone multi-frequency), in which customers are limited to making selections from a menu. As transactions become more complex, the effectiveness of DTMF systems decreases. In fact, IVR and speech recognition consultancy Enterprise Integration Group (EIG) reports that customer utilization rates of available DTMF systems in financial services, where transactions are primarily numeric, are as high as 90 percent; in contrast, customers' use of insurers' DTMF systems is less than 40 percent. Enter some more acronyms. Automated speech recognition (ASR) is the engine that drives today's voice user interface (VUI) systems. These let customers break the 'menu barrier' and perform more complex transactions over the phone. "In many cases the increase in self-service when moving from DTMF to speech can be dramatic," said EIG president Rex Stringham.

The best VUI systems are "speaker independent"-they understand naturally spoken dialog regardless of the speaker. And that means not only local accents, but regional dialects, local phrases such as "pop" versus "soda," people who talk fast (you know who you are), and all the
various nuances of speech. Those nuances are good for human beings; they allow us to recognize each other by voice. For computers, however, they make the process much more difficult. That's why a handheld or pocket computer still needs a stylus, and why the 'voice dialing' offered by some cell-phone companies still seems high-tech. Voice recognition is tough. And sophisticated packages not only can recognize a wide variety of speakers, they also allow experienced users to interrupt menu prompts ("barge-in") and can capture compound
instructions such as "I'd like to transfer a thousand dollars from checking to savings" in one command rather than several.

These features are designed to not only overcome limitations of DTMF but to increase customer use and acceptance of IVR systems. The hope is that customers will eventually be comfortable telling a machine "I want to add a driver to my Camry's policy." Besides taking some of the load off customer service representatives, VUI vendors promise an attractive ROI to help get these systems into insurers' IT budgets. ASR systems can be enabled with voice authentication, eliminating the need for PINs and passwords. Call centers themselves will likely transform into units designed to support customers regardless of whether contact comes from a telephone, the Web, e-mail, or a wireless device. At the same time, the 'voice Web' is evolving, where browsers or Wireless Application Protocol (WAP)-enabled devices display information based on what the user vocally asks for. "We're definitely headed toward multi-modal applications," Ehrlich predicts. ASR vendors are working to make sure that VUI evolves to free staff from dealing with voice-related channels; it's better to have them supporting the various modes of service that are just now beginning to emerge.


  Download :     Full Report (.doc)



Intelligent Voice Response System (IVRS)


In today's competitive world any business must build flexible systems that adapt easily to the evolving requirements of its critical business processes. IVRS is one such system, transforming the traditional business model into a customer-centric model. Historically, IVRS has been prompt driven, walking the caller through a series of prompts where they respond to questions by pressing a combination of one or more buttons on the phone keypad. The decision tree associated with the prompts and the responses routes the caller to the information they desire. These IVRS systems are typically used to check bank account balances, buy and sell stocks, or check the show times for a movie. In telephony, Intelligent Voice Response, or IVR, is a phone technology that allows a computer to detect voice and touch tones using a normal phone call. The IVR system can respond with pre-recorded or dynamically generated audio to further direct callers on how to proceed. IVR systems can be used to control almost any function where the interface can be broken down into a series of simple menu choices. Once constructed, IVR systems generally scale well to handle large call volumes.
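
The decision tree described above maps naturally onto a small data structure. The sketch below, in C, is purely illustrative (the menu items, prompts and keys are hypothetical, not taken from the report); a real IVR platform would play recorded prompts and read DTMF digits from the telephony hardware rather than from standard input.

    /* Illustrative IVR decision tree driven by DTMF-style digit choices.
     * Prompts and menu layout are hypothetical. */
    #include <stdio.h>

    struct menu_node {
        const char *prompt;                 /* prompt "played" to the caller      */
        const struct menu_node *child[10];  /* next node for digits 0-9, or NULL  */
    };

    /* Leaf nodes: reaching one of these ends the call. */
    static const struct menu_node balance   = { "Your account balance is ...", { 0 } };
    static const struct menu_node showtimes = { "Today's show times are ...",  { 0 } };

    /* Interior nodes of the decision tree. */
    static const struct menu_node banking = { "Press 1 for your account balance.",
                                              { 0, &balance } };
    static const struct menu_node root    = { "Press 1 for banking, 2 for movie show times.",
                                              { 0, &banking, &showtimes } };

    static int is_leaf(const struct menu_node *n)
    {
        for (int d = 0; d < 10; d++)
            if (n->child[d])
                return 0;
        return 1;
    }

    int main(void)
    {
        const struct menu_node *node = &root;
        int key;

        for (;;) {
            printf("[PROMPT] %s\n", node->prompt);
            if (is_leaf(node))
                break;                          /* leaf reached: end of call */

            printf("Enter a digit (q to hang up): ");
            key = getchar();
            if (key != '\n')
                while (getchar() != '\n' && !feof(stdin)) ;  /* discard rest of line */
            if (key == 'q' || key == EOF)
                break;

            if (key >= '0' && key <= '9' && node->child[key - '0'])
                node = node->child[key - '0'];  /* follow the decision tree */
            else
                printf("[PROMPT] Sorry, that is not a valid choice.\n");
        }
        return 0;
    }

Because every node is just a prompt plus up to ten branches, adding a new self-service function is a matter of adding nodes, which is one reason such systems scale well to large call volumes.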

  Download :     Full Report (.pdf)



BLAST


The explosive growth of both the wireless industry and the Internet is creating a huge market opportunity for wireless data access. Limited Internet access, at very low speeds, is already available as an enhancement to some existing cellular systems. However, those systems were designed with the purpose of providing voice services and at most short messaging, not fast data transfer. Traditional wireless technologies are not very well suited to meet the demanding requirements of providing very high data rates with the ubiquity, mobility and portability characteristics of cellular systems. Increased use of antenna arrays appears to be the only means of enabling the type of data rates and capacities needed for wireless Internet and multimedia services. While the deployment of base station arrays is becoming universal, it is really the simultaneous deployment of base station and terminal arrays that can unleash unprecedented levels of performance by opening up multiple spatial signaling dimensions.
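
To make the "multiple spatial signaling dimensions" point concrete, the standard information-theoretic result (textbook form, not quoted from the report) for a link with N_t transmit and N_r receive antennas, channel matrix H and signal-to-noise ratio \rho is

    C = \log_2 \det\left( \mathbf{I}_{N_r} + \frac{\rho}{N_t}\,\mathbf{H}\mathbf{H}^{H} \right) \ \text{bits/s/Hz},

which in a rich-scattering channel grows roughly linearly with \min(N_t, N_r) rather than only logarithmically with transmit power; layered space-time (BLAST) architectures are attempts to approach this capacity with practical receivers.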

Theoretically, user data rates as high as 2 Mb/s will be supported in certain environments, although recent studies have shown that approaching those rates might only be feasible under extremely favorable conditions: in the vicinity of the base station and with no other users competing for bandwidth. Some fundamental barriers, related to the nature of the radio channel as well as to the limited bandwidth available at the frequencies of interest, stand in the way of the high data rates and low cost associated with wide access.


  Download :     Full Report (.pdf)



Embedded DRAM


Even though the word DRAM has been common currency for many decades, development in the field of DRAM was for a long time very slow. The storage medium reached its present semiconductor form only after long scientific research. Once the semiconductor storage medium was well accepted, plans were put forward to integrate the logic circuits associated with the DRAM along with the DRAM itself. However, technological complexities and economic justification for such a complex integrated circuit are difficult hurdles to overcome. Although scientific breakthroughs are numerous in the commodity DRAM industry, similar techniques are not always appropriate when high-performance logic circuits are included on the same substrate. Hence, eDRAM pioneers have begun to develop numerous integration schemes.

This seemingly subtle semantic difference significantly impacts mask count, system performance, peripheral circuit complexity, and total memory capacity of eDRAM products. Furthermore, corporations with aggressive commodity DRAM technology do not have expertise in the design of complicated digital functions and are not able to assemble a design team to complete the task of a truly merged DRAM-logic product. Conversely, small application-specific integrated circuit (ASIC) design corporations, unfamiliar with DRAM-specific elements and design practice, cannot carry out an efficient merged-logic design and therefore mar the beauty of the original intent to integrate. Clearly, the reuse of process technology is an enabling factor en route to cost-effective eDRAM technology. By the same account, modern circuit designers should be familiar with the new elements of eDRAM technology so that they can efficiently reuse DRAM-specific structures and elements in other digital functions. The reuse of additional electrical elements is a methodology that will make eDRAM more than just a memory interconnected to a few million Boolean gates.

  Download :     Full Report (.pdf)



HTAM


The amazing growth of the Internet and telecommunications is powered by ever-faster systems demanding increasingly higher levels of processor performance. To keep up with this demand we cannot rely entirely on traditional approaches to processor design. The microarchitecture techniques used to achieve past processor performance improvements (superpipelining, branch prediction, super-scalar execution, out-of-order execution, caches) have made microprocessors increasingly complex, with more transistors and higher power consumption. In fact, transistor counts and power are increasing at rates greater than processor performance. Processor architects are therefore looking for ways to improve performance at a greater rate than transistor counts and power dissipation. Intel's Hyper-Threading Technology is one solution.

Intel's Hyper-Threading Technology brings the concept of simultaneous multi-threading to the Intel Architecture. Hyper-Threading Technology makes a single physical processor appear as two logical processors; the physical execution resources are shared and the architecture state is duplicated for the two logical processors. From a software or architecture perspective, this means operating systems and user programs can schedule processes or threads to logical processors as they would on multiple physical processors. From a microarchitecture perspective, this means that instructions from both logical processors will persist and execute
simultaneously on shared execution resources. This paper describes the Hyper-Threading Technology architecture, and discusses the microarchitecture details of Intel's first implementation on the Intel Xeon processor family. Hyper-Threading Technology is an important
addition to Intel’s enterprise product line and will be integrated into a wide variety of products.
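
As a small illustration of how software can see this duplication of architecture state, the sketch below checks the Hyper-Threading feature flag via CPUID. It assumes GCC or Clang on an x86 processor (the <cpuid.h> helper); it only inspects CPUID leaf 1, EDX bit 28, and does not fully enumerate logical processors, which would require further CPUID leaves or operating-system APIs.

    /* Sketch: check the Hyper-Threading Technology feature flag (CPUID.1:EDX[28]).
     * Assumes GCC or Clang on an x86 processor. */
    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
            printf("CPUID leaf 1 not supported\n");
            return 1;
        }

        /* Bit 28 of EDX indicates that the package supports more than one
         * logical processor (the HTT flag). */
        if (edx & (1u << 28))
            printf("HTT flag set: up to %u addressable logical processors per package\n",
                   (ebx >> 16) & 0xff);   /* EBX[23:16] = max addressable logical CPUs */
        else
            printf("HTT flag not set\n");

        return 0;
    }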

  Download :     Full Report (.pdf)



HVAC System


Wireless transmission of electromagnetic radiation (communication signals) has become a popular method of transmitting RF signals such as cordless, wireless and cellular telephone signals, pager signals, two-way radio signals, video conferencing signals and LAN signals indoors. Indoor wireless transmission has the advantage that the building in which transmission is taking place does not have to be filled with wires or cables that are equipped to carry a multitude of signals. Wires and cables are costly to install and may require expensive upgrades when their capacity is exceeded or when new technologies require different types of wires and cables than those already installed.

Traditional indoor wireless communication systems transmit and receive signals through a network of transmitters, receivers and antennas placed throughout the interior of a building. Devices must be located so that signals are not lost and signal strength is not unduly attenuated. A change in the existing architecture also affects the wireless transmission. Another challenge related to the installation of wireless networks in buildings is the need to predict RF propagation and coverage in the presence of complex combinations of shapes and materials in the buildings. In general, the attenuation in buildings is larger than that in free space, requiring more cells and higher power to obtain wider coverage. Despite all this, placement of transmitters, receivers and antennas in an indoor environment is largely a process of trial and error. Hence there is a need for a method and a system for efficiently transmitting RF and microwave signals indoors without having to install an extensive system of wires and cables inside the buildings.


  Download :     Full Report (.pdf)



Hybridoma technology


Hybridoma technology is a method of forming hybrid cell lines (called hybridomas) by fusing a specific antibody-producing B cell with a myeloma (B cell cancer) cell that is selected for its ability to grow in tissue culture and for an absence of antibody chain synthesis. The antibodies produced by the hybridoma are all of a single specificity and are therefore monoclonal antibodies (in contrast to polyclonal antibodies). The production of monoclonal antibodies was invented by Cesar Milstein, Georges J. F. Köhler and Niels Kaj Jerne in 1975.

The use of monoclonal antibodies is numerous and includes the prevention, diagnosis, and treatment of disease. For example, monoclonal antibodies can distinguish subsets of B cells and T cells, which is helpful in identifying different types of leukemia.

Monoclonal antibodies (mAb or moAb) are monospecific antibodies that are the same because they are made by one type of immune cell which are all clones of a unique parent cell. Given almost any substance, it is possible to create monoclonal antibodies that specifically bind to that substance; they can then serve to detect or purify that substance.

 Courtesy : Anusha Thampi V.V (SCT College of Engineering , TVM)

  Download :     Full Report (.doc)


Magneto-Optical current transformer (MOCT)


An accurate electric current transducer is a key component of any power system instrumentation. To measure currents, power stations and substations conventionally employ inductive-type current transformers with core and windings. For high-voltage applications, porcelain insulators and oil-impregnated materials have to be used to provide insulation between the primary bus and the secondary windings. The insulation structure has to be designed carefully to avoid electric field stresses, which could eventually cause insulation breakdown. The electric current path of the primary bus has to be designed properly to minimize the mechanical forces on the primary conductors for through faults. The reliability of conventional high-voltage current transformers has been questioned because of their violent destructive failures, which have caused fires and impact damage to adjacent apparatus in switchyards, electric damage to relays, and power service disruptions.

With the short-circuit capacities of power systems getting larger and voltage levels going higher, conventional current transformers become more and more bulky and costly. Moreover, saturation of the iron core under fault currents and the limited frequency response make it difficult to obtain accurate current signals under power system transient conditions. In addition to these concerns, with computer control techniques and digital protection devices being introduced into power systems, conventional current transformers have caused further difficulties, as they are likely to introduce electromagnetic interference through the ground loop into the digital systems. This has required the use of an auxiliary current transformer or optical isolator to avoid such problems. It appears that the newly emerged magneto-optical current transformer technology provides a solution for many of the above-mentioned problems.

The MOCT measures the electric current by means of the Faraday effect, which was first observed by Michael Faraday 150 years ago. The Faraday effect is the phenomenon whereby the orientation of polarized light rotates under the influence of a magnetic field, with the rotation angle proportional to the strength of the magnetic field component in the direction of the optical path. The MOCT measures the rotation angle caused by the magnetic field and converts it into a signal of a few volts proportional to the electric current. It consists of a sensor head located near the current-carrying conductor, an electronic signal processing unit, and fiber-optic cables linking these two parts. The sensor head consists only of optical components such as fiber-optic cables, lenses, polarizers, glass prisms and mirrors. The signal is brought down by fiber-optic cables to the signal processing unit, and there is no need to use metallic wires to transfer the signal. Therefore the insulation structure of an MOCT is simpler than that of a conventional current transformer, and there is no risk of fire or explosion from the MOCT. In addition to the insulation benefits, an MOCT is able to provide high immunity to electromagnetic interference, wider frequency response, large dynamic range and low-level outputs which are compatible with the inputs of analog-to-digital converters. These properties make it ideal for the interface between power systems and computer systems, and there is growing interest in using MOCTs to measure electric currents.
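
The underlying relation (standard Faraday-effect optics, stated here for reference rather than quoted from the report) is

    \theta = V \int_{L} \mathbf{B} \cdot d\boldsymbol{\ell},

where \theta is the polarization rotation angle, V the Verdet constant of the sensing material, and the integral is taken along the optical path L. If the optical path forms N closed loops around the conductor, Ampère's law gives \theta = V \mu_0 N I, so the measured rotation is directly proportional to the conductor current I and largely insensitive to fields from other sources.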

  Download :     Full Report (.doc)



NAVBELT AND GUIDECANE


Recent revolutionary achievements in robotics and bioengineering have given scientists and engineers great opportunities and challenges to serve humanity. This seminar is about the NavBelt and GuideCane, two computerised devices based on advanced mobile-robotic obstacle-avoidance navigation that are useful for visually impaired people. This is "bioengineering for people with disabilities". The NavBelt is worn by the user like a belt and is equipped with an array of ultrasonic sensors. It provides acoustic signals via a set of stereo earphones that guide the user around obstacles or display a virtual acoustic panoramic image of the traveller's surroundings. One limitation of the NavBelt is that it is exceedingly difficult for the user to comprehend the guidance signals in time to allow fast walking. A newer device, called the GuideCane, effectively overcomes this problem. The GuideCane uses the same mobile robotics technology as the NavBelt but is a wheeled device pushed ahead of the user via an attached cane. When the GuideCane detects an obstacle, it steers around it. The user immediately feels this steering action and can follow the GuideCane's new path easily without any conscious effort. The mechanical, electrical and software components, the user-machine interface and the prototypes of the two devices are described below.

   Download :     Full Report (.doc)



Optical Switching


Explosive information demand in the Internet world is creating enormous needs for capacity expansion in next-generation telecommunication networks. It is expected that data-oriented network traffic will double every year. Optical networks are widely regarded as the ultimate solution to the bandwidth needs of future communication systems. Optical fiber links deployed between nodes are capable of carrying terabits of information, but the electronic switching at the nodes limits the bandwidth of a network. Optical switches at the nodes will overcome this limitation. With their improved efficiency and lower costs, optical switches provide the key both to managing the new capacity of Dense Wavelength Division Multiplexing (DWDM) links and to gaining a competitive advantage in the provision of new bandwidth-hungry services. However, in an optically switched network the challenge lies in overcoming signal impairment and network-related parameters. Let us discuss the present status, advantages, challenges and future trends of optical switches.

Optical switches will switch a wavelength or an entire fiber from one pathway to another, leaving the data-carrying packets in a signal untouched. An electronic signal from an electronic processor will set the switch in the right position so that it directs an incoming fiber, or wavelengths within that fiber, to a given output fiber. But none of the wavelengths will be converted to electrons for processing. Optical switching may eventually make obsolete existing lightwave technologies based on the ubiquitous SONET (Synchronous Optical Network) communications standard, which relies on electronics for conversion and processing of individual packets, in tandem with the gradual withering away of Asynchronous Transfer Mode (ATM), another phone-company standard for packaging information.

   Download :     Full Report (.doc)



SKY X Technology

Satellites are an attractive option for carrying Internet and other IP traffic to many locations across the globe where terrestrial options are limited or cost prohibitive. But data networking over satellite must overcome the large latency and high bit error rate typical of satellite communications, as well as the asymmetric bandwidth design of most satellite networks. Satellites are ideal for providing Internet and private network access over long distances and to remote locations; however, the Internet protocols are not optimized for satellite conditions, so throughput over satellite networks is restricted to only a fraction of the available bandwidth. Mentat, the leading supplier of TCP/IP to the computer industry, has overcome these limitations with the development of the Sky X product family. The Sky X system replaces TCP over the satellite link with a protocol optimized for the long latency, high loss and asymmetric bandwidth conditions of typical satellite communication. The Sky X family consists of the Sky X Gateway, Sky X Client/Server and Sky X OEM products, which increase the performance of IP over satellite by transparently replacing TCP over the satellite link. The Sky X Gateway works by intercepting the TCP connection from the client and converting the data to the Sky X protocol for transmission over the satellite. The Sky X Client/Server product operates in a similar manner, except that the Sky X client software is installed on each end user's PC; connections from applications running on the PC are intercepted and sent over the satellite using the Sky X protocol.
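
The throughput ceiling comes from TCP's window mechanism. With the standard (unscaled) 64 KB receive window and a geostationary round-trip time of roughly 550 ms (a typical figure, not one quoted in the report), the best case is

    \text{throughput} \le \frac{\text{window size}}{\text{RTT}} = \frac{65{,}535 \times 8\ \text{bits}}{0.55\ \text{s}} \approx 0.95\ \text{Mbit/s},

independent of the link capacity, which is why an unmodified TCP connection can use only a fraction of a high-bandwidth satellite channel.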

The Sky X Gateway and Sky X Client/Server systems replace TCP over the satellite link with a protocol optimized for the long latency, high loss and asymmetric bandwidth conditions of typical satellite communication. Adding the Sky X system to a satellite network allows users to take full advantage of the available bandwidth. The Sky X Gateway transparently enhances the performance of all users on a satellite network without any modifications to the end clients and servers. The Sky X Client and the Sky X Server enhance the performance of data transmissions over satellites directly to end users' PCs, thereby increasing Web performance by 3 times or more and file transfer speeds by 10 to 100 times. The Sky X solution is entirely transparent to end users, works with all TCP applications and does not require any modifications to end clients and servers.

   Download :     Full Report (.doc)     Reports and PPT (.zip)




SAP R/3


After the Internet, SAP R/3 is one of the hottest topics in the computer industry, and the company that developed it, SAP AG, has become one of the most successful companies in the software market. The SAP R/3 system is targeted at most industries: manufacturing, retail, oil and gas, electricity, health care, pharmaceuticals, banking, insurance, telecommunications, transport, automotive, chemicals, and so on. All hardware vendors, without exception, are fully engaged in partnering with SAP: currently AT&T, Bull, Compaq, Data General, Digital, Hewlett-Packard, IBM, Pyramid, Sequent, Siemens-Nixdorf, and Sun have supported and certified SAP R/3 platforms. SAP AG was founded in 1972 by four former IBM employees. Since its foundation, SAP has made significant development and marketing efforts on standard application software, becoming a global market player with its R/2 system for mainframe applications and its R/3 system for open client/server technologies. The company name SAP stands for Systems, Applications and Products in Data Processing. R/3 is a standard software package that can be configured in multiple areas and adapted to the specific needs of a company. To support those needs, SAP includes a large number of business functions, leaving room for further enhancements or adaptability to business practice changes.

   Download :     Full Report (.pdf)



Biomagnetism


Biomagnetism is a combination of two sciences, physics and biology. It is the science in which specifically designed magnets and their energy fields are used to affect the living system: the human body, or what is called the body electric. There are some basic physical laws that come into play with the body electric. The body electric is the energy flow found in the human body. This energy flow is the collective result of minute electrical currents and cellular charge values that run the body and all its functions. Biomagnetism can change and elevate these electrical currents and charges, thereby increasing the efficiency of the body's functional metabolism. The field of scientific research called biomagnetism evolved after the major discovery of magnetic fields associated with the flow of electric currents in the human body. Biomagnetism deals with the study of magnetic contaminants of the body. The science of biomagnetism applies a technology that was originally developed for the measurement of extremely small magnetic fields in physics.

In biomagnetism, the magnetic fields produced by organs or by magnetic contaminants of the body are studied. An example is the fields arising from iron-bearing proteins in the human liver. Magnetic particles may be found in the lungs and stomach, where they are commonly introduced by environmental exposure, particularly for workers in industries dealing with iron or steel. Biomagnetism is a science and should be taken seriously. When proper protocols are followed, biomagnets can help the body heal itself of even chronic and long-term conditions.

   Download :     Full Report (.doc)



Tuesday, February 2, 2010

Steam Turbine


A steam turbine is a mechanical device that extracts thermal energy from pressurized steam, and converts it into rotary motion. Its modern manifestation was invented by Sir Charles Parsons in 1884.
Definitions of steam turbine:
  • Turbine in which steam strikes blades and makes them turn
  • A system of angled and shaped blades arranged on a rotor through which steam is passed to generate rotational energy. Today, normally used in power stations
  • A device for converting energy of high-pressure steam (produced in a boiler) into mechanical power which can then be used to generate electricity.
  • Equipment unit flown through by steam, used to convert the energy of the steam into rotational energy.
A machine for generating mechanical power in rotary motion from the energy of steam at temperature and pressure above that of an available sink. By far the most widely used and most powerful turbines are those driven by steam. Until the 1960s essentially all steam used in turbine cycles was raised in boilers burning fossil fuels (coal, oil, and gas) or, in minor quantities, certain waste products. However, modern turbine technology includes nuclear steam plants as well as production of steam supplies from other sources.

The illustration shows a small, simple mechanical-drive turbine of a few horsepower. It illustrates the essential parts for all steam turbines regardless of rating or complexity: (1) a casing, or shell, usually divided at the horizontal center line, with the halves bolted together for ease of assembly and disassembly; it contains the stationary blade system; (2) a rotor carrying the moving buckets (blades or vanes) either on wheels or drums, with bearing journals on the ends of the rotor; (3) a set of bearings attached to the casing to support the shaft; (4) a governor and valve system for regulating the speed and power of the turbine by controlling the steam flow, and an oil system for lubrication of the bearings and, on all but the smallest machines, for operating the control valves by a relay system connected with the governor; (5) a coupling to connect with the driven machine; and (6) pipe connections to the steam supply at the inlet and to an exhaust system at the outlet of the casing or shell. Steam turbines are ideal prime movers for driving machines requiring rotational mechanical input power. They can deliver constant or variable speed and are capable of close speed control. Drive applications include centrifugal pumps, compressors, ship propellers, and, most important, electric generators.

   Download :     Full Report (.doc)



The Socket Interface


We must have an interface between application programs and the protocol software in order to use network facilities. My seminar is on a model of an interface between application programs and the TCP/IP protocols. The TCP/IP protocol standards do not specify exactly how application programs interact with the protocol software; thus the interface architecture is not standardized, and its design lies outside the scope of the protocol suite. It should further be noted that it is inappropriate to tie the protocols to a particular interface, because no single interface architecture works well on all systems. In particular, because protocol software resides in a computer's operating system, interface details depend on the operating system.

In spite of the lack of standards, a programmer must know about such interfaces to be able to use TCP/IP. Although I have chosen the UNIX operating system in order to explain the model, it is widely accepted and is used in many systems. One thing more: the operations that I will list here have no standard in any sense.
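
As a concrete illustration of the UNIX socket interface this seminar describes, here is a minimal TCP client sketch in C using the classic BSD calls (socket, connect, write, read, close) together with the getaddrinfo resolver; the host name, port and request string are placeholders, not taken from the report.

    /* Minimal TCP client using the BSD/UNIX socket interface.
     * Host name, port and request string are placeholders for illustration. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct addrinfo hints, *res;
        char buf[512];
        ssize_t n;

        memset(&hints, 0, sizeof hints);
        hints.ai_family   = AF_UNSPEC;      /* IPv4 or IPv6 */
        hints.ai_socktype = SOCK_STREAM;    /* TCP */

        /* Resolve the server address (placeholder host and port). */
        if (getaddrinfo("example.com", "80", &hints, &res) != 0) {
            fprintf(stderr, "name resolution failed\n");
            return 1;
        }

        /* Create a socket and connect it to the server. */
        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
            perror("socket/connect");
            return 1;
        }
        freeaddrinfo(res);

        /* Send a request and print whatever the server returns. */
        const char *req = "HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n";
        write(fd, req, strlen(req));
        while ((n = read(fd, buf, sizeof buf - 1)) > 0) {
            buf[n] = '\0';
            fputs(buf, stdout);
        }
        close(fd);
        return 0;
    }

A UDP client looks almost identical with SOCK_DGRAM in place of SOCK_STREAM, which is the sense in which the socket interface hides protocol details behind a small, uniform set of operations.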

   Download :     Full Report (.doc)



Service Oriented Architecture (SOA)


                           SOA is a design for linking business and computational resources (principally organizations, applications and data) on demand to achieve the desired results for service consumers (which can be end users or other services). Service-orientation describes an architecture that uses loosely coupled services to support the requirements of business processes and users. Resources on a network in a SOA environment are made available as independent services that can be accessed without knowledge of their underlying platform implementation. These concepts can be applied to business, software and other types of producer/consumer systems.
The main drivers for SOA adoption are that it links computational resources and promotes their reuse. Enterprise architects believe that SOA can help businesses respond more quickly and cost-effectively to changing market conditions. This style of architecture promotes reuse at the macro (service) level rather than the micro (object) level.
The following guiding principles define the ground rules for development, maintenance, and usage of SOA:
  • Reuse, granularity, modularity, composability, componentization, and interoperability
  • Compliance with standards (both common and industry-specific)
  • Services identification and categorization, provisioning and delivery, and monitoring and tracking.
One obvious and common challenge faced is managing service metadata. Another challenge is providing appropriate levels of security. Interoperability is another important aspect of SOA implementations.

SOA implementations rely on a mesh of software services (a mesh, by analogy with a physical mesh of connected strands, is similar to a web or net in that it has many attached or woven strands). Services comprise unassociated, loosely coupled units of functionality that have no calls to each other embedded in them. Each service implements one action, such as filling out an online application for an account, viewing an online bank statement, or placing an online booking or airline ticket order. Instead of embedding calls to each other in their source code, services use defined protocols that describe how services pass and parse messages, using description metadata.
SOA developers associate individual SOA objects by using orchestration (the automated arrangement, coordination, and management of complex computer systems, middleware, and services). In the process of orchestration the developer associates software functionality (the services) in a non-hierarchical arrangement (in contrast to a class hierarchy) using a software tool that contains a complete list of all available services, their characteristics, and the means to build an application utilizing these sources.

   Download :     Full Report (.doc)



PSYCHO ACOUSTICS


                           Advances in digital audio technology are fueled by two sources: hardware developments and new signal processing techniques. When processors dissipated tens of watts of power and memory densities were on the order of kilobits per square inch, portable playback devices like an MP3 player were not possible. Now, however, power dissipation, memory densities, and processor speeds have improved by several orders of magnitude.

Advancements in signal processing are exemplified by Internet broadcast applications: if the desired sound quality for an Internet broadcast used 16-bit PCM encoding at 44.1 kHz, such an application would require a 1.4 Mbps (2 x 16 x 44.1k) channel for a stereo signal! Fortunately, new bit-rate reduction techniques in signal processing for audio of this quality are constantly being released.
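
Spelling out the arithmetic behind that figure:

    2\ \text{channels} \times 16\ \tfrac{\text{bits}}{\text{sample}} \times 44{,}100\ \tfrac{\text{samples}}{\text{s}} = 1{,}411{,}200\ \text{bit/s} \approx 1.4\ \text{Mbit/s}.

A typical 128 kbit/s MP3 stream of the same material therefore represents roughly an 11:1 reduction, which is the kind of gain the perceptual techniques introduced below make possible.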

Increasing hardware efficiency and an expanding array of digital audio representation formats are giving rise to a wide variety of new digital audio applications. These applications include portable music playback devices, digital surround sound for cinema, high-quality digital radio and television broadcast, Digital Versatile Disc (DVD), and many others.

This paper introduces digital audio signal compression, a technique essential to the implementation of many digital audio applications. Digital audio signal compression is the removal of redundant or otherwise irrelevant information from a digital audio signal, a process that is useful for conserving both transmission bandwidth and storage space. We begin by defining some useful terminology. We then present a typical "encoder" (as compression algorithms are often called) and explain how it functions. Finally, we consider some standards that employ digital audio signal compression and discuss the future of the field.

Psychoacoustics is the study of subjective human perception of sounds; effectively, it is the study of acoustical perception. Psychoacoustic modeling has long been an integral part of audio compression. It exploits properties of the human auditory system to remove the redundancies inherent in audio signals that the human ear cannot perceive: more powerful signals at certain frequencies 'mask' less powerful signals at nearby frequencies by de-sensitizing the human ear's basilar membrane (which is responsible for resolving the frequency components of a signal). The entire MP3 phenomenon is made possible by the confluence of several distinct but interrelated elements: a few simple insights into the nature of human psychoacoustics, a whole lot of number crunching, and conformance to a tightly specified format for encoding and decoding audio into compact bitstreams.

   Download :     Full Report (.doc)


read more...

Passive Millimeter Wave


                           Passive millimeter-wave (PMMW) imaging is a method of forming images through the passive detection of naturally occurring millimeter-wave radiation from a scene. Although such imaging has been performed for decades (or more, if one includes microwave radiometric imaging), new sensor technology in the millimeter-wave regime has enabled the generation of PMMW imaging at video rates and has renewed interest in this area. This interest is, in part, driven by the ability to form images during the day or night; in clear weather or in low-visibility conditions, such as haze, fog, clouds, smoke, or sandstorms; and even through clothing. This ability to see under conditions of low visibility that would ordinarily blind visible or infrared (IR) sensors has the potential to transform the way low-visibility conditions are dealt with. For the military, low visibility can become an asset rather than a liability.

In the commercial realm, fog-bound airports could be eliminated as a cause for flight delays or diversions. For security concerns, imaging of concealed weapons could be accomplished in a nonintrusive manner with PMMW imaging. Like IR and visible sensors, a camera based on PMMW sensors generates easily interpretable imagery in a fully covert manner; no discernible radiation is emitted, unlike radar and lidar. However, like radar, PMMW sensors provide penetrability through a variety of low-visibility conditions (moderate/heavy rainfall is an exception). In addition, the underlying phenomenology that governs the formation of PMMW images leads to two important features. First, the signature of metallic objects is very different from natural and other backgrounds. Second, the clutter variability is much less in PMMW images than in other sensor images. Both of these characteristics lead to much easier automated target detection with fewer false alarms.

The wide range of military imaging missions that would benefit from an imaging capability through low-visibility conditions, coupled with its inherent covertness, includes surveillance, precision targeting, navigation, aircraft landing, refueling in clouds, search and rescue, metal detection in a cluttered environment, and harbor navigation/surveillance in fog. Similarly, a number of civilian missions would benefit, such as commercial aircraft landing aid in fog, airport operations in fog, harbor surveillance, highway traffic monitoring in fog, and concealed weapons detection in airports and other locations. This article introduces the concept of PMMW imaging, describes the phenomenology that defines its performance, explains the technology advances that have made these systems a reality, and presents some of the missions in which these sensors can be used.

   Download :     Full Report (.doc)


read more...

OVONIC UNIFIED MEMORY


                           Ovonic unified memory (OUM) is an advanced memory technology that uses a chalcogenide alloy (GeSbTe). The alloy has two states: a high-resistance amorphous state and a low-resistance polycrystalline state, which represent the reset and set states respectively. The performance and attributes of the memory make it an attractive alternative to flash memory and potentially competitive with existing nonvolatile memory technologies.

Almost 25% of the worldwide chip market consists of memory devices, each type used for its specific advantages: the high speed of SRAM, the high integration density of DRAM, or the nonvolatile capability of flash memory. The industry is searching for a holy grail of future memory technologies to serve the upcoming market of portable and wireless devices. These applications are already available based on existing memory technology, but for successful market penetration, higher performance at a lower price is required.

The existing technologies are characterized by the following limitations: DRAMs are difficult to integrate, SRAMs are expensive, flash memory supports only a limited number of write/erase cycles, and EPROMs have high power requirements and poor flexibility. There is a growing need for a nonvolatile memory technology for high-density stand-alone and embedded CMOS applications with faster write speed and higher endurance than existing nonvolatile memories. OUM is a promising technology to meet this need. R. G. Neale, D. L. Nelson, and Gordon E. Moore originally reported a phase-change memory array based on chalcogenide materials in 1970. Improvements in phase-change materials technology subsequently paved the way for the development of commercially available rewritable CD and DVD optical memory disks. These advances, coupled with significant technology scaling and a better understanding of the fundamental electrical device operation, have motivated development of the OUM technology at the present-day technology node.

   Download :     Full Report (.doc)


read more...

Obstacle Avoidance


                           Real-time obstacle avoidance is one of the key issues for successful applications of mobile robot systems. All mobile robots feature some kind of collision avoidance, ranging from primitive algorithms that detect an obstacle and stop the robot short of it in order to avoid a collision, through to sophisticated algorithms that enable the robot to detour around obstacles. The latter algorithms are much more complex, since they involve not only the detection of an obstacle, but also some kind of quantitative measurement of the obstacle's dimensions. Once these have been determined, the obstacle avoidance algorithm needs to steer the robot around the obstacle and resume motion toward the original target. Autonomous navigation represents a higher level of performance, since it applies obstacle avoidance simultaneously with steering the robot toward a given target. Autonomous navigation, in general, assumes an environment with known and unknown obstacles, and it includes global path planning algorithms [3] to plan the robot's path among the known obstacles, as well as local path planning for real-time obstacle avoidance. This article, however, assumes motion in the presence of unknown obstacles, and therefore concentrates only on the local obstacle avoidance aspect.

One approach to autonomous navigation is the wall-following method. Here the robot's navigation is based on moving alongside walls at a predefined distance. If an obstacle is encountered, the robot regards the obstacle as just another wall, following the obstacle's contour until it can resume its original course. This kind of navigation is technologically less demanding, since one major problem of mobile robots (the determination of their own position) is largely simplified. Naturally, robot navigation by the wall-following method is less versatile and is suitable only for very specific applications. One recently introduced commercial system uses this method on a floor-cleaning robot for long hallways.

A more general and commonly employed method for obstacle avoidance is based on edge detection. In this method, the algorithm tries to determine the position of the vertical edges of the obstacle and consequently attempts to steer the robot around either edge. The line connecting the two edges is considered to represent one of the obstacle's boundaries. This method was used in our own previous research [5,6], as well as in several other research projects. A disadvantage of obstacle avoidance based on edge detection is that the robot must stop in front of an obstacle in order to allow for a more accurate measurement.
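
The sketch below is a toy illustration of edge-detection-based steering, not the algorithm from the cited work; the safe-range threshold, clearance margin, and sensor layout are all assumed values.

# Minimal edge-detection avoidance sketch: scan a set of bearings, mark any
# bearing whose range reading is below SAFE_RANGE as blocked, take the first
# and last blocked bearings as the obstacle's edges, and steer past whichever
# edge requires the smaller deviation from the target heading.

SAFE_RANGE = 0.8          # metres; assumed sensor threshold
CLEARANCE = 10.0          # degrees of extra margin past an edge

def steer_around(bearings_deg, ranges_m, target_heading_deg):
    blocked = [b for b, r in zip(bearings_deg, ranges_m) if r < SAFE_RANGE]
    if not blocked:
        return target_heading_deg            # path is clear, head to target
    left_edge, right_edge = min(blocked), max(blocked)
    via_left = left_edge - CLEARANCE
    via_right = right_edge + CLEARANCE
    if abs(via_left - target_heading_deg) <= abs(via_right - target_heading_deg):
        return via_left
    return via_right

if __name__ == "__main__":
    bearings = list(range(-60, 61, 10))              # sonar/laser bearings
    ranges = [2.0]*6 + [0.5, 0.4, 0.5] + [2.0]*4     # obstacle near centre
    print(steer_around(bearings, ranges, target_heading_deg=0.0))   # -10.0

A real system would of course re-scan as it moves, since a single detour heading is only valid for the obstacle geometry seen at the moment of the scan.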

   Download :     Full Report (.doc)


read more...

INTELLIGENT WIRELESS VIDEO CAMERA USING COMPUTER


                          The intelligent wireless video camera described in this paper is designed as a wireless video monitoring system for detecting the presence of a person inside a restricted zone. This type of automatic wireless video monitor is well suited to isolated restricted zones where tight security is required. The principle of remote sensing is used to detect the presence of any person who comes very near a reference point within the zone.

A video camera collects the images from the reference points and converts them into electronic signals. The collected images are converted from visible light into invisible electronic signals inside a solid-state imager. These signals are transmitted to the monitor.

In this paper, three reference points are taken for demonstration purposes. Each reference point is fitted with two infrared LEDs and one lamp; this arrangement detects the presence of a person near that reference point. A reference point is simply a spot within the restricted area: when a person comes near any reference point, that point's output immediately goes high, and this high signal is fed to the computer. The computer then energizes the lamp at that reference point and rotates the video camera towards it to collect images there. A stepper motor is used to rotate the video camera towards the interrupted reference point.
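
A minimal sketch of the resulting control loop is given below. The sensor and lamp identifiers, camera angles, and helper functions (read_sensor, set_lamp, rotate_stepper) are hypothetical placeholders for whatever interface hardware the actual system uses.

import time

# Illustrative control-loop sketch only: poll the reference points, and when
# one goes high, energize its lamp and aim the camera at it.

REFERENCE_POINTS = {
    1: ("IR1", "LAMP1",  0),    # reference point 1: sensor, lamp, camera angle
    2: ("IR2", "LAMP2", 45),
    3: ("IR3", "LAMP3", 90),
}

def read_sensor(sensor_id):
    # Placeholder: return True when the IR beam at this point is interrupted.
    return False

def set_lamp(lamp_id, on):
    # Placeholder: energize or de-energize the lamp at the reference point.
    print(lamp_id, "ON" if on else "OFF")

def rotate_stepper(angle_deg):
    # Placeholder: drive the stepper motor so the camera faces angle_deg.
    print("camera ->", angle_deg, "degrees")

def monitor_loop(cycles=10):
    for _ in range(cycles):                           # bounded loop for the sketch
        for sensor, lamp, angle in REFERENCE_POINTS.values():
            if read_sensor(sensor):                   # that point's output is high
                set_lamp(lamp, True)                  # light the zone
                rotate_stepper(angle)                 # aim the camera at the point
        time.sleep(0.1)                               # poll ten times per second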


   Download :     Full Report (.doc)


read more...

Hyper Threading


                           Intel’s Hyper-Threading Technology brings the concept of simultaneous multi-threading to the Intel Architecture. Hyper-Threading Technology makes a single physical processor appear as two logical processors; the physical execution resources are shared and the architecture state is duplicated for the two logical processors. From a software or architecture perspective, this means operating systems and user programs can schedule processes or threads to logical processors as they would on multiple physical processors. From a microarchitecture perspective, this means that instructions from both logical processors will persist and execute simultaneously on shared execution resources.
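
As a small illustration of the software view described above, the sketch below simply asks the operating system how many logical processors it sees and starts one thread per logical processor; the psutil call for the physical-core count is an assumption in that it requires the third-party psutil package.

import os
import threading

# The operating system sees each logical processor as an ordinary CPU and
# schedules threads onto them. os.cpu_count() reports logical processors.

logical = os.cpu_count()
print("logical processors visible to the OS:", logical)

try:
    import psutil
    print("physical cores:", psutil.cpu_count(logical=False))
except ImportError:
    pass

def worker(n):
    # A trivial CPU-bound task; the OS may place each thread on any
    # available logical processor.
    sum(i * i for i in range(n))

threads = [threading.Thread(target=worker, args=(100_000,))
           for _ in range(logical or 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("ran", len(threads), "threads, one per logical processor")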

The first implementation of Hyper-Threading Technology was done on the Intel Xeon processor MP. In this implementation there are two logical processors on each physical processor. The logical processors have their own independent architecture state, but they share nearly all the physical execution and hardware resources of the processor. The goal was to implement the technology at minimum cost while ensuring forward progress on each logical processor, even if the other is stalled, and to deliver full performance even when there is only one active logical processor.

The potential for Hyper-Threading Technology is tremendous; our current implementation has only just begun to tap into this potential. Hyper-Threading Technology is expected to be viable from mobile processors to servers; its introduction into market segments other than servers is only gated by the availability and prevalence of threaded applications and workloads in those markets.

   Download :     Full Report (.doc)


read more...

Hyper LAN


                          Recently, demand for high-speed Internet access has been increasing rapidly, and many people enjoy broadband wired Internet access services at home using ADSL (Asymmetric Digital Subscriber Line) or cable modems. At the same time, the cellular phone has become very popular, offering location-free and wire-free services and enabling people to connect their laptop computers to the Internet in the same manner. However, present cellular systems like GSM (Global System for Mobile communications) provide much lower data rates than the wired access systems, which offer a few Mbps (megabits per second) or more. Even in the next-generation cellular system, UMTS (Universal Mobile Telecommunications System), the maximum data rate of the initial service is limited to 384 kbps; therefore even UMTS cannot satisfy users’ expectations of high-speed wireless Internet access.

Hence the Mobile Broadband System (MBS) is becoming popular and important, and wireless LAN (Local Area Network) technology such as the ETSI (European Telecommunications Standards Institute) standard HIPERLAN (High PErformance Radio Local Area Network) type 2 (denoted H/2) is regarded as a key to providing high-speed wireless access in MBS. H/2 aims at providing high-speed multimedia services, security of services, and handover when roaming between local and wide areas as well as between corporate and public networks. It also aims at providing increased throughput for datacom as well as video-streaming applications. It operates in the 5 GHz band with 100 MHz of spectrum. The WLAN is wireless-ATM based and is designed to extend the services of fixed ATM networks to mobile users. H/2 is connection oriented, with a connection duration of 2 ms or multiples thereof, and connections over the air are time-division multiplexed. H/2 allows interconnection with virtually any type of fixed network technology and can carry Ethernet frames, ATM cells and IP packets. It uses dynamic frequency allocation and offers bit rates of up to 54 Mbps.
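
To put the quoted data rates in perspective, here is a small back-of-the-envelope calculation (the 10 MB file size is an arbitrary illustrative choice):

file_bits = 10 * 8 * 1_000_000       # a 10 MB file, illustrative only

umts_bps = 384_000                   # initial UMTS service: 384 kbps
h2_bps = 54_000_000                  # HIPERLAN/2 peak: 54 Mbps

print("UMTS      :", round(file_bits / umts_bps, 1), "s")   # ~208.3 s
print("HIPERLAN/2:", round(file_bits / h2_bps, 1), "s")     # ~1.5 s

# H/2 is time-division multiplexed in 2 ms units, so at the peak rate one
# 2 ms allocation corresponds to roughly:
print("bits per 2 ms at 54 Mbps:", int(h2_bps * 0.002))     # 108000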

   Download :     Full Report (.doc)


read more...

Hand Free Driving


                           All of us would like to drive our car with a mobile phone held in one hand, talking to the other person. But we should be careful: we never know when the car just ahead of us will apply its brakes, and then everything is gone. This is a serious problem in most cities and on national highways, where any mistake means no ‘turning back’. Here comes tomorrow's technology: the hands-free driven car, which utilizes a modern technological approach from robotics.

Around the world, almost 45% of accidents occur because of driver error. In some cases the driver is engaged in some affair other than driving. In the USA the highways are so crowded that, in some situations, a mistake by one person on the road can lead to serious accidents, and most of these accidents are fatal. One such accident took place in 1997: on a foggy morning on a heavily trafficked highway, a series of collisions occurred in which 5 people lost their lives and more than 40 were injured. The victims of such accidents are severely injured, and some even lose their lives, because of careless driving. This was the main reason behind the project put forward by Delphi-Delco Electronic Systems and General Motors Corporation, called the Automotive Collision Avoidance Systems (ACAS) Field Operational Test (FOT) program.

The ACAS/FOT program has assembled a highly focused technical activity with the goal of developing a comprehensive forward collision warning (FCW) system that is seamlessly integrated into the vehicle infrastructure. The FCW system incorporates combined adaptive cruise control (ACC) and rear-end collision warning (CW) functionality. The ACC feature is only operational when engaged by the driver. The FCW feature, on the other hand, provides full-time operating functionality whenever the host vehicle is in use (above a certain minimum speed). This feature is effective in detecting, assessing, and alerting the driver to potential hazard conditions associated with rear-end crash events in the forward region of the host vehicle. This is accomplished by implementing an expandable system architecture that uses a combination of: (a) a long-range forward radar-based sensor that is capable of detecting and tracking vehicular traffic, and (b) a forward vision-based sensor that detects and tracks lanes. The proposed program effort is focused on providing warnings to the driver, rather than taking active control of the vehicle.
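
As a rough illustration of what a rear-end warning decision can look like, here is a minimal time-to-collision sketch; it is not the ACAS algorithm, and the warning threshold and minimum operating speed are assumed values.

# Minimal forward-collision-warning sketch: warn when the time-to-collision
# to the tracked lead vehicle drops below a threshold.

MIN_SPEED_MPS = 8.0      # FCW active only above this host speed (assumed)
TTC_WARN_S = 2.5         # warn if a collision is projected this soon (assumed)

def forward_collision_warning(host_speed, lead_range, lead_speed):
    """Speeds in m/s, range in metres; returns True if the driver should be alerted."""
    if host_speed < MIN_SPEED_MPS:
        return False                     # feature not operational at low speed
    closing_speed = host_speed - lead_speed
    if closing_speed <= 0:
        return False                     # the gap is opening, no rear-end hazard
    time_to_collision = lead_range / closing_speed
    return time_to_collision < TTC_WARN_S

if __name__ == "__main__":
    # Host at 30 m/s, lead vehicle 40 m ahead doing 12 m/s -> TTC of about 2.2 s.
    print(forward_collision_warning(30.0, 40.0, 12.0))   # True -> alert driver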

   Download :     Full Report (.doc)


read more...

Fuzzy Logic In Embedded Systems


                           A digitally programmable analogue Fuzzy Logic Controller (FLC) is presented. Input and output signals are processed in the analogue domain, whereas the parameters of the controller are stored in a built-in digital memory. Some new functional blocks have been designed, whereas others were improved towards optimizing power consumption, speed and modularity while keeping a reasonable accuracy, as is needed in several analogue signal processing applications. A nine-rule, two-input, one-output prototype was fabricated and successfully tested using a standard 2.4 µm CMOS technology, showing good agreement with the expected performance, namely: from 2.22 to 5.26 Mflips (mega fuzzy logic inferences per second) at the pin terminals (@CL=13pF), 933 µW power consumption per rule (@Vdd=5V) and 5 bits of resolution. Since the circuit is intended for a subsystem embedded in an application chip (@CL ≤5pF), up to 8 Mflips may be expected.

In recent years the application of Fuzzy Logic has been extended beyond the classical process control area where it was employed from the beginning. Signal processing, image processing and power electronics seem to be other niches where this soft-computing technique can meet a broad range of applications. As real-time processing needs ever faster, more autonomous and less power-consuming circuits, on-chip controllers become an interesting option. Digital Fuzzy Logic chips provide enough performance for general applications, but their speed is limited compared with their analogue counterparts. Furthermore, in real-time applications digital fuzzy processors need A/D and D/A converters to interface with sensors and actuators, respectively.

On the other hand, pure analogue processors lack flexibility, since full analogue programmability is only feasible in special technologies that allow analogue storage devices (e.g. floating-gate transistors). However, within standard CMOS technologies, a trade-off between accuracy and flexibility is achieved when a finite, discrete set of analogue parameters is provided. For instance, a voltage parameter can be set by using a binary-scaled set of current sources yielding a discrete set of voltage drops across a linear resistor. In such a case, it is possible to use a digital memory to store a given binary combination of the set of currents. This technique gives rise to so-called mixed-signal analogue computation circuits. It has been shown that analogue current-mode FLCs lend themselves to simple rule-evaluation and aggregation circuits that can work at a reasonable speed. If some of the unwanted current-to-voltage and/or voltage-to-current intermediate converters can be avoided, the delay through cascaded operators may be shortened and higher speeds achieved. This is interesting when fuzzifier and defuzzifier circuits are being designed, for these circuits normally interact with a voltage-mode controlled environment. On the other hand, to reduce silicon die area and power consumption some building blocks can be shared without altering functionality. As a result a relatively low-complexity layout can be obtained, which leads to an additional gain in speed.

In this work, a low-power, digitally programmable analogue Fuzzy Logic Controller (mixed-signal FLC) is introduced, intended for embedded subsystems, as required for medium-accuracy analogue signal processing applications (e.g. non-linear filtering, power electronics, etc.). Keeping in mind the issues exposed above, new operators were designed while others were optimized, achieving a flexible and high-performance controller notwithstanding the limits imposed by the technology used for the demonstrator.
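
For readers unfamiliar with fuzzy inference itself, the sketch below shows, in software, the kind of two-input, one-output, nine-rule controller the chip implements in analogue hardware. The membership functions, rule table and output values are arbitrary illustrative choices, not the fabricated circuit's parameters.

# Software sketch of a two-input, one-output, nine-rule fuzzy controller.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Three fuzzy sets per input over the range [0, 10].
SETS = {"low": (-5, 0, 5), "med": (0, 5, 10), "high": (5, 10, 15)}

# 3 x 3 = 9 rules: (input1 set, input2 set) -> output singleton value.
RULES = {
    ("low",  "low"): 0.0, ("low",  "med"): 2.0, ("low",  "high"): 4.0,
    ("med",  "low"): 3.0, ("med",  "med"): 5.0, ("med",  "high"): 7.0,
    ("high", "low"): 6.0, ("high", "med"): 8.0, ("high", "high"): 10.0,
}

def flc(x1, x2):
    """Min for rule activation, weighted average of singletons to defuzzify."""
    num = den = 0.0
    for (s1, s2), out in RULES.items():
        w = min(tri(x1, *SETS[s1]), tri(x2, *SETS[s2]))   # rule firing strength
        num += w * out
        den += w
    return num / den if den else 0.0

if __name__ == "__main__":
    print(round(flc(2.0, 8.0), 2))   # crisp output for inputs 2 and 8

Min is used here for rule activation and a weighted average of rule outputs for defuzzification, which loosely mirrors the rule-evaluation and aggregation stages mentioned above.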


   Download :     Full Report (.doc)


read more...

SKYLIGHTS


                           Adding a skylight is one of the quickest and easiest ways to make any room of your home lighter and brighter, adding an open and airy feeling. There are two basic types of skylights for residential use – flat glass and domed acrylic – and each has its advantages.

Domed acrylic skylights are less expensive than glass, and their convex shape tends to let the rain wash accumulated dust and dirt off a little more easily. The acrylic dome is mounted in an aluminum frame, which is in turn mounted on a 2x6 box called a "curb." Once the hole is cut in the roof to the manufacturer’s specifications, the curb is constructed on-site to raise the skylight above the level of the roof sheathing. Site-built or factory-supplied flashings are used to seal the roofing around the curb.

Domed skylights are available in clear, smoked, bronze or other tints. Most are double- or triple-glazed in order to achieve the level of energy efficiency required by the building codes. Several sizes are available, with the most common being 2x2, 2x4 and 4x4 feet.

Flat glass skylights come mounted in a wood or integrated rubber-and-metal framework, and require no additional curb construction. After the hole is cut, the skylight frame is simply attached to the roof sheathing with L-brackets, then the installation is completed using the factory-supplied flashing kit. Easy installation, superior insulating qualities, less tendency to scratch and a cleaner finished appearance all add to the popularity and somewhat higher cost of glass skylights.

Glass skylights also have a greater number of optional accessories. These include tempered, laminated or wire glass; shades and blinds for light control; glass tints for heat retention or to block sunlight; and the ability to open fully or partially for ventilation. At least one company, Velux – a leading manufacturer of quality glass skylights that are available at most local home centers and lumber yards – even offers an electric motor coupled to a rain sensor that automatically shuts the skylight if it detects rain.

   Download :     Full Report (.pdf)


read more...

Introducing Bio-engineering to the Road Network


                           Bio-engineering is the use of vegetation, either alone or in conjunction with civil engineering structures, to reduce instability and erosion on slopes. It should be a fundamental part of the design and construction of all roads in rural (and urban) hill areas, mainly because it provides one of the best ways to armour slopes against erosion. Because of the steep and dynamic slopes found in the Himalayas, most hill roads are engineered close to the margin of safety. Bio-engineering is an effective way of enhancing civil engineering structures to increase stability as far as possible. It is relatively low in cost, uses local materials and skills, and provides livelihood benefits through economically useful products.

A study has shown that many roadside slopes in Himachal Pradesh (HP) suffer from a range of instability and erosion problems, many of which are amenable to low-cost remedies such as bio-engineering. The Public Works Department (PWD) is examining alternatives to standard civil engineering approaches and in particular is looking at the possibilities offered by bio-engineering, drawing on the experience gathered in other parts of the Himalayas over the last few decades. Between 1987 and 1990, the PWD’s Horticulture Wing was involved in soil conservation work to resolve shallow failures on road cut slopes. Though this programme has diminished with time, it still demonstrates the inherent capabilities that can be harnessed to good effect.

This paper describes the main types of slope instability found in Himachal Pradesh, their causes (natural and man made), treatment options to safeguard the road network and reduce long term maintenance costs; approaches to bio-engineering that are appropriate to the bio-physical and socioeconomic conditions found in the state; institutional mechanisms for these to be successful; capacity enhancement means and tools; and examples of success and failures from other parts of the world. It also documents early experience in the introduction of this type of approach through specific pilots of critical road sections that have been considered under the World Bank funded Himachal State Roads Project.

   Download :     Full Report (.pdf)


read more...