Thursday, January 28, 2010

Remote Media Immersion (RMI)


                        
The charter of the Integrated Media Systems Center (IMSC) at the University of Southern California (USC) is to investigate new methods and technologies that combine multiple modalities into highly effective, immersive technologies, applications and environments. One of the results of these research efforts is the Remote Media Immersion (RMI) system. The goal of the RMI is to create and develop a complete aural and visual environment that places a participant or group of participants in a virtual space where they can experience events that occurred in different physical locations. RMI technology can effectively overcome the barriers of time and space to enable, on demand, the realistic recreation of visual and aural cues recorded in widely separated locations.


The focus of the RMI effort is to enable the most realistic recreation of an event possible while streaming the data over the Internet. Therefore, we push the technological boundaries well beyond what current video-on-demand or streaming media systems can deliver. As a consequence, high-end rendering equipment and significant transmission bandwidth are required. The RMI project integrates several technologies that are the result of research efforts at IMSC. The current operational version is based on four major components that are responsible for the acquisition, storage, transmission, and rendering of high-quality media.


The Remote Media Immersion (RMI) system is the result of a unique blend of multiple cutting-edge media technologies to create the ultimate digital media delivery platform. The main goal is to provide an immersive user experience of the highest quality. RMI encompasses all end-to-end aspects, from media acquisition and storage through transmission to final rendering. Specifically, the Yima streaming media server delivers multiple high-bandwidth streams, transmission error and flow control protocols ensure data integrity, and high-definition video combined with immersive audio provide the highest-quality rendering. The RMI system is operational and has been successfully demonstrated in small and large venues. Relying on continued advances in electronics integration and residential broadband, RMI demonstrates the future of on-demand home entertainment.


   Download :     Full Report (.doc)


read more...

NANO SENSORS AND DETECTORS-THEIR APPLICATIONS (NEMS)


                          Nanotechnology is an extremely powerful emerging technology, which is expected to have a substantial impact on medical technology now and in the future. The potential impact of novel nanomedical applications on disease diagnosis, therapy, and prevention is foreseen to change health care in a fundamental way. Biomedical nanotechnology presents revolutionary opportunities in the fight against many diseases. An area with near-term potential is detecting molecules associated with diseases such as cancer, diabetes mellitus, and neurodegenerative diseases, as well as detecting microorganisms and viruses associated with infections, such as pathogenic bacteria, fungi, and HIV. Macroscale devices constructed from exquisitely sensitive nanoscale components, such as micro-/nanocantilevers, nanotubes, and nanowires, can detect even the rarest biomolecular signals at a very early stage of the disease.

Development of these devices is in the proof-of-concept phase, though they may enter the market sooner than expected. A different approach to molecular sensing in vivo involves the use of implantable sensors, which is still hampered by unwanted biofouling: blood components and factors of the immune system impair the long-term stability of continuous sensors. Nanotechnology might yield nano-structured surfaces that prevent this non-specific protein adsorption.


   Download :     Full Report (.doc)


read more...

Micro Electro Mechanical Systems


                          "Micromechatronic is the synergistic integration of microelectromechanical systems, electronic technologies and precision mechatronics with high added value."

This field is the study of small mechanical devices and systems that range in size from a few microns to a few millimeters. The field is called by a wide variety of names in different parts of the world: micro-electro-mechanical systems (MEMS), micromechanics, microsystems technology (MST), and micromachines. The field, which encompasses all aspects of science and technology, is involved with things at a smaller scale. Creative people from all technical disciplines have important contributions to make.

Welcome to the micro domain, a world now occupied by an explosive new technology known as MEMS (Micro Electro Mechanical Systems), a world where gravity and inertia are no longer important but the effects of atomic forces and surface science dominate.

MEMS are the next logical step in the silicon revolution. The silicon revolution began over three decades ago with the introduction of the first integrated circuit. The integrated circuit has changed virtually every aspect of our lives. The rapid increase in the number of transistors per chip leads to integrated circuits with continuously increasing capability and performance. As time has progressed, large, expensive, complex systems have been replaced by small, high-performance, inexpensive integrated circuits.

MEMS is a relatively new technology which exploits the existing microelectronics infrastructure to create complex machines with micron feature sizes. These machines can have many functions, including sensing, communication and actuation. Extensive applications of these devices exist in both commercial and defense systems.

   Download :     Full Report (.doc)


read more...

Chameleon Chips


                           Chameleon chips are chips whose circuitry can be tailored specifically for the problem at hand. Chameleon chips would be an extension of what can already be done with field-programmable gate arrays (FPGAs). An FPGA is covered with a grid of wires. At each crossover, there's a switch that can be semi-permanently opened or closed by sending it a special signal. Usually the chip must first be inserted in a little box that sends the programming signals. But now, labs in Europe, Japan, and the U.S. are developing techniques to rewire FPGA-like chips at any time--and even software that can map out circuitry that's optimized for specific problems.

The chips still won't change colors. But they may well color the way we use computers in years to come. The idea is a fusion between custom integrated circuits and programmable logic. For highly performance-oriented tasks, custom chips that do one or two things spectacularly well, rather than lots of things averagely, are used. With field-programmable chips, we now have chips that can be rewired in an instant. Thus the benefits of customization can be brought to the mass market.
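To make the idea concrete, here is a toy model in Java of the basic reconfigurable cell of an FPGA: a two-input lookup table (LUT). This is an illustrative sketch, not any vendor's API; "rewiring" the chip amounts to rewriting the truth tables of thousands of such cells and the switches connecting them.

    // Toy model of a 2-input FPGA lookup table (LUT). Reprogramming the
    // chip means loading new truth tables, not changing physical wiring.
    public class Lut2 {
        private final boolean[] truthTable = new boolean[4];

        // Load a new configuration, e.g. AND = {false, false, false, true}.
        public void program(boolean[] config) {
            System.arraycopy(config, 0, truthTable, 0, 4);
        }

        public boolean eval(boolean a, boolean b) {
            return truthTable[(a ? 2 : 0) + (b ? 1 : 0)];
        }

        public static void main(String[] args) {
            Lut2 lut = new Lut2();
            lut.program(new boolean[]{false, false, false, true}); // behaves as AND
            System.out.println(lut.eval(true, true));              // true
            lut.program(new boolean[]{false, true, true, true});   // now behaves as OR
            System.out.println(lut.eval(false, true));             // true
        }
    }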

   Download :     Full Report (.doc)


read more...

Biometric Voting system


                          It has always been an arduous task for the election commission to conduct free and fair polls in our country, the largest democracy in the world. Crores of rupees have been spent to make sure that elections are riot-free. But nowadays it has become common for some forces to indulge in rigging, which may eventually lead to a result contrary to the actual verdict given by the people. This paper aims to present a new voting system employing biometrics in order to avoid rigging and to enhance the accuracy and speed of the process. The system uses the thumb impression for voter identification, as we know that the thumb impression of every human being has a unique pattern. Thus it would have an edge over present-day voting systems.

As a pre-poll procedure, a database consisting of the thumb impressions of all the eligible voters in a constituency is created. During elections, the thumb impression of a voter is entered as input to the system and compared with the available records in the database. If the particular pattern matches any one in the available records, access to cast a vote is granted. But if the pattern does not match the records of the database, or in case of repetition, access to cast a vote is denied and the vote gets rejected. Also, the police station nearest the polling booth is informed about the identity of the impostor. All the voting machines are connected in a network, through which data transfer takes place to the main host. The result is instantaneous, and counting is finally done at the main host itself. The overall cost of conducting elections is reduced, and so is the maintenance cost of the systems.
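The admission logic described above can be sketched in a few lines of Java. The class and method names are illustrative assumptions, and the opaque string "template" stands in for a real fingerprint-matching algorithm run against the constituency database:

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Sketch of the vote-granting flow: match the scanned print against the
    // database, reject unknown prints and repeat attempts, and raise an alert.
    public class PollingStation {
        private final Map<String, String> voterDb = new HashMap<>(); // template -> voter id
        private final Set<String> alreadyVoted = new HashSet<>();

        public void register(String template, String voterId) {
            voterDb.put(template, voterId);
        }

        // Returns true if the voter may cast a ballot.
        public boolean admit(String template) {
            String voterId = voterDb.get(template);
            if (voterId == null) {
                alert("no match in database");         // unknown print
                return false;
            }
            if (!alreadyVoted.add(voterId)) {
                alert("repeat attempt by " + voterId); // repetition check
                return false;
            }
            return true;                               // access to vote granted
        }

        private void alert(String reason) {
            System.out.println("ALERT to nearest police station: " + reason);
        }
    }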

   Download :     Full Report (.doc)


read more...

AUTOMATIC VEHICLE LOCATOR


                           Is your car stolen, or hidden in the thickest snow, or one among several cars in a lot? Do you want to know the arrival time of the bus you are waiting for? Are your children travelling alone in a vehicle and you want to track their movements? Does your cargo consist of a costly load that you want to protect? Do you want to keep track of where your little kids are playing?

ANS: the Automatic Vehicle Locator. This paper presents a novel approach that uses GPS technology to track not only vehicles but also children, and to protect precious goods, which is why this technology has gained so much importance in recent years. The paper explains how the technology works and surveys its applications. It is still in the research and development stage.

Automatic vehicle location (AVL) is a computer-based vehicle tracking system. For transit, the actual real-time position of each vehicle is determined and relayed to a control center. Actual position determination and relay techniques vary, depending on the needs of the transit system and the technologies employed. Transit agencies often incorporate other advanced system features in conjunction with AVL system implementation. Simple AVL systems include computer-aided dispatch software, mobile data terminals, emergency alarms, and digital communications. More sophisticated AVL systems may integrate real-time passenger information, automatic passenger counters, and automated fare payment systems. Other components that may be integrated with AVL systems include automatic stop annunciation, automated destination signs, vehicle component monitoring, and traffic signal priority. AVL technology allows improved schedule adherence and timed transfers, more accessible passenger information, increased availability of data for transit management and planning, and efficiency/productivity improvements in transit services.
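As a rough illustration of the first step, the Java sketch below models the timestamped position report an AVL unit might relay to the control center. The record fields and the line-oriented encoding are assumptions for illustration, not any transit-industry standard:

    import java.time.Instant;

    // One AVL position report: who, where, how fast, and when.
    public class AvlReport {
        final String vehicleId;
        final double latitude, longitude;
        final double speedKmh;
        final Instant time;

        AvlReport(String vehicleId, double latitude, double longitude, double speedKmh) {
            this.vehicleId = vehicleId;
            this.latitude = latitude;
            this.longitude = longitude;
            this.speedKmh = speedKmh;
            this.time = Instant.now();
        }

        // Compact comma-separated form, e.g. for a radio or GPRS uplink.
        String encode() {
            return String.join(",", vehicleId, time.toString(),
                    Double.toString(latitude), Double.toString(longitude),
                    Double.toString(speedKmh));
        }
    }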


   Download :     Full Report (.doc)


read more...

4G Wireless Technology


                          As the virtual centre of excellence in mobile and personal communications (Mobile VCE) moves into its second core research programme it has been decided to set up a fourth generation (4G) visions group aimed at harmonising the research work across the work areas and amongst the numerous researchers working on the programme. This paper outlines the initial work of the group and provides a start to what will become an evolving vision of 4G. A short history of previous generations of mobile communications systems and a discussion of the limitations of third generation (3G) systems are followed by a vision of 4G for 2010 based on five elements: fully converged services, ubiquitous mobile access, diverse user devices, autonomous networks and software dependency. This vision is developed in more detail from a technology viewpoint into the key areas of networks and services, software systems and wireless access.

The major driver of change in the mobile area in the last ten years has been the massive enabling implications of digital technology, both in digital signal processing and in service provision. The equivalent driver now, and in the next five years, will be the all-pervasiveness of software in both networks and terminals. The digital revolution is well underway and we stand at the doorway to the software revolution. Accompanying these changes are societal developments involving extensions in the use of mobiles. Starting out from speech-dominated services, we are now experiencing massive growth in applications involving SMS (Short Message Service), together with the start of Internet applications using WAP (Wireless Application Protocol) and i-mode. The mobile phone has not only followed the watch, the calculator and the organiser as an essential personal accessory but has subsumed all of them. With the new Internet extensions it will also lead to a convergence of the PC, hi-fi and television, and provide mobility to facilities previously only available on one network.

   Download :     Full Report (.doc)


read more...

Monday, January 25, 2010

Honda Asimo Robot


ASIMO (アシモ ashimo) is a humanoid robot created by Honda. Standing at 130 centimeters (4 feet 3 inches) and weighing 54 kilograms (114 pounds), the robot resembles a small astronaut wearing a backpack and can walk or run on two feet at speeds up to 6 km/h (4.3 mph), matching EMIEW. ASIMO was created at Honda's Research & Development Wako Fundamental Technical Research Center in Japan. It is the current model in a line of eleven that began in 1986 with E0.

Officially, the name is an acronym for "Advanced Step in Innovative MObility". Honda's official statements claim that the robot's name is not a reference to science fiction writer and inventor of the Three Laws of Robotics, Isaac Asimov.

As of February 2009, there are over 100 ASIMO units in existence. Each one costs under $1 million (¥106,710,325 or €638,186 or £504,720) to manufacture, and some units are available to be hired out for $166,000 (¥17,714,316 or €105,920 or £83,789) per year.

ASIMO has hip, knee, and foot joints. Robots have joints that researchers refer to as "degrees of freedom." A single degree of freedom allows movement either right and left or up and down. ASIMO has 34 degrees of freedom spread over different points of its body in order to allow it to move freely. There are three degrees of freedom in ASIMO's neck, seven on each arm and six on each leg. The number of degrees of freedom necessary for ASIMO's legs was decided by measuring human joint movement while walking on flat ground, climbing stairs and running.

   Download :     Full Report (.doc)



read more...

Microsoft Silverlight


                           Microsoft Silverlight is a web browser plugin that provides support for rich internet applications such as animation, vector graphics and audio-video playback. Silverlight competes with products such as Adobe Flash, Adobe Flex, Adobe Shockwave, JavaFX, and Apple QuickTime. Now in beta-testing, version 2.0 brings improved interactivity and support for .NET languages and development tools. Silverlight was developed under the codename Windows Presentation Foundation/Everywhere (WPF/E). It is compatible with multiple web browser products used on Microsoft Windows and Mac OS X operating systems. Mobile devices, starting with Windows Mobile 6 and Symbian (Series 60) phones, will also be supported.

A third-party free software implementation named Moonlight is under development to bring compatible functionality to GNU/Linux. Silverlight provides a retained-mode graphics system, similar to WPF, and integrates multimedia, graphics, animations and interactivity into a single runtime. It is being designed to work in concert with XAML and is scriptable with JavaScript. XAML can be used for marking up the vector graphics and animations. Textual content created with Silverlight would be more searchable and indexable than that created with Flash, as it is not compiled but represented as text (XAML). Silverlight can also be used to create Windows Sidebar gadgets for Windows Vista. Silverlight supports playback of WMV, WMA and MP3 media content across all supported browsers without requiring Windows Media Player, the Windows Media Player ActiveX control or Windows Media browser plugins. Because Windows Media Video 9 is an implementation of the SMPTE VC-1 standard, Silverlight also supports VC-1 video, though still only in an ASF file format. Furthermore, the software license agreement says VC-1 is only licensed for the "personal and non-commercial use of a consumer". Silverlight does not support playback of H.264 video. Silverlight makes it possible to dynamically load XML content that can be manipulated through a DOM interface, a technique that is consistent with conventional Ajax techniques. Silverlight exposes a Downloader object which can be used to download content, like scripts, media assets or other data, as may be required by the application. With version 2.0, the programming logic can be written in any .NET language, including some common dynamic programming languages like Ruby and Python.

[Figure: A Silverlight application being edited in Microsoft Visual Studio.]
Silverlight applications can be written in any .NET programming language. As such, any development tools which can be used with .NET languages can work with Silverlight, provided they can target the Silverlight CoreCLR for hosting the application, instead of the .NET Framework CLR. Microsoft has positioned Microsoft Expression Blend versions 2.0 and 2.5 for designing the UI of Silverlight 1.0 and 2 applications respectively. Visual Studio 2008 can be used to develop and debug Silverlight applications. To create Silverlight projects and let the compiler target CoreCLR, Visual Studio 2008 requires the Silverlight Tools for Visual Studio, which is available as a beta release.

   Download :     Full Report (.pdf)



read more...

Saturday, January 23, 2010

FINFET


                           Since the fabrication of the MOSFET, the minimum channel length has been shrinking continuously. The motivation behind this decrease has been an increasing interest in high-speed devices and in very large scale integrated circuits. The sustained scaling of the conventional bulk device requires innovations to circumvent the barriers of fundamental physics constraining the conventional MOSFET device structure. The limits most often cited are control of the density and location of dopants providing a high I_on/I_off ratio and finite subthreshold slope, and quantum-mechanical tunneling of carriers through the thin gate oxide, from drain to source, and from drain to body. The channel depletion width must scale with the channel length to contain the off-state leakage I_off. This leads to high doping concentrations, which degrade carrier mobility and cause junction edge leakage due to tunneling. Furthermore, dopant profile control, in terms of depth and steepness, becomes much more difficult. The gate oxide thickness t_ox must also scale with the channel length to maintain gate control, proper threshold voltage V_T and performance. The thinning of the gate dielectric results in gate tunneling leakage, degrading circuit performance, power and noise margins.

Alternative device structures based on silicon-on-insulator (SOI) technology have emerged as an effective means of extending MOS scaling beyond bulk limits for mainstream high-performance or low-power applications. Partially depleted (PD) SOI was the first SOI technology introduced for high-performance microprocessor applications. The ultra-thin-body fully depleted (FD) SOI and the non-planar FinFET device structures promise to be the potential "future" technology/device choices.

In these device structures, the short-channel effect is controlled by geometry, and the off-state leakage is limited by the thin Si film. For effective suppression of the off-state leakage, the thickness of the Si film must be less than one quarter of the channel length. The desired V_T is achieved by manipulating the gate work function, such as the use of a midgap material or poly-SiGe. Concurrently, material enhancements, such as the use of (a) high-k gate material and (b) strained-Si channels for mobility and current drive improvement, have been actively pursued.

As scaling approaches multiple physical limits and as new device structures and materials are introduced, unique and new circuit design issues continue to be presented. In this article, we review the design challenges of these emerging technologies with particular emphasis on the implications and impacts of individual device scaling elements and unique device structures on the circuit design. We focus on the planar device structures, from continuous scaling of PD SOI to FD SOI, and new materials such as strained-Si channel and high-k gate dielectric.

   Download :     Full Report (.doc)



read more...

Night Vision Technology



Night vision technology, by definition, literally allows one to see in the dark. Originally developed for military use, it has provided the United States with a strategic military advantage, the value of which can be measured in lives. Federal and state agencies now routinely utilize the technology for site security, surveillance, and search and rescue. Night vision equipment has evolved from bulky optical instruments into lightweight goggles through the advancement of image intensification technology. The first thing you probably think of when you see the words night vision is a spy or action movie you've seen, in which someone straps on a pair of night-vision goggles to find someone else in a dark building on a moonless night. With the proper night-vision equipment, you can see a person standing over 200 yards (183 m) away on a moonless, cloudy night! Night vision can work in two very different ways, depending on the technology used.
  • Image enhancement - This works by collecting the tiny amounts of light, including the lower portion of the infrared light spectrum, that are present but may be imperceptible to our eyes, and amplifying them to the point that we can easily observe the image (a loose software analogy is sketched after this list).
  • Thermal imaging - This technology operates by capturing the upper portion of the infrared light spectrum, which is emitted as heat by objects instead of simply reflected as light. Hotter objects, such as warm bodies, emit more of this light than cooler objects like trees or buildings.
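As a software analogy to image enhancement (in a real image-intensifier tube the amplification happens to photons and electrons in a photocathode and micro-channel plate, not to pixel values), the Java sketch below scales up faint intensities and clamps at the sensor maximum:

    // Amplify a dim 8-bit grayscale frame by a gain factor, clamping at 255.
    public class Intensifier {
        public static int[][] amplify(int[][] frame, double gain) {
            int rows = frame.length, cols = frame[0].length;
            int[][] out = new int[rows][cols];
            for (int r = 0; r < rows; r++)
                for (int c = 0; c < cols; c++)
                    out[r][c] = Math.min(255, (int) (frame[r][c] * gain));
            return out;
        }
    }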
   Download :     Full Report (.pdf)



read more...

Gaming Consoles



Gaming consoles have proved themselves to be the best in digital entertainment. Gaming consoles were designed for the sole purpose of playing electronic games and nothing else. A gaming console is a highly specialised piece of hardware that has rapidly evolved since its inception incorporating all the latest advancements in processor technology, memory, graphics, and sound among others to give the gamer the ultimate gaming experience.

Research conducted in 2002 shows that 60% of US residents aged six and above play computer games. Over 221 million computer and video games were sold in the U.S. Earlier research found that 35% of U.S. residents surveyed said that video games were the most entertaining media activity, while television came in a distant second at 18%. The U.S. gaming industry reported sales of over $6.5 billion in the fiscal year 2002-03. Datamonitor estimates that online gaming revenues will reach $2.9 billion by 2005. Additional research has found that 90% of U.S. households with children have rented or owned a computer or video game, and that U.S. children spend an average of 20 minutes a day playing video games. Research conducted by the Pew Internet and American Life Project showed that 66% of American teenagers play or download games online. While 57% of girls play online, 75% of boys reported having played Internet games. This has great impact on influencing online game content and multiplayer capability on websites.

The global computer and video game industry, generating revenue of over 20 billion U.S. dollars a year, forms a major part of the entertainment industry. The sales of major games are counted in millions (and these are for software units that often cost 30 to 50 UK pounds each), meaning that total revenues often match or exceed cinema movie revenues. Game playing is widespread; surveys collated by organisations such as the Interactive Digital Software Association indicate that up to 60 per cent of people in developed countries routinely play computer or video games, with an average player age in the mid to late twenties, and only a narrow majority being male. Add on those who play the occasional game of Solitaire or Minesweeper on the PC at work, and one observes a phenomenon more common than buying a newspaper, owning a pet, or going on holiday abroad.

   Download :     Full Report (.doc)



read more...

Mobile IP


                           While Internet technologies largely succeed in overcoming the barriers of time and distance, existing Internet technologies have yet to fully accommodate increasing mobile computer usage. A promising technology for eliminating this barrier is Mobile IP. The emerging 3G mobile networks are set to make a huge difference to the international business community. 3G networks will provide sufficient bandwidth to run most business computer applications while still providing a reasonable user experience. However, 3G networks are not based on only one standard, but on a set of radio technology standards such as cdma2000, EDGE and WCDMA. It is easy to foresee that the mobile user will from time to time also want to connect to fixed broadband networks, wireless LANs and mixtures of new technologies such as Bluetooth associated with, e.g., cable TV and DSL access points.

In this light, a common macro mobility management framework is required in order to allow mobile users to roam between different access networks with little or no manual intervention. (Micro mobility issues such as radio specific mobility enhancements are supposed to be handled within the specific radio technology.) IETF has created the Mobile IP standard for this purpose.

Mobile IP is different compared to other efforts for doing mobility management in the sense that it is not tied to one specific access technology. In earlier mobile cellular standards, such as GSM, the radio resource and mobility management was integrated vertically into one system. The same is also true for mobile packet data standards such as CDPD, Cellular Digital Packet Data and the internal packet data mobility protocol (GTP/MAP) of GPRS/UMTS networks. This vertical mobility management property is also inherent for the increasingly popular 802.11 Wireless LAN standard.

Mobile IP can be seen as the least common mobility denominator - providing seamless macro-mobility solutions among the diversity of accesses. Mobile IP defines a Home Agent as an anchor point with which the mobile client always has a relationship, and a Foreign Agent, which acts as the local tunnel endpoint at the access network the mobile client is visiting. Depending on which network the mobile client is currently visiting, its point of attachment (Foreign Agent) may change. At each point of attachment, Mobile IP either requires the availability of a standalone Foreign Agent or the use of a co-located care-of address in the mobile client itself.
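The Home Agent's forwarding step can be summarized in a short sketch. The Java below is a simplified stand-in for a real IP stack: a binding table maps each mobile node's permanent home address to its current care-of address, and packets addressed to the home address are tunnelled (IP-in-IP encapsulated) toward that care-of address:

    import java.util.HashMap;
    import java.util.Map;

    // Minimal Home Agent binding table (addresses as strings for brevity).
    public class HomeAgent {
        // home address -> current care-of address, updated on registration
        private final Map<String, String> bindings = new HashMap<>();

        public void register(String homeAddress, String careOfAddress) {
            bindings.put(homeAddress, careOfAddress);
        }

        // Outer destination for IP-in-IP encapsulation, or null if the
        // mobile node is at home and the packet is delivered normally.
        public String tunnelDestination(String homeAddress) {
            return bindings.get(homeAddress);
        }
    }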


   Download :     Full Report (.doc)



read more...

MPEG 7


                           As more and more audiovisual information becomes available from many sources around the world, many people would like to use this information for various purposes. This challenging situation led to the need for a solution that quickly and efficiently searches for and/or filters various types of multimedia material that’s interesting to the user.

For example, finding information by rich-spoken queries, hand-drawn images, and humming improves the user-friendliness of computer systems and finally addresses what most people have been expecting from computers. For professionals, a new generation of applications will enable high-quality information search and retrieval. For example, TV program producers can search with “laser-like precision” for occurrences of famous events or references to certain people, stored in thousands of hours of audiovisual records, in order to collect material for a program. This will reduce program production time and increase the quality of its content.
MPEG-7 is a multimedia content description standard (to be defined by September 2001) that addresses how humans expect to interact with computer systems, since it develops rich descriptions that reflect those expectations.

MPEG-7 is being developed by the Moving Pictures Expert Group (MPEG), a working group of ISO/IEC. Unlike the preceding MPEG standards (MPEG-1, MPEG-2, MPEG-4), which have mainly addressed coded representation of audio-visual content, MPEG-7 focuses on representing information about the content, not the content itself. The goal of the MPEG-7 standard, formally called the "Multimedia Content Description Interface", is to provide a rich set of standardized tools to describe multimedia content.

   Download :     Full Report (.doc)   Presentation (.ppt)



read more...

Friday, January 22, 2010

SURFACE PLASMON RESONANCE


                           Surface plasmon resonance (SPR) is a phenomenon occurring at metal surfaces (typically gold and silver) when an incident light beam strikes the surface at a particular angle. Depending on the thickness of a molecular layer at the metal surface, the SPR phenomenon results in a graded reduction in intensity of the reflected light. Biomedical applications take advantage of the exquisite sensitivity of SPR to the refractive index of the medium next to the metal surface, which makes it possible to measure accurately the adsorption of molecules on the metal surface and their eventual interactions with specific ligands. The last ten years have seen a tremendous development of SPR use in biomedical applications.

The technique is applied not only to the real-time measurement of the kinetics of ligand-receptor interactions and to the screening of lead compounds in the pharmaceutical industry, but also to the measurement of DNA hybridization, enzyme-substrate interactions, polyclonal antibody characterization, epitope mapping, protein conformation studies and label-free immunoassays. Conventional SPR is applied in specialized biosensing instruments. These instruments use expensive sensor chips of limited reuse capacity and require complex chemistry for ligand or protein immobilization. SPR has also been successfully applied with colloidal gold particles in buffered solutions. This application offers many advantages over conventional SPR. The support is cheap, easily synthesized, and can be coated with various proteins or protein-ligand complexes by charge adsorption. With colloidal gold, the SPR phenomenon can be monitored in any UV spectrophotometer. For high-throughput applications the technology has been adapted in an automated clinical chemistry analyzer. This simple technology finds application in label-free quantitative immunoassay techniques for proteins and small analytes, in conformational studies with proteins, as well as in real-time association-dissociation measurements of receptor-ligand interactions for high-throughput screening and lead optimization.

   Download :     Full Report (.doc)



read more...

RTOS- Real-time operating systems


                           Real-time systems play a considerable role in our society, and they cover a spectrum from the very simple to the very complex. Examples of current real-time systems include the control of domestic appliances like washing machines and televisions, the control of automobile engines, telecommunication switching systems, military command and control systems, industrial process control, flight control systems, and space shuttle and aircraft avionics.

All of these involve gathering data from the environment, processing the gathered data, and providing a timely response. A concept of time is the distinguishing issue between real-time and non-real-time systems. While a usual design goal for non-real-time systems is to maximize the system's throughput, the goal of real-time system design is to guarantee that all tasks are processed within a given time. The taxonomy of time introduces special aspects for real-time system research.
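That guarantee is usually established analytically before the system runs. One classic admission test for fixed-priority periodic tasks is the Liu-Layland rate-monotonic utilization bound, sketched here in Java (the task set in the example is hypothetical):

    // Rate-monotonic schedulability test: n periodic tasks are guaranteed
    // to meet their deadlines if total utilization <= n * (2^(1/n) - 1).
    // This is a sufficient, not a necessary, condition.
    public class RmTest {
        // c[i] = worst-case execution time of task i, t[i] = its period
        public static boolean schedulable(double[] c, double[] t) {
            int n = c.length;
            double u = 0;
            for (int i = 0; i < n; i++) u += c[i] / t[i];
            return u <= n * (Math.pow(2.0, 1.0 / n) - 1.0);
        }

        public static void main(String[] args) {
            // Three tasks: 1 ms every 4 ms, 2 ms every 10 ms, 1 ms every 20 ms.
            // Utilization 0.5 is below the n=3 bound of about 0.78 -> true.
            System.out.println(schedulable(new double[]{1, 2, 1},
                                           new double[]{4, 10, 20}));
        }
    }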

Real-time operating systems are an integral part of real-time systems. Future systems will be much larger, more widely distributed, and will be expected to perform a constantly changing set of duties in dynamic environments. This also sets more requirements for future real-time operating systems.

   Download :     Full Report (.doc)



read more...

GASOLINE DIRECT INJECTION (GDI)


                           Gasoline direct injection (GDI) engine technology has received considerable attention over the last few years as a way to significantly improve fuel efficiency without making a major shift away from conventional internal combustion technology. In many respects, GDI technology represents a further step in the natural evolution of gasoline engine fueling systems. Each step of this evolution, from mechanically based carburetion, to throttle-body fuel injection, through multi-point and finally sequential multi-point fuel injection, has taken advantage of improvements in fuel injector and electronic control technology to achieve incremental gains in the control of internal combustion engines. Further advancements in these technologies, as well as continuing evolutionary advancements in combustion chamber and intake valve design and combustion chamber flow dynamics, have permitted the production of GDI engines for automotive applications. Mitsubishi, Toyota and Nissan all market four-stroke GDI engines in Japan.

Sophisticated high-pressure injectors capable of producing very fine, well-defined fuel sprays, coupled with advanced charge air control techniques, now make stable GDI combustion feasible. There are impediments to widespread GDI introduction, however, especially in compliance with stringent emission standards. This report addresses both the efficiencies inherent in GDI technology and the emissions constraints that must be addressed before GDI can displace current spark-ignition engine technology.

   Download :     Full Report (.doc)



read more...

F1 Cars



Car racing is one of the most technologically advanced sports in the world today. Race cars are the most sophisticated vehicles that we see in common use. The sport features exotic, high-speed, open-wheel cars racing all around the world. The racing teams have to create cars that are flexible enough to run under all conditions. This level of diversity makes a season of F1 racing incredibly exciting. The teams have to completely revise the aerodynamic package, the suspension settings, and lots of other parameters on their cars for each race, and the drivers have to be extremely agile to handle all of the different conditions they face. Their carbon fiber bodies, incredible engines, advanced aerodynamics and intelligent electronics make each car a high-speed research lab. An F1 car runs at speeds up to 240 mph; the driver experiences G-forces and copes with incoming data so quickly that it makes F1 driving one of the most demanding professions in the sporting world. The F1 car is an amazing machine that pushes the physical limitations of automotive engineering. On the track, the driver shows off his professional skills by directing the car around the circuit at speed.

Formula One Grand Prix racing is a glamorous sport where a fraction of a second can mean the difference between bursting open the bubbly and struggling to get sponsors for the next season's competition. To gain those extra milliseconds, all the top racing teams have turned to increasingly sophisticated network technology.

Much more money is spent in F1 these days. This results in the highest-tech cars. The teams are huge and they often fabricate their entire racers. F1's audience has grown tremendously throughout the rest of the world. In an average street car equipped with air bags and seatbelts, occupants are protected during 35-mph crashes into a concrete barrier. But at 180 mph, both the car and the driver have more than 25 times more energy, since kinetic energy grows with the square of speed: (180/35)^2 is about 26. All of this energy has to be absorbed in order to bring the car to a stop. This is an incredible challenge, but the cars usually handle it surprisingly well. F1 driving is a demanding sport that requires precision, incredibly fast reflexes and endurance from the driver. A driver's heart rate typically averages 160 beats per minute throughout the entire race. During a 5-G turn, a driver's arm -- which normally weighs perhaps 20 pounds -- weighs the equivalent of 100 pounds. One thing that the G-forces require is constant training in the weight room. Drivers work especially on muscles in the neck, shoulders, arms and torso so that they have the strength to work against the Gs. Drivers also work a great deal on stamina, because they have to be able to perform throughout a three-hour race without rest. One thing that is known about F1 drivers is that they have extremely quick reflexes and reaction times compared to the norm. They also have extremely good levels of concentration and long attention spans. Training, both on and off the track, can further develop these skills.


   Download :     Full Report (.doc)



read more...

DSTATCOM - Distribution STATic COMpensator


                          Shunt Connected Controllers at distribution and transmission levels usually fall under two categories - Static Synchronous Generators (SSG) and Static VAr Compensators (SVC).

A Static Synchronous Generator (SSG) is defined by IEEE as a self-commutated switching power converter supplied from an appropriate electric energy source and operated to produce a set of adjustable multiphase voltages, which may be coupled to an AC power system for the purpose of exchanging independently controllable real and reactive power. When the active energy source (usually a battery bank, Superconducting Magnetic Energy Storage, etc.) is dispensed with and replaced by a DC capacitor, which cannot absorb or deliver real power except for short durations, the SSG becomes a Static Synchronous Compensator (STATCOM). A STATCOM has no long-term energy support on the DC side and cannot exchange real power with the AC system; however, it can exchange reactive power. Also, in principle, it can exchange harmonic power too. But when a STATCOM is designed to handle reactive power and harmonic currents together it gets a new name: Shunt Active Power Filter. So a STATCOM handles only fundamental reactive power exchange with the AC system.

STATCOMs are employed at distribution and transmission levels, though for different purposes. When a STATCOM is employed at the distribution level, or at the load end, for power factor improvement and voltage regulation alone, it is called a DSTATCOM. When it is used to do harmonic filtering in addition, or exclusively, it is called an Active Power Filter. In the transmission system, STATCOMs handle only fundamental reactive power and provide voltage support to buses. In addition, STATCOMs in transmission systems are also used to modulate bus voltages during transient and dynamic disturbances in order to improve transient stability margins and to damp dynamic oscillations.

IEEE defines the second kind of shunt-connected controller, the Static VAr Compensator (SVC), as a shunt-connected static var generator or absorber whose output is adjusted to exchange capacitive or inductive current so as to maintain or control specific parameters of the electrical power system (typically bus voltage). Thyristor-switched or thyristor-controlled capacitors/inductors, and combinations of such equipment with fixed capacitors and inductors, come under this. This has been covered in an earlier lecture; this lecture focuses on STATCOMs at distribution and transmission levels.

PWM Voltage Source Inverter based Static VAr Compensators (referred to as SVCs from here onwards) began to be considered a viable alternative to the existing passive shunt compensators and Thyristor Controlled Reactor (TCR) based compensators from the mid-eighties onwards. The disadvantages of capacitor/inductor compensation are well known. TCRs could overcome many of the disadvantages of passive compensators. However, they suffered from two major disadvantages: namely, slow response to a VAr command and injection of a considerable amount of harmonic currents into the power system, which had to be cancelled by special transformers and filtered by heavy passive filters.

   Download :     Full Report (.doc)   Presentation (.ppt)



read more...

ASYMMETRIC DIGITAL SUBSCRIBER LINE (ADSL)



Digital Subscriber Lines (DSL) are used to deliver high-rate digital data over existing ordinary phone-lines. A new modulation technology called Discrete Multitone (DMT) allows the transmission of high speed data. DSL facilitates the simultaneous use of normal telephone services, ISDN, and high speed data transmission, e.g., video. DMT-based DSL can be seen as the transition from existing copper-lines to the future fiber-cables. This makes DSL economically interesting for the local telephone companies. They can offer customers high speed data services even before switching to fiber-optics.

DSL is a newly standardized transmission technology facilitating simultaneous use of normal telephone services, data transmission of up to 6 Mbit/s downstream, and ISDN Basic-Rate Access (BRA). DSL can be seen as an FDM system in which the available bandwidth of a single copper loop is divided into three parts: the baseband occupied by POTS, an upstream data channel, and a high-rate downstream channel. The POTS band is split from the data channels by a method which guarantees POTS service even in the case of ADSL system failure (e.g. passive filters).

   Download :     Full Report (.doc)



read more...

Monday, January 18, 2010

Digital Smell


In this modern age, computers have proved the reason for their existence. They have virtually taken over every field of today's fast life. Gone are the days when applications of computers were limited to official use only. Today computers have an important place in every household, and the Internet has taken over the whole world. There are various reasons why computers have their own place in our life. They provide very good facilities for fast processing, sound and pictures. The virtual reality concept has added very good features to computer systems. The concept of virtual reality was introduced by computer programmers to provide more attachment for the user. There are several virtual reality concepts available, such as digital smell, virtual theater, electronic hand gloves, multipoint surround sound systems and 3D goggles.

Digital smell is basically a hardware-software combination. The hardware part of digital smell will produce the smell, and the software part will evaluate the smell equation and generate specific signals for a specific smell, which is finally produced by the device. The hardware device is analogous to a speaker: like a speaker, it is connected to the computer system. For this device there is also a driver program which will evaluate the digital equation for generating a specific gas.

Until now, online communication involved only three of our senses - hearing, touch, and sight. New technology is being developed to appeal to our sense of smell. DigiScents, an interactive media company, is creating iSmell Digital Scent Technology, new software which will enable scents to be broadcast from the Web. Coding of aromas would be downloaded to a computer much as graphics, images and audible sounds are. Ultimately users will be able to create and modify their own fragrances and post them on the Internet (2000). Also discussed was the potential for creating smell-capture cameras, which could add fragrance coding to images and sounds.
The "Savor the World" tagline illustrates the California-based company's aims to tap into the power of scent as a communication tool. "DigiScents combines the power of science with the fact that the sense of smell is as powerful and emotional trigger as any other sense," the Web site states.

This new technology will make it possible to send and receive scented e-mails and to add scent elements to Web sites, to name just a few of its applications. In the future these devices may well play a role in our lives, for example in theaters, on television and on the Internet.

   Download :     Full Report (.doc)   Presentation (.ppt)



read more...

SATRACK


                          According to the dictionary, guidance is the 'process of guiding the path of an object towards a given point, which in general may be moving'. The process of guidance is based on the position and velocity of the target relative to the guided object. Present-day ballistic missiles are all guided using the Global Positioning System (GPS). GPS uses satellites as instruments for sending signals to the missile during flight and to guide it to the target. SATRACK is a system that was developed to provide an evaluation methodology for the guidance systems of ballistic missiles. It was developed as a comprehensive test and evaluation program to validate the integrated weapon-system design for ballistic missiles launched from nuclear-powered submarines. It is based on the tracking signals received at the missile from the GPS satellites. SATRACK has the ability to receive, record, rebroadcast and track the satellite signals. The SATRACK facility also has the great advantage that the whole data set obtained from test flights can be used to build a guidance error model. The recorded data, along with simulation data from the models, can produce a comprehensive guidance error model. This will result in the solution that is the best flight path for the missile.

   Download :     Full Report (.pdf)



read more...

Robocode


                          Robocode is an environment in which virtual robots, developed in Java, can battle against each other. The robots simulate tanks in a battle arena, and in order to find other robots they are equipped with radars. A robot can move forwards and backwards at different speeds and turn left and right. The radar and turret can be turned left or right independently of each other and of the rest of the tank. And finally, the gun can be fired. When setting up a battle, it is possible to watch the battle played out on the screen, or to let the computer simulate the battle without showing the graphics. The latter will complete the battles faster, because the battle does not have to be rendered on the screen. When an enemy robot is spotted with the radar, an event is generated, and the appropriate action can be taken by our robot. It is possible to get information about the robot being spotted, such as velocity, heading, remaining energy, name, the angle between the heading of your own robot and the robot being spotted, and the distance to that robot.

During the game these pieces of information will form the basis for the actions to be taken by our robot. For example, when spotting an enemy robot, the gun could simply be told to fire. But in which direction is the turret currently pointing? Obviously, the gun has to be pointing in the direction of the robot you want to fire at. But with knowledge about the enemy robot's heading and speed, there are further considerations that can be taken into account when computing the direction in which the gun should be fired, because you have to compensate for the fact that the enemy is on the move. Such considerations will optimize the chances of hitting the target. Robots in Robocode can battle against each other in teams. By communicating with each other, they can exchange information about where they have spotted opponent robots, etc. And based upon a chosen strategy, a robot might choose to run away from opponents, or perhaps let your team gather round an opponent robot and try to take it out. The purpose of this chapter is to describe the environment in which the robots of Robocode fight. In order to understand this world it is necessary to understand the laws of physics that control it.
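The aiming problem described above is commonly solved with linear lead targeting. The self-contained Java sketch below uses plain geometry rather than the actual Robocode event API, but it follows Robocode's rules: a bullet travels at 20 - 3 * firepower, and angles are measured clockwise from "north" (the positive y axis), which is why atan2 takes dx before dy:

    // Linear lead targeting: aim where the enemy will be, not where it is.
    public class LeadTargeting {
        // Robocode rule: bullet speed depends on firepower.
        static double bulletSpeed(double power) { return 20 - 3 * power; }

        // Iteratively refine the intercept point: advance the enemy along its
        // heading (radians, clockwise from north) for the bullet's current
        // time of flight, then re-aim. A few iterations converge quickly.
        static double gunAngle(double myX, double myY,
                               double enemyX, double enemyY,
                               double enemyHeading, double enemySpeed,
                               double power) {
            double speed = bulletSpeed(power);
            double px = enemyX, py = enemyY;
            for (int i = 0; i < 10; i++) {
                double time = Math.hypot(px - myX, py - myY) / speed;
                px = enemyX + Math.sin(enemyHeading) * enemySpeed * time;
                py = enemyY + Math.cos(enemyHeading) * enemySpeed * time;
            }
            return Math.atan2(px - myX, py - myY); // clockwise-from-north angle
        }
    }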

   Download :     Full Report (.pdf)



read more...

Light Tree


                          The concept of the light tree is introduced in a wavelength-routed optical network which employs wavelength-division multiplexing (WDM). A light tree is a point-to-multipoint all-optical channel, which may span multiple fiber links. Hence, a light tree enables single-hop communication between a source node and a set of destination nodes. Thus, a light-tree-based virtual topology can significantly reduce the hop distance, thereby increasing the network throughput. A light path is an all-optical channel which may be used to carry circuit-switched traffic, and it may span multiple fiber links; it is set up by assigning a particular wavelength to it. We refer to a light tree as a point-to-multipoint extension of a light path. In the near future, WANs will be based on WDM optical networks. So far, all architectures that have been proposed for WDM WANs have only considered the problem of providing unicast services. In addition to unicast services, future WDM WANs need to provide multicast and broadcast services. A novel WDM WAN architecture based on light trees that is capable of supporting broadcasting and multicasting over a wide-area network by employing a minimum number of opto-electronic devices is discussed. Such a WDM WAN can provide a very high bandwidth optical layer which efficiently routes unicast, broadcast and multicast packet-switched traffic.


   Download :     Full Report (.pdf)



read more...

Fluorescent Multi-layer Disc


C3D or Constellation 3D's innovative technology enables the recording, reading and storing of information on many layers within a storage medium. Fluorescent materials are embedded in the pits and grooves of each layer of the media, and information is then stored and retrieved using the principles of fluorescence, instead of the optical reflection currently used with CDs and DVDs. The media can be produced in card or disk format of any size. This technology holds the promise of exciting new applications and vast commercial potential. The ability to store data on multiple layers within the media allows the creation of compact removable storage devices with amazing capacity.
Constellation 3D's planned first-generation disk, in standard (120mm diameter and 1.2mm thick) DVD format, will store up to 140 gigabytes in 10 layers, i.e. 14 gigabytes per layer. As production techniques using this technology evolve, the number of layers and the distance between them will shrink, eventually allowing terabytes of data to be stored on a single disk. Factors such as the ability to read simultaneously from multiple data layers allow exponential increases in data access and retrieval speeds, eventually resulting in retrieval speeds of 1 gigabyte per second.

Other new technologies, such as the simultaneous reading of multiple sectors within a single layer, can bring yet further increases in speed and provide true three-dimensional data access/retrieval. The implications of all of this for the data storage industry are enormous, as the technology offers quantum improvements in storage capacity, access/retrieval speeds and cost per gigabyte, in a compact, rugged and portable media format. A whole new range of applications and devices will be spawned which take advantage of these superior capabilities.

   Download :     Full Report (.doc)   Presentation (.ppt)



read more...

H.323


                          The H.323 standard provides a foundation for audio, video, and data communications across IP-based networks, including the Internet. By complying with H.323, multimedia products and applications from multiple vendors can interoperate, allowing users to communicate without concern for compatibility. H.323 will be the keystone for LAN-based products for consumer, business, entertainment, and professional applications. H.323 is an umbrella recommendation from the International Telecommunication Union (ITU) that sets standards for multimedia communications over Local Area Networks (LANs) that do not provide a guaranteed Quality of Service (QoS). These networks dominate today's corporate desktops and include packet-switched TCP/IP and IPX over Ethernet, Fast Ethernet and Token Ring network technologies. Therefore, the H.323 standards are important building blocks for a broad new range of collaborative, LAN-based applications for multimedia communications.

The H.323 specification was approved in 1996 by the ITU's Study Group 16. Version 2 was approved in January 1998. The standard is broad in scope and includes both stand-alone devices and embedded personal computer technology, as well as point-to-point and multipoint conferences. H.323 also addresses call control, multimedia management, and bandwidth management, as well as interfaces between LANs and other networks. H.323 is part of a larger series of communications standards that enable videoconferencing across a range of networks. Known as H.32X, this series includes H.320 and H.324, which address ISDN and PSTN communications, respectively.

   Download :     Full Report (.doc)



read more...

Genetic Programming


                          Genetic programming (GP) is an automated method for creating a working computer program from a high-level statement of a problem. Starting with a primordial ooze of thousands of randomly created computer programs, a population of programs is progressively evolved over a series of generations. The evolutionary search uses the Darwinian principle of survival of the fittest and is patterned after naturally occurring operations, including crossover (sexual recombination), mutation, gene duplication, gene deletion, and certain developmental processes by which embryos grow into fully developed organisms. There are now 36 instances where genetic programming has automatically produced a computer program that is competitive with human performance.

In this section we present genetic programming, the fourth member of the evolutionary algorithm family. Besides its particular representation (using trees as chromosomes), it differs from other EA strands in its application area. While EAs are typically applied to optimization problems, GP is better positioned in machine learning. In terms of the nature of these different problem types, most other EAs are used for finding some input realizing maximum payoff, whereas GP is used to seek models with maximum fit. Clearly, once maximization is introduced, modelling problems can be seen as special cases of optimization. This, in fact, is the basis of using evolution for such tasks: models are treated as individuals, their fitness being the model quality to be maximized.
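A toy Java sketch of this tree-based representation follows. To stay short it evolves a population of one using subtree mutation only (a (1+1)-style loop); a full GP system as described above would evolve thousands of programs with crossover and selection. The target function x*x + x and all names are illustrative:

    import java.util.Random;

    // Tiny tree-based GP: evolve an arithmetic expression in x to fit x*x + x.
    public class TinyGp {
        static final Random RNG = new Random(1);

        // A node is a terminal ('x' or constant 'c') or a binary operator.
        static class Node {
            char op;          // '+', '-', '*', 'x', or 'c'
            double value;     // used when op == 'c'
            Node left, right; // used for operators

            double eval(double x) {
                switch (op) {
                    case 'x': return x;
                    case 'c': return value;
                    case '+': return left.eval(x) + right.eval(x);
                    case '-': return left.eval(x) - right.eval(x);
                    default:  return left.eval(x) * right.eval(x);
                }
            }
        }

        static Node randomTree(int depth) {
            Node n = new Node();
            if (depth == 0 || RNG.nextBoolean()) {          // grow a terminal
                if (RNG.nextBoolean()) { n.op = 'x'; }
                else { n.op = 'c'; n.value = RNG.nextInt(5); }
            } else {                                        // grow an operator
                n.op = "+-*".charAt(RNG.nextInt(3));
                n.left = randomTree(depth - 1);
                n.right = randomTree(depth - 1);
            }
            return n;
        }

        // Subtree mutation: copy the tree, occasionally replacing a node
        // with a freshly grown random subtree.
        static Node mutate(Node n) {
            if (RNG.nextDouble() < 0.1) return randomTree(2);
            Node m = new Node();
            m.op = n.op; m.value = n.value;
            if (n.left != null) { m.left = mutate(n.left); m.right = mutate(n.right); }
            return m;
        }

        // Fitness: summed squared error against the target on sample points.
        static double error(Node n) {
            double e = 0;
            for (double x = -2; x <= 2; x += 0.5) {
                double d = n.eval(x) - (x * x + x);
                e += d * d;
            }
            return e;
        }

        public static void main(String[] args) {
            Node best = randomTree(3);
            for (int gen = 0; gen < 20000; gen++) {
                Node child = mutate(best);                     // vary
                if (error(child) <= error(best)) best = child; // keep the fitter
            }
            System.out.println("best error: " + error(best));
        }
    }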

   Download :     Full Report (.doc)



read more...

Futex


                          In the past five years, Linux has seen significant growth as a server operating system and has been successfully deployed in the enterprise for Web, file and print serving. With the advent of kernel version 2.4, Linux has seen a tremendous boost in scalability and robustness, which makes it feasible to deploy even more demanding enterprise applications such as high-end databases, business intelligence software, application servers, etc. As a result, whole enterprise business suites and middleware such as SAP, WebSphere, Oracle, etc., are now available on Linux.

For these enterprise applications to run efficiently on Linux, or on any other operating system, the OS must provide the proper abstractions and services. These enterprise applications and application suites are increasingly built as multi-process/multithreaded applications, and are often a collection of multiple independent subsystems. Despite functional variations between these applications, they often need to communicate with each other and sometimes share a common state. Examples of this are database systems, which typically maintain shared I/O buffers in user space.

Access to such shared state must be properly synchronized. Allowing multiple processes to access the same resources in a time-sliced manner, or potentially concurrently in the case of multiprocessor systems, can cause many problems. This is due to the need to maintain data consistency, maintain true temporal dependencies, and ensure that each thread properly releases the resource when it has completed its action. Synchronization can be established through locks. There are mainly two types of locks: exclusive locks and shared locks. Exclusive locks allow only a single user to access the protected entity, while shared locks implement multiple-reader/single-writer semantics. Synchronization implies a shared state, indicating that a particular resource is available or busy, and a means to wait for its availability. The latter can be accomplished either through busy-waiting or through an explicit/implicit call to the scheduler.
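Futexes themselves are a Linux system call driven from C, but the fast-path/slow-path idea can be transliterated into Java: the lock word is read and updated with user-space atomics, so an uncontended acquire never enters the kernel; only contended threads block ("futex wait") and are woken on release ("futex wake"). This sketch is illustrative and ignores wake-ordering and fairness subtleties that a production lock must handle:

    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.atomic.AtomicInteger;
    import java.util.concurrent.locks.LockSupport;

    // Futex-like exclusive lock: atomic fast path, blocking slow path.
    public class FutexLikeLock {
        private final AtomicInteger state = new AtomicInteger(0); // 0 = free, 1 = held
        private final ConcurrentLinkedQueue<Thread> waiters = new ConcurrentLinkedQueue<>();

        public void lock() {
            while (!state.compareAndSet(0, 1)) {        // fast path: one atomic op
                waiters.add(Thread.currentThread());
                if (state.get() != 0)                   // re-check, then sleep ("futex wait")
                    LockSupport.park(this);
                waiters.remove(Thread.currentThread());
            }
        }

        public void unlock() {
            state.set(0);                               // release in user space
            Thread next = waiters.peek();
            if (next != null) LockSupport.unpark(next); // "futex wake"
        }
    }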

   Download :     Full Report (.doc)



read more...

Groupware Technology


                           Groupware is technology designed to be used by groups of people for sharing information, and groupware applications are becoming more and more popular.
Groupware provides an environment in which all users can share their documents: a platform where they perform the daily tasks of communicating, collaborating and coordinating with others. It automates business processes using workflow management and collaborative computing techniques. Groupware applications such as e-mail, workflow systems, group calendars, chat systems and decision support systems are simple to use but very powerful. Because groupware has advantages over single-user systems it is in high demand, and many companies specialize in developing groupware-based applications.

Groupware is technology designed to facilitate the work of groups. This technology may be used to communicate, cooperate, coordinate, solve problems, compete, or negotiate. While traditional technologies like the telephone qualify as groupware, the term is ordinarily used to refer to a specific class of technologies relying on modern computer networks, such as email, newsgroups, videophones, or chat.


   Download :     Full Report (.doc)



read more...

Sunday, January 17, 2010

Smart Dust



Today's ultra-modern technologies are focusing on automation and miniaturization. Computing history has been characterized by decreasing device size, increased connectivity and enhanced interaction with the physical world. Recently, the popularity of small computing devices such as handheld computers and cell phones, the rapidly growing Internet, and the diminishing size and cost of sensors, and especially transistors, have accelerated these trends. The emergence of small computing elements, with sporadic connectivity and increased interaction with the environment, provides enriched opportunities to reshape interactions between people and computers and to spur ubiquitous computing research.

Smart dust consists of tiny electronic devices designed to capture mountains of information about their surroundings while literally floating on air. Sensors, computers and communicators are shrinking to remarkably small sizes, and packing all of them into a single tiny device opens up new dimensions in the field of communications. The idea behind 'smart dust' is to pack sophisticated sensors, tiny computers and wireless communicators into a cubic-millimetre mote to form the basis of integrated, massively distributed sensor networks. Motes will be light enough to remain suspended in air for hours. As they drift on the wind, they can monitor the environment for light, sound, temperature, chemical composition and a wide range of other information, and beam that data back to a base station miles away.
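
As a toy illustration only (nothing below is a real mote API; the sensor readings and the radio call are simulated stand-ins), a mote's duty cycle reduces to a sense-and-report loop:

import random
import time

def read_sensors():
    # Stand-in for sampling the mote's light/temperature/sound sensors.
    return {"light": round(random.uniform(0, 100), 1),
            "temp": round(random.uniform(-10, 40), 1),
            "sound": round(random.uniform(20, 90), 1)}

def transmit(base_station, packet):
    # Stand-in for the low-power radio hop toward the base station.
    print(f"to {base_station}: {packet}")

MOTE_ID = 42
for _ in range(3):                    # a real mote loops until its battery dies
    sample = read_sensors()
    transmit("base-0", {"mote": MOTE_ID, **sample})
    time.sleep(0.1)                   # sleeping dominates, to conserve energy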

These smart dust elements are also known as motes, and the concept is also called wireless sensing networks. Just about every issue of Popular Science, Discover and Wired contains a blurb about some new application of the mote idea. For example, the military plans to use motes to gather information on battlefields, and engineers plan to mix them into concrete and use them to internally monitor the health of buildings and bridges. There are thousands of ways that motes might be used, and as people become familiar with the concept they come up with even more. It is a completely new paradigm for distributed sensing, and it is opening up a fascinating new way to look at computers.

   Download :     Full Report (.doc)   Presentation (.ppt)



read more...

Page Rank


                          Although the interest of a Web page is strictly related to its content and to the subjective reader's cultural background, a measure of the page's authority can be provided that depends only on the topological structure of the Web. PageRank is a notable way to attach a score to Web pages on the basis of Web connectivity. In this seminar, I look inside PageRank to disclose its fundamental properties concerning stability, the complexity of the computational scheme, and the critical role of the parameters involved in the computation. The role of the graphical structure of the Web is thoroughly investigated, and some theoretical results are established that highlight a number of interesting properties of PageRank. I then explain the notion of energy, which simply represents the sum of the PageRank over all pages of a given community, and propose a general circuit analysis that allows us to understand the distribution of PageRank. In addition, the derived energy balance equations make it possible to understand the way different Web communities interact with each other, the role of dangling pages (pages with no outlinks), and the secrets of promoting Web pages. After that I discuss convergence and different optimization techniques.
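
A minimal power-iteration PageRank sketch in Python (the four-page graph and all names are made up for illustration) shows both the handling of dangling pages and the "energy" of a community as the sum of its pages' scores:

import numpy as np

# links[i] lists the pages that page i points to; page 3 is dangling.
links = {0: [1, 2], 1: [2], 2: [0], 3: []}
n, d = 4, 0.85                        # n pages, damping factor d

rank = np.full(n, 1.0 / n)
for _ in range(100):
    new = np.full(n, (1 - d) / n)     # teleportation share
    for i, outs in links.items():
        if outs:
            for j in outs:
                new[j] += d * rank[i] / len(outs)
        else:                         # dangling page: spread its score evenly
            new += d * rank[i] / n
    delta = np.abs(new - rank).sum()
    rank = new
    if delta < 1e-12:
        break

print("PageRank:", rank, "sum:", rank.sum())   # scores sum to 1
print("energy of community {0, 2}:", rank[[0, 2]].sum())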

   Download :     Full Report (.pdf)



read more...

Saturday, January 16, 2010

Blue Brain


                           The human brain is the most valuable creation of God; man is called intelligent because of it. We have developed as a species because we can think in ways other animals cannot. But the knowledge stored in a brain is lost when the body is destroyed at death, knowledge that might have been used for the development of human society. What would happen if we could create a brain and upload the contents of a natural brain into it?

"Blue Brain" is the name of the world's first virtual brain: a machine that can function as a human brain. Scientists are researching how to create an artificial brain that can think, respond, take decisions, and keep anything in memory. The main aim is to upload a human brain into a machine, so that a man can think and take decisions without any effort, and so that after the death of the body the virtual brain will act as the man. Even after a person's death we would not lose the knowledge, intelligence, personality, feelings and memories of that man, which could be used for the development of human society. No one has ever fully understood the complexity of the human brain; it is more complex than any circuitry in the world. So the question may arise: "Is it really possible to create a human brain?" The answer is yes, because whatever man has created, he has done so by following nature. Before the computer existed, that too seemed an impossibility, yet today technology makes it commonplace, and technology is growing faster than everything else. IBM is now researching how to create a virtual brain, called "Blue Brain". If it succeeds, this would be the first virtual brain in the world.

A virtual brain is an artificial brain: not the natural brain itself, but one that can act like it. It can think like a brain, take decisions based on past experience, and respond as the natural brain would. This is made possible by using a supercomputer with a huge amount of storage capacity and processing power, together with an interface between the human brain and the artificial one. Through this interface the data stored in the natural brain can be uploaded into the computer, so the brain, and the knowledge and intelligence of anyone, can be kept and used forever, even after the death of the person.

   Download :     Full Report (.doc)   Presentation (.ppt)



read more...

Friday, January 15, 2010

3G to 4G


                          Wireless phone standards have a life of their own. You can tell, because they're spoken of reverently in terms of generations. There's great-granddad, whose pioneering story pre-dates cellular; grandma and grandpa, analog cellular; mom and dad, digital cellular; 3G wireless, just starting to make a place for itself in the world; and the new baby on the way, 4G. Most families have a rich history of great accomplishments, famous ancestors, skeletons in the closet and wacky in-laws. The wireless scrapbook is just as dynamic: there is success, infighting and lots of hope for the future. Here's a brief snapshot of the colorful world of wireless. First of all, this family is the wireless telephone family. It is just starting to compete with the wireless Internet family that includes Wi-Fi and the other IEEE 802 wireless standards, but it is a completely different set of standards. The only place the two are likely to merge is in a marriage of phones that support both the cellular and Wi-Fi standards. Wireless telephony started with what you might call 0G, if you can remember back that far. The great ancestor is the mobile telephone service that became available just after World War II. In those pre-cell days, you had a mobile operator to set up the calls, and there were only a handful of channels available.

   Download :     Full Report (.doc)



read more...

Internet Protocol Television (IPTV)


                           As broadband service providers move from offering connectivity to offering services, the discussion surrounding broadband entertainment has increased significantly. The Broadband Services Forum (BSF) membership has identified a number of services that require significant focus in this decade; one of these is Internet Protocol Television (IPTV). This paper provides a high-level, vendor-agnostic overview of what IPTV is and how it works.

IPTV, essentially, has two components:

Part 1: Internet Protocol (IP) specifies the format of packets and the addressing scheme. Most networks combine IP with a higher-level protocol; depending on the vendor solution, user datagram protocol (UDP) is the most typical higher-level protocol for IPTV. The higher-level protocol carries the data between a source and a destination, while IP itself simply allows you to address a package of information and drop it into the system; there is no direct link between you and the recipient.

Part 2: Television (TV) specifies the medium of communication that operates through the transmission of pictures and sounds. We all know TV, but here we are referring to the services that are offered for the TV, like linear and on-demand programming. Add the two components together (IP + TV) and you have IPTV: a medium of communication of pictures and sound that operates over an IP network.

Note: it is important to point out that IPTV services usually operate over a private IP network and not the public Internet. In a private IP network specifically designed for IPTV, a service provider can ensure quality of service (QoS) for consumers. QoS refers to giving certain IP traffic a higher priority than other IP traffic; in an IPTV network, TV signals are given the highest priority. As a result, the TV service is instantaneous; there is no downloading involved for the linear or on-demand content. An IPTV service model offers a complete broadcaster and "cable programmer" channel line-up, including live programming delivered in real time. Additionally, it can offer a video on demand (VOD) service, and it enables the broadband service provider to develop new and unique services to differentiate their offering from competitors.
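
As a small illustration of this connectionless, fire-and-forget delivery (the address and payloads below are placeholders; real IPTV carries MPEG transport-stream packets over UDP or RTP/UDP, not text), a UDP datagram is simply addressed and dropped into the network:

import socket

receiver = ("127.0.0.1", 5004)        # placeholder address for a set-top box

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP: no handshake
sock.bind(receiver)                   # this socket stands in for the receiver too

# Fire and forget: each datagram is addressed and dropped into the network,
# with no delivery guarantee and no retransmission.
sock.sendto(b"video frame 1", receiver)
sock.sendto(b"video frame 2", receiver)

for _ in range(2):                    # receiver side pulls whatever arrives
    data, addr = sock.recvfrom(2048)
    print(addr, data)
sock.close()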


   Download :     Full Report (.doc)



read more...

Management Information System (MIS)


                          A management information system (MIS) is a system or process that provides the information necessary to manage an organization effectively. MIS and the information it generates are generally considered essential components of prudent and reasonable business decisions. The importance of maintaining a consistent approach to the development, use, and review of MIS within the institution must be an ongoing concern of both bank management and OCC examiners.

MIS should have a clearly defined framework of guidelines, policies or practices, standards, and procedures for the organization, and these should be followed throughout the institution in the development, maintenance, and use of all MIS. MIS is viewed and used at many levels by management: it should support the institution's longer-term strategic goals and objectives, while at the other extreme it also comprises the everyday financial accounting systems used to ensure that basic control is maintained over financial record-keeping activities.

Financial accounting systems and subsystems are just one type of institutional MIS. They are an important functional element of the total MIS structure, but they are more narrowly focused on the internal balancing of an institution's books to the general ledger and other financial accounting subsystems. For example, accrual adjustments and the reconciling and correcting entries used to reconcile the financial systems to the general ledger are not always immediately entered into other MIS. Accordingly, although MIS and accounting reconcilement totals for related listings and activities should be similar, they may not necessarily balance.

   Download :     Full Report (.pdf)



read more...

Thursday, January 14, 2010

Virtual Surgery

                          Rapid change in most segments of society is occurring as a result of increasingly sophisticated, affordable and ubiquitous computing power. One clear example of this change is the Internet, which provides interactive and instantaneous access to information that was scarcely conceivable only a few years ago. The same is true in the medical field, where advances in instrumentation, visualisation and monitoring have enabled continual growth. The information revolution has enabled fundamental changes in this field, and of the many disciplines arising from this new information era, virtual reality holds the greatest promise. The term virtual reality was coined by Jaron Lanier, founder of VPL Research, in the late 1980s. Virtual reality is defined as a human-computer interface that simulates realistic environments while enabling participant interaction, as a 3D digital world that accurately models an actual environment, or simply as cyberspace.

Virtual reality is just reaching the threshold at which we can begin using simulators in medicine the way the aviation industry has used them for the past 50 years: to avoid errors. In surgery the life of the patient is of utmost importance and the surgeon cannot experiment on the patient's body; VR provides a good tool for rehearsing the various complications that arise during surgery.

Virtual surgery, in general, is a virtual reality technique for simulating a surgical procedure, helping surgeons improve surgery plans and practice the surgical process on 3D models. The simulated results can be evaluated before the surgery is carried out on the real patient, giving the surgeon a clear picture of the likely outcome. If the surgeon finds errors, he can correct them by repeating the surgical procedure as many times as needed and finalising the parameters for good surgical results. The surgeon can also view the anatomy from a wide range of angles. This process, which cannot be done on a real patient, helps the surgeon refine incisions and cuts, gain experience, and thereby improve surgical skill.


   Download :     Full Report (.doc)



read more...

Air Muscles


The Air Muscle is essentially a robotic actuator which is replacing conventional pneumatic cylinders at a rapid pace. Due to their low production costs and very high power-to-weight ratio, as high as 400:1, the preference for Air Muscles is increasing. They find huge application in biorobotics and in the development of fully functional prosthetic limbs with superior control and functional capabilities compared with current models. This paper discusses Air Muscles in general: their construction, principle of operation, operational characteristics and applications.

Robotic actuators are conventionally pneumatic or hydraulic devices. These have many inherent disadvantages, such as low operational flexibility, high safety requirements, and high operational and construction costs. The search for an actuator free of these drawbacks ended in Air Muscles. They are easy to manufacture, low cost, and can be integrated with human operations without any large-scale safety requirements. Furthermore, they offer an extremely high power-to-weight ratio of about 400:1; by comparison, electric motors offer only about 16:1. Air Muscles are also called McKibben actuators, after the researcher who developed them.

   Download :     Full Report (.doc)



read more...

Ethical Hacking


                           Ethical hacking, also known as penetration testing or white-hat hacking, involves the same tools, tricks, and techniques that hackers use, but with one major difference: ethical hacking is legal. It is performed with the target's permission, and its intent is to discover vulnerabilities from a hacker's viewpoint so systems can be better secured. It is part of an overall information risk management program that allows for ongoing security improvements. Ethical hacking can also verify that vendors' claims about the security of their products are legitimate.
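
As a minimal sketch of one elementary penetration-testing step (the host and port list are placeholders, and such probes should only ever be run against systems you have explicit permission to test), a TCP connect scan checks which ports accept connections:

import socket

target = "127.0.0.1"                  # scan only hosts you may legally probe
for port in (22, 80, 443, 3306):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(0.5)                 # don't hang on filtered ports
    status = s.connect_ex((target, port))   # 0 means the TCP connect succeeded
    s.close()
    print(f"port {port}: {'open' if status == 0 else 'closed or filtered'}")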

A hacker is a person who is intensely interested in a particular subject and has immense knowledge of it. In the world of computers, a hacker is a person intensely interested in the arcane and recondite workings of any computer operating system. Most often, hackers are programmers with advanced knowledge of operating systems and programming languages. Eric Raymond, compiler of "The New Hacker's Dictionary", defines a hacker as a clever programmer.

   Download :     Full Report (.doc)



read more...

Wear Debris Analysis

                           Since the world’s resources of material and energy are being progressively depleted, there is, of necessity, growing interest in studies of wear on a global basis. Wear of sliding components results in reduced mechanical efficiency and an irretrievable loss of material in the form of wear debris. Wear at the interface between moving surfaces is a normal characteristic of machine operation, and the kind and rate of wear depend on the machine type. Lubrication is provided between the moving surfaces to minimize wear, but during operation millions of minute wear particles enter the lubricating oil. These particles are held in suspension in the oil; larger particles may be trapped by the filter, while others, generally too small to be removed, remain suspended in the circulating oil.

Condition-based monitoring has in the past been referred to as an art, when quite clearly it is a science; yet, despite the cost of machines, surprisingly little attention has been devoted to this science from the viewpoint of understanding and modeling failure mechanisms and studying the probability of failure. Predictive maintenance techniques have now become common practice, as they maximize machine availability and minimize the cost of maintenance, since the machine can be stopped just before an impending problem develops in an otherwise healthy machine.

Fault detection using vibration analysis is difficult in very low speed, high load, noisy machines. In the case of slow-speed bearings, the vibration generated by damaged components is very low, usually close to the noise floor and difficult to identify. In these situations, wear debris analysis has proven useful in providing supporting evidence on the status of the bearing or gear. It also provides information on the wear mechanism involved.

   Download :     Full Report (.doc)



read more...