Thursday, January 30, 2014

New Battery Technologies Could Drastically Change the Electronics Market



Ever consider how much we depend on batteries on a daily basis – and how disruptive a dead or weak battery can be?
Cell phones, laptop computers – and perhaps even your car, if you drive an electric/hybrid – cannot function without long-lasting, rechargeable batteries.
It may be shocking news, but battery technology is still in its infancy. That technology, however, is going through a major and rapid evolution, which in turn could affect a wide spectrum of electronic products we all rely on.
Just this week, a research team from Stanford University and the Department of Energy's SLAC National Accelerator Laboratory said they've developed the first self-healing battery electrode.
The breakthrough, according to a Stanford press release, could dramatically improve a battery's storage capacity and performance, “opening a new and potentially commercially viable path for making the next generation of lithium ion batteries for electric cars, cell phones and other devices.”
Meanwhile, Ford (NYSE: F), the U.S. Department of Energy, the Michigan Economic Development Corporation and other groups have invested in the $8 million battery research laboratory that opened last month at the University of Michigan.
The facility will test new battery technologies to help industry determine which pilot projects are the most durable, cost-efficient and lightweight ahead of any production commitments.
“This lab will give us a stepping-stone between the research lab and the production environment, and a chance to have input much earlier in the development process,” Ted Miller, who guides Ford's battery research, said in a press statement. “This is sorely needed, and no one else in the auto industry has anything like it.”
And earlier this year, researchers at the University of Illinois at Urbana-Champaign announced the development of a new lithium-ion battery technology that is reportedly 2,000 times more powerful than current batteries.
Those researchers, according to ExtremeTech, say their work could be more than an evolutionary step in battery development: “a new enabling technology… it breaks the normal paradigms of energy sources. It’s allowing us to do different, new things.”
And what's more, these new batteries are small – supposedly the most powerful microbatteries ever documented.
“This is a whole new way to think about batteries,” William King, the mechanical science and engineering professor who led the research group, said in a university press release.

“A battery can deliver far more power than anybody ever thought,” he continued. “In recent decades, electronics have gotten small. The thinking parts of computers have gotten small. And the battery has lagged far behind. This is a microtechnology that could change all of that. Now the power source is as high-performance as the rest of it.”


Tuesday, January 28, 2014

How Solar Cells Work






You've probably seen calculators with solar cells -- devices that never need batteries and in some cases, don't even have an off button. As long as there's enough light, they seem to work forever. You may also have seen larger solar panels, perhaps on emergency road signs, call boxes, buoys and even in parking lots to power the lights.
Although these larger panels aren't as common as solar-powered calculators, they're out there and not that hard to spot if you know where to look. In fact, photovoltaics -- which were once used almost exclusively in space, powering satellites' electrical systems as far back as 1958 -- are being used more and more in less exotic ways. The technology continues to pop up in new devices all the time, from sunglasses to electric vehicle charging stations.
The hope for a "solar revolution" has been floating around for decades -- the idea that one day we'll all use free electricity from the sun. This is a seductive promise, because on a bright, sunny day, the sun's rays deliver approximately 1,000 watts of power per square meter of the planet's surface. If we could collect all of that energy, we could easily power our homes and offices for free.
In this article, we will examine solar cells to learn how they convert the sun's energy directly into electricity. In the process, you will learn why we're getting closer to using the sun's energy on a daily basis, and why we still have more research to do before the process becomes cost-effective.
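As a quick back-of-the-envelope illustration of that 1,000-watt figure, here is a small sketch; the 15 percent panel efficiency and the household demand number are assumptions chosen for the example, not measurements of any real system.

```python
# Back-of-the-envelope sketch of the 1,000 watts per square meter figure.
# The efficiency and household demand below are illustrative assumptions.

SOLAR_IRRADIANCE_W_PER_M2 = 1000.0  # bright-day figure quoted above
PANEL_EFFICIENCY = 0.15             # assumed: 15% of sunlight becomes electricity
HOUSEHOLD_DEMAND_W = 1200.0         # assumed average household draw

usable_w_per_m2 = SOLAR_IRRADIANCE_W_PER_M2 * PANEL_EFFICIENCY
panel_area_m2 = HOUSEHOLD_DEMAND_W / usable_w_per_m2
print(f"Roughly {panel_area_m2:.1f} square meters of panels")  # -> 8.0
```

Even under these optimistic assumptions, the panels only produce while the sun shines, which is one reason cost and storage still stand between us and that free-electricity future.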



Photovoltaic Cells: Converting Photons to Electrons

The solar cells that you see on calculators and satellites are also called photovoltaic (PV) cells, which, as the name implies (photo meaning "light" and voltaic meaning "electricity"), convert sunlight directly into electricity. A module is a group of cells connected electrically and packaged into a frame (more commonly known as a solar panel), which can then be grouped into larger solar arrays, like the one operating at Nellis Air Force Base in Nevada.
Photovoltaic cells are made of special materials called semiconductors, such as silicon, which is currently the most commonly used. Basically, when light strikes the cell, a certain portion of it is absorbed within the semiconductor material. This means that the energy of the absorbed light is transferred to the semiconductor. The energy knocks electrons loose, allowing them to flow freely.
All PV cells also have one or more electric fields that act to force electrons freed by light absorption to flow in a certain direction. This flow of electrons is a current, and by placing metal contacts on the top and bottom of the PV cell, we can draw that current off for external use, say, to power a calculator. This current, together with the cell's voltage (which is a result of its built-in electric field or fields), defines the power (or wattage) that the solar cell can produce.
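To make that last sentence concrete, here is a minimal sketch of the power relationship; the current and voltage figures are hypothetical operating values, not data from any particular cell.

```python
# Minimal sketch: a PV cell's power is the product of its current and
# voltage (P = I * V). The numbers below are hypothetical.

def cell_power_watts(current_amps: float, voltage_volts: float) -> float:
    """Return the power in watts delivered by a PV cell."""
    return current_amps * voltage_volts

# A single silicon cell produces roughly half a volt; assume it supplies
# 6 amps in bright sunlight (an assumed operating point).
print(cell_power_watts(6.0, 0.5))  # -> 3.0 watts
```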
That's the basic process, but there's really much more to it. On the next page, let's take a deeper look into one example of a PV cell: the single-crystal silicon cell.

How Silicon Makes a Solar Cell

Silicon has some special chemical properties, especially in its crystalline form. An atom of silicon has 14 electrons, arranged in three different shells. The first two shells -- which hold two and eight electrons respectively -- are completely full. The outer shell, however, is only half full with just four electrons. A silicon atom will always look for ways to fill up its last shell, and to do this, it will share electrons with four nearby atoms. It's like each atom holds hands with its neighbors, except that in this case, each atom has four hands joined to four neighbors. That's what forms the crystalline structure, and that structure turns out to be important to this type of PV cell.
The only problem is that pure crystalline silicon is a poor conductor of electricity because none of its electrons are free to move about, unlike the electrons in better conductors like copper. To address this issue, the silicon in a solar cell has impurities -- other atoms purposefully mixed in with the silicon atoms -- which changes the way things work a bit. We usually think of impurities as something undesirable, but in this case, our cell wouldn't work without them. Consider silicon with an atom of phosphorus here and there, maybe one for every million silicon atoms. Phosphorus has five electrons in its outer shell, not four. It still bonds with its silicon neighbor atoms, but in a sense, the phosphorus has one electron that doesn't have anyone to hold hands with. It doesn't form part of a bond, but there is a positive proton in the phosphorus nucleus holding it in place.
When energy is added to pure silicon, in the form of heat for example, it can cause a few electrons to break free of their bonds and leave their atoms. A hole is left behind in each case. These electrons, called free carriers, then wander randomly around the crystalline lattice looking for another hole to fall into, carrying an electrical current along the way. However, there are so few of them in pure silicon that they aren't very useful.
But our impure silicon with phosphorus atoms mixed in is a different story. It takes a lot less energy to knock loose one of our "extra" phosphorus electrons because they aren't tied up in a bond with any neighboring atoms. As a result, most of these electrons do break free, and we have a lot more free carriers than we would have in pure silicon. The process of adding impurities on purpose is called doping, and when doped with phosphorus, the resulting silicon is called N-type ("n" for negative) because of the prevalence of free electrons. N-type doped silicon is a much better conductor than pure silicon.
The other part of a typical solar cell is doped with the element boron, which has only three electrons in its outer shell instead of four, to become P-type silicon. Instead of having free electrons, P-type ("p" for positive) has free openings and carries the opposite (positive) charge.


Anatomy of a Solar Cell

Before now, our two separate pieces of silicon were electrically neutral; the interesting part begins when you put them together. That's because without an electric field, the cell wouldn't work; the field forms when the N-type and P-type silicon come into contact. Suddenly, the free electrons on the N side see all the openings on the P side, and there's a mad rush to fill them. Do all the free electrons fill all the free holes? No. If they did, then the whole arrangement wouldn't be very useful. However, right at the junction, they do mix and form something of a barrier, making it harder and harder for electrons on the N side to cross over to the P side. Eventually, equilibrium is reached, and we have an electric field separating the two sides.
This electric field acts as a diode, allowing (and even pushing) electrons to flow from the P side to the N side, but not the other way around. It's like a hill -- electrons can easily go down the hill (to the N side), but can't climb it (to the P side).
When light, in the form of photons, hits our solar cell, its energy breaks apart electron-hole pairs. Each photon with enough energy will normally free exactly one electron, resulting in a free hole as well. If this happens close enough to the electric field, or if a free electron and a free hole happen to wander into its range of influence, the field will send the electron to the N side and the hole to the P side. This causes further disruption of electrical neutrality, and if we provide an external current path, electrons will flow through the path to the P side to unite with holes that the electric field sent there, doing work for us along the way. The electron flow provides the current, and the cell's electric field causes a voltage. With both current and voltage, we have power, which is the product of the two.
There are a few more components left before we can really use our cell. Silicon happens to be a very shiny material, which can send photons bouncing away before they've done their job, so an antireflective coating is applied to reduce those losses. The final step is to install something that will protect the cell from the elements -- often a glass cover plate. PV modules are generally made by connecting several individual cells together to achieve useful levels of voltage and current, and putting them in a sturdy frame complete with positive and negative terminals.
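As a rough sketch of that last step, the snippet below shows how series and parallel connections build up a module's voltage and current; the per-cell numbers are assumptions, not a specific product's ratings.

```python
# Sketch: cells wired in series add their voltages, while parallel
# strings add their currents. Per-cell figures are assumptions.

CELL_VOLTAGE_V = 0.5  # typical order of magnitude for one silicon cell
CELL_CURRENT_A = 6.0  # assumed operating current

def module_output(cells_in_series: int, parallel_strings: int):
    """Return (volts, amps, watts) for a simple module layout."""
    volts = cells_in_series * CELL_VOLTAGE_V
    amps = parallel_strings * CELL_CURRENT_A
    return volts, amps, volts * amps

v, a, w = module_output(cells_in_series=36, parallel_strings=1)
print(f"{v:.1f} V, {a:.1f} A, {w:.1f} W")  # -> 18.0 V, 6.0 A, 108.0 W
```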



Saturday, January 25, 2014

Difference Between 1G, 2G, 2.5G, 3G, & 4G





1G is the first generation of cellular networks, which appeared in the 1980s. It transferred data (voice only) as analog waves, and it had clear limitations: there was no encryption, sound quality was poor and the transfer speed was only 9.6 kbps.

2G is the second generation, which improved on 1G by introducing digital modulation: your phone converts the voice (still voice only) into digital code, which is then transmitted as analog signals over the air. Being digital, 2G networks overcame some of the limitations of 1G: handsets emitted less radio power, making them safer to use, and privacy was enhanced.

2.5G is a transitional step between 2G and 3G. With 2.5G, popular services such as SMS (Short Message Service), GPRS, EDGE and high-speed circuit-switched data were introduced.

3G is the current generation of mobile telecommunication standards. It allows simultaneous use of speech and data services and offers data rates of up to 2 Mbps, enabling services like video calls, mobile TV, mobile Internet and downloading. A number of technologies fall under 3G, such as WCDMA, EV-DO and HSPA.

In telecommunications, 4G is the fourth generation of cellular wireless standards and the successor to the 3G and 2G families of standards. In 2008, the ITU-R specified the IMT-Advanced (International Mobile Telecommunications Advanced) requirements for 4G, setting peak speed requirements at 100 Mbit/s for high-mobility communication (such as from trains and cars) and 1 Gbit/s for low-mobility communication (such as pedestrians and stationary users).

A 4G system is expected to provide a comprehensive and secure all-IP based mobile broadband solution to laptop computer wireless modems, smartphones, and other mobile devices. Facilities such as ultra-broadband Internet access, IP telephony, gaming services, and streamed multimedia may be provided to users.

Pre-4G technologies such as Mobile WiMAX and Long Term Evolution (LTE) have been on the market since 2006 and 2009 respectively, and are often branded as 4G. The current versions of these technologies do not fulfill the original ITU-R requirement of data rates up to approximately 1 Gbit/s for 4G systems, yet marketing materials describe LTE and Mobile WiMAX in their current forms as 4G.
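To put those headline rates side by side, here is a small sketch that computes how long a 5 MB file (an arbitrary example size) would take to download at the speeds quoted above.

```python
# Sketch: time to download a 5 MB file at each generation's headline
# data rate, using the figures quoted above. The file size is arbitrary.

FILE_SIZE_BITS = 5 * 1_000_000 * 8  # 5 megabytes expressed in bits

headline_rates_bps = {
    "1G": 9_600,                        # 9.6 kbps
    "3G": 2_000_000,                    # up to 2 Mbps
    "4G, high mobility": 100_000_000,   # 100 Mbit/s
    "4G, low mobility": 1_000_000_000,  # 1 Gbit/s
}

for generation, rate in headline_rates_bps.items():
    print(f"{generation}: {FILE_SIZE_BITS / rate:,.2f} seconds")
# 1G: 4,166.67 seconds; 3G: 20.00; 4G: 0.40 and 0.04 seconds
```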



 






 

Wednesday, January 22, 2014

Google Announces 'Smart' Contact Lenses That Monitor Glucose Levels






Google has announced that it is testing a prototype for a contact lens that would help people with diabetes manage their disease.

In a press release distributed Thursday, the company said that the lens it is designing would measure glucose in tears continuously using a wireless chip and miniaturized glucose sensor. Google says that using the lenses would be a less invasive method of measuring glucose levels than finger-pricking.
It also claims that more frequent testing would reduce the risks associated with infrequent glucose monitoring, such as kidney failure and blindness.

The contact lenses were developed during the past 18 months in the clandestine Google X lab that also came up with a driverless car, Google's Web-surfing eyeglasses and Project Loon, a network of large balloons designed to beam the Internet to unwired places.

“We wondered if miniaturized electronics — think chips and sensors so small they look like bits of glitter, and an antenna thinner than a human hair — might be a way to crack the mystery of tear glucose and measure it with greater accuracy,” Google said in its press release.

“We hope a tiny, super sensitive glucose sensor embedded in a contact lens could be the first step in showing how to measure glucose through tears, which in the past has only been theoretically possible.”

The chip and sensor would be embedded between two layers of soft contact lens material, while a pinhole in the lens would allow fluid from the surface of the eye to seep into the sensor.
Palo Alto Medical Foundation endocrinologist Dr. Larry Levin said it was remarkable and important that a tech firm like Google is getting into the medical field, and that he'd like to be able to offer his patients a pain-free alternative to either pricking their fingers or living with a thick needle embedded in their stomach for constant monitoring.

"Google, they're innovative, they are up on new technologies, and also we have to be honest here, the driving force is money," he told The Associated Press.
Worldwide, the glucose monitoring devices market is expected to be more than $16 billion by the end of this year, according to analysts at Renub Research.

The Google team built the wireless chips in clean rooms, and used advanced engineering to get integrated circuits and a glucose sensor into such a small space.
Researchers also had to build in a system to pull energy from incoming radio frequency waves to power the device enough to collect and transmit one glucose reading per second. The embedded electronics in the lens don't obscure vision because they lie outside the eye's pupil and iris.
Google is now looking for partners with experience bringing similar products to market. Google officials declined to say how many people worked on the project, or how much the firm has invested in it.

An early, outsourced clinical research study with real patients was encouraging, but there are many potential pitfalls yet to come, said University of North Carolina diabetes researcher Dr. John Buse, who was briefed by Google on the lens last week.

"This has the potential to be a real game changer," he said, "but the devil is in the details."
While excited about the prototype, Google warned that there is still a lot more work to be done before it can be turned into a usable product.




Thursday, January 16, 2014

GPON - Gigabit Passive Optical Network




Introduction and Market Overview: The Need for Fiber
The way people use the Internet today creates a great demand for very high bandwidth: More and more workers are telecommuting. Consumers watch multiple HDTV channels, often on several TVs in the same household at the same time. They upload and download multimedia files and use bandwidth-hungry peer-to-peer services. They play online games that demand high speeds and immediate reactivity. Web 2.0-based communities and hosted services such as social networking sites and wikis are pervasive, fostering interactivity, collaboration and data-sharing while generating a need for capacity. Bringing optical fiber to every home is the definitive response to such demands for greater bandwidth.
How does GPON work?




GPON has been called "elegant" for its ability to share bandwidth dynamically on a single optical fiber. Like any shared medium, GPON provides burst mode transmission with statistical usage capabilities. This enables dynamic control and sharing of upstream and downstream bandwidth using committed and excess information rate (CIR and EIR) parameters. Users can be assured of receiving their committed bandwidth under peak demand conditions, and of receiving superior service when network utilization is low. While subscribers rarely require sustained rates of 100 Mb/s each, bursting beyond this to the full line rate of a PON system (about 1.25 Gb/s upstream or 2.5 Gb/s downstream in the case of GPON) is easily enabled using the right subscriber interface. This allows a GPON to be used for many years even if subscribers have a regular need to transmit beyond an engineered guaranteed limit of 100 Mb/s.
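As a rough illustration of the CIR/EIR idea, here is a simplified sketch of one way committed and excess bandwidth could be divided among subscribers; the scheme and all the demand figures are illustrative assumptions, not the actual GPON dynamic bandwidth assignment algorithm.

```python
# Simplified sketch of CIR/EIR-style sharing: every subscriber first
# receives its committed rate (up to its demand), then leftover capacity
# is split in proportion to unmet demand. Illustrative only; this is
# not the real GPON dynamic bandwidth assignment algorithm.

def allocate(capacity_mbps, users):
    """users: list of (name, cir_mbps, demand_mbps) tuples."""
    alloc = {name: min(cir, demand) for name, cir, demand in users}
    leftover = capacity_mbps - sum(alloc.values())
    unmet = {name: demand - alloc[name] for name, _, demand in users}
    total_unmet = sum(unmet.values())
    if leftover > 0 and total_unmet > 0:
        for name in alloc:  # single proportional pass, capped at demand
            alloc[name] += min(unmet[name], leftover * unmet[name] / total_unmet)
    return alloc

# Three subscribers on a 1,244 Mb/s upstream, each with a 100 Mb/s CIR
# and different instantaneous demands (all numbers are made up):
print(allocate(1244, [("A", 100, 600), ("B", 100, 200), ("C", 100, 50)]))
# -> {'A': 600, 'B': 200, 'C': 50}
```

In this pass every subscriber's 100 Mb/s commitment is honored first, and because total demand happens to be below the line rate, everyone also receives their excess in full.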

GPON was developed with the support of the FSAN (Full Service Access Network) Group and the ITU (International Telecommunication Union). These organizations bring the major stakeholders in the telecoms industry together to define common specifications, ensuring full interworking between OLTs and ONTs. The IEEE (Institute of Electrical and Electronics Engineers) has also defined a PON standard, called Ethernet PON or EPON. The EPON standard was launched earlier than GPON and has been deployed successfully. The IEEE specifications, however, are restricted to the lower optical and media access layers of networks, so full interoperability for EPON must be managed case by case at every implementation. Additionally, EPON runs at only 1 Gb/s, upstream as well as downstream, providing lower bandwidth than GPON. These factors make EPON a less attractive technology choice for providers making FTTH investment decisions today.

Why choose GPON?

When planning a fiber-to-the-home (FTTH) evolution for their access networks, service providers can choose between three generic FTTH architectures: point-to-point; active Ethernet; and passive optical networking (PON) such as GPON.

"Point-to-point" is an Ethernet FTTH architecture similar in structure to a twisted-pair cable phone network; a separate, dedicated fiber for each home exists in the service provider's hub location. The point-to-point architecture has merits for small-scale deployments such as citynets, but is not suitable for large-scale deployments due to its poor scalability in terms of hub location space or the number of required hub locations, power consumption and feeder fibers.

An "active Ethernet" architecture is based on the same deployment model as fiber to the node (FTTN) with active street cabinets; it is therefore feasible as a complement or migration path towards FTTH for larger deployments in very high-speed digital subscriber line (VDSL)-dominated environments. 

GPON is a fully optical architecture option that offers the best of all worlds. A GPON system consists of an optical line terminal (OLT) that connects several optical network terminals (ONTs) together using a passive optical distribution network (ODN). Like active Ethernet, it aggregates users in what is called the "outside plant" or OSP, which means no mess of fibers in a central office somewhere; like point-to-point, it avoids the need for active electronics in the field by employing a passive OSP device (the optical splitter). Being a passive device, the GPON splitter requires no cooling or powering and is therefore extremely stable; in fact, it virtually never fails.

Bringing Fiber to the Home: Benefits of GPON

One way of providing fiber to the home is through a Gigabit Passive Optical Network, or GPON (pronounced 'djee-pon').
GPON is a point-to-multipoint access mechanism. Its main characteristic is the use of passive splitters in the fiber distribution network, enabling one single feeding fiber from the provider's central office to serve multiple homes and small businesses. 
GPON has a downstream capacity of 2.488 Gb/s and an upstream capacity of 1.244 Gb/s that is shared among users. Encryption is used to keep each user's data secure and private from other users. Although there are other technologies that could provide fiber to the home, passive optical networks (PONs) like GPON are generally considered the strongest candidate for widespread deployments.
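To see what that sharing means per household, the sketch below divides the line rates across common split ratios; the 1:32 and 1:64 splits and the everyone-transmitting-at-once assumption are illustrative worst-case choices, and framing overhead is ignored.

```python
# Sketch: average per-subscriber bandwidth on a GPON if every subscriber
# on the splitter transmits at once (a worst case). Split ratios are
# common but illustrative choices; framing overhead is ignored.

DOWNSTREAM_GBPS = 2.488
UPSTREAM_GBPS = 1.244

def per_user_mbps(line_rate_gbps: float, split_ratio: int) -> float:
    return line_rate_gbps * 1000 / split_ratio

for split in (32, 64):
    down = per_user_mbps(DOWNSTREAM_GBPS, split)
    up = per_user_mbps(UPSTREAM_GBPS, split)
    print(f"1:{split} split: {down:.1f} Mb/s down, {up:.1f} Mb/s up")
# 1:32 split: 77.8 Mb/s down, 38.9 Mb/s up
# 1:64 split: 38.9 Mb/s down, 19.4 Mb/s up
```

In practice subscribers rarely transmit simultaneously, which is exactly why the statistical sharing described earlier lets each user burst well beyond these averages.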


