Cheese is a generic term for a diverse group of milk-based food products, produced in a wide range of flavors, textures, and forms.
Cheese consists of proteins and fat from milk, usually the milk of cows, buffalo, goats, or sheep. It is produced by coagulation of the milk protein casein: typically, the milk is acidified, and the addition of the enzyme rennet then causes it to coagulate. The solids are separated and pressed into their final form. Some cheeses have molds on the rind or throughout. Most cheeses melt at cooking temperatures.
Hundreds of types of cheese are produced. Their styles, textures and flavors depend on the origin of the milk (including the animal's diet), whether it has been pasteurized, the butterfat content, the bacteria and mold, the processing, and aging. Herbs, spices, or wood smoke may be used as flavoring agents. The yellow to red color of many cheeses, such as Red Leicester, is produced by adding annatto.
For a few cheeses, the milk is curdled by adding acids such as vinegar or lemon juice. Most cheeses are acidified to a lesser degree by bacteria, which turn milk sugars into lactic acid, then the addition of rennet completes the curdling. Vegetarian alternatives to rennet are available; most are produced by fermentation of the fungus Mucor miehei, but others have been extracted from various species of the Cynara thistle family.
Cheese is valued for its portability, long life, and high content of fat, protein, calcium, and phosphorus.
Cheese is more compact and has a longer shelf life than milk, although
how long a cheese will keep may depend on the type of cheese; labels on
packets of cheese often claim that a cheese should be consumed within
three to five days of opening. Generally speaking, hard cheeses last
longer than soft cheeses, such as Brie or goat's milk cheese. Cheesemakers
near a dairy region may benefit from fresher, lower-priced milk, and
lower shipping costs. The long storage life of some cheese, especially
if it is encased in a protective rind, allows selling when markets are
favorable. Additional ingredients may be added to some cheeses, such as black pepper, garlic, chives, or cranberries.
A specialist seller of cheese is sometimes known as a cheesemonger.
To become an expert in this field, as with wine or cooking, requires some formal education and years of tasting and hands-on experience. The cheesemonger is typically responsible for all aspects of the cheese inventory: selecting the cheese menu, purchasing, receiving, storage, and ripening.
Sunday, 09 December 2012
Variations of Bakso
- Bakso urat: bakso filled with tendons and coarse meat
- Bakso ayam: chicken bakso
- Bakso bola tenis or bakso telur: tennis-ball-sized bakso with a boiled chicken egg wrapped inside
- Bakso gepeng: flat bakso
- Bakso ikan: fish bakso (fish ball)
- Bakso udang: shrimp bakso
- Bakso Malang: A bowl of bakso dish from Malang city, East Java; complete with noodle, tofu, siomay and fried wonton
- Bakso keju: a newer variety of bakso filled with cheese
Origin of Bakso
The name bakso originates from bak-so (肉酥, Pe̍h-ōe-jī: bah-so·), the Hokkien pronunciation for "shredded meat" (Rousong), which suggests that bakso has its origins in Indonesian Chinese cuisine. Today most bakso vendors are Javanese from Wonogiri (a town near Solo) and Malang. Bakso Solo and Bakso Malang are the most popular variants, named after the cities they come from: Solo in Central Java and Malang in East Java.
In Malang, bakso bakar (roasted bakso) is also popular. As most Indonesians are Muslim, bakso is generally made from beef or from beef mixed with chicken.
Bakso
Bakso or baso is an Indonesian meatball or meat paste made from beef surimi, similar in texture to the Chinese beef ball, fish ball, or pork ball. Bakso is commonly made from beef with a small quantity of tapioca flour;
however bakso can also be made from other ingredients, such as chicken,
fish, or shrimp. Bakso are usually served in a bowl of beef broth, with yellow noodles, bihun (rice vermicelli), salted vegetables, tofu, egg (wrapped within bakso), Chinese green cabbage, bean sprout, siomay or steamed meat dumpling, and crisp wonton, sprinkled with fried shallots and celery.
Bakso can be found all across Indonesia, from traveling street-cart vendors to restaurants. Today various types of ready-to-cook bakso are also available as frozen food, commonly sold in supermarkets in Indonesia. Slices of bakso are often used as complements in mi goreng, nasi goreng, or cap cai recipes.
Unlike other meatball recipes, bakso has a consistent firm, dense, homogeneous texture due to the polymerization of myosin in the beef surimi.
Saturday, 08 December 2012
The Carterfone decision
For many years, the Bell System (AT&T)
maintained a monopoly on the use of its phone lines, allowing only
Bell-supplied devices to be attached to its network. Before 1968,
AT&T maintained a monopoly on what devices could be electrically
connected to its phone lines. This led to a market for 103A-compatible
modems that were mechanically connected to the phone, through the
handset, known as acoustically coupled modems. Particularly common models
from the 1970s were the Novation
CAT and the Anderson-Jacobson,
spun off from an in-house project at Stanford Research Institute (now
SRI International). Hush-a-Phone v. FCC
was a seminal ruling in United
States telecommunications
law decided by the DC Circuit Court
of Appeals on November 8, 1956. The District Court found that it
was within the FCC's authority to regulate the terms of use of
AT&T's equipment. Subsequently, the FCC examiner found that as long
as the device was not physically attached it would not threaten to
degrade the system. Later, in the Carterfone decision of 1968, the FCC passed a rule setting stringent AT&T-designed tests for electrically coupling a device to the phone lines. AT&T's tests were complex, making electrically coupled modems expensive, so acoustically coupled modems remained common into the early 1980s.
In December 1972, Vadic introduced the VA3400. This device was remarkable because it provided full duplex operation at 1,200 bit/s over the dial network, using methods similar to those of the 103A in that it used different frequency bands for transmit and receive. In November 1976, AT&T introduced the 212A modem to compete with Vadic. It was similar in design to Vadic's model, but used the lower frequency set for transmission. It was also possible to use the 212A with a 103A modem at 300 bit/s. According to Vadic, the change in frequency assignments made the 212 intentionally incompatible with acoustic coupling, thereby locking out many potential modem manufacturers. In 1977, Vadic responded with the VA3467 triple modem, an answer-only modem sold to computer center operators that supported Vadic's 1,200-bit/s mode, AT&T's 212A mode, and 103A operation.
Modem
A modem (modulator-demodulator) is a
device that modulates an analog
carrier signal to encode digital information, and also
demodulates such a carrier signal to decode the transmitted information.
The goal is to produce a signal that can be
transmitted easily and decoded to reproduce the original digital data.
Modems can be used over any means of transmitting analog signals, from light-emitting diodes to radio. The
most familiar example is a voice band modem that turns the digital data of a personal computer into modulated electrical signals in the voice frequency range of a telephone
channel. These signals can be transmitted over telephone lines and demodulated by
another modem at the receiver side to recover the digital data.
Modems are generally classified by the amount of data they can send
in a given unit of time, usually expressed in bits per second (bit/s, or bps), or bytes per second (B/s). Modems can
alternatively be classified by their symbol
rate, measured in baud. The baud unit denotes symbols per second,
or the number of times per second the modem sends a new signal. For
example, the ITU V.21 standard used audio frequency shift
keying, that is to say, tones of different frequencies, with two
possible frequencies corresponding to two distinct symbols (or one bit
per symbol), to carry 300 bits per second using 300 baud. By contrast,
the original ITU V.22 standard, which was able to transmit and receive
four distinct symbols (two bits per symbol), handled 1,200 bit/s by
sending 600 symbols per second (600 baud) using phase shift keying.
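The relationship described above can be summarized as bit rate = symbol rate × bits per symbol, where bits per symbol is log2 of the number of distinct symbols. The short Python sketch below is only an illustration of that arithmetic (the function name is invented here); it reproduces the V.21 and V.22 figures quoted in the text.

```python
import math

def bit_rate(symbol_rate_baud: float, distinct_symbols: int) -> float:
    """Bit rate in bit/s = symbol rate (baud) x bits per symbol (log2 of symbol count)."""
    return symbol_rate_baud * math.log2(distinct_symbols)

# V.21-style FSK: 2 distinct symbols -> 1 bit per symbol -> 300 bit/s at 300 baud.
print(bit_rate(300, 2))   # 300.0
# V.22-style PSK: 4 distinct symbols -> 2 bits per symbol -> 1200 bit/s at 600 baud.
print(bit_rate(600, 4))   # 1200.0
```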
Modern Era of Cheese
Until its modern spread along with European culture, cheese was
nearly unheard of in East Asian cultures and in the pre-Columbian Americas, and had only limited use in sub-Mediterranean Africa, being
widespread and popular only in Europe, the Middle East and areas
influenced by those cultures. But with the spread, first of European
imperialism, and later of Euro-American culture and food, cheese has
gradually become known and increasingly popular worldwide, though still
rarely considered a part of local ethnic cuisines outside Europe, the
Middle East, and the Americas.
The first factory for the industrial production of cheese opened in Switzerland in 1815, but it was in the United States where large-scale production first found real success. Credit usually goes to Jesse Williams, a dairy farmer from Rome, New York, who in 1851 started making cheese in an assembly-line fashion using the milk from neighboring farms. Within decades hundreds of such dairy associations existed.
The 1860s saw the beginnings of mass-produced rennet, and by the turn of the century scientists were producing pure microbial cultures. Before then, bacteria in cheesemaking had come from the environment or from recycling an earlier batch's whey; the pure cultures meant a more standardized cheese could be produced.
Factory-made cheese overtook traditional cheesemaking in the World War II era, and factories have been the source of most cheese in America and Europe ever since. Today, Americans buy more processed cheese than "real", factory-made or not.
Post-Roman Europe
As Romanized populations encountered unfamiliar newly-settled
neighbors, bringing their own cheese-making traditions, their own flocks
and their own unrelated words for cheese, cheeses in Europe
diversified further, with various locales developing their own
distinctive traditions and products. As long-distance trade collapsed,
only travelers would encounter unfamiliar cheeses: Charlemagne's first encounter with a white cheese that had an edible rind forms one of the constructed anecdotes of Notker's Life of the Emperor. The British Cheese Board claims that Britain has approximately 700 distinct local cheeses; France and Italy have perhaps 400 each. (A French proverb holds there is a different French cheese for every day of the year, and Charles de Gaulle once asked "how can you govern a country in which there are 246 kinds of cheese?")
Still, the advancement of the cheese art in Europe was slow during the
centuries after Rome's fall. Many of today's cheeses were first recorded in the late Middle Ages or after: Cheddar around 1500 CE, Parmesan in 1597, Gouda in 1697, and Camembert in 1791.
In 1546, The Proverbs of John Heywood claimed "the moon is made of a greene cheese." (Greene may refer here not to the color, as many now think, but to being new or unaged.) Variations on this sentiment were long repeated and NASA exploited this myth for an April Fools' Day spoof announcement in 2006.
Origins of Cheese
Cheese is an ancient food whose origins predate recorded history. There is no conclusive evidence indicating where cheesemaking originated, either in Europe, Central Asia or the Middle East, but the practice had spread within Europe prior to Roman times and, according to Pliny the Elder, had become a sophisticated enterprise by the time the Roman Empire came into being.
Proposed dates for the origin of cheesemaking range from around 8000 BCE (when sheep were first domesticated) to around 3000 BCE. The first cheese may have been made by people in the Middle East or by nomadic Turkic tribes in Central Asia. Since animal skins and inflated internal organs have, since ancient times, provided storage vessels for a range of foodstuffs, it is probable that the process of cheese making was discovered accidentally by storing milk in a container made from the stomach of an animal, resulting in the milk being turned to curd and whey by the rennet from the stomach. There is a legend with variations about the discovery of cheese by an Arab trader who used this method of storing milk.
Cheesemaking may have begun independently of this by the pressing and salting of curdled milk to preserve it. The observation that milk stored in an animal stomach produced more solid and better-textured curds may have led to the deliberate addition of rennet.
The earliest archeological evidence of cheesemaking has been found in Egyptian tomb murals, dating to about 2000 BCE. The earliest cheeses were likely to have been quite sour and salty, similar in texture to rustic cottage cheese or feta, a crumbly, flavorful Greek cheese.
Cheese produced in Europe, where climates are cooler than the Middle East, required less salt for preservation. With less salt and acidity, the cheese became a suitable environment for useful microbes and molds, giving aged cheeses their respective flavors.
Etymology of Cheese
The word cheese comes from Latin caseus, from which the modern word casein is closely derived. The earliest source is from the proto-Indo-European root *kwat-, which means "to ferment, become sour".
More recently, cheese comes from chese (in Middle English) and cīese or cēse (in Old English). Similar words are shared by other West Germanic languages — West Frisian tsiis, Dutch kaas, German Käse, Old High German chāsi — all from the reconstructed West-Germanic form *kasjus, which in turn is an early borrowing from Latin.
When the Romans began to make hard cheeses for their legionaries' supplies, a new word started to be used: formaticum, from caseus formatus, or "molded cheese" (as in "formed", not "moldy"). It is from this word that the French fromage, Italian formaggio, Catalan formatge, Breton fourmaj, and Provençal furmo are derived. The word cheese itself is occasionally employed in a sense that means "molded" or "formed". Head cheese uses the word in this sense.
Friday, 07 December 2012
Battery (electricity)
In electricity, a battery is a device consisting of one or more electrochemical cells that convert stored chemical energy into electrical energy. Since the invention of the first battery (or "voltaic pile") in 1800 by Alessandro Volta and especially since the technically improved Daniell cell
in 1836, batteries have become a common power source for many household
and industrial applications. According to a 2005 estimate, the
worldwide battery industry generates US$48 billion in sales each year, with 6% annual growth.
There are two types of batteries: primary batteries (disposable batteries), which are designed to be used once and discarded, and secondary batteries (rechargeable batteries), which are designed to be recharged and used multiple times. Batteries come in many sizes, from miniature cells used to power hearing aids and wristwatches to battery banks the size of rooms that provide standby power for telephone exchanges and computer data centers.
History of Milk
Humans first learned to regularly consume the milk of other mammals following the domestication of animals during the Neolithic Revolution or the invention of agriculture. This development occurred independently in several places around the world from as early as 9000–7000 BC in Southwest Asia to 3500–3000 BC in the Americas.
The most important dairy animals—cattle, sheep and goats—were first
domesticated in Southwest Asia, although domestic cattle have been independently derived from wild aurochs populations several times since. Initially animals were kept for meat, and archaeologist Andrew Sherratt
has suggested that dairying, along with the exploitation of domestic
animals for hair and labor, began much later in a separate secondary products revolution in the 4th millennium BC. Sherratt's model is not supported by recent findings, based on the analysis of lipid
residue in prehistoric pottery, that show that dairying was practiced
in the early phases of agriculture in Southwest Asia, by at least the
7th millennium BC.
From Southwest Asia domestic dairy animals spread to Europe (beginning around 7000 BC but not reaching Britain and Scandinavia until after 4000 BC), and South Asia (7000–5500 BC). The first farmers in central Europe and Britain milked their animals. Pastoral and pastoral nomadic economies, which rely predominantly or exclusively on domestic animals and their products rather than crop farming, were developed as European farmers moved into the Pontic-Caspian steppe in the 4th millennium BC, and subsequently spread across much of the Eurasian steppe. Sheep and goats were introduced to Africa from Southwest Asia, but African cattle may have been independently domesticated around 7000–6000 BC. Camels, domesticated in central Arabia in the 4th millennium BC, have also been used as a dairy animal in North Africa and the Arabian peninsula. In the rest of the world (i.e., East and Southeast Asia, the Americas and Australia) milk and dairy products were historically not a large part of the diet, either because they remained populated by hunter-gatherers who did not keep animals or the local agricultural economies did not include domesticated dairy species. Milk consumption became common in these regions comparatively recently, as a consequence of European colonialism and political domination over much of the world in the last 500 years.
In 1863, French chemist and biologist Louis Pasteur invented pasteurization, a method of killing harmful bacteria in beverages and food products.
After the industrial revolution in Britain, the increase in population and introduction of railways meant that the greater demand for milk could be met by integrated and long-distance distribution from the rural producers to the growing towns via rail by the 1860s. The Great Western Railway was carrying 25 million gallons of milk a year by 1900 from the West Country to London.
In 1884, Doctor Hervey Thatcher, an American inventor from New York, invented the first glass milk bottle, called 'Thatcher's Common Sense Milk Jar', which was sealed with a waxed paper disk. Later, in 1932, plastic-coated paper milk cartons were introduced commercially as a consequence of their invention by Victor W. Farris.
Milk
Milk is a white liquid produced by the mammary glands of mammals. It is the primary source of nutrition for young mammals before they are able to digest other types of food. Early-lactation milk contains colostrum, which carries the mother's antibodies to the baby and can reduce the risk of many diseases in the baby.
Milk is an important drink with many nutrients.
The world's dairy farms produced about 730 million tonnes of milk in 2011. India is the world's largest producer and consumer of milk, yet it neither exports nor imports milk. New Zealand, the European Union's 27 member states, Australia, and the United States are the world's largest exporters of milk and milk products. China and Russia are the world's largest importers of milk and milk products.
Throughout the world, there are more than 6 billion consumers of milk and milk products, the majority of them in developing countries. Over 750 million people live within dairy farming households. Milk is a key contributor to improving nutrition and food security particularly in developing countries. Improvements in livestock and dairy technology offer significant promise in reducing poverty and malnutrition in the world.
Thursday, 06 December 2012
Primary batteries
Primary batteries can produce current immediately on assembly.
Disposable batteries are intended to be used once and discarded. These
are most commonly used in portable devices that have low current drain,
are used only intermittently, or are used well away from an alternative
power source, such as in alarm and communication circuits where other
electric power is only intermittently available. Disposable primary
cells cannot be reliably recharged, since the chemical reactions are not
easily reversible and active materials may not return to their original
forms. Battery manufacturers recommend against attempting to recharge
primary cells.
Common types of disposable batteries include zinc–carbon batteries and alkaline batteries. In general, these have higher energy densities than rechargeable batteries, but disposable batteries do not fare well under high-drain applications with loads under 75 ohms (75 Ω).
Battery capacity and discharging
A battery's capacity is the amount of electric charge
it can store. The more electrolyte and electrode material there is in the cell, the greater its capacity. A small cell has less capacity than a larger cell with the same chemistry, even though both develop the same open-circuit voltage.
Because of the chemical reactions within the cells, the capacity of a battery depends on the discharge conditions such as the magnitude of the current (which may vary with time), the allowable terminal voltage of the battery, temperature, and other factors. The available capacity of a battery depends upon the rate at which it is discharged. If a battery is discharged at a relatively high rate, the available capacity will be lower than expected.
The capacity printed on a battery is usually the product of 20 hours multiplied by the constant current that a new battery can supply for 20 hours at 68 °F (20 °C), down to a specified terminal voltage per cell. A battery rated at 100 A·h will deliver 5 A over a 20-hour period at room temperature. However, if discharged at 50 A, it will have a lower capacity.
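As a small illustration of the 20-hour rating arithmetic above (an informal sketch with invented function names, not a standard API), the snippet below computes the constant current implied by a 20-hour rating and the equivalent charge in coulombs.

```python
def twenty_hour_rate_current(capacity_amp_hours: float) -> float:
    """Constant current (A) implied by a capacity rating at the 20-hour rate."""
    return capacity_amp_hours / 20.0

def charge_in_coulombs(capacity_amp_hours: float) -> float:
    """1 A·h corresponds to 3600 coulombs of charge."""
    return capacity_amp_hours * 3600.0

print(twenty_hour_rate_current(100))  # 5.0 A, matching the 100 A·h example in the text
print(charge_in_coulombs(100))        # 360000.0 C
```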
The relationship between current, discharge time, and capacity for a lead-acid battery is approximated (over a certain range of current values) by Peukert's law:

t = Q_P / I^k

where
- Q_P is the capacity when discharged at a rate of 1 amp.
- I is the current drawn from the battery (A).
- t is the amount of time (in hours) that the battery can sustain that current.
- k is a constant around 1.3.
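The Python sketch below (illustrative only; the capacity, currents, and function name are made-up example values, and Q_P here is the 1 A-rate capacity rather than the 20-hour rating discussed earlier) shows how the formula trades available runtime against discharge current.

```python
def peukert_runtime_hours(q_p: float, current_amps: float, k: float = 1.3) -> float:
    """Peukert's law, t = Q_P / I**k: estimated runtime in hours for a constant
    discharge current, where q_p is the capacity at a 1 A discharge rate (A·h)."""
    return q_p / (current_amps ** k)

# Illustrative numbers only: a hypothetical battery with a 100 A·h (1 A rate) capacity.
print(round(peukert_runtime_hours(100, 5), 1))   # ~12.3 hours at 5 A
print(round(peukert_runtime_hours(100, 50), 2))  # ~0.62 hours at 50 A
```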
Internal energy losses and limited rate of diffusion of ions through the electrolyte cause the efficiency of a real battery to vary at different discharge rates. When discharging at low rate, the battery's energy is delivered more efficiently than at higher discharge rates, but if the rate is very low, it will partly self-discharge during the long time of operation, again lowering its efficiency.
Installing batteries with different A·h ratings will not affect the operation of a device (except for the time it will work for) rated for a specific voltage unless the load limits of the battery are exceeded. High-drain loads such as digital cameras can result in delivery of less total energy, as happens with alkaline batteries. For example, a battery rated at 2000 mAh for a 10- or 20-hour discharge would not sustain a current of 1 A for a full two hours as its stated capacity implies.
Principle of operation
A battery is a device that converts chemical energy directly to electrical energy. It consists of a number of voltaic cells; each voltaic cell consists of two half-cells
connected in series by a conductive electrolyte containing anions and
cations. One half-cell includes electrolyte and the electrode to which anions (negatively charged ions) migrate, i.e., the anode or negative electrode; the other half-cell includes electrolyte and the electrode to which cations (positively charged ions) migrate, i.e., the cathode or positive electrode. In the redox
reaction that powers the battery, cations are reduced (electrons are
added) at the cathode, while anions are oxidized (electrons are removed)
at the anode. The electrodes do not touch each other but are electrically connected by the electrolyte.
Some cells use two half-cells with different electrolytes. A separator
between half-cells allows ions to flow, but prevents mixing of the
electrolytes.
Each half-cell has an electromotive force (or emf), determined by its ability to drive electric current from the interior to the exterior of the cell. The net emf of the cell is the difference between the emfs of its half-cells, as first recognized by Volta. Therefore, if the electrodes have emfs ℰ₁ and ℰ₂, then the net emf is ℰ₂ - ℰ₁; in other words, the net emf is the difference between the reduction potentials of the half-reactions.
The electrical driving force, or ΔV, across the terminals of a cell is known as the terminal voltage (difference) and is measured in volts. The terminal voltage of a cell that is neither charging nor discharging is called the open-circuit voltage and equals the emf of the cell. Because of internal resistance, the terminal voltage of a cell that is discharging is smaller in magnitude than the open-circuit voltage, and the terminal voltage of a cell that is charging exceeds the open-circuit voltage. An ideal cell has negligible internal resistance, so it would maintain a constant terminal voltage equal to its emf until exhausted, then dropping to zero. If such a cell maintained 1.5 volts and stored a charge of one coulomb, then on complete discharge it would perform 1.5 joules of work. In actual cells, the internal resistance increases under discharge, and the open-circuit voltage also decreases under discharge. If the voltage and resistance are plotted against time, the resulting graphs typically are a curve; the shape of the curve varies according to the chemistry and internal arrangement employed.
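A minimal sketch of these relationships, assuming a cell modeled as an emf in series with a fixed internal resistance (the 1.5 V emf comes from the text; the resistance and currents are invented example values):

```python
def terminal_voltage(emf: float, internal_resistance: float, current: float) -> float:
    """Terminal voltage of a cell modeled as an emf in series with a fixed internal
    resistance. current > 0 means discharging, current < 0 charging, 0 open circuit."""
    return emf - current * internal_resistance

print(terminal_voltage(1.5, 0.2, 0.0))   # 1.5 V open-circuit voltage (equals the emf)
print(terminal_voltage(1.5, 0.2, 1.0))   # 1.3 V while discharging at 1 A
print(terminal_voltage(1.5, 0.2, -1.0))  # 1.7 V while charging at 1 A
```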
As stated above, the voltage developed across a cell's terminals depends on the energy release of the chemical reactions of its electrodes and electrolyte. Alkaline and zinc–carbon cells have different chemistries but approximately the same emf of 1.5 volts; likewise NiCd and NiMH cells have different chemistries, but approximately the same emf of 1.2 volts. On the other hand the high electrochemical potential changes in the reactions of lithium compounds give lithium cells emfs of 3 volts or more.
History of Battery
In strict terms, a battery is a collection of multiple electrochemical cells, but in popular usage battery often refers to a single cell. For example, a 1.5-volt AAA battery is a single 1.5-volt cell, and a 9-volt battery has six 1.5-volt cells in series. The first electrochemical cell was developed by the Italian physicist Alessandro Volta in 1792, and in 1800 he invented the first battery, a "pile" of many cells in series.
The usage of "battery" to describe electrical devices dates to Benjamin Franklin, who in 1748 described multiple Leyden jars (early electrical capacitors) by analogy to a battery of cannons. Thus Franklin's usage to describe multiple Leyden jars predated Volta's use of multiple galvanic cells. It is speculated, but not established, that several ancient artifacts consisting of copper sheets and iron bars, and known as Baghdad batteries may have been galvanic cells.
Volta's work was stimulated by the Italian anatomist and physiologist Luigi Galvani, who in 1780 noticed that dissected frog's legs would twitch when struck by a spark from a Leyden jar, an external source of electricity. In 1786 he noticed that twitching would occur during lightning storms. After many years Galvani learned how to produce twitching without using any external source of electricity. In 1791, he published a report on "animal electricity."[10] He created an electric circuit consisting of the frog's leg (FL) and two different metals A and B, each metal touching the frog's leg and each other, thus producing the circuit A–FL–B–A–FL–B...etc. In modern terms, the frog's leg served as both the electrolyte and the sensor, and the metals served as electrodes. He noticed that even though the frog was dead, its legs would twitch when he touched them with the metals.
Within a year, Volta realized the frog's moist tissues could be replaced by cardboard soaked in salt water, and the frog's muscular response could be replaced by another form of electrical detection. He had already studied the electrostatic phenomenon of capacitance, which required measurements of electric charge and of electrical potential ("tension"). Building on this experience, Volta was able to detect electric current through his system, also called a Galvanic cell. The terminal voltage of a cell that is not discharging is called its electromotive force (emf); like electrical potential, it is measured in volts, a unit named in honor of Volta. In 1800, Volta invented the battery by placing many voltaic cells in series, piling them one above the other. This voltaic pile gave a greatly enhanced net emf for the combination, with a voltage of about 50 volts for a 32-cell pile. In many parts of Europe batteries continue to be called piles.
Volta did not appreciate that the voltage was due to chemical reactions. He thought that his cells were an inexhaustible source of energy, and that the associated corrosion effects at the electrodes were a mere nuisance, rather than an unavoidable consequence of their operation, as Michael Faraday showed in 1834. According to Faraday, cations (positively charged ions) are attracted to the cathode, and anions (negatively charged ions) are attracted to the anode.
Although early batteries were of great value for experimental purposes, in practice their voltages fluctuated and they could not provide a large current for a sustained period. Later, starting with the Daniell cell in 1836, batteries provided more reliable currents and were adopted by industry for use in stationary devices, in particular in telegraph networks where they were the only practical source of electricity, since electrical distribution networks did not exist at the time. These wet cells used liquid electrolytes, which were prone to leakage and spillage if not handled correctly. Many used glass jars to hold their components, which made them fragile. These characteristics made wet cells unsuitable for portable appliances. Near the end of the nineteenth century, the invention of dry cell batteries, which replaced the liquid electrolyte with a paste, made portable electrical devices practical.
Since then, batteries have gained popularity as they became portable and useful for a variety of purposes.
The usage of "battery" to describe electrical devices dates to Benjamin Franklin, who in 1748 described multiple Leyden jars (early electrical capacitors) by analogy to a battery of cannons. Thus Franklin's usage to describe multiple Leyden jars predated Volta's use of multiple galvanic cells. It is speculated, but not established, that several ancient artifacts consisting of copper sheets and iron bars, and known as Baghdad batteries may have been galvanic cells.
Volta's work was stimulated by the Italian anatomist and physiologist Luigi Galvani, who in 1780 noticed that dissected frog's legs would twitch when struck by a spark from a Leyden jar, an external source of electricity. In 1786 he noticed that twitching would occur during lightning storms. After many years Galvani learned how to produce twitching without using any external source of electricity. In 1791, he published a report on "animal electricity."[10] He created an electric circuit consisting of the frog's leg (FL) and two different metals A and B, each metal touching the frog's leg and each other, thus producing the circuit A–FL–B–A–FL–B...etc. In modern terms, the frog's leg served as both the electrolyte and the sensor, and the metals served as electrodes. He noticed that even though the frog was dead, its legs would twitch when he touched them with the metals.
Within a year, Volta realized the frog's moist tissues could be replaced by cardboard soaked in salt water, and the frog's muscular response could be replaced by another form of electrical detection. He already had studied the electrostatic phenomenon of capacitance, which required measurements of electric charge and of electrical potential ("tension"). Building on this experience, Volta was able to detect electric current through his system, also called a Galvanic cell. The terminal voltage of a cell that is not discharging is called its electromotive force (emf), and has the same unit as electrical potential, named (voltage) and measured in volts, in honor of Volta. In 1800, Volta invented the battery by placing many voltaic cells in series, piling them one above the other. This voltaic pile gave a greatly enhanced net emf for the combination, with a voltage of about 50 volts for a 32-cell pile. In many parts of Europe batteries continue to be called piles.
Volta did not appreciate that the voltage was due to chemical reactions. He thought that his cells were an inexhaustible source of energy, and that the associated corrosion effects at the electrodes were a mere nuisance, rather than an unavoidable consequence of their operation, as Michael Faraday showed in 1834. According to Faraday, cations (positively charged ions) are attracted to the cathode, and anions (negatively charged ions) are attracted to the anode.
Although early batteries were of great value for experimental purposes, in practice their voltages fluctuated and they could not provide a large current for a sustained period. Later, starting with the Daniell cell in 1836, batteries provided more reliable currents and were adopted by industry for use in stationary devices, in particular in telegraph networks where they were the only practical source of electricity, since electrical distribution networks did not exist at the time. These wet cells used liquid electrolytes, which were prone to leakage and spillage if not handled correctly. Many used glass jars to hold their components, which made them fragile. These characteristics made wet cells unsuitable for portable appliances. Near the end of the nineteenth century, the invention of dry cell batteries, which replaced the liquid electrolyte with a paste, made portable electrical devices practical.
Since then, batteries have gained popularity as they became portable and useful for a variety of purposes.