Wednesday, October 30, 2019

Post-War Evolution of The Aircraft Manufacturing Industry Case Study - 1

Post-War Evolution of The Aircraft Manufacturing Industry - Case Study Example Greater speed was required in both military and commercial jets. The piston engine was superseded by a more powerful jet engine that could propel an aircraft faster than the speed of sound. The jet engine is based on Newton's third law of motion: the engine generates thrust by expelling exhaust gases rearward, and the reaction drives the aircraft forward at very high speed. This was a major milestone in the evolution of the jet. Throughout the development of the aircraft engine, engineers worked largely by trial and error and overlooked some key aspects that later had to be reviewed. The new jets with high-power engines lacked hydraulic flight control systems, which are very important in flight, as well as air conditioning and ejection seats, among other features. Engineers had to develop new models that would accommodate these modifications for a stable flight. They also wanted a jet that could maneuver easily in the air. During this period, many jets became obsolete before any great innovation was achieved because of this trial-and-error approach. After World War II, engineers set out to improve the reliability of aircraft that could be used for both military and commercial purposes. They wanted a jet that could fly in harsh weather conditions without losing stability. To achieve this, engineers spent a great deal of time perfecting high-power engines. There was also the challenge of increasing engine power without increasing its weight: in general, the bigger the engine, the more powerful it is, so engineers had to choose materials that would yield a large, reliable, and light engine. They also faced the challenge of making an engine that was economical in fuel consumption. With time, they developed an engine that could propel a plane across the ocean on less fuel than a piston engine consumed (Albert and Army War College, 1997). The development of this engine had a great social impact on people.
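As a simple illustration of the Newton's-third-law principle mentioned above (this is a standard textbook expression, not a formula taken from the case study itself), the net thrust of a jet engine can be written as:

```latex
% Net thrust: mass flow rate times the change in gas velocity, plus the
% pressure term at the nozzle exit (small when the nozzle expands ideally).
F = \dot{m}\,(v_e - v_0) + (p_e - p_0)\,A_e
```

Here \dot{m} is the mass flow rate through the engine, v_e the exhaust velocity, v_0 the flight speed, p_e and p_0 the nozzle-exit and ambient pressures, and A_e the nozzle exit area. Raising v_e relative to v_0 is what the post-war engineers were pursuing when they tried to deliver more thrust without a proportional increase in engine weight.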

Monday, October 28, 2019

Stereotypical Roles Women Play In Advertisements Essay Example for Free

Stereotypical Roles Women Play In Advertisements Essay Since the commonly known creation Bible story of Adam and Eve, women have been viewed as subsidiary to men; society has formed a bias that females cannot perform jobs equivalent to or of the same value as men's. Advertisements help feed this stereotypical image of women functioning as housewives and caretakers. One might ask: is advertising simply mirroring society's view of the roles of females, or is it part of the reason why America still labels women as domesticated? Printed advertisements portray women as inferior to men through their context, imagery, and content, and companies use this conventional view of women in marketing strategies in order to sell their products. The model in Figure 1 ("The Mini Automatic. For simple driving." advertisement; source: Mini Automatic Transmission printed ad) is extremely feminine: her nails are painted, she is wearing multiple rings on her fingers as she holds a light grip on the steering wheel, her hair is set in perfect curls, her make-up is flawless, and she appears to be dressed up for a special occasion. The expression on the woman's face displays her indecisiveness and insecurity about her driving abilities. Figure 1 states that the Mini Automatic is for "simple driving", implying that the motorist is incapable of driving an automobile without difficulty. The advertisement is using the sentiment that women are inadequate drivers; it implies that if a woman is competent enough to maneuver the Mini Automatic, then the car must be simple. Before the 1950s, women in America were expected to cook, clean, and take care of the household, whereas men were looked to as the financial support system. Figures 2 and 3 subliminally carry the message of men doing a woman's role as a housewife. Both images contain the phrase "whipped so good", meaning that if one buys Pinnacle Vodka, it will in turn reverse the classic expectation of women fulfilling household chores and encourage men to do so instead. The advertisements encourage the idea of male superiority because the men are performing duties that are seen as abnormal for them, such as doing laundry or ironing clothes. The main objective of the Figure 4 Swiffer WetJet advertisement is for the audience to find a commonality between themselves and the mother standing in the kitchen. The advertisement displays a modern-day housewife cleaning up after her child.
The text states: "He made it in the kitchen and ate it in the dining room. With Swiffer WetJet, both floors were clean before he was." Figure 5 shows two women in a clean kitchen with the Orbit slogan "after any meal", suggesting that they are the mothers who cook breakfast, lunch, and dinner for the family. Both Figures 4 and 5 exhibit women in the kitchen, either cleaning or cooking. Society struggles with gender expectations. In Figures 1 through 5, women are portrayed as the ones who take care of the family and clean the house, whereas men generally take on the dominant role. Advertisements continue to use marketing strategies that stereotype women because people still uphold the belief that men are superior. Advertisements are a part of the problem.

Saturday, October 26, 2019

Women in Latin America during the Colonization Essay

Women in Latin America during the Colonization The perception of inequality was evident in colonial Spanish America, where men believed that women lacked the capacity to reason as soundly as they did. A normal day for European women in the New World was generally characterized by male domination: marriages were arranged by fathers, women rarely went out except to attend church, and women did not have the right to express their opinions on political or social issues. In response to this treatment, European women tried to find ways to escape male domination and demonstrate their intellectual capacities; for example, some women became part of a convent, wrote their desires and disappointments in secret, or even dressed as men to discover what the real world was like. Native women, on the other hand, were not treated the same way, because they enjoyed an economic importance that placed them far from being men's victims. European women were heavily discriminated against and dominated during colonial times, but little by little women fought for their rights and freed themselves from male domination. In the year 1520 European women began arriving in the New World; these women were treated as minors and only became adults at the age of 25. At this time women were destined to be married. Marriages were controlled by fathers, who made sure that the husbands chosen for their daughters were equal or better in economic standing. The issue of "inequality", of course, rarely arose at the top elite level, but for the middle and lower classes it was a major issue. According to one of the stories in Tales of Potosi, "The Strange Case of Fulgencio Orozco", people from the lower classes went through many difficulties to arrange marriages for their daughters; in this story a Spanish man of limited means experiences many complications trying to organize a marriage for his daughter, never obtains a good marriage for her, and finally goes mad, loses his faith in God, and dies. Cases like this occurred throughout Spanish America among the lower classes; marriage was an economic contract that almost always benefited the top elite class. On a normal day a European woman was required to stay home except to go to church. The church became a place of reunion for women of the top... ...The only time European women could have all these freedoms was after the death of their husbands; the inheritance from their husbands gave each woman an economic base to manage a business and be independent in society. European women were thus segregated and kept under male control during colonial times, but little by little women fought for their rights and freed themselves from male domination. Today the status of women's civil rights varies dramatically in different countries and, in some cases, among groups within the same country, such as ethnic groups or economic classes. In recent decades women around the world have made strides in political participation; for example, women acquired the right to vote, the right to take part in political issues, the right to marry whom they want, and the right to be free as individuals. Resources: Benjamin Keen and Keith Haynes. A History of Latin America, Seventh Edition. Houghton Mifflin Company, Boston and New York, 2004. Bartolome Arzans de Orsua y Vela. Tales of Potosi. Providence: Brown University, 1975. Emma Sordo. Latin American Civilization Class Notes. 5/25/05.

Thursday, October 24, 2019

'A Taste of Honey' - Improvements

During the rehearsal period before our short performances of 'A Taste of Honey', each actor improved all aspects of their performance, from their interpretation to their proxemics on stage. This was due to our intense rehearsal period, during which we developed our own acting skills as well as our way of interpreting characters. One of the issues I faced whilst playing Geoff was how best to convey his love and care for Jo. Because this is a core and essential part of his character, I felt that I had to work on this part of Geoff more than on other parts. To achieve this, I worked closely with Poppy (who played the character of Jo) to perfect the scene which opens the piece we were performing, because this was the biggest chance we had to express Geoff's feelings toward Jo whilst Helen is not in the scene. I included more gestures to show my feelings, such as stroking Jo's shoulder and helping her up because she is pregnant; these worked together to show that my character cares immensely for Jo. In turn, several techniques helped me to perfect my interpretation. A strategy that I found extremely helpful was called 'Reflection in Role'; during this process I was asked questions about my character directly after the scene had finished, so that I would still be in role and have the feelings of the character fresh in my head. This technique helped to establish a relationship between our characters and develop our understanding of the Human Context. The next strategy we used is called 'Hot Seat', which involved sitting in a chair in front of the class, in character, and being asked questions by the audience about feelings, relationships or statuses within the scene. This helped us to develop a deeper understanding of our characters. Furthermore, one of the most common issues within our class was that our dialogue and its delivery didn't sound believable in the 'Kitchen-Sink' context. The style of the piece was naturalistic, which meant that our actions and the way we delivered our dialogue had to reflect this. An example of this is that, during the fight scene, our lines had to overlap because this is what would happen in a real fight; we had to make it seem as if our lines were unscripted. Repetition of the scene helped us to familiarise ourselves with individual cues and certain moves between characters, and to be careful not to block each other; this was especially apparent in the scene where Helen is parading across the floor space and steps in front of Jo and Geoff quite often. To perfect the timing of this scene we practised it many times, as the repetition helped us to remember and time the section perfectly. Other techniques that we used included going through the scene without stopping, even if we made mistakes, because this would highlight which areas we needed to improve. Because of the realism theme, everything had to feel as if it was happening for the first time. This was unusual for me, because I am used to each of my lines being heavily rehearsed and sounding it. However, in 'A Taste of Honey' I had to act as if it was the first time I had said each line, and react accordingly. I found this particularly hard with the line "Don't tell her I came for you," because I had rehearsed it so much that it had started to sound as if it wasn't important to the scene, which it was. I improved this by changing the tone of my voice each time I said it, so that it would sound more genuine. In turn, these techniques also helped with our next dilemma in rehearsing: our positions on stage.
Before we practised in front of an audience, our scene was using far too much space on stage; we improved this by restricting the amount of room we could use as a performing area. Our group also decided to experiment with different proxemics, so that we could show relationships and the interest and focus of the characters just by the positioning on stage. We also found that we often blocked each other on stage, especially during the fight scene, which would distract from the main action. This was easily corrected, however, and we were able to avoid upstaging each other by our recorded concluding performance. A common problem that some groups faced was that they forgot about their audience and played their characters too much in profile, so a lot of facial expressions were missed. This was fixed by remembering that the audience is the most important part of the theatre: if they were not there, there would be no theatre! The final obstacle that we faced as a group in our rehearsal period was how to vary the dynamics during the performance. As we are supposed to convey a variety of emotions to the audience during the scene, we had to include different dynamics. To achieve this, our group experimented with different paces, especially during the argument section. We experimented with pauses in places where they felt necessary, to let the emotions of the scene register with the audience and to dramatize the moment. In each scene that required it, lines would be delivered at a very fast pace so as to heighten the audience's emotions and keep them on the edge of their seats. In contrast, some of the scene was slowed down so that it was much slower than the rest of the piece. This would add tension to the scene (especially when Helen and Jo are discussing their futures) and would juxtapose the fight section. This would also create a stronger effect, as it shows that Helen does truly care about her daughter but doesn't know how to show or prove it. Before our rehearsal period our characters were very one-dimensional and 'flat', but after practising, interpreting, and getting used to our characters we were able to make them much more rounded and more interesting to watch during a performance.

Wednesday, October 23, 2019

Impaired Asset

IMPAIRMENT OF ASSETS

The following information relates to Q1 & Q2. Information about three assets is given in the table below:

                        Aldo       Balbo      Casco
Value in Use            $150,000   $195,000   $105,000
Carrying Amount         $90,000    $140,000   $112,000
Net Realizable Value    $115,000   $136,000   $85,000

Q1. What are the recoverable amounts of each asset? (MCQ)
A. Aldo ($115,000), Balbo ($136,000), Casco ($105,000)
B. Aldo ($150,000), Balbo ($136,000), Casco ($105,000)
C. Aldo ($150,000), Balbo ($195,000), Casco ($105,000)
D. Aldo ($115,000), Balbo ($195,000), Casco ($85,000)
(2 marks)

Q2. What are the impairment losses on each asset? (MCQ)
A. Aldo ($0), Balbo ($0), Casco ($0)
B. Aldo ($0), Balbo ($55,000), Casco ($20,000)
C. Aldo ($25,000), Balbo ($4,000), Casco ($7,000)
D. Aldo ($0), Balbo ($0), Casco ($7,000)
(2 marks)

Q3. A cash-generating unit has the following assets: Building $600,000; Plant & Machinery $100,000; Goodwill $80,000; Inventory $50,000; Total $830,000. One of the machines, valued at $60,000, has been damaged and will be scrapped. The total recoverable amount estimated for the cash-generating unit is $470,000. What is the recoverable amount of the current assets after the impairment loss? (MCQ)
A. $21,800
B. $28,000
C. $33,500
D. $50,000
(2 marks)

Q4. Which of the following correctly defines the recoverable amount of an asset? (MCQ)
A. Current market value of the asset less cost of disposal
B. Higher of fair value less cost of disposal and value in use
C. Higher of carrying amount and fair value
D. Lower of fair value less cost of disposal and value in use
(2 marks)

Q5. An asset has a carrying amount of $55,000 at the year end 31st March 2002. Its market value is $47,000 with a disposal cost of $3,500. A new asset would cost $85,000. The company expects the asset to generate $19,000 per annum of cash flows for the next three years. The cost of capital is 8%. What is the impairment loss to be recognized for the year end 31st March 2002? (FIB) $______ (2 marks)

Q6. Which of the following are internal indications of impairment? (MRQ)
- A fall in the market value of a machine due to inflation
- The management realized that an asset is unable to produce up to its full capacity
- A report prepared by the warehouse manager that one of the lifter cars has crashed into a wall
- The development of an intention by management to sell the asset during the next 3 months
(2 marks)

Q7. Moby purchased an asset on 1st September 2009 at a cost of $500,000, with a useful life of ten years and no cash inflow at the time of disposal. The asset has been depreciated until 31st October 2014. At that date an accident occurred which damaged the asset, and an impairment test was carried out by Moby. On 31st October 2014 the fair value of the asset was $160,000 with a $10,000 cost of disposal. The expected future cash flows were $13,000 per annum for the next five years. The cost of capital is 10%, with a five-year annuity factor of 3.79. Calculate the impairment at 31st October 2014. (MCQ)
A. $0
B. $100,000
C. $150,970
D. $200,730
(2 marks)

Q8. A cash-generating unit has the following assets: Property & Plant $400,000; Machinery $90,000; Goodwill $75,000; License $5,000; Net Assets (at realizable value) $30,000; Total $600,000. The company breached government legislation, which causes the value of the cash-generating unit to fall by $200,000. What will be the value of Property & Plant after the impairment? (MCQ)
A. $101,010
B. $126,316
C. $266,667
D. $298,990
(2 marks)

Q9. Which of the following is not an indicator of impairment? (MCQ)
A. The NRV of inventory has fallen due to damage, but its carrying amount is still lower than NRV
B. Technological advancement has boomed in a country, making old machinery obsolete
C. The cost of capital of a company has increased due to a rise in market rates
D. The carrying amount of an asset is higher than its recoverable amount
(2 marks)

Q10. A company purchased an asset on 1st January 2000 costing $2.1 million, with a life of 10 years. On 31st December 2001 the fair value of the asset was $1.9 million. On 31st December 2002 the recoverable amount of the asset was $0.7 million. Calculate the impairment loss to be recorded in the Profit & Loss account on 31st December 2002. (FIB) $______ (2 marks)

Q11. A cash-generating unit has the following assets: Building $409,050; Plant & Machinery $311,000; Goodwill $30,500; Inventory $156,000; Total $906,550. One of the plants, valued at $91,000, was destroyed and will be scrapped. The total recoverable amount estimated for the cash-generating unit is $760,050. What is the recoverable amount of the Plant & Machinery after the impairment loss? (FIB) $______ (2 marks)

Q12. Meagan purchased an asset on 1st September 2015 at a cost of $300,000, with a useful life of six years and no residual value. The asset has been depreciated until 31st October 2020. At that date the asset was damaged, and an impairment test was carried out by Meagan. On 31st October 2020 the fair value of the asset was $60,000 with a $3,000 cost of disposal. The expected future cash flows were $16,000 per annum for the next five years. The cost of capital is 13%, with a five-year annuity factor of 3.52. Calculate the impairment at 31st October 2020. (MCQ)
A. $0
B. $680
C. $6,320
D. $7,000
(2 marks)

Q13. A delivery van has a carrying amount of $39,000 at the year end 31st March 2016. Its market value is $33,800 with a disposal cost of $1,250. A new delivery van would cost $46,500. The company expects the van to generate $9,300 per year of cash flows for the next four years. The cost of capital is 5%. What is the impairment loss to be recognized for the year end 31st March 2016? (MCQ)
A. $1,250
B. $5,200
C. $6,022
D. $6,450
(2 marks)

Q14. ZZZ Co purchased a non-current asset on 1st January 2012 costing $3.75 million, with a life of eight years. On 31st December 2013 the fair value of the non-current asset was $2.95 million. On 31st December 2014 the recoverable amount of the asset was $1.25 million. Calculate the impairment loss to be recorded in the Profit & Loss account on 31st December 2014, to the nearest $000. (FIB) $______ 000 (2 marks)

IMPAIRMENT OF ASSETS (ANSWERS)

Q1. C
The recoverable amount is the higher of the value in use and the net realizable value.

Q2. D
Impairment loss = Carrying amount - Recoverable amount (where positive).
Aldo = $90,000 - $150,000 = (-$60,000): no impairment
Balbo = $140,000 - $195,000 = (-$55,000): no impairment
Casco = $112,000 - $105,000 = $7,000 impairment

Q3. D
Assets which have their own impairment criteria do not fall under the scope of IAS 36 - Impairment of Assets. Inventory is written down under IAS 2 - Inventories, where it is measured at the lower of cost and net realizable value, so it remains at $50,000.

Q4. B

Q5. $6,037
Value in use:
Cash flow   Discount factor (8%)   Present value
19,000      0.926                  $17,594
19,000      0.857                  $16,283
19,000      0.794                  $15,086
Total PV                           $48,963
Fair value less costs to sell = $47,000 - $3,500 = $43,500
Recoverable amount (higher of the two) = $48,963
Impairment loss = $55,000 - $48,963 = $6,037

Q6.
A fall in the market value of a machine due to inflation (external indication)
The management realized that an asset is unable to produce up to its full capacity (internal indication)
A report prepared by the warehouse manager that one of the lifter cars has crashed into a wall (internal indication)
The development of an intention by management to sell the asset during the next 3 months (internal indication)

Q7. B
Carrying amount = 500,000 x 5/10 = 250,000
Fair value less costs to sell = 160,000 - 10,000 = 150,000
Value in use = 13,000 x 3.79 = 49,270
Recoverable amount = $150,000; Impairment = 250,000 - 150,000 = $100,000

Q8. D
The total impairment of the CGU is $200,000.
Goodwill is impaired by $75,000, leaving $125,000 of impairment to be allocated to the other assets.
Total of assets to be impaired is $495,000 (400 + 90 + 5, in $000).
Impairment allocated to Property & Plant = (400,000 / 495,000) x 125,000 = 101,010
Value after impairment = 400,000 - 101,010 = $298,990

Q9. A
The NRV of the inventory is still greater than its carrying amount, so no impairment has arisen.

Q10. $742,500 (calculations in $000)
Cost = 2,100
Depreciation = 2,100 x 2/10 = 420
Carrying amount (after 2 years) = 2,100 - 420 = 1,680
Revaluation surplus = 1,900 - 1,680 = 220 to the Revaluation Reserve
New carrying amount = 1,900
Depreciation = 1,900 x 1/8 = 237.5
Carrying amount (after 1 year) = 1,900 - 237.5 = 1,662.5
Impairment loss = 1,662.5 - 700 = 962.5
Reversal of the Revaluation Reserve = 220
Excess recorded in the Profit & Loss account = 962.5 - 220 = $742,500

Q11. $211,257
The total impairment of the CGU is $146,500.
Goodwill is impaired by $30,500, leaving $116,000 of impairment to be allocated to the other assets. The destroyed plant is written down by $91,000, leaving $25,000 of impairment.
Total of assets to be impaired is $629,050 (409,050 + 311,000 - 91,000).
Impairment allocated to Plant & Machinery = (220,000 / 629,050) x 25,000 = 8,743
Value after impairment = 220,000 - 8,743 = $211,257

Q12. A
Carrying amount = 300,000 x 1/6 = 50,000
Fair value less costs to sell = 60,000 - 3,000 = 57,000
Value in use = 16,000 x 3.52 = 56,320
Recoverable amount = $57,000, which exceeds the carrying amount, so impairment = $0

Q13. C
Value in use: 9,300 x 3.546 (annuity factor, 5%, years 1-4) = $32,978
Fair value less costs to sell = $33,800 - $1,250 = $32,550
Recoverable amount (higher of the two) = $32,978
Impairment loss = $39,000 - $32,978 = $6,022

Q14. $1,071,000 (calculations in $000)
Cost = 3,750
Depreciation = 3,750 x 2/8 = 937.5
Carrying amount (after 2 years) = 3,750 - 937.5 = 2,812.5
Revaluation surplus = 2,950 - 2,812.5 = 137.5 to the Revaluation Reserve
New carrying amount = 2,950
Depreciation = 2,950 x 1/6 = 491.67
Carrying amount (after 1 year) = 2,950 - 491.67 = 2,458.33
Impairment loss = 2,458.33 - 1,250 = 1,208.33
Reversal of the Revaluation Reserve = 137.5
Excess recorded in the Profit & Loss account = 1,208.33 - 137.5 = 1,070.83, i.e. $1,070,830
Nearest $000 = $1,071,000
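The mechanics used in these answers can be sketched in a few lines of code. The snippet below is an illustration only, not part of the original question bank; the function names and structure are assumptions, and it simply reproduces the single-asset test used in Q7 and the goodwill-first CGU allocation used in Q8.

```python
# Illustrative sketch of the impairment mechanics above; function names and
# structure are assumptions for this example, not part of the question bank.

def recoverable_amount(fair_value, disposal_cost, annual_cash_flow, annuity_factor):
    """Higher of fair value less costs to sell and value in use."""
    fv_less_costs = fair_value - disposal_cost
    value_in_use = annual_cash_flow * annuity_factor
    return max(fv_less_costs, value_in_use)

def impairment_loss(carrying_amount, recoverable):
    """Impairment arises only when the carrying amount exceeds the recoverable amount."""
    return max(carrying_amount - recoverable, 0)

# Q7: asset depreciated to 500,000 x 5/10 = 250,000
carrying = 500_000 * 5 / 10
recoverable = recoverable_amount(160_000, 10_000, 13_000, 3.79)
print(impairment_loss(carrying, recoverable))  # 100000.0

def allocate_cgu_impairment(goodwill, other_assets, total_impairment):
    """Goodwill absorbs the loss first; the remainder is spread pro rata
    over the other assets that fall within the impairment rules."""
    remaining = max(total_impairment - goodwill, 0)
    base = sum(other_assets.values())
    return {name: value - remaining * value / base
            for name, value in other_assets.items()}

# Q8: $200,000 CGU impairment; assets carried at realizable value are excluded.
assets = {"property_plant": 400_000, "machinery": 90_000, "license": 5_000}
written_down = allocate_cgu_impairment(75_000, assets, 200_000)
print(round(written_down["property_plant"]))  # 298990
```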

Tuesday, October 22, 2019

Free Essays on Why Induction Is Invalid

Why Induction is Invalid Induction is the process of deriving general principles from particular facts or instances. The "problem of induction" lies in using a finite number of instances and generalizing from those instances to an infinite possibility of instances. Philosophers of science constantly debate whether the "problem of induction" can be solved or is unsolvable. The following essay looks at one author who claims that the "problem of induction" can be solved. This essay will examine the article Philosophical Foundations of Physics: An Introduction to the Philosophy of Science by Rudolf Carnap. The first section will reconstruct Carnap's argument for why he believes the "problem of induction" can be solved, and the second section will offer a critical analysis of his argument. The following is the reconstruction of Carnap's argument from the essay titled in the introduction: If logical/inductive probability can be used to show 100% confirmation with no negative instances, then the "problem of induction" can be solved. Logical/inductive probability can be used to show 100% confirmation with no negative instances. Therefore, the "problem of induction" can be solved. In the first premise, Carnap explores the foundation of his argument. The foundation of his argument lies in using induction to derive a valid deductive argument. His reasoning behind this is due to the fact that the conclusion of an inductive argument is never certain. Even if the premises are assumed to be true and the inference is a valid inductive argument, the conclusion may be false (Carnap, p. 17-18). On the other hand, the conclusion of a valid deductive argument is always certain. In deductive logic, inference leads from a set of premises to a conclusion just as certain as the premises. If you have reason to believe the premises, you have equally valid reason to believe the conclusion that...

Monday, October 21, 2019

Essay on Solar Energy

Essay on Solar Energy Economic feasibility of large-scale solar energy collection Results and Discussion Table 1 (Appendix) displays a middle-ground estimate of PV production and cost for a 10kW PV system in dollars. The data can represent a large residential system or a small commercial system. The required calculations were scaled down and up, with a number of adjustments for the economies of scale linked to the installation of larger systems. The basic costs include installation and inverter replacement. Costs show a decreasing trend and then flatten temporarily. For instance, the $80,000 installation figure is considered fair, even optimistic; at that level a typical residential system would cost about $8 per watt, and costs are likely to decline over time. The key issues in the cost analysis are the panels' lifetime and the discount rate required for evaluating the project. Different types of panels normally carry limited warranties of at least 20 years or longer (Nemet 6). The data presented assume a 25-year lifetime in the calculations; extending the life to about 30 years would make the cost per kWh smaller as a result of discounting. The table identifies a range of actual interest rates. A number of industry sources have suggested that a high rate is more reflective of the different interest rates normally faced by a variety of buyers. These values are always higher than the real social discount rate that one would apply in public policy analysis; in that respect, a lower interest rate could be the most relevant. In Table 1, the given low interest rates are relatively low and hence appropriate for evaluating the social discount rate, while the two high ones are relevant for evaluating the market opportunity cost of capital. The results also show that, after installation, the largest cost the owner of a PV solar system faces is replacing the inverter (Barbose, Darghouth and Wiser 3). Research in this field reports that the estimated mean time to failure for inverters is approximately 10 years. Assuming approximately 8 years means that the inverter will require replacement at least twice within the 25-year panel life; according to the results in Table 1, these replacements would occur in the 8th and 16th years. The cost of an inverter for the 10kW system is in the range of $8,000, and it has a strong possibility of declining with time. Additionally, inverter costs are assumed to decline at about 2% each year in real terms, consistent with the Navigant Consulting study for renewable energy. The discount rates and costs are combined to give a present cost for the PV system. Additionally, Table 1 displays the data for the simultaneous cases, including the price cap Psim and PsimH, the high price-volatility case (LBBW 4). The highest and lowest valuations are displayed in the simultaneous results, and the PsimL results always fall within the displayed range. The ISO price results, with no augmentation for periods when the price caps were binding, are similar to the PsimH values (Barbose, Darghouth and Wiser 6).
Other studies of PV production over a panel's lifetime report the two TRNSYS simulation adjustments behind Table 1 when the PV system's solar production is evaluated over its life. The aging effect is one factor affecting PV output: production declines over time, with the best estimate at around 1% of original potential per year. Soiling is another factor that affects PV cell production: soiled panels absorb less solar radiation and therefore produce less electricity. The effect of soiling on PV cells depends on idiosyncratic factors, such as the density and amount of rainfall, and on endogenous traits, such as maintenance effort. The data in Table 1 reflect the effect of aging but not of soiling. A unit of electricity produced by solar PV in the future is not equal in value to a unit produced today: even if the real cost of electricity remained constant, a positive real interest rate would give future production a lower present value, while rising electricity prices over time would increase the present value of production. Understanding the declining trend of solar PV costs is vital in policy formulation because of the irreversible, durable nature of this kind of investment. If PV costs fall rapidly, for example because of subsidy policy, many companies would be observed delaying investment; if the decline continues each year, the amount of renewable energy installed would increase (Mints 5). Table 2 (Appendix) presents the figures of Table 1 translated into levelized benefits and costs. At a three percent real annual interest rate, column 2 shows the net cost of the PV solar installation, which is equivalent to purchasing each MWh over the panel's life at a constant real price. Conclusion A careful analysis of both market and non-market traits is key to understanding the benefits and costs of PV solar power. This study presented a method for analyzing the market value of PV solar power. The method showed that output arrives disproportionately when the weather is sunny and system demand is relatively high, and its application suggested that accounting for the time-varying nature of a solar panel's electricity production may increase the estimated value of its output substantially. Using real-time prices allows the value to shift by between 0% and 20%. Using the simulated model prices ensures that peak gas capacity covers its fixed costs through higher energy prices, which makes the real-time values increase (Bloomberg 6). In a wholesale electricity market, the simulation normally shows substantially lower volatility. This study took into consideration the time-varying savings in line losses, especially when the power is produced on a larger site. The study, however, fails to account for the potential savings from a reduced requirement for distribution and transmission capacity. A separate analysis of such factors suggests they could amount to more than two percentage points of PV solar valuation.
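As a rough sketch of the discounting logic described above, the short script below computes a present cost and a levelized cost per kWh for a hypothetical system. The $80,000 installation cost, the roughly $8,000 inverter replaced around years 8 and 16, the 25-year life, the ~1% annual aging loss, and the 3% real rate come from the discussion above; the annual output figure and the function names are assumptions made for this illustration, not values from the study's tables.

```python
# Rough sketch of the present-value logic in the essay; the annual output figure
# and the function names are assumptions for this illustration, not study values.

def present_value(cash_flows, rate):
    """Discount a list of (year, amount) cash flows back to year 0."""
    return sum(amount / (1 + rate) ** year for year, amount in cash_flows)

def pv_system_cost(install_cost, inverter_cost, replacement_years, life, rate):
    """Installation today plus discounted inverter replacements over the panel life."""
    replacements = [(y, inverter_cost) for y in replacement_years if y <= life]
    return install_cost + present_value(replacements, rate)

def lifetime_output(annual_kwh, life, rate, aging=0.01):
    """Discounted lifetime output, with production falling about 1% per year from aging."""
    flows = [(y, annual_kwh * (1 - aging) ** y) for y in range(1, life + 1)]
    return present_value(flows, rate)

# 10 kW system: $80,000 installed, $8,000 inverter replaced in years 8 and 16,
# 25-year life, 3% real discount rate; 14,000 kWh/year is an assumed output.
cost = pv_system_cost(80_000, 8_000, [8, 16], 25, 0.03)
energy = lifetime_output(14_000, 25, 0.03)
print(f"Levelized cost: ${cost / energy:.3f} per kWh")
```

A higher discount rate or a shorter assumed panel life raises the resulting cost per kWh, which is the sensitivity the essay's Table 1 is exploring.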

Sunday, October 20, 2019

Nietzsche's The Use And Abuse Of History

Nietzsche's The Use And Abuse Of History Between 1873 and 1876 Nietzsche published four "Untimely Meditations." The second of these is the essay often referred to as "The Use and Abuse of History for Life" (1874). A more accurate translation of the title, though, is "On the Uses and Disadvantages of History for Life."

The Meaning of History and Life
The two key terms in the title, "history" and "life," are used in a very broad way. By "history," Nietzsche mainly means historical knowledge of previous cultures (e.g. Greece, Rome, the Renaissance), which includes knowledge of past philosophy, literature, art, music, and so on. But he also has in mind scholarship in general, including a commitment to strict principles of scholarly or scientific method, and also a general historical self-awareness which continually places one's own time and culture in relation to others that have come before. The term "life" is not clearly defined anywhere in the essay. In one place Nietzsche describes it as "a dark driving insatiably self-desiring power," but that doesn't tell us much. What he seems to have in mind most of the time, when he speaks of "life," is something like a deep, rich, creative engagement with the world one is living in. Here, as in all his writings, the creation of an impressive culture is of prime importance to Nietzsche.

What Nietzsche Is Opposing
In the early 19th century, Hegel (1770-1831) had constructed a philosophy of history which saw the history of civilization as both the expansion of human freedom and the development of greater self-consciousness regarding the nature and meaning of history. Hegel's own philosophy represents the highest stage yet achieved in humanity's self-understanding. After Hegel, it was generally accepted that a knowledge of the past is a good thing. In fact, the nineteenth century prided itself on being more historically informed than any previous age. Nietzsche, however, as he loves to do, calls this widespread belief into question. He identifies three approaches to history: the monumental, the antiquarian, and the critical. Each can be used in a good way, but each has its dangers.

Monumental History
Monumental history focuses on examples of human greatness, individuals who "magnify the concept of man... giving it a more beautiful content." Nietzsche doesn't name names, but he presumably means people like Moses, Jesus, Pericles, Socrates, Caesar, Leonardo, Goethe, Beethoven, and Napoleon. One thing that all great individuals have in common is a cavalier willingness to risk their life and material well-being. Such individuals can inspire us to reach for greatness ourselves. They are an antidote to world-weariness. But monumental history carries certain dangers. When we view these past figures as inspirational, we may distort history by overlooking the unique circumstances that gave rise to them. It is quite likely that no such figure could arise again since those circumstances will never occur again. Another danger lies in the way some people treat the great achievements of the past (e.g. Greek tragedy, Renaissance painting) as canonical. They are viewed as providing a paradigm that contemporary art should not challenge or deviate from. When used in this way, monumental history can block the path to new and original cultural achievements.

Antiquarian History
Antiquarian history refers to the scholarly immersion in some past period or past culture. This is the approach to history especially typical of academics. It can be valuable when it helps to enhance our sense of cultural identity. E.g. when contemporary poets acquire a deep understanding of the poetic tradition to which they belong, this enriches their own work. They experience "the contentment of a tree with its roots." But this approach also has potential drawbacks. Too much immersion in the past easily leads to an undiscriminating fascination with and reverence for anything that is old, regardless of whether it is genuinely admirable or interesting. Antiquarian history easily degenerates into mere scholarliness, where the purpose of doing history has long been forgotten. And the reverence for the past it encourages can inhibit originality. The cultural products of the past are seen as so wonderful that we can simply rest content with them and not try to create anything new.

Critical History
Critical history is almost the opposite of antiquarian history. Instead of revering the past, one rejects it as part of the process of creating something new. E.g. original artistic movements are often very critical of the styles they replace (the way Romantic poets rejected the artificial diction of 18th-century poets). The danger here, though, is that we will be unfair to the past. In particular, we will fail to see how those very elements in past cultures that we despise were necessary; that they were among the elements that gave birth to us.

The Problems Caused by Too Much Historical Knowledge
In Nietzsche's view, his culture (and he would probably say ours too) has become bloated with too much knowledge. And this explosion of knowledge is not serving "life"; that is, it is not leading to a richer, more vibrant, contemporary culture. On the contrary, scholars obsess over methodology and sophisticated analysis. In doing so, they lose sight of the real purpose of their work. Always, what matters most isn't whether their methodology is sound, but whether what they are doing serves to enrich contemporary life and culture. Very often, instead of trying to be creative and original, educated people simply immerse themselves in relatively dry scholarly activity. The result is that instead of having a living culture, we have merely a knowledge of culture. Instead of really experiencing things, we take up a detached, scholarly attitude to them. One might think here, for instance, of the difference between being transported by a painting or a musical composition, and noticing how it reflects certain influences from previous artists or composers. Halfway through the essay, Nietzsche identifies five specific disadvantages of having too much historical knowledge. The rest of the essay is mainly an elaboration on these points. The five drawbacks are: (1) It creates too much of a contrast between what's going on in people's minds and the way they live. E.g. philosophers who immerse themselves in Stoicism no longer live like Stoics; they just live like everyone else. The philosophy is purely theoretical, not something to be lived. (2) It makes us think we are more just than previous ages. We tend to look back on previous periods as inferior to us in various ways, especially, perhaps, in the area of morality. Modern historians pride themselves on their objectivity. But the best kind of history isn't the kind that is scrupulously objective in a dry scholarly sense. The best historians work like artists to bring a previous age to life. (3) It disrupts the instincts and hinders mature development. In supporting this idea, Nietzsche especially complains about the way modern scholars cram themselves too quickly with too much knowledge. The result is that they lose profundity. Extreme specialization, another feature of modern scholarship, leads them away from wisdom, which requires a broader view of things. (4) It makes us think of ourselves as inferior imitators of our predecessors. (5) It leads to irony and to cynicism. In explaining points 4 and 5, Nietzsche embarks on a sustained critique of Hegelianism. The essay concludes with him expressing a hope in "youth," by which he seems to mean those who have not yet been deformed by too much education.

In the Background - Richard Wagner
Nietzsche does not mention in this essay his friend at the time, the composer Richard Wagner. But in drawing the contrast between those who merely know about culture and those who are creatively engaged with culture, he almost certainly had Wagner in mind as an exemplar of the latter type. Nietzsche was working at the time as a professor at the University of Basle in Switzerland. Basle represented historical scholarship. Whenever he could, he would take the train to Lucerne to visit Wagner, who at the time was composing his four-opera Ring Cycle. Wagner's house at Tribschen represented life. Wagner, the creative genius who was also a man of action, fully engaged in the world and working hard to regenerate German culture through his operas, exemplified how one could use the past (Greek tragedy, Nordic legends, Romantic classical music) in a healthy way to create something new.

Saturday, October 19, 2019

Can the war on terror be understood in terms of realism Essay

Can the war on terror be understood in terms of realism - Essay Example This shouldn't be startling, as they all have a common origin, and considering the complexity of the subject, it deserves this literary stretch. If Marxism and Christianity can have various interpretations, so can realism. John J. Mearsheimer, a professor at the University of Chicago, was once asked what realism says about terrorism, and his answer was, 'Not a whole heck of a lot. Realism... is really about relations among states, especially among the great powers... Realism doesn't have much to say about the causes of terrorism'1. It would be better to take a look at the various theories of realism before going into the details of the war on terror. Realism is a 'philosophical disposition' in the words of Robert Gilpin; many commentators have considered it a general orientation rather than a set of explicit rules. According to Ferguson and Mansbach it is a set of normative emphases that shape theory, and in the words of Edward Garnett an 'attitude of mind' with distinct and perceptible flavours. Sandra Rosenthal considers realism a 'loose framework', while Colin Elman considers it a 'big tent' with room to accommodate a number of theories and notions2. Realism is a methodology for understanding international relations, and many scholars and thinkers have placed themselves within the growth of this method, which is why it has been delimited in its projection and analysis. It is a little difficult to enclose terrorism or the 'war on terror' within one definition of realism. Terrorism is a concept that has reached all corners of the earth. No place on this planet is oblivious to this concept, especially after the sad September 11 attacks on the twin towers in the U.S. and America's 'war on terror'. This phrase has redefined domestic conflicts and territorial skirmishes. Prior to the 9/11 attacks and the alleged 'war on terror', terrorist groups were indigenous to their terrains, but now, just like international trade and

Friday, October 18, 2019

President Barack Obama Research Paper Example | Topics and Well Written Essays - 1250 words

President Barack Obama - Research Paper Example Obama achieved national attention during his time as a senator. Encouraged by his popularity and determination to excel, Obama ran for the Democratic presidential candidacy for the 2008 presidential election and won the party nomination, beating Hillary Rodham Clinton. He went on to win the presidential election and was sworn in as president on 20th January 2009. Later that year Obama also achieved the honor of being the Nobel Peace Prize Laureate of 2009 (Gormley). As a patriot and loyal countryman, Obama had a very lucid vision for his country which he wanted his fellow countrymen to adopt: the vision of unity, accord, and the common good (Price). In his address to the Democratic National Convention in 2004 he said: There's not a liberal America or a conservative America, there's the United States of America. There's not a Black America and White America and Latino America and Asian America, there's the United States of America. We are one people, all of us pledging allegiance to the Stars and Stripes, all of us defending the United States of America. Obama had to face many challenges soon after assuming the office of President of the United States, the most significant of which were the dwindling economy and an effectual exit from the wars in Afghanistan and Iraq. The American Recovery and Reinvestment Act; the Tax Relief, Unemployment Insurance Reauthorization and Job Creation Act; the Patient Protection and Affordable Care Act; the Dodd-Frank Wall Street Reform and Consumer Protection Act; and the Budget Control Act are some of the most important legislative measures taken by the Obama administration, and they are deemed very effective and successful in addressing worsening economic conditions and social welfare. Obama achieved remarkable success in foreign policy in the form of the death of Al-Qaeda leader Osama bin Laden after a successful military operation in a mountain town in Pakistan, the ouster of the undemocratic and tyrannical government of Muammar Gaddafi in Libya, and the strategic exit of forces from Iraq and Afghanistan, a process which is still underway. Gaining confidence from the success of his domestic and foreign policies, Obama declared his intention to run for re-election in the 2012 presidential election (Horn). Obama was born in Hawaii to parents of ethnically diverse descent; his father was an African from Kenya while his mother was of Irish descent. Obama had to struggle to understand his multiracial origin during his childhood. To further complicate things, after getting divorced his mother remarried an Indonesian student and had to move to Indonesia when Suharto, a military leader of Indonesia, called back all Indonesian students from foreign universities; that is why Obama spent his early childhood in Indonesia and, owing to this, is very popular there. Obama came back to the US and lived with his maternal grandparents in Hawaii, where he attended high school. Being brought up with a multiracial background was not very easy for Obama to understand as a child, because such backgrounds were not very common in the US at that time; however, it gave him the expansive vision he practiced in his political life and enabled him to understand the aspirations of people from diverse cultural and ethnic backgrounds (Hill). Being an African American

The Movie Crash Essay Example | Topics and Well Written Essays - 1000 words

The Movie Crash - Essay Example The movie avoids conversations leading directly to the topic of racism; the author and journalist Jeff Chang describes the practice as an "abomination to Hollywood subsequent to 9/11." When an individual watches the movie for the first time, elements of race and prejudice appear evidently present. As much as the movie constantly talks about the issue of shoving racism aside, it ends up contradicting its message, as it constantly voices the advantages and superiority that whites enjoy in the film (Haggis et al, 24). The whites in Crash tend to enjoy superior positions in both social class and economic settings. White characters such as Jean and Rick Cabot, played by Sandra Bullock and Brendan Fraser respectively, appear as prominent L.A. socialites; characters like Rick work as the District Attorney of Los Angeles (Haggis et al, 124). The society in the film includes wealthy black producers like Cameron Thayer (Terrence Howard), who despite his wealthy status experiences social insecurity. Tony Danza's character, a white television producer, tells Thayer, who is black, to ensure that one of his actors brings out a "more black" personality, as the character must appear "as the dumb one." Characters like John Ryan, played by Matt Dillon, and Tom Hansen, played by Ryan Phillippe, appear as police officers in the Los Angeles Police Department. The film depicts no white character struggling with financial discomfort, while most of the other characters appear impoverished or socially defenseless (Haggis et al, 110). Michael Pena acts as Daniel, a young Hispanic family man with a young daughter, who appears working class. The daughter sleeps under the bed after hearing gunshots, which scared her; her fears stem from an incident when a bullet penetrated her room in the old house the family had just vacated, and Daniel later comments that the neighborhood had become insecure (Haggis et al, 124). He works for a 24-hour locksmith and leaves for a call at Jean and Rick Cabot's residence, where he crosses paths with two youthful Black car thieves named Anthony (Ludacris) and Peter (Larenz Tate). Jean demands that the locks be changed, as she believes that Daniel might give the key to his supposed friends, whom she assumes to be gang members. Daniel overhears the statement Jean makes while he is down the hallway. Daniel is also seen fixing the back door of Farhad's convenience store. Daniel tells the store owner that the door requires replacement; the idea disturbs Farhad, the older Persian shopkeeper, who misinterprets Daniel's advice. Daniel and Farhad end up shouting at each other, and Farhad calls Daniel a fraud (Haggis et al, 111). There is also Officer Ryan, who encounters a situation where he must contend with racism; it originates from his father's decline in life. Ryan appears underprivileged, in that he struggles to obtain basic items including food and shelter. These appear as the movie's demonstrations which, even if slightly exaggerated, display how wealthy individuals live in the States (Haggis et al, 122). By contrast, the whites enjoy being a distinct group comprising of Americans

Thursday, October 17, 2019

Naturalistic Philosophy Essay Example | Topics and Well Written Essays - 2000 words

Naturalistic Philosophy - Essay Example The development of plot and characters showed the degree of control that man had over his destiny. American naturalists held that the power of outside forces is what limited humanity's freedom of choice; to them, individuals had no real choice, since their lives were dictated only by heredity and the external environment. To naturalists, humanity was helpless and wholly dependent on nature's favors. American naturalism reached its peak around the turn of the twentieth century, and Charles Darwin's theory of evolution also played a great role. Malcolm Cowley states that the years between the First and Second World Wars were a flourishing time for American writers: American literature had attained a new maturity and an abundant diversity, marked by the publication of several works. It was at this time that memorable works were published; though not all were up to standard, an excellent number became influential and were later criticized. Most novels written around this time were largely based on the war that had just ended. It was only by means of civil war that the young country could achieve both unification and peace. Stephen Crane's The Red Badge of Courage offers a vivid description of fighting in the civil war that ended up leading the country to victory. Novels about the war have been among the most reliable ways of writing about war life; some were permeated with protest, and therefore they were generally named war books. In the history of America, war writings are considered to make up a large portion of all books put together. It was around this time that Stephen Crane's The Red Badge was written and published for the first time. Its setting is the battlefield, and Crane attempted to explain and draw a picture of what was happening during the war and in the lives of the soldiers.

Assignment #1 Essay Example | Topics and Well Written Essays - 500 words

Assignment #1 - Essay Example The other major concern shown in the article is the more nefarious interests of the Western powers, which deliberately foster hatred and doubt among the people of poor countries like Rwanda so as to weaken their power and use them for their own vested interests. Waal has shown the vicious side of powerful countries that do not hesitate to become the root cause of the genocide of people who belong to a different race, color, nation, or ethnicity. Since anthropology primarily studies the evolving cultural values that influence human behavior through the ages, applying anthropology to understanding changing societal paradigms would greatly facilitate improving interpersonal relationships in an organization, leading to improved performance. By analyzing the changing dynamics within the tribes of Rwanda and studying the psychology of Western culture, one would be better equipped to apply anthropological paradigms and disseminate information regarding cross-cultural values. People across the globe need to develop a better understanding of cross-cultural values. The article by Bourgois discusses the problems that have arisen from the migration of people across borders. It is one of the most sensitive issues of contemporary times. The changing dynamics of societal norms across the globe have resulted in the huge migration of people from one country to another. The social problems arising from the newly emerging social fabric are widespread; the most important are employment opportunities, housing, and medical facilities for the new migrant labor. The author asserts that the state has not been able to meet the challenges of the times, and the marginalized population is often poorly paid, which makes a mockery of so-called social integration. This segment then becomes vulnerable and gets caught in the vicious cycle of drugs, prostitution and other


Tuesday, October 15, 2019

Developing Potentially Highly Profitable New Systems Technologies Essay

Developing Potentially Highly Profitable New Systems Technologies - Essay Example New technology has emerged in business to improve buyers' access to important and critical information. The advantages and disadvantages experienced by competitors that have already adopted the new technologies are observed, and important lessons are drawn from them; the success rate of adding new technologies to a business is critically assessed by looking at other companies. Success factors come in both tangible and intangible forms, and intangible benefits are harder to measure; their results are sometimes long term, such as improvements in operational efficiency and in customer decision making (Turban and Volonino).

Electronic Business (E-business). Electronic business (e-business) is business conducted over online networks and the Internet. It provides channels among customers, supply-chain partners, employees, and other concerned parties. The firm needs to develop e-business as a new technology, and performance measures such as incentives and different operating models are applied to promote the business (Turban, Volonino and Wood, 157). A basic demand of e-business is that the website be maintained regularly. Business-to-business (B2B) sites may have many weak points that must be resolved to improve the performance of the e-business. Following these important factors and focusing on performance measures have a positive impact on the e-business, and the firm benefits from it. The emergence of information technology has improved the productivity of the firm's products. Consumer demands are fulfilled more easily, so it is also important to make the e-business fully secure and reliable (Turban and Volonino). The intangible benefits for a multinational firm are the soft profits it derives from its website. Not only the Web servers but also the e-commerce software and databases need to respond quickly and accurately. Fewer web issues push the business toward success and result in customer satisfaction, which is a great intangible profit for the multinational firm. It also produces tangible results for the multinational firm when customers are more satisfied with the e-business and can perform business tasks easily and quickly (Turban and Volonino, 163). Fig. 1: E-commerce Model (Source: Turban, Volonino and Wood, 166). E-business drives the business toward success, and both tangible and intangible profits appear in the firm. By following models like B2C and G2C and many other strategies, a business can grow and enhance its productivity. These models are known as business markets that bring success to the business (Turban, Volonino and Wood, 156). In the B2C market, which covers national and international markets, the sellers are organizations and the buyers are individual consumers, so it is also called e-tailing (electronic retailing). Another market, C2B, is consumer based and covers transactions that consumers initiate with firms. G2C (government-to-citizens) covers services provided by government agencies to local citizens, and the business-to-government market sells different types of products and also provides services to government agencies (Hubbard).

Funding of a Project and Convincing Senior Management. Most companies shape product and process development through information technology, and increased productivity and quality improvements have been seen with the adoption of the new technology.
Many of the manufacturing companies see it as a methodology for faster product development cycles, higher-quality products, and shorter production schedules. Justifying the advantages of new technology before senior management is a matter of economic issues and the related advantages. The view is to cut the cross

Monday, October 14, 2019

Dissolved Oxygen Essay Example for Free

Dissolved Oxygen Essay Oxygen in Liquids (Dissolved Oxygen). Dissolved oxygen is the amount of oxygen dissolved in a body of water; it serves as an indication of the health of the water and of its ability to support a balanced aquatic ecosystem. Oxygen is a clear, colorless, odorless, and tasteless gas that dissolves in water; small but important amounts of it are dissolved in water.

Oxygen: aquatic life depends on it. Plants and animals depend on dissolved oxygen for survival; a lack of dissolved oxygen can force aquatic animals to leave the area quickly or face death.

Factors affecting oxygen levels include temperature, the rate of photosynthesis, the degree of light penetration (turbidity and water depth), the degree of water turbulence or wave action, and the amount of oxygen used by respiration and by the decay of organic matter.

Oxygen in the balance: dissolved oxygen levels that sit consistently between 90% and 110% of saturation, or higher, are considered healthy or good. If dissolved oxygen is below 90% of saturation, there may be large amounts of oxygen-demanding materials present.

What is dissolved oxygen in water? Dissolved oxygen in water is vital for underwater life; it is what aquatic creatures need to breathe. Why is dissolved oxygen important? Just as we need air to breathe, aquatic organisms need dissolved oxygen to respire; it is necessary for the survival of fish, invertebrates, bacteria, and underwater plants. How is dissolved oxygen measured? Dissolved oxygen concentration can be reported as milligrams per liter, as parts per million, or as percent air saturation; a small worked conversion between these reporting units is shown after the sensor descriptions below.

Polarographic cell: this cell is very similar to the galvanic cell; however, the polarographic cell has two noble-metal electrodes and requires a polarizing voltage to reduce the oxygen. The dissolved oxygen in the sample diffuses through the membrane into the electrolyte, which usually is an aqueous KCl solution. If a constant polarizing voltage (usually 0.8 V) is applied across the electrodes, the oxygen is reduced at the cathode, and the resulting current flow is proportional to the oxygen content of the electrolyte. This current flow is detected as an indication of oxygen content.

Galvanic cell: all galvanic cells consist of an electrolyte and two electrodes (Figure 8.43c). The oxygen content of the electrolyte is equalized with that of the sample. The reaction is spontaneous; no external voltage is applied. In this reaction, the cathode reduces the oxygen to hydroxide, releasing four electrons for each molecule of oxygen. These electrons cause a current to flow through the electrolyte, and the magnitude of the current flow is in proportion to the oxygen concentration in the electrolyte.

Flow-through cells: in flow-through cells, the process sample stream is bubbled through the electrolyte. The oxygen concentration of the electrolyte is therefore in equilibrium with the sample's oxygen content, and the resulting ion current between the electrodes is representative of this concentration. These cells are usually provided with sampling systems consisting of (but not limited to) filtering and scrubbing components and flow, pressure, and temperature regulators.

Thallium cell: thallium cells are somewhat unique in their operating principle and cannot be classified as either galvanic or polarographic cells, although they are of the electrochemical type. One thallium-electrode cell design is somewhat similar in appearance to the unit illustrated in Figure 8.43c, except that it has no membrane or electrolyte. This cell has a thallium outer-ring electrode and an inner reference electrode. When oxygen contacts the thallium, the potential developed by the cell is a function of the thallous ion concentration at the face of the electrode, and that ion concentration is in proportion to the concentration of dissolved oxygen.
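Because a reading can be reported either as a concentration or as percent air saturation, a short conversion helps tie the two units together. The sketch below is a minimal, illustrative Python example: the lookup table holds rounded, textbook-style approximations of freshwater saturation values at sea level (not measured data), and the function name is an assumption rather than anything standard.

    # Approximate saturation DO of fresh water at sea level (mg/L), keyed by temperature in deg C.
    # These are rounded, illustrative figures; a real instrument would use full standard tables.
    SATURATION_MG_PER_L = {0: 14.6, 5: 12.8, 10: 11.3, 15: 10.1, 20: 9.1, 25: 8.3, 30: 7.6}

    def percent_saturation(measured_mg_per_l, temperature_c):
        """Convert a DO reading in mg/L to percent air saturation at the given temperature."""
        nearest = min(SATURATION_MG_PER_L, key=lambda t: abs(t - temperature_c))
        return 100.0 * measured_mg_per_l / SATURATION_MG_PER_L[nearest]

    # Example: 8.2 mg/L at 20 deg C is roughly 90% saturation.
    print(round(percent_saturation(8.2, 20), 1))

A reading of about 8.2 mg/L at 20 °C therefore sits right at the lower edge of the 90-110% band described above as healthy.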
Fluorescence-based sensor: in this design, a compound containing ruthenium is immobilized in a gas-permeable matrix called a sol-gel. Sol-gels are very low-density, silica-based matrices suitable for immobilizing chemical compounds such as the ruthenium compound used in this measurement technique; effectively, the sol-gel is equivalent to the membrane in a conventional DO sensor. Using fiber optics, light from a light-emitting diode is transferred to the backside of the sol-gel coating. The emitted fluorescence is collected from the backside of the sol-gel with another optical fiber, and its intensity is detected by a photodiode. A simplified sensor design is shown in Figure 8.43g. If no oxygen is present, the intensity of the emitted light will be at its maximum value; if oxygen is present, the fluorescence is quenched and the emitted intensity decreases.

Winkler titration: the Winkler method is a technique used to measure dissolved oxygen in freshwater systems. Dissolved oxygen is used as an indicator of the health of a water body, where higher dissolved oxygen concentrations are correlated with high productivity and little pollution. Readings are also influenced by temperature effects, pressure effects, and salinity effects.

Biochemical Oxygen Demand (BOD): biological oxygen demand is a measure of the oxygen used by microorganisms to decompose waste. If there is a large quantity of organic waste in the water supply, there will also be a lot of bacteria present working to decompose this waste; in this case the demand for oxygen will be high (due to all the bacteria), so the BOD level will be high. As the waste is consumed or dispersed through the water, BOD levels begin to decline. Biochemical oxygen demand is a measure of the quantity of oxygen required for the biodegradation of organic matter (carbonaceous demand) in water. It can also indicate the amount of oxygen used to oxidize reduced forms of nitrogen (nitrogenous demand), unless their oxidation is prevented by an inhibitor. A test is used to measure the amount of oxygen consumed by these organisms during a specified period of time (usually 5 days at 20 °C).

Classification: BOD is divided into two parts, carbonaceous oxygen demand and nitrogenous oxygen demand. Carbonaceous oxygen demand is the amount of oxygen consumed by the microorganisms while decomposing carbohydrate material; nitrogenous oxygen demand is the amount of oxygen consumed by the microorganisms while decomposing nitrogenous materials.

Relationship of DO and BOD: if the dissolved oxygen (DO) of a body of water is high, the biological oxygen demand (BOD) is low; if the BOD of the water is high, the DO is low. DO and BOD are therefore inversely proportional to each other.

Why do we need to know BOD? BOD directly affects the amount of dissolved oxygen in rivers and streams. The greater the BOD, the more rapidly oxygen is depleted in the stream, which means less oxygen is available to higher forms of aquatic life. The consequences of high BOD are the same as those of low dissolved oxygen: aquatic organisms become stressed, suffocate, and die. Knowledge of the oxygen utilization of a polluted water supply is important because:
1. It is a measure of the pollution load relative to oxygen utilization by other life in the water; 2. It is a means of predicting the progress of aerobic decomposition and the amount of self-purification taking place; 3. It is a measure of the oxygen-demand load removal efficiency of different treatment processes.

Factors that contribute to variations in BOD. The seed: the seed is the bacterial culture that carries out the oxidation of materials in the sample; if the biological seed is not acclimated to the particular wastewater, erroneous results are frequently obtained. pH: BOD results are also greatly affected by the pH of the sample, especially if it is lower than 6.5 or higher than 8.3; in order to achieve uniform conditions, the sample should be buffered to a pH of about 7. Temperature: the standard test condition calls for a temperature of 20 °C (68 °F); field tests often require operation at other temperatures and, consequently, the results tend to vary unless temperature corrections are applied. Toxicity: the presence of toxic materials may result in an increase in the BOD value as a specific sample is diluted for the BOD test; consistent values may be obtained either by removing the toxic materials from the sample or by developing a seed that is compatible with the toxic material in the sample. Incubation time: the usual standard laboratory incubation time is 5 days, and the result may fall on a flat part of the oxygen-uptake curve or on a steeply rising portion; depending on the type of seed and the type of oxidizable material, divergent results can be expected. Nitrification: in the usual BOD test, oxygen consumption rises steeply at the beginning of the test owing to attack on carbohydrate materials; another sharp increase in oxygen utilization occurs sometime during the 10th to 15th day in samples containing nitrogenous materials.

How do we determine or measure BOD? Five-day BOD procedure: the BOD test takes 5 days to complete and is performed using a dissolved oxygen test kit. The BOD level is determined by comparing the DO level of a water sample taken immediately with the DO level of a water sample that has been incubated in a dark location for 5 days. The difference between the two DO levels represents the amount of oxygen required for the decomposition of any organic material in the sample and is a good approximation of the BOD level. Test procedure: 1. Take two samples of water. 2. Record the DO level (ppm) of one immediately, using the method described in the dissolved oxygen test. 3. Place the second water sample in an incubator in complete darkness at 20 °C for 5 days; if you don't have an incubator, wrap the sample bottle in aluminum foil or black electrical tape and store it in a dark place at room temperature (20 °C or 68 °F). 4. After 5 days, take another dissolved oxygen reading (ppm) using the dissolved oxygen test kit. 5. Subtract the Day 5 reading from the Day 1 reading to determine the BOD level, and record the final BOD result in ppm. Note: generally, when BOD levels are high, there is a decline in DO levels, because the demand for oxygen by the bacteria is high and they take that oxygen from the oxygen dissolved in the water. If there is no organic waste present in the water, there won't be as many bacteria present to decompose it, and thus the BOD will tend to be lower and the DO level will tend to be higher. At high BOD levels, organisms that are more tolerant of lower dissolved oxygen, such as some macroinvertebrates, may appear and become numerous, while organisms that need higher oxygen levels will not survive.
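The arithmetic behind step 5 of the five-day procedure is simple enough to capture in a few lines. The sketch below is a minimal illustration of that subtraction; the optional dilution-factor argument is an assumption added for diluted laboratory samples, which the simple kit procedure above does not require.

    def bod5(day1_do_ppm, day5_do_ppm, dilution_factor=1.0):
        """Five-day BOD: oxygen consumed during 5 days of dark incubation at 20 deg C.

        dilution_factor stays 1.0 for the undiluted kit procedure described above; it is
        included only as an assumed extension for samples diluted before incubation.
        """
        depletion = day1_do_ppm - day5_do_ppm
        if depletion < 0:
            raise ValueError("Day 5 DO should not exceed Day 1 DO in a valid test")
        return depletion * dilution_factor

    # Example: an initial reading of 8.5 ppm and a Day 5 reading of 4.3 ppm
    # gives a BOD of about 4.2 ppm.
    print(round(bod5(8.5, 4.3), 1))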
Extended BOD test: continuing the BOD test beyond 5 days shows a continuing oxygen demand, with a sharp increase in the BOD rate at about the 10th day owing to nitrification. The latter process involves biological attack on nitrogenous organic material accompanied by an increase in the BOD rate; the oxygen demand then continues at a uniform rate for an extended time.

Manometric BOD test: in the manometric procedure, the seeded sample is confined in a closed system that includes an appreciable amount of air. As the oxygen in the water is depleted, it is replenished from the gas phase. A potassium hydroxide (KOH) absorber within the system removes any gaseous carbon dioxide generated by bacterial action. The oxygen removed from the air phase results in a drop in pressure that is measured with a manometer, and this fall is then related to the BOD of the sample.

Electrolysis system for BOD: the measuring principle of all electrolytic respirometers is quite similar. As microorganisms respire they use oxygen, converting the organic carbon in the solution to CO2 gas, which is absorbed by alkali. This causes a reduction in the gas pressure, which can be sensed with various sensors or membranes. A small current is passed through the electrolysis cell; this generates oxidation/reduction reactions, and oxygen is formed at the anode. Electrolysis of water can therefore supply oxygen to the closed system as incubation proceeds. At constant current, the time during which electrolysis must generate oxygen to keep the system pressure constant is a direct measure of the oxygen demand, and the amount of oxygen produced by the electrolysis correlates with the amount of oxygen consumed by the bacteria.

Chemical Oxygen Demand (COD): COD is the standard method for indirectly measuring the amount of pollution in a water sample, including material that cannot be oxidized biologically. It is based on the chemical decomposition of organic and inorganic contaminants, dissolved or suspended in water. Why measure chemical oxygen demand? It is often measured as a rapid indicator of organic pollutants in water. It is normally measured in both municipal and industrial wastewater treatment plants and gives an indication of the efficiency of the treatment process; it is measured on both influent and effluent water.

Standard dichromate COD procedure: a sample is heated to its boiling point with known amounts of sulfuric acid and potassium dichromate, with the loss of water minimized by a reflux condenser. After 2 h the solution is cooled, and the amount of dichromate that reacted with oxidizable material in the water sample is determined by titrating the excess potassium dichromate with ferrous sulfate. The dichromate consumed is converted to its oxygen equivalent for the sample and stated as milligrams of oxygen per liter of sample (mg/L); a worked example of this calculation is given below.

Factors preventing the concordance of BOD values with COD values: many organic materials are oxidizable by dichromate but not biochemically oxidizable, and vice versa; for example, pyridine, benzene, and ammonia are not attacked by the dichromate procedure. A number of inorganic substances, such as sulfides, sulfites, thiosulfates, nitrites, and ferrous iron, are oxidized by dichromate, creating an inorganic COD that is misleading when estimating the organic content of wastewater. Also, although the factor of seed acclimation will give erroneously low results in BOD tests, COD results do not depend on acclimation. Chlorides interfere with the COD analysis as well, and their effect must be minimized in order to obtain consistent results. The standard procedure provides for only a limited amount of chloride in the sample; this is usually accomplished by diluting the sample to achieve a lower chloride concentration and less interference. That can be a problem for samples with low COD concentrations, as the dilution may bring the COD concentration below the detection level or to levels at which accuracy and repeatability are poor.
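Converting the titration result of the dichromate procedure into a COD figure is a short calculation. The sketch below shows a commonly used form of it, under the assumption that the excess dichromate is back-titrated with ferrous ammonium sulfate (FAS) of known normality; the function name and the example numbers are illustrative, not taken from the essay.

    def cod_mg_per_l(blank_titrant_ml, sample_titrant_ml, fas_normality, sample_volume_ml):
        """COD from a dichromate reflux followed by back-titration of the excess with FAS.

        (blank - sample) is the titrant equivalent of the dichromate consumed by the sample;
        8000 is the equivalent weight of oxygen (8 g/eq) expressed in mg.
        """
        consumed_ml = blank_titrant_ml - sample_titrant_ml
        if consumed_ml < 0:
            raise ValueError("Sample titration should not exceed the blank titration")
        return consumed_ml * fas_normality * 8000.0 / sample_volume_ml

    # Illustrative numbers: blank 24.0 mL, sample 15.5 mL of 0.25 N FAS on a 20 mL sample.
    print(cod_mg_per_l(24.0, 15.5, 0.25, 20.0))  # 850.0 mg/L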
COD detector: the term COD usually refers to the laboratory dichromate oxidation procedure, although it has also been applied to other procedures that differ greatly from the dichromate method but which do involve a chemical reaction. These methods have been embodied in instruments both for manual operation in the laboratory and for automatic operation online. They have the distinct advantage of reducing analysis time from days (5-day BOD) and hours (dichromate, respirometer) to minutes.

Automatic on-line designs take a 5 cc sample from the flowing process stream and inject it into the reflux chamber after mixing it with dilution water (if any) and the reagents. One ozone-based scheme enriches the dilution water with ozone and uses two reagents, dichromate solution and sulfuric acid; the reagents also contain an oxidation catalyst (silver sulfate) and a chemical that complexes chlorides in the solution (mercuric sulfate). The mixture is boiled at 302 °F (150 °C) by the heater, and the vapors are condensed by the cooling water in the reflux condenser. During this reflux the dichromate ions are reduced to trivalent chromic ions as the oxygen-demanding organics in the sample are oxidized. The chromic ions give the solution a green color, and the COD concentration is measured by detecting the amount of dichromate converted to chromic ions, which is done by measuring the intensity of the green color through a fiber-optic detector. The microprocessor-controlled package is available with automatic zeroing, calibration, and flushing features.

Typical sampling and traditional parameter limit values are: pH, 6.0-9.0 standard units; biological oxygen demand (BOD), ≤ 30 ppm; chemical oxygen demand (COD), ≤ 200 ppm (a simple screening check against these limits is sketched at the end of this passage). COD has a larger value than BOD because the BOD measurement is based only on the decomposition of organic matter, while COD measures the decomposition of both organic and inorganic compounds.

Sources of error: the use of a nonhomogeneous sample is the largest source of error. Use volumetric flasks and volumetric pipettes with a large bore, measure the oxidizing agent precisely, make sure that the vials are clean and free of air bubbles, and always read the bottom of the meniscus at eye level.

Total Oxygen Demand (TOD): TOD is the quantitative measurement of the amount of oxygen used to burn the impurities in a liquid sample; thus, it is a direct measure of the oxygen demand of the sample. Measurement is by continuous analysis of the oxygen concentration in the combustion-process gas effluent. It is a quantitative measurement of all oxidizable material in a sample of water or wastewater, determined instrumentally by measuring the depletion of oxygen after high-temperature combustion. BOD and COD have long time cycles, and COD uses corrosive reagents with the inherent problem of disposal; TOD analysis is faster, approximately 3 min, and uses no liquid reagents. TOD can be correlated with both COD and BOD, it is unaffected by the presence of inorganic carbon, and it also indicates noncarbonaceous materials that consume or contribute oxygen, since the actual measurement is oxygen consumption; it therefore reflects the oxidation state of the chemical compounds present.
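The limit values just listed (pH 6.0-9.0, BOD ≤ 30 ppm, COD ≤ 200 ppm) lend themselves to a simple automated screening check. The sketch below is purely illustrative: the dictionary layout, the function name, and the example readings are assumptions, not part of any standard.

    # Limits taken from the values listed above: pH 6.0-9.0, BOD <= 30 ppm, COD <= 200 ppm.
    LIMITS = {"ph": (6.0, 9.0), "bod_ppm": (None, 30.0), "cod_ppm": (None, 200.0)}

    def check_sample(measurements):
        """Return human-readable violations for one set of measurements."""
        violations = []
        for name, (low, high) in LIMITS.items():
            value = measurements.get(name)
            if value is None:
                continue  # parameter not measured for this sample
            if low is not None and value < low:
                violations.append(f"{name}={value} below limit {low}")
            if high is not None and value > high:
                violations.append(f"{name}={value} above limit {high}")
        return violations

    # Example: a sample with pH 7.2, BOD 42 ppm, and COD 180 ppm fails only the BOD limit.
    print(check_sample({"ph": 7.2, "bod_ppm": 42.0, "cod_ppm": 180.0}))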
TOD analyzer: the oxidizable components in a liquid sample introduced into the combustion tube are converted to their stable oxides by a reaction that disturbs the oxygen equilibrium in the carrier gas stream. The momentary depletion in the oxygen concentration of the carrier gas is detected by an oxygen detector and recorded as a negative oxygen peak.

Sample valves. Sliding plate: upon a signal from a cycle timer, the air actuator temporarily moves the valve to its "sample fill" position. At the same time, an air-operated actuator moves a 20-µl sample through the valve into the combustion tube, and a stream of oxygen-enriched nitrogen carrier gas carries the slug of sample into the combustion tube. Rotary sampling valve: a motor continuously rotates a sampling head, which contains a built-in sampling syringe. For part of the time, the tip of the syringe is over a trough that contains the flowing sample. Two or more cam ramps along the rotational path cause the syringe plunger to rise and fall, thus rinsing the sample chamber. Just before the syringe reaches the combustion tube it picks up a 20-µl sample, and as it rotates over the combustion tube it discharges the sample.

Oxygen detectors. Platinum-lead fuel cell: the fuel cell generates a current in proportion to the oxygen content of the carrier gas passing through it. Before entering the cell, the gas is scrubbed in a potassium hydroxide solution, both to remove acid gases and other harmful combustion products and to humidify the gas. The oxygen cell and the scrubber are located in a temperature-controlled compartment. The fuel cell output is monitored and zeroed to provide a constant baseline, and the output peaks are linearly proportional to the reduction in the oxygen concentration of the carrier gas caused by the sample's TOD. Yttrium-doped zirconium oxide ceramic tube: this tube is coated on both sides with a porous layer of platinum. It is maintained at an elevated temperature and likewise provides an output that represents the reduction in the carrier gas oxygen concentration caused by the sample's TOD. The operation of these oxygen detectors involves the ionization of oxygen in both a sample stream and a known reference gas stream. When the sample and reference gas streams come into contact with the electrode surfaces, oxygen ionizes into oxide ions. The oxygen ion concentration in each stream is a function of the partial pressure of oxygen in that stream, and the potential at each electrode depends on that partial pressure. The electrode with the higher potential (higher oxygen concentration) generates oxygen ions, whereas the electrode with the lower potential (lower oxygen concentration) converts oxygen ions back to oxygen molecules.

Calibration: analysis is by comparison of peak heights or areas to a standard calibration curve. To prepare this curve, known TOD concentrations of a primary standard (KHP) are prepared in distilled and deionized water. Standard solutions are stable for several weeks at room temperature, and water solutions of other organic compounds can also be used as standards. Several analyses can be made at each calibration concentration, and the resulting data are recorded as parts per million (ppm) TOD versus peak height or area.
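Turning those recorded (peak area, ppm TOD) pairs into a working calibration is usually a straight-line fit. The sketch below is a minimal example using an ordinary least-squares line; the standard concentrations and peak areas are invented, illustrative numbers, not instrument data.

    def fit_line(xs, ys):
        """Ordinary least-squares fit y = a*x + b, suitable for a calibration curve."""
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        sxx = sum((x - mean_x) ** 2 for x in xs)
        sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        slope = sxy / sxx
        intercept = mean_y - slope * mean_x
        return slope, intercept

    # Invented calibration data: peak areas measured for KHP standards of known TOD (ppm).
    peak_areas = [120.0, 245.0, 495.0, 980.0]
    tod_ppm = [25.0, 50.0, 100.0, 200.0]
    slope, intercept = fit_line(peak_areas, tod_ppm)

    def tod_from_peak(area):
        """Read an unknown sample's TOD (ppm) off the fitted calibration line."""
        return slope * area + intercept

    print(round(tod_from_peak(600.0), 1))  # estimated TOD for an unknown peak area of 600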
Applications and correlation: many regulatory agencies recognize only BOD or COD (preferably BOD) measurements of pollution load as the basis for oxygen-depleting pollution control, because they are concerned with the pollution load on receiving waters, which is related to the lowering of DO by bacterial activity. If the other methods described here are to be used to satisfy legal requirements on the pollution load in effluents, or to measure BOD removal, it is important to establish a correlation between those methods and BOD or COD (preferably BOD). The salient features of BOD as a measurement are that it measures a property of the sample, namely the amount of oxygen required for bacterial oxidation of the bacterial food in the water; that the oxygen demand depends on the nature of that food as well as on its quantity; and that the oxygen demand also depends on the nature and amount of the bacteria. Another extensive study concluded the following: (1) a reliable statistical correlation between the BOD and COD of a wastewater and its corresponding TOD can frequently be achieved, particularly when the organic strength is high and the diversity of dissolved organic constituents is low; (2) the relationship is best described by a least-squares regression, with the degree of fit expressed by the correlation coefficient; (3) the observed correspondence of COD-TOD was better than that of COD-BOD for the wastewaters studied; (4) the BOD-COD ratio of an untreated wastewater is indicative of the biological treatment possible with that particular wastewater.

Comparison of BOD, COD, and TOD. Definition: BOD is the oxygen required when the oxidation reaction is carried out by a population of bacteria; COD is the oxygen equivalent when the oxidation is carried out with a chemical oxidizing reagent such as potassium dichromate; TOD is the oxygen equivalent when oxidation is caused by heating the sample in a furnace in the presence of a catalyst and oxygen. Analyzer: BOD analyzers utilize bacteria to oxidize the pollutants; COD is measured through chemical oxidation and catalytic combustion techniques; TOD analyzers oxidize the sample in a catalyzed thermal combustion process and detect both the organic and inorganic impurities in a sample. Response time and range: BOD, 5 days, 30 mg/L; COD, 2 hours, 250-500 ppm; TOD, 3 minutes, 100-100,000 mg/L. Inaccuracy and cost: BOD, 3-20%, $500-$20,000; COD, 2-10%, $8,000-$20,000; TOD, 2-5%, $5,000-$20,000.
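Point (2) of the study summarized above, fitting a least-squares relationship and judging it by the correlation coefficient, is easy to demonstrate. The sketch below is a minimal example that computes Pearson's r for paired COD and TOD measurements; the readings are invented purely for illustration.

    import math

    def pearson_r(xs, ys):
        """Pearson correlation coefficient between two equal-length measurement series."""
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
        sy = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
        return cov / (sx * sy)

    # Invented paired readings (mg/L) for one wastewater stream.
    cod = [210.0, 340.0, 480.0, 620.0, 900.0]
    tod = [260.0, 400.0, 560.0, 700.0, 1010.0]

    # An r close to 1 indicates the strong COD-TOD correspondence the study describes.
    print(round(pearson_r(cod, tod), 3))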

Sunday, October 13, 2019

Solving the Redundancy Allocation Problem using Tabu Search

Solving the Redundancy Allocation Problem using Tabu Search Efficiently Solving the Redundancy Allocation Problem using Tabu Search Abstract: The redundancy allocation problem is a common and extensively studied problem involving system design, reliability engineering, and operations research. There is an ever-increasing need to find efficient solutions to this reliability optimization problem because many telecommunications (and other) systems are becoming more complex while development schedules remain limited. To provide such solutions, a tabu search meta-heuristic has been developed and successfully applied. Tabu search is well suited to this problem because it has several advantages over alternative methods: it can handle more complex problem domains than mathematical programming methods, and it is more efficient than population-based search methodologies such as genetic algorithms. In this paper, tabu search is applied to three different problems and compared with integer programming and genetic algorithm solutions, and the results show that tabu search offers more benefits in solving these problems.

INTRODUCTION of Articles. The redundancy allocation problem (RAP) is a popular and complex reliability design problem that has been solved using different optimization approaches. Tabu search (TS) has advantages over the other approaches but has not previously been tested for its effectiveness on this problem. In this paper a TS implementation, called TSRAP, is used to solve the problem, and the results are compared with those of the other approaches. The RAP applies to designs that involve large assemblies, are manufactured using off-the-shelf components, and have high reliability requirements; a solution to the RAP gives the optimal combination of component selections. Mathematical programming techniques have proven successful in finding solutions to these problems. Unfortunately, such formulations impose constraints that are necessary for the optimization process but not for the actual engineering design process. Genetic algorithms have proven to be a better alternative to mathematical programming techniques and have provided excellent results. Despite this, a genetic algorithm is a population-based search requiring the evaluation of multiple prospective solutions, which is why a more efficient approach to this problem is desired. TS is an alternative to these optimization methods, such as GA: it is a simple solution technique that proceeds through successive iterations by considering neighboring moves. In this paper the TS method is used on three different problems and the results are compared with the alternative optimization methods. Unlike GA, which is population based, TS moves successively from solution to solution, which helps increase the efficiency of the method. The most commonly studied design configuration for the RAP is the series-parallel problem, in which subsystems are arranged in series and each subsystem contains components in parallel.
Nomenclature: R(t, x) = system reliability at time t, depending on x; xij = quantity of the jth available component used in subsystem i; mi = number of available components for subsystem i; s = number of subsystems; nmax,i = maximum number of components allowed in subsystem i, with ni ≤ nmax,i for all i; C(x) = system cost as a function of x; W(x) = system weight as a function of x; C, W, R = system-level constraint limits for cost, weight, and reliability; k = minimum number of operating components required for a subsystem; λij = parameter of the exponential distribution, fij(t) = λij exp(-λij t); Fj = number of feasible solutions contained on the tabu list; Tj = total number of solutions on the tabu list; ρj = feasibility ratio, ρj = Fj/Tj.

Explanation of the work presented in the journal article. The RAP can be formulated with system reliability either as the objective function or in the constraint set: Problem (P1) maximizes the system reliability, while Problem (P2) minimizes the system cost. The TS requires a tabu list of unavailable moves to be maintained as it proceeds successively from one solution to another. For the series-parallel system, the encoding is a permutation code whose size is the sum of nmax,i over the s subsystems, representing the list of components in each subsystem, including unused components. The tabu list length is reset every 20 iterations to an integer value distributed uniformly over [s, 3s] for Problem (P1) (s = 14) and over [14s, 18s] for Problem (P2) (s = 2).

TSRAP proceeds through four steps. The first step generates a feasible random initial solution: s integers are chosen from a discrete uniform distribution, representing the number of components in parallel for each subsystem. Using this procedure, a solution is produced with an average number of components per subsystem; it becomes the initial solution if it is feasible, otherwise the whole process is repeated. The second step checks the defined moves for each subsystem in the neighborhood. The TSRAP variant that allows component mixing within a subsystem defines its first move as changing the number of a particular component type by adding or subtracting one; the variant that does not allow component mixing changes the number of components by adding or subtracting one for each individual subsystem. These moves are advantageous because they do not require re-calculation of the entire system reliability. The best of the two types of moves, which are evaluated independently, is selected; since the selected move is the best one available, it is called the best move. If the resulting solution is tabu and is not better than the best solution found so far, the move is disallowed and this step is repeated; otherwise it is accepted. The third step updates the tabu list: so that the feasibility of an entry on the tabu list can be checked, the system cost and weight are stored on the list together with the subsystem structure involved in the move. The fourth and final step checks the stopping criterion, which is a maximum number of iterations without an improvement in the best feasible solution found so far. When this criterion is reached, the search is complete, and the best feasible solution found so far is the TSRAP recommended solution.
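To make the four steps above concrete, here is a minimal, generic tabu-search loop in Python. It is a sketch of the general technique only, not the authors' TSRAP code: the neighborhood generator, feasibility test, move attributes, and (possibly penalized) objective are reduced to caller-supplied placeholder functions, and the parameter names and default values are assumptions.

    def tabu_search(initial_solution, neighbors, objective, is_feasible,
                    tabu_tenure=10, max_no_improve=100):
        """Generic tabu search that maximizes a (possibly penalized) objective.

        Step 1 corresponds to the caller supplying a feasible random initial_solution;
        neighbors(solution) must yield (move_attribute, candidate_solution) pairs.
        """
        current = initial_solution
        best_feasible = current if is_feasible(current) else None
        tabu = {}  # move attribute -> iteration number until which that move stays tabu
        iteration, stall = 0, 0
        while stall < max_no_improve:                       # step 4: stopping criterion
            iteration += 1
            admissible = []
            for move, candidate in neighbors(current):      # step 2: evaluate the defined moves
                score = objective(candidate)
                aspiration = (best_feasible is not None and is_feasible(candidate)
                              and score > objective(best_feasible))
                if tabu.get(move, 0) >= iteration and not aspiration:
                    continue                                # tabu move with no aspiration: skip
                admissible.append((score, move, candidate))
            if not admissible:
                break
            score, move, current = max(admissible, key=lambda entry: entry[0])
            tabu[move] = iteration + tabu_tenure            # step 3: update the tabu list
            if is_feasible(current) and (best_feasible is None
                                         or score > objective(best_feasible)):
                best_feasible, stall = current, 0
            else:
                stall += 1
        return best_feasible

In a TSRAP-style setting, neighbors would implement the add-or-subtract-one component moves described above, and objective would return the penalized reliability for (P1) or the negated penalized cost for (P2).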
An adaptive penalty method has been developed for the problems solved by TS, since penalized searches prove to give better solutions: the objective function of an infeasible solution is penalized using a subtractive or additive penalty function. A light penalty is imposed on infeasible solutions within the NFT (near-feasible threshold) region, and solutions beyond it are heavily penalized. The penalized objective function is based on the unpenalized objective function, the degree of infeasibility, and information from the TS short-term and long-term memory. For Problem (P1) the penalized objective function is Rp(t0; x). The unpenalized system reliability of the best solution found so far is represented by Rall, and Rfeas represents the system reliability of the best feasible solution found so far. If Rall and Rfeas are equal or close in value, the search simply continues; if instead Rall is greater than Rfeas, there is difficulty in finding feasible solutions, and the penalty is made larger to steer the search into the feasible region. Similarly, for Problem (P2) the penalized objective function is Cp(x), where Call is the unpenalized (feasible or infeasible) system cost of the best solution found so far and Cfeas is the system cost of the best feasible solution found so far.

Discussion of Contributions. The most important contribution is that, as a result of this paper, tabu search is shown to be a more efficient method than both the mathematical programming techniques and the genetic algorithms. The penalization method that was used also proved to give better results. As a result of this paper, complex problem domains can now be optimized better using tabu search, and it becomes clear that TSRAP performs better and is more efficient than GA, even though the two procedures are quite similar. Given the short schedules available for finding optimal solutions to complex redundancy allocation problems, tabu search is found to be the most efficient approach.

Discussion of Deficiency and Potential Improvements. Although a previously unexploited approach to finding the optimal solution has been tried and shown to be efficient, there is room for future work. The TS approach used in this paper is rather simple, in that several features that could have been incorporated were not. Features that are normally used, such as candidate lists and long-term memory strategies, which often prove to be more effective, were not employed. The use of these features could make the method more efficient on complex problems, so there are opportunities for improved effectiveness and efficiency by adding them to the TS devised here.

Summary. TS has previously been demonstrated to be a successful optimization approach for many diverse problem domains. As a result of this paper, the TS approach has now been tried and shown to be a more efficient approach to the complex problem domain of the redundancy allocation problem. The use of a penalty function in this research has promoted search in the infeasible region by adapting the NFT. TS has been tested on three different problems and has provided more efficient results than the alternative methods; in particular, it produces better results than the genetic algorithm method. Even so, features such as candidate lists and long-term memory strategies could have proved to be more effective in complex problem domains.
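As a closing illustration of the adaptive penalty idea described at the start of this article, here is one plausible shape of an NFT-style penalty for a maximization problem like (P1). It is an assumption-laden reconstruction, not the authors' actual formula: the quadratic exponent, the use of the Rall - Rfeas gap as an adaptive weight, the small constant, and the function and parameter names are all invented for illustration.

    def penalized_reliability(reliability, infeasibility, r_feas, r_all, nft, exponent=2.0):
        """Hypothetical NFT-style subtractive penalty for a maximization problem like (P1).

        reliability   : unpenalized system reliability of the candidate solution
        infeasibility : amount by which the candidate violates a constraint (0 if feasible)
        r_feas, r_all : reliabilities of the best feasible and best overall solutions so far
        nft           : near-feasible threshold; violations inside it are penalized lightly
        """
        if infeasibility <= 0:
            return reliability                # feasible solutions are left unpenalized
        # When the best overall solution is infeasible and better than the best feasible one
        # (r_all > r_feas), the search is drifting away from feasibility, so the penalty is
        # made larger to pull it back toward the feasible region.
        adaptive_weight = max(r_all - r_feas, 0.0) + 0.01
        return reliability - adaptive_weight * (infeasibility / nft) ** exponent

    # A violation well inside the NFT is penalized lightly; one far outside it, heavily.
    print(penalized_reliability(0.95, 0.2, r_feas=0.90, r_all=0.93, nft=1.0))
    print(penalized_reliability(0.95, 3.0, r_feas=0.90, r_all=0.93, nft=1.0))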

Saturday, October 12, 2019

Shooting in Football :: Papers

Shooting in Football

How to shoot: kicking is the basis of football (soccer). There are two types of shots - ground and air.

Ground shots
· On ground shots the supporting (non-kicking) leg is more important than the kicking leg. In order to produce a good shot you'll need balance. The right way to keep your balance is to place your supporting foot in line with the ball. By stepping a little behind it you will produce a high kick (most young players who are not taught how to shoot do not know about keeping the supporting foot in line with the ball, and when they try to kick hard the ball always rises).
· The second important thing in ground shooting is that, in order to get maximum power into a shot, the knee of your kicking leg has to be above the ball at the moment your foot and the ball touch.
· The follow-through is the swing of your leg after you've touched the ball. You should follow through in the direction of your aim. If you have trouble understanding this concept, try landing on your kicking foot, or think about bringing the knee of your kicking leg up toward your opposite shoulder after you kick the ball.

Air shots
· On air shots you have to adjust to the flight of the ball by moving your legs very quickly with short steps. Once you decide that you're in the right spot, you swing at the ball.
· Some shots require jumping. Be very careful when doing so and time your jump, just as when taking a header.

How to practice shooting: first practice your technique, then add accuracy, and only then worry about power. Start by shooting a stationary ball and then add one or more of the elements below.

Ground shots
· Shooting a ball at a goal
· Shooting from a lower angle
· Shooting a moving ball
· Shooting while turning in the direction of the goal