Monday, September 30, 2019

Food Critique History Essay

Food history is an interdisciplinary field that examines the history of food and the cultural, economic, environmental, and sociological impacts of food. It is considered distinct from the more traditional field of culinary history, which focuses on the origin and recreation of specific recipes. Food historians regard food as one of the most important elements of culture, reflecting the social and economic structure of society. Food history is a young discipline, until recently considered a fringe one; the first journal in the field, Petits Propos Culinaires, was launched in 1979. Food & History is the biannual, multilingual (French, English, German, Italian, and Spanish) scientific review of the Institut Europeen d'Histoire et des Cultures de l'Alimentation / European Institute for the History and Culture of Food (IEHCA) in Tours, France. Founded in 2003, it is the first journal in Europe, in both vocation and concept, specialised in the field of food history. Food & History aims at presenting, promoting, and diffusing research that focuses on alimentation from a historical and/or cultural perspective. The journal studies food history (from prehistory to the present), food archaeology, and food culture from different points of view. It embraces social, economic, religious, political, agronomical, and cultural aspects of food and nutrition. It deals with questions of food consumption, production, and distribution; with alimentation theories and practices (medical aspects included); with food-related paraphernalia and infrastructures; and with culinary practices, gastronomy, and restaurants.
Positioned at the crossroads of the humanities and social sciences, the review deliberately promotes interdisciplinary research approaches. Although most contributions concern European food history, the journal also welcomes articles on other food cultures. Food & History is a fully fledged academic journal which applies the usual methodological instruments for assessing incoming articles, i.e. a double-blind reviewing process by external referees recruited from a large and ever-growing intercontinental pool of experts in the field of social and cultural food studies. Food & History belongs to a shrinking group of journals which openly express their European and international character by accepting manuscripts in five European languages (English, French, Spanish, Italian, and German). The journal is officially recognised by the Institut des Sciences Humaines et Sociales of the CNRS (Centre National de la Recherche Scientifique)[1] and is indexed by the European Reference Index for the Humanities (ERIH) of the European Science Foundation (History category B)[2]. Food & History is published thanks to financial support from the Ministere de l'Education nationale, the Ministere de l'enseignement superieur et de la recherche, the Universite Francois-Rabelais de Tours, and the Conseil Regional du Centre. Food & History was created by a network of academic researchers and students, with the help of the French Ministry for National Education and the University of Tours.
The launch of Food & History was on the one hand a logical fruit of the foundation of the European Institute for the History of Food in December 2000 in Strasbourg (redefined in 2005 as the European Institute for the History and Culture of Food), and on the other hand a clear manifestation of the gradual breakthrough of social and cultural food studies as an independent field of research in the first decade of the 21st century. The emergence of this sub-discipline had, of course, been anticipated in an impressive record of food-related research conducted by scholars from adjacent fields, such as economic history, agricultural history, and history of the body. However, the scholars behind these pioneering works were generally operating on a rather individual basis, and they would not have defined themselves as food historians. It was only with the foundation of the journal Food and Foodways in 1986 and of the International Commission for Research into European Food History (ICREFH) by Hans-Jürgen Teuteberg in Münster in 1989 that a first infrastructural framework for social and cultural food studies was provided. In the decades around the turn of the century, many new food-related research initiatives became visible, demonstrating the vitality of this research area. In 1997, the Department of History at the University of Adelaide established a Research Centre for the History of Food and Drink. In 2001, a new web journal, The Anthropology of Food, was launched, and in 2004 the American Association for the Study of Food and Society re-launched a journal entitled Food, Culture and Society.
Around the turn of the century, due amongst other things to new appointments to the editorial board, the research interest of the journal Food and Foodways changed in a twofold sense: on the one hand "it shifted away from familiar disciplines (history, sociology, ethnology) toward 'unexpected' ones (communication sciences, linguistics, tourism)", and on the other hand it became increasingly dominated by Anglo-Saxon input, especially from scholars from the USA, whereas the influence of the traditional French research schools significantly diminished. Some scholars argue that this 'exotic' publication strategy of Food and Foodways may have led to the launch of the new food history journal Food & History. Be that as it may, it was obvious from the very start of the European Institute for the History of Food that this new Europe-wide food research initiative should be accompanied by the launch of a new publication platform. And so it happened: three years after its foundation, the IEHA announced the introduction of a new journal, Food & History, which still appears under the aegis of the IEHCA, represented by its director Francis Chevrier (series editor). It started with a seven-person board, consisting of four historians, one sinologist, one sociologist, and Secretary Christophe Marion. From volume 4.2 (publication year 2006), the editorial board was almost doubled, with the addition of a philologist, an archaeologist, a classicist, and three historians. After a transition period and the appointment of a new secretary in 2007, the journal has been increasingly professionalised, among other things by the introduction of a new uniform style sheet and by the application of a comprehensive peer-reviewing system (starting with volume 5.1). These assessments are usually carried out on an entirely honorary basis. However, by way of acknowledgement, the names of external referees are regularly published, usually in the last issue of each volume.
Another development that bears witness to the increasing professionalisation of the journal was the change in its direction. During the initial period, Massimo Montanari had served as editor-in-chief, but in 2008 the editorial board declared itself openly in favour of a new dual leadership structure, which rotates among the board members, giving each tandem a triennial turn (renewable once for another three years). During a transitional year (2009), Montanari was accompanied by Allen Grieco and Peter Scholliers, who in the subsequent year took over the direction of the journal. Yet another step towards further professionalisation was the introduction of a group of corresponding members from 2010, with the aim of representing the journal's interests in different world regions and establishing a permanent flow of food-research-related information between these regions and the journal's "headquarters".

Sunday, September 29, 2019


What Conrad & Poole (1998) refer to as a "relational strategy of organizing" is more commonly called the "human relations approach" or "human relations school" of management by organizational theorists. This human relations approach can be seen as almost entirely antithetical to the principles of classical management theory. Where classical management focused on the rationalization of work routines, human relations approaches stressed the accommodation of work routines to individual emotional and relational needs as a means of increasing productivity. To a great extent, the human relations approach can be seen as a response to classical management, an attempt to move away from the inflexibility of classical management approaches. It can also be seen as a response to a highly charged and polarized social climate in which labor and management were viewed as fundamentally opposed to one another, and communism was seen as a very real and immediate danger to the social order; the notion of class struggle propounded by Marxist theorists was taken very seriously. By focusing on the extent to which workers and managers shared economic interests in the success of the organization, the human relations approach can be seen as an attempt to move beyond the idea of class struggle. Of course, the human relations approach (which really emerged in the late 1930s) was made possible by the fairly coercive suppression of the most radical organized labor movements. The sidebar describes one such movement, and is provided to indicate the social climate extant in the period immediately preceding the emergence of the human relations approach. In essence, the human relations approach sees the organization as a cooperative enterprise wherein worker morale is a primary contributor to productivity, and so seeks to improve productivity by modifying the work environment to increase morale and develop a more skilled and capable worker.

Saturday, September 28, 2019

Marijuana Debate

Erica Del Vigna Coms 2 Negative Outline Proposition: The state of California should legalize marijuana. I. Introduction Thesis: Though I agree that marijuana should be put into a controlled environment, I believe it should not be legalized due to its poor health attributes and its negative influence on youth and drug users. Preview: I will be explaining today why the affirmative's plan will not work as a sufficient plan in California. I will start by refuting his claims that marijuana is not a gateway drug. I will also explain the future harm that legalizing this drug could do to the youth of our state. Finally, I will connect the link between drug users and criminals. Overall, this drug does not benefit our future generations socially or in terms of health. According to the Scripps Alcohol and Treatment Center in California, "we have yet to see a patient come through here who doesn't attribute his addiction to having started with marijuana as a gateway drug". II. Body A. Ills and significance refutation 1. The affirmative claims that marijuana is not a gateway drug, which is the farthest thing from the truth. Most people who are in a treatment center started off by occasionally using marijuana. As I stated in my previous quote from the Scripps alcohol center, most addicts blame their addiction habits on starting with a gateway drug like marijuana or alcohol. The clinician who was interviewed stated that society realizes the real dangers of marijuana as a gateway drug. Even though in 1996 medical marijuana was passed by California voters with Proposition 215 by a 56% passing rate, in 2010 Proposition 19 failed because California voters did not want to legalize marijuana, as stated in the Christian Science Monitor dated May 2012. 2. The affirmative argues that law enforcement should spend their days fighting something more important than drug users. I strongly disagree with this because of the evidence showing that drug use leads to harsher crimes.
Allowing people to use drugs tells the youth of California that it is okay to smoke weed. This could potentially turn otherwise respectable children into drug-using, criminal adults. In the 2004 article by the American Academy of Pediatrics, "Legalization of Marijuana: Potential Impact on Youth", the doctors state that legalization of marijuana would have a negative effect on youth because it would decrease adolescents' perceptions of risk and increase their exposure to the drug. A Dutch study from 1984 to 1992 likewise found that decriminalization increases marijuana use by adolescents, because making marijuana legal makes it available. American manufacturers of alcohol and tobacco market their products to young people, and marijuana would be the same. Marketing research shows that if only 1% of 15-19 year old Americans began using marijuana, there would be approximately 190,000 new users. B. Cure refutation 1. The affirmative's plan will not work for multiple reasons. Although some may use the drug for health benefits, it will cause more problems for society than it solves. The director of the Office of National Drug Control Policy, John Walters, states that marijuana damages the brain, heart, lungs, and immune system and contains cancer-causing compounds. It also impairs learning, memory, perception, and judgment, which are connected to car accidents and workplace accidents. It should not be legalized because it is too dangerous and causes severe health problems. In the article by Taxman and Thanner, "Risk, Need, and Responsivity", in Crime & Delinquency, dated 2006, the authors agree that marijuana should not be legalized because 20% of state drug offenders reported involvement with firearms and 24% of state drug offenders had prior convictions for violent offenses. Repeat offenders connected with weapons and violent offenses incur high costs, but keeping these criminals off the streets is worth it. C.
Cost-Benefits: There are 4 main disadvantages that could take place if we legalize marijuana: 1. Drug use throughout the general population may rise. 2. Many more people will be using firearms and could demonstrate violent behavior. 3. More health damage than good could affect millions of people, either as users or from second-hand smoke. 4. Moral and ethical values could be put in jeopardy. III. Conclusion 1. California currently only allows medical marijuana users to legally purchase marijuana. If we allow all citizens to have access to this drug, we could potentially lead California down a very bad path. We would see far more crimes and cases of drug addiction. We do not want future leaders and adults to think that it is politically or socially correct to use this drug. 2. It is clear from previous California elections that California's people do not want the law to be changed. In order to keep the state safe and healthy, it is crucial that marijuana is not legalized for recreational use. Works Cited 1. Joffe, Alain and W. Samuel Yancy. "Legalization of Marijuana: Potential Impact on Youth." American Academy of Pediatrics. 113:6 (2004): 632-638. 2. Taxman, Faye and Meridith Thanner. "Risk, Need and Responsivity." Crime & Delinquency. 52:28 (2005): 28-51. 3. Weil, A. T. et al. "Clinical and Psychological Effects of Marijuana in Man." Science Magazine. 162:1234 (1968): 129-132. 4. Benson, John et al. "Medical Marijuana - Should Marijuana Be a Medical Option?" Neighborhood Link National Network. Retrieved from www.neighborhoodlink.com/article/Community/Medical_Marijuana. 5. Khatapoush, S. and D. Halifors. "Sending the Wrong Message: Did Medical Marijuana Legalization in California Change Attitudes about Use of Marijuana?" Journal of Drug Issues. 34:4 (2012): 751-770.

Friday, September 27, 2019

Finish part B and C Essay Example | Topics and Well Written Essays - 1000 words

Finish part B and C - Essay Example As venture capitalists venture into CF ltd, CF ltd needs to function based on equity, and it ought to have a market large enough to validate the millions being invested in the company. Value refers to the combined elements that contribute towards creating the worthiness of a company. The venture capitalist measures the value of CF ltd by identifying certain attributes of the firm such as its assets, shares, liabilities, and capital funds. That is an essential tool that aids in identifying future expectations of company growth. In the proposed investment, Ventura ltd assesses how important and otherwise untapped value creation occurs through the use of anticipated technology and products, and also defines the revenue stream precisely. CF ltd aims at acquiring a new drug line. Most investors know that new drug targets have large barriers to entry due to regulatory processes. As venture capitalists work on investing their money in the company, they first need to monitor the company's trends in the industry and how it conducts itself in terms of adhering to regulations. Knowledge about a crucial investment requires the investor to find established partners who are early adopters in validating a product and endorsing it, allowing more sales. At what stage does the firm develop technology? Can the organisation identify and mitigate its risks? Every business is vulnerable to risk in one way or another; therefore, the management team needs to formulate strategies that counter the risks, which might damage the firm if it lacks mitigation policies. As CF ltd develops a new drug line, it exposes itself to a number of risks; hence, it needs to employ technology that deals with any future uncertainties. As a venture capitalist, one needs to know whether the proposed products stand a chance in the market. As an investor, the competitive edge of a new

Thursday, September 26, 2019

PKG 381 assignment #1 Example | Topics and Well Written Essays - 250 words

PKG 381 #1 - Assignment Example Crisp vegetables were packed in sacks made of cotton and sisal, which are recyclable products (Guerrero 2013). They have a longer lifespan and can be used again and again before disposal or recycling. When disposed of, they decompose over a short period of time. Filtered water is mostly packed in plastic; these plastics take hundreds of years to break down when disposed of and are also expensive to recycle. They pollute the environment widely and put flora and fauna in danger. Plastics even cause the death of animals on which humans really depend. Being difficult to recycle, they are often disposed of carelessly, posing a threat to our environment (Guerrero 2013). Pudding containers are also made of plastic, though somewhat lighter. Still, they threaten our environment, since recycling them is a loss. They require proper disposal, not dumping anywhere, since they take long to break down. Humans have the greatest part to play in the conservation of the environment. Their actions determine our environment's stability. Good practices should be put in place, especially when it comes to the products we use. There should be laws to help us manage our environment and avoid carelessness (Guerrero 2013). With good practices towards environmental conservation, the organisms we depend on will be able to survive and in the long run human life will be

Medical Article review Essay Example | Topics and Well Written Essays - 1750 words

Medical Article review - Essay Example Malignant mimics and different types of benign tumours exhibit numerous morphological features that mislead and confuse pathologists who are making efforts to identify invasive endocervical adenocarcinoma. The authors of this article seek to present a detailed analysis of how to distinguish endocervical adenocarcinoma from its malignant mimics and benign tumours. Being able to differentiate the cancerous adenocarcinoma in situ from benign tumours is a critical move towards positive diagnosis of the different types of cervical cancer that women suffer. The authors define adenocarcinoma in situ in the first section of the article. It is denoted as a precursor of invasive cervical adenocarcinoma. According to the authors, there is evidence linking adenocarcinoma in situ and the human papilloma virus (HPV), specifically HPV type 18. Usually, many patients diagnosed with adenocarcinoma in situ present no visible symptoms, and the lesions are only detectable after specific testing and evaluation. In some cases, though, vaginal bleeding may serve as a symptom of the presence of the cancerous lesions (Loureiro & Oliva, 2014). The article describes the architectural and cytologic features used in the diagnosis of adenocarcinoma in situ. Usually, adenocarcinoma exhibits partial or complete involvement of glands in the endocervix. Moreover, adenocarcinoma may exhibit the preservation of normal glandular architecture and may often change to look like normal endocervical epithelium. On the other hand, cytologic features considered during diagnosis include the presence of mucin in the cytoplasm as well as the level of stratification, crowding, enlargement, or the presence of hyperchromatic nuclei. Other cytologic features that identify adenocarcinoma in situ include frequent mitoses and either small or inconspicuous nucleoli. Sometimes, multiple nuclei, which are smaller, may be present.
In a bid to enlighten the reader further, the article discusses where

Wednesday, September 25, 2019

Assumptions Essay Example | Topics and Well Written Essays - 500 words

Assumptions - Essay Example Celeste, being a loyal wife, wants to help her husband, as she realizes that the responsibility for her children lies on her shoulders just as much as on her husband's. She wants to ease the burden on her husband by working alongside him and earning for the household. 4. Jim should look for another job that is flexible with his routine. The job should have a flexible timetable that fits the demands of the family. Such a job would also help him give adequate time to his family. 1. Jim's resignation from other jobs would provide him with enough time to spend with his family, and it would have a positive effect on his children and wife. Jim should consider taking leave from his part-time job so that he can check whether his resignation from that particular job would matter or not. 2. Division of work would make it difficult for Celeste to give appropriate time to her home and job, and hence it would be hectic for her. She should get advice from different working women and work

Tuesday, September 24, 2019

Arch Communications Group Essay Example | Topics and Well Written Essays - 1000 words

Arch Communications Group - Essay Example As the research declares, some analysts still consider Arch to be a sound buy. One of these analysts is John Adams, at Wessels, Arnold & Henderson, who believes that Arch's stocks are undervalued. In his analysis, using EBITDA, Adams concludes that the company's stocks are still a profitable investment because of its impressive historical growth. This implores investors to ask if Arch's stocks were undervalued. In Adams' valuation estimates, where he presents a ten-year-horizon estimate of Arch's cash flow until the year 2005, this seems to be the case. This paper explores the EBITDA trends of the top paging companies, which highlight Arch as having one of the highest EBITDA margins in 1995: a staggering 37%, similar to Pagenet, the largest paging company in the country. Arch was also presented as having the highest growth rates in the industry, at a 273% subscriber growth rate, 224% revenue growth rate, and 303% EBITDA growth rate, all of which are significantly higher than its competitors'. Its Enterprise Value / EBITDA ratio is also the second highest at 18.9, second only to MobileComm at 27.8 in 1995, and its Enterprise Value / Subscriber ratio is projected to be the highest in 1996 at $422, significantly higher than the average ratio of $326. Based on these values, one can see a clear picture of Arch's position vis-à-vis its competitors in the industry. However, by using EBITDA margins to draw comparable conclusions regarding Arch's value against competing companies in the industry, the analysis failed to consider several factors.
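The two multiples the essay leans on are simple ratios, and they can be computed directly. The sketch below reproduces the quoted multiples from hypothetical inputs: the essay gives only the resulting ratios, so the enterprise-value, EBITDA, and subscriber figures here are illustrative assumptions, not Arch's actual financials.

```python
def ev_to_ebitda(enterprise_value, ebitda):
    """Enterprise Value / EBITDA multiple used to compare paging companies."""
    return enterprise_value / ebitda

def ev_per_subscriber(enterprise_value, subscribers):
    """Enterprise Value / Subscriber multiple."""
    return enterprise_value / subscribers

# Hypothetical inputs chosen only to reproduce the multiples cited above
arch_ev = 1890.0        # assumed enterprise value, $ millions
arch_ebitda = 100.0     # assumed EBITDA, $ millions
print(round(ev_to_ebitda(arch_ev, arch_ebitda), 1))  # 18.9

ev_dollars = 422_000_000   # assumed enterprise value, $
subscribers = 1_000_000    # assumed subscriber count
print(int(ev_per_subscriber(ev_dollars, subscribers)))  # 422
```

A higher EV/EBITDA multiple than peers can signal either richer growth expectations or overvaluation, which is exactly the ambiguity the essay points out when the analysis relies on EBITDA comparisons alone.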

Monday, September 23, 2019

Video Reaction paper 3 Essay Example | Topics and Well Written Essays - 750 words

Video Reaction paper 3 - Essay Example It made me think about how far behind our society is in terms of true acceptance and inclusiveness of people who are born as one gender but have the ability to perform the duties and responsibilities of both genders, provided they are given the freedom to choose who they truly know themselves to be. The death of Fred Martinez left me with a feeling of shame as I viewed the documentary. While Fred was somebody who was admired in his Native American tribe for the uniqueness of his spirit, he was condemned by our society for being gay. But in reality, was his sexual orientation or gender identity really something to kill a person over? The violence that he experienced through bullying is something that our society would never stand for if it were done to a white man. No, if Fred Martinez had been white, he would have been protected by our bullying laws. He would have had access to a restraining order. He would have been able to stand up for himself because our society protects by law and respects white gays and lesbians. Instead, Martinez was killed because our society refused to understand the uniqueness of his person and lacked respect for Fred's own Native American traditions. The puzzlement for me while watching the documentary was how our own LGBT community can fight for, and most of the time gain, the rights and respect that they demand for their gender identities, but when Native Americans, the original settlers in this country, show that they too deserve to be accorded the same respect, they are not only denied the opportunity to live under the same cloak of protection as their U.S. counterparts, but are murdered because of it. We are often told to keep an open mind and broaden our thinking and understanding of the LGBT sector of society. But when push comes to shove, we always seem to fall short of this commitment to understand those who are different from us in terms of national heritage.
However, as Fred showed those around him, Native Americans have always been more than accepting, loving, caring, and respectful of those who are like Fred. As long as we refuse to accept and respect the culture of those whom we consider to have a different society from ours, we will never live in a truly equal world. Fred's death serves to remind us of that disrespect. While our own LGBT community continues to struggle for and win its right to fair treatment, our Native American brothers and sisters who are also members of the LGBT sector of society have been shown to live a life full of fear and uncertainty, because the rules of the white man do not apply to them and their cause. It saddens me to learn that, centuries after Columbus "discovered" America and massacred the original settlers, and long after we segregated them to tribal camps on the outskirts, we as a society have still failed to offer them the chance to be assimilated into our society based upon equality and fair treatment regardless of sexual preference or orientation. As a society, we are centuries behind our Native American counterparts, who learned early on the value of the uniqueness in a person. We have a lot to learn from them in terms of the ways and means through which a thorough understanding of the two spirits that exist in LGBT people can actually help our society evolve into a highly intricate and accepting society.

Sunday, September 22, 2019

Instructional Design Essay Example for Free

Instructional Design Essay Organizational success primarily lies in the quality of the people working with any concerned organization. Under this concept, soft skills must be established through efficient and effective hiring or recruitment methods, and polished through rational and functional training facilities and techniques. In essence, this was strategically applied by American Express, which initiated an extensive and all-encompassing training program to address the technical and attitudinal qualities of its workforce in order to meet its corporate goals and objectives. Basically, the corporate training department at American Express sought to train and empower the organization's distributed servicing network. But targeting this goal is not as easy as formulating corporate policies. There are factors and concerns to consider, namely the five Ws and one H of human resource management: who may conduct the training, what kind of training program may be executed, where to carry it out, why it is needed, when it will be initiated, and how it will be initiated. To the leaders of American Express, training of the organizational workforce was needed in order to prepare the workforce for its corporate function, which is to communicate with and promote the company's products and services to patrons of all ages, and from any part of the world and socioeconomic background. This initiative did not come without any purpose or goal. The organization, in determining that such workforce training was necessary, conducted extensive study, and perhaps even cost-benefit analysis, to justify carrying out the training program.
The executives of the company's training department actually determined a number of years ago that they had to improve the workforce's skills and capability to communicate and interact with customers around the world. Upon realizing that "some representatives continued to struggle to master both content and communication ability," in the assessment of Ms Beth Harmon, the acting vice president of operations training, the training department decided to install a simulated environment for new recruits' preparation and training. The goal of the training is mainly to successfully prepare the company's representatives for their organizational function, which is to efficiently and effectively relay and communicate the company's products and services to prospective buyers from around the globe. As a result, new recruits were trained in how to handle time efficiently, how to ensure quality service, how to execute customer treatment, and how to observe availability. The challenge here is to initiate a different "philosophy", which is the result of the transformation from an academic to a professional model. The goal of the training is basically to ensure and achieve quality performance on the part of the company's representatives, and productivity and profitability on the part of the company. But it does not just end there. It may be inferred from these corporate acts that the company's executives wanted to show customers of any age and socioeconomic background that American Express does not just mean business: it cares for and considers its customers' interests, satisfaction, and contentment.
Some of the special attributes of the training situation lie in the fact that every aspect of the customers' satisfaction, and even their standpoint, is taken into account. The integration of information technology makes it effective and reliable in ensuring that the following goals may be achieved: simulations, role play, speech recognition, and close instruction and coaching support. The employment of the simulated call environment, SIMON (Simulated Online Network), made the training program more effective. The training was developed by way of a "holistic solution" that enmeshes an assortment of IT-based technologies into the training system, including the goals already mentioned above, such as simulations, role play, and so on. E-learning or virtual learning is part of the solution mainly because the tool by which the representatives communicate with customers, and relay the company's products and services to them, is essentially virtualized. As Harmon said, technology is just the means or instrument to get to the issues of the customers. The training was implemented by starting with a pilot program, a way of considering the performance metrics. A two-day training was conducted, and two weeks later the results were positive: there were substantial increases in Easy to Understand, Listening, and Courtesy scores, which rose 55 percent, 13 percent, and 8 percent, respectively. The intended result of the training is to brace the representatives for these increasing functions of relaying and communicating the company's products and services to its target market.
One of its purposes, particularly in the e-learning aspect, is to learn the issues of the customers, and the way to get at them is through virtual learning. There is ultimately one goal for this training: to further establish American Express in the globalized market, thereby ensuring productivity and profitability.

To measure the effectiveness of the program, the company conducted pre-assessment and post-assessment in order to know whether the training resulted in positive changes. It then leveraged the training efforts and expanded the effects of learning for both globalization and work purposes.

Overall, my impression of the organization's solution is that it fits the organizational goals and objectives, as well as the nature of its business. As a company engaged in selling, it has to hone and improve the selling and communication capabilities and skills of its representatives. This training initiative shows that global competition is getting stiffer and stiffer, and that corporate organizations, in order to survive, must not only sharpen their strategies and acquire new technologies but also prepare and train their representatives.

Saturday, September 21, 2019

Shear Bond Strength of Nanocomposite Resin

Shear Bond Strength of Nanocomposite Resin

ABSTRACT

OBJECTIVE: To compare the shear bond strength of nanocomposite resin to superficial dentin and deep dentin using two different dentin bonding systems.

METHOD: All teeth were sectioned at various levels (superficial dentin: dentin within 0.5-1 mm of the DEJ; deep dentin: dentin within 0.5 mm of the highest pulp horn) using a carborundum disc and embedded in acrylic blocks of specific size. The selected specimens (60 premolar teeth) were grouped randomly into three groups: superficial dentin, deep dentin, and a control group. Each group was further divided into subgroups A and B of 10 teeth each, depending on the bonding agent used: Tetric N Bond in subgroup A and Single Bond Universal in subgroup B. In the control group, no bonding agent was used. The specimens were thermocycled for 500 cycles between 5 °C and 55 °C water baths with a dwell time of 40 seconds. Finally, the specimens were subjected to a shear bond strength test in an Instron universal testing machine. The maximum shear bond strengths were noted at the time of fracture (de-bonding) of the restorative material. Results were analysed using the ANOVA test, the Bonferroni test, and the paired t test.

RESULTS: The fifth generation bonding system (Tetric N Bond) showed higher mean shear bond strength than the seventh generation bonding system (Single Bond Universal). There was a significant fall in bond strength values as one reaches deeper levels of dentin, from superficial to deep dentin.

CONCLUSION: There was a significant difference between the bond strengths of the fifth generation bonding system (Tetric N Bond) and the seventh generation bonding system (Single Bond Universal). A decrease in bond strength values is seen at deeper levels of dentin as compared to superficial dentin.

INTRODUCTION

The success of any dental restoration is based on the high adhesive property of the material.
Various materials are available which utilize this adhesive property, such as glass ionomer cement restorations, composite restorations, and pit and fissure sealants. Among these, composite resins have been developed over recent years to provide the best esthetics for anterior as well as posterior restorations. Dental adhesive systems are agents used to promote adhesion between composite resin and dental structure, and they should perform similarly on enamel and dentin. Bonding to enamel and dentin has been known to be clinically reliable since the advent of the acid etching technique. Dentin differs from enamel in its higher organic content, the presence of fluid inside the dentinal tubules, the smear layer, and inherent surface wetness[1]. Dentin has been characterized as a biologic composite of a collagen matrix filled with sub-micron to nanometre sized, calcium-deficient, carbonate-rich apatite crystallites dispersed between hypermineralized, collagen-poor hollow cylinders. It is well understood that the density of dentinal tubules varies with dentinal depth, and that the water content of dentin is lowest in superficial dentin and highest in deep dentin. Superficial dentin contains fewer tubules, and there the permeation of resin into intertubular dentin is responsible for most of the bond strength. In deep dentin, dentinal tubules are more numerous, and hence the intratubular permeability of resins is responsible for a greater share of the bond strength. Two major simplified bonding approaches have been developed. The total etch technique involves the simultaneous removal of the smear layer from both enamel and dentin surfaces, followed by the application of a one-bottle agent that combines the primer and adhesive in one solution. In the self-etching technique, the bonding mechanism is based on the simultaneous etching, priming, and bonding of the dentin surface with a single bottle[2].
Bonding to enamel was achieved earlier and more easily (Buonocore, 1955) because enamel is mostly composed of hydroxyapatite crystals. Although it is possible to obtain predictable and reliable adhesion to enamel, adhesion to dentin, which makes up the largest part of the tooth, has proved more challenging because of its heterogeneous nature. The mechanism of dentin adhesion, enhanced by hybrid layer formation between resin and dentin, was proposed by Nakabayashi (1982). Adequate hybrid layer formation is believed to be essential to create a strong and durable bond between resin and dentin. Adhesive restorations have been widely accepted for both anterior and posterior use in restorative dentistry. Patients' demands for esthetic restorations have caused a recent increase in the use of tooth-colored restorative materials. To achieve clinical success with such restorations, good adhesion between restorative materials and tooth substrates is of crucial importance to ensure good marginal sealing, reinforcement of the tooth structure, and a longer life for the restoration. During the last two decades, a variety of adhesive systems have been continuously developed to produce good adhesion to dental substrates. These great advances in adhesive dentistry have changed the concepts of cavity preparation based on the principles proposed by G. V. Black into more conservative and minimally invasive ones. Current self-etching adhesives provide a monomer formulation for simultaneous conditioning and priming of both enamel and dentin. As of today, little research is available to indicate the effectiveness of the new generation of self-etching primers on superficial versus deep dentin. Shear bond strength measurements are commonly used to evaluate the effectiveness of dentin bonding systems. The aim of this study was to evaluate the shear bond strength of the newer bonding systems on superficial dentin and deep dentin.
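Shear bond strength, the outcome measure used throughout this study, is conventionally computed as the peak de-bonding load divided by the bonded cross-sectional area. A minimal sketch of that calculation follows; the specimen diameter and load below are illustrative assumptions, not values from the study.

```python
# Hedged sketch: shear bond strength (MPa) = peak de-bonding load (N)
# over bonded area (mm^2). The dimensions here are illustrative only.
import math

def shear_bond_strength_mpa(peak_load_n: float, bond_diameter_mm: float) -> float:
    """Peak load (N) divided by the circular bonded area (mm^2) gives MPa."""
    area_mm2 = math.pi * (bond_diameter_mm / 2) ** 2
    return peak_load_n / area_mm2

# Example: a hypothetical 3 mm diameter composite cylinder de-bonding at 120 N.
print(f"{shear_bond_strength_mpa(120.0, 3.0):.2f} MPa")
```

A smaller bonded area at the same load yields a higher reported stress, which is why specimen geometry must be kept constant across groups.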
MATERIALS AND METHOD:

The present in vitro study was conducted in the Department of Conservative Dentistry and Endodontics, M. R. Ambedkar Dental College and Hospital, Bangalore. Sixty intact human maxillary premolar teeth, extracted for orthodontic reasons, were collected from the Oral and Maxillofacial Department at M. R. Ambedkar Dental College and Hospital. The teeth were stored, disinfected, and handled as per the recommendations and guidelines laid down by OSHA and the CDC. The selected teeth were randomly divided into three groups of twenty teeth each, each further subdivided into Subgroup A and Subgroup B of ten teeth each. All teeth were sectioned at various levels using a carborundum disc under copious water and embedded in acrylic blocks of specific size.

Group I: Superficial Dentin – 20 specimens
Subgroup A – Superficial Dentin (Tetric N Bond), 10 specimens
Subgroup B – Superficial Dentin (Single Bond Universal), 10 specimens

Group II: Deep Dentin – 20 specimens
Subgroup A – Deep Dentin (Tetric N Bond), 10 specimens
Subgroup B – Deep Dentin (Single Bond Universal), 10 specimens

Group III: Control Group – 20 specimens
Subgroup A – Superficial Dentin, 10 specimens
Subgroup B – Deep Dentin, 10 specimens

The occlusal surfaces of the teeth were ground on a water-cooled trimming wheel to prepare flat dentin surfaces.

Group I (Superficial Dentin), Subgroup A: All specimens were etched on the prepared flat dentinal surface with N Etch and washed. The surface was blotted with gauze to produce a visibly moist dentin surface. The total-etch adhesive (Tetric N Bond) was applied on the prepared flat dentinal surface, left undisturbed for 20 seconds, and the excess solvent was removed with a gentle stream of air. Light curing was done for 40 seconds with a visible light curing unit.
After curing the bonding agent, nanocomposite resin (Tetric N Ceram) was placed on the prepared dentinal surface using a Teflon mold and cured according to the manufacturer's instructions. The same procedure was carried out on all 10 specimens in this group.

Subgroup B: The self-etching adhesive (Single Bond Universal) was applied on the prepared flat dentinal surface, left undisturbed for 20 seconds, and the excess solvent was removed with a gentle stream of air. Light curing was done for 40 seconds with a visible light curing unit. After curing the bonding agent, nanocomposite resin was placed on the prepared dentinal surface using a Teflon mold and cured according to the manufacturer's instructions. The same procedure was carried out on all 10 specimens in this group.

Group II (Deep Dentin): Subgroup A followed the same procedure as Group I, Subgroup A; Subgroup B followed the same procedure as Group I, Subgroup B.

Group III (Control Group): No bonding agent was applied. Nanocomposite resin was placed and cured according to the manufacturer's instructions.

Specimens were then stored at room temperature for 48 hours. The specimens were thermocycled for 500 cycles between 5 °C and 55 °C water baths, with a dwell time of 40 seconds for each bath. All sixty specimens were then transferred individually to the Instron testing machine and subjected to the shear bond strength test.

STATISTICAL ANALYSIS: The statistical data derived from the four subgroups were analysed using the ANOVA test, the Bonferroni test, and the paired t test.

RESULTS: For superficial dentin, the higher mean shear bond strength was recorded in the fifth generation bonding system, followed by the seventh generation bonding system and the control, respectively. The difference in mean shear bond strength between the groups was not statistically significant (P>0.05).
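One of the comparisons named above, the paired t test between the same adhesive at superficial and deep dentin, can be sketched in a few lines. The bond-strength values below are invented placeholders for illustration, not the study's data.

```python
# Hedged sketch of the paired t statistic used to compare the same
# adhesive at superficial vs deep dentin. Values are hypothetical MPa.
import math

def paired_t(x, y):
    """Paired t statistic: mean difference over its standard error."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # Sample standard deviation of the differences (n - 1 denominator).
    sd = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    return mean_d / (sd / math.sqrt(n))

superficial = [22.1, 20.8, 23.5, 21.9, 22.7]  # hypothetical MPa values
deep        = [20.3, 19.5, 21.8, 20.1, 21.0]

t = paired_t(superficial, deep)
print(f"t = {t:.2f} with {len(superficial) - 1} degrees of freedom")
```

The resulting t would be compared against the critical value for n − 1 degrees of freedom (2.776 at α = 0.05, two-tailed, for df = 4) to decide significance.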
For deep dentin, the higher mean shear bond strength was recorded in the fifth generation bonding agent, followed by the seventh generation bonding agent and the control, respectively. The difference in mean shear bond strength between the groups was statistically significant (P<0.05). The difference in bond strength using the fifth generation bonding agent in superficial dentin and deep dentin was not statistically significant (P>0.05). The difference in bond strength using the seventh generation bonding agent in superficial and deep dentin was statistically significant (P<0.05).

DISCUSSION

Adhesion to acid-etched enamel was proposed by Buonocore in 1955. Bond strength to enamel or dentin is an important indicator of an adhesive system's effectiveness. The bonding layer must support not only composite shrinkage stress but also occlusal loads in stress-bearing areas, to avoid gap formation leading to microleakage, secondary caries, and postoperative sensitivity[3]. Bond strength testing and measurement of marginal-sealing effectiveness are the two methodologies most commonly employed in the laboratory to determine bonding effectiveness and predict clinical performance. Dentin is a dynamic tissue, and it represents a challenge to resin-based adhesives: while the bond strength of enamel has been studied extensively, bonding to dentin across the generations of bonding systems has remained unsolved. The dentin substrate has been characterized as a biologic composite of a collagen matrix filled with apatite crystals dispersed between parallel, micrometre-sized, hypermineralized, collagen-poor dentinal tubules containing peritubular dentin. Dentin is made up of 50% minerals, 20% water, and 30% organic matrix, but as the dentin deepens this composition may change. This is because superficial dentin has few tubules and is composed predominantly of intertubular dentin.
Deep dentin is composed mainly of larger, funnel-shaped dentinal tubules with much less intertubular dentin[4]. The intertubular dentin plays an important role during hybrid layer formation in superficial dentin, and its contribution to resin retention is proportional to the intertubular dentin available for bonding[5]. Adhesive dentistry is based on the development of materials which establish an effective bond with the tooth tissues. Successful adhesive bonding depends on the chemistry of the adhesive, on appropriate clinical handling of the material, and on knowledge of the morphological changes caused in the dental tissue by different bonding procedures[6]. The rationale behind bond strength testing is that the higher the actual bonding capacity of an adhesive, the better it will withstand such stresses and the longer the restorations will survive in vivo. Bond strength testing is relatively easy and fast, and remains the most popular methodology for measuring the bonding effectiveness of adhesive systems[7]. The results of the present study revealed that superficial dentin presented bond strength values statistically higher than, and different from, the values obtained in dentin at the deep level. Tagami J et al (1990) attributed this either to differences in chemical composition or to regional differences in wetness (dentin permeability). Thus there are several factors that may contribute to the high coefficient of variation often reported in dentin shear bond strength studies. Several earlier reports indicate that the bond strength of resin is highest on superficial dentin and lowest in deep dentin[8]. Suzuki T et al (1988) studied the efficacy of dentin bonding systems based on the site of dentin, with reference to the observation of Causton et al that bond strengths to deep dentin were considerably lower than those to superficial dentin.
The present study has confirmed the observation of Causton et al that the efficacy of dentin adhesives depends upon the dentin surface, from superficial to deep dentin, in the teeth tested[8]. Unlike etch-and-rinse adhesives, self-etch adhesives do not require a separate etching step, as they contain acidic monomers that simultaneously condition and prime the dental substrate. Consequently, this approach has been claimed to be more user-friendly and less technique-sensitive, thereby resulting in reliable clinical performance. Self-etch adhesives are user-friendly because of their shorter application time and fewer steps, and less technique-sensitive because they require no wet bonding, only simple drying. Self-etch adhesives are also associated with a lower incidence of postoperative sensitivity experienced by the patient. This should to a great extent be attributed to their less aggressive, and thus more superficial, interaction with dentin, leaving the tubules largely obstructed with the smear layer[9]. This study is in consensus with Suzuki et al with regard to higher bond strength at all levels of dentin with Tetric N Bond, which belongs to the etch-and-rinse approach. Pegadu Rafeal et al (2010)[4] compared the effect of different bonding strategies on adhesion to deep and superficial dentin and concluded that the bond strength obtained in superficial dentin was significantly higher than that in deep dentin for all adhesives tested. They further concluded that the bond strength of dentin bonding agents at any depth depends on the area occupied by resin tags, the area of intertubular dentin infiltrated by the resin, and the area of surface adhesion. In the present study, comparison (paired t test) within the Tetric N Bond group recorded a higher mean bond strength at the superficial dentin level than at deep dentin, and comparison (paired t test) within the Single Bond Universal group likewise recorded a higher bond strength at the superficial dentin level than at deep dentin.
Van Meerbeek et al (2011)[9] recommended, for further optimization of the self-etch approach, the synthesis of functional monomers tailored to exhibit good chemical bonding potential following a mild self-etch approach. This approach appears to guarantee the most durable bonding performance at dentin, provided that it deals adequately with the debris smeared across the surface by the bur. Micromechanical interlocking is still the best strategy to bond to enamel. Selective phosphoric acid etching of enamel cavity margins is therefore highly recommended today, followed by applying a self-etch procedure to both the previously etched enamel and the un-etched dentin. Such mild self-etch adhesives should contain functional monomers with a high chemical affinity for hydroxyapatite.

CONCLUSION:

At the superficial dentin level, the higher mean shear bond strength was recorded in the fifth generation bonding system, followed by the seventh generation bonding system and the control group, respectively. The difference in mean shear bond strength between the groups was not statistically significant (P>0.05). At the deep dentin level, the higher mean shear bond strength was recorded in the fifth generation bonding system, followed by the seventh generation bonding system and the control group, respectively. The difference in mean shear bond strength between the groups was statistically significant (P<0.05). At the deep dentin level, statistically significant results were obtained with the fifth generation bonding system (Tetric N Bond), which had higher mean shear bond strength values than the seventh generation self-etch bonding system (Single Bond Universal). There was a statistically significant difference in shear bond strength values between the fifth generation bonding system and the control group (without bonding system) at deep dentin. There was a significant fall in bond strength values as one reaches deeper levels, from superficial dentin to deep dentin.

Friday, September 20, 2019

Intrusion detection system for internet

Intrusion detection system for internet

ABSTRACT

The ability to detect the rapidly growing number of Internet attacks has become an important issue in network security. An intrusion detection system (IDS) acts as a necessary complement to a firewall, monitoring packets on the computer network, performing analysis, and responding to suspicious traffic. This report presents the design, implementation, and experimentation of a Network Intrusion Detection System (NIDS), which aims at providing effective network- and anomaly-based intrusion detection using the ANOVA (Analysis of Variance) statistic. A generic system modelling approach and architecture are designed for building the NIDS with useful functionalities. Solving the shortcomings of current statistical methods in anomaly-based network intrusion detection is one of the design objectives of this project, as these shortcomings reflect necessary improvements in the network-based IDS industry. Throughout the system development of the NIDS, several aspects of building an effective network-based IDS are emphasized, such as the statistical method implementation, packet analysis, and detection capabilities. A step-by-step anomaly detection using the ANOVA test is calculated in the report.

Chapter 1: Introduction

This chapter is an introduction to the whole project: its motivation, main objective, and advanced objectives. The chapter also gives a brief methodology of the research.

Introduction

Though the rapid growth of computer networks makes life faster and easier, on the other side it makes life insecure as well. Internet banking and online buying and selling are now part of our daily life; alongside that, looking at the growing incidence of cyber attacks, security becomes a problem of great significance. Firewalls are no longer considered sufficient for reliable security, especially against zero-day attacks.
Security-conscious companies are now moving towards an additional layer of protection in the form of an intrusion detection system. D. Yang, A. Usynin and W. Hines (2006) explain intrusion and intrusion detection as follows: any action that a user is not legally allowed to take towards an information system is called an intrusion, and intrusion detection is the process of detecting and tracing inappropriate, incorrect, or anomalous activity targeted at computing and networking resources [16]. The idea of intrusion detection was first introduced in 1980 (J. P. Anderson), and the first intrusion detection model was suggested in 1987 (D. E. Denning). An intrusion prevention system (IPS) is considered the first line of defence, and intrusion detection systems are considered the second line of defence [16]. IDS are useful once an intrusion has occurred, to contain the resulting damage. Snort, developed by Sourcefire, is the best example of a working intrusion detection and intrusion prevention system (IDS/IPS); it combines the benefits of signature, protocol, and anomaly based inspection. IDS can be classified into misuse detection and anomaly detection. Misuse detection, or signature-based IDS, can detect intrusions based on known attack patterns, known system vulnerabilities, or known intrusive scenarios, whereas anomaly-based intrusion detection systems are useful against zero-day and pseudo zero-day attacks. Anomaly-based IDS rest on the assumption that the behaviour of an intruder differs from that of a normal user. Anomaly detection systems can be divided into static and dynamic (S. Chebrolu, A. Abraham and J. P. Thomas, 2004). Static anomaly detectors assume that the portion of the system being monitored will not change, and they mostly address the software area of the system [17]. Protocol anomaly detection may be the best example of static anomaly detection [17].
Dynamic anomaly detection systems operate on network traffic data or audit records, and they are the main area of interest in this research. Anomaly-based IDS have become a popular research area due to their strength in tracing zero-day threats (B. Schneier, 2002). Such a system examines user profiles, audit records, and so on, and targets the intruder by identifying deviations from normal user behaviour, alerting on potentially unseen attacks [18]. Active attacks are more likely to be traced than passive attacks, but an ideal IDS tries to trace both. Anomaly-based intrusion detection systems are the next generation of IDS, and in system defence they are considered the second line of defence. In this research, my main concentration will be denial-of-service attacks, their types, and how to trace them.

Motivations

Though the Internet is the best-known technology of the day, there are still security concerns, such as Internet security and availability. The big threats to information security and availability are intrusions and denial-of-service attacks. The existing Internet was developed about 40 years ago, when the priorities were different. The unexpected growth of the Internet then resulted in the exhaustion of IPv4 addresses, and it brought lots of security issues as well. According to CERT statistical data, 44,074 vulnerabilities had been reported up to 2008. Intrusion is the main issue in computer networks. Many signature-based intrusion detection systems are used within information systems, but they can only detect known intrusions. Another approach, anomaly-based intrusion detection, is the dominant technology now. Many organizations are working on anomaly-based intrusion detection systems, and some, such as the Massachusetts Institute of Technology (MIT), provide data sets for this purpose. This work is motivated by the observation that a great deal of prior work relies on the MIT data sets.
Another aspect of anomaly-based intrusion detection is the statistical method. There are several good multivariate statistical techniques, e.g. the Multivariate Cumulative Sum (MCUSUM) and the Multivariate Exponentially Weighted Moving Average (MEWMA), used for anomaly detection in the field of manufacturing systems [3]. Theoretically, these multivariate statistical methods could be used in intrusion detection for examining and detecting anomalies of a subject in the field of information science. Practically it is not possible, because the computationally intensive procedures of these techniques cannot meet the requirements of intrusion detection systems, for several reasons. First, intrusion detection systems deal with huge amounts of high-dimensional process data, because of the large number of behaviours and the high frequency of event occurrence [3]. Second, intrusion detection systems demand a minimum delay in processing each event in computer systems, to ensure early detection and signalling of intrusions. Therefore, a method that studies variation, the ANOVA statistic, will be used in this research. There is no research available that has implemented the ANOVA and F statistics on the data sets collected by the Cooperative Association for Internet Data Analysis (CAIDA). The data sets provided by CAIDA are unique in their nature, as they do not contain any session flow or any traffic between the attacker and the attack victim; they contain only reflections from the attack victim that went back to other real or spoofed IP addresses. This creates trouble in estimating the attack; I will take that trouble as a challenge.

Research Question

In this section I will set out the core objectives of the research and a road map to achieve them. During this research I will study the data set called backscatter-2008, collected by CAIDA for denial-of-service attacks. I will use the statistical technique ANOVA to detect anomalous activities in computer networks.
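To make the intended ANOVA check concrete, here is a minimal pure-Python sketch: packets-per-second counts from a suspect traffic window are compared against baseline windows via the one-way F statistic. The traffic counts and the critical value's degrees of freedom are illustrative assumptions, not CAIDA data.

```python
# Hedged sketch of an ANOVA-based anomaly check: a large F statistic
# means the suspect window's packet rate differs from the baselines
# far more than normal within-window variation would explain.

def f_statistic(groups):
    """One-way ANOVA F: between-group over within-group mean squares."""
    vals = [v for g in groups for v in g]
    grand = sum(vals) / len(vals)
    # Between-group sum of squares (df = k - 1)
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares (df = N - k)
    ssw = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ssb / (len(groups) - 1)) / (ssw / (len(vals) - len(groups)))

baseline_1 = [102, 98, 110, 95, 101]    # packets/sec in normal windows
baseline_2 = [97, 104, 99, 108, 100]
suspect    = [310, 280, 295, 330, 305]  # a possible flood window

f = f_statistic([baseline_1, baseline_2, suspect])
F_CRITICAL = 3.89  # F(2, 12) at alpha = 0.05, from a standard table
print("anomaly" if f > F_CRITICAL else "normal")
```

With two baselines and a suspect window of five samples each, the degrees of freedom are (2, 12); a flood window inflates the between-group variance and pushes F far above the critical value.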
My research is guided by six questions.

What is an intrusion, and what is an intrusion detection system?
How can we classify intrusion detection systems?
What different methodologies have been proposed for intrusion detection systems?
How can the CAIDA backscatter-2008 data sets be analysed and made ready for further study?
How can the different types of DoS attacks be figured out?
How can ANOVA statistical techniques be implemented to detect anomalies in network traffic?

Aims and Objectives

DoS attacks are too many in number, and it is not possible to discuss them all in one paper. In this paper I will look to detect anomalies in network traffic using the number of packets.

Main/core objectives of the research:
Review the literature on recent intrusion detection approaches and techniques.
Discuss current intrusion detection systems used in computer networks.
Obtain a data set from the CAIDA organization for analysis and further study.
Pre-process the trace collected by CAIDA and make it ready for analysis.
Recognize normal and anomalous network traffic in the CAIDA data set called backscatter-2008.
Analyse deviating network traffic using MATLAB for different variants of denial-of-service attacks.
Review existing statistical techniques for anomaly detection.
Evaluate the proposed system model.

Advanced objectives of the research:
Extend the system model to detect new security attacks.
Investigate and analyse the ANOVA statistical technique against other statistics for anomaly detection in computer networks.

Nature and Methodology

The area of research is detecting anomalous traffic in computer networks. The revolution in processing and storage capabilities in computing has made it possible to capture and store computer network traffic, from which different kinds of data patterns are derived. These data patterns are analysed to build a profile for the network traffic.
Deviations from these normal profiles will be considered anomalies in the computer network traffic. This research presents a study of vulnerabilities in TCP/IP and the attacks that can be initiated against it. The purpose of the research is also to study TCP flags, find the distribution of the network traffic, and then apply ANOVA statistical techniques to identify potentially anomalous traffic on the network.

Report Structure

Chapter 1: Introduction. This chapter gives a general overview of the project. First an introduction to the topic is given, then the motivation for the research is discussed. Core objectives and the general road map of the project are discussed under the heading of the research question. Aims and objectives are described to enable readers to understand the core and advanced objectives and the general overview of the research. Nature and Methodology covers the nature of the research and what methods will be used to answer the research question and to achieve the core and advanced objectives. Finally, all chapters in the report are introduced.

Chapter 2: Research Background. The main focus of this chapter is to explain what intrusion and intrusion detection are, why we need intrusion detection systems, the types and techniques used for intrusion detection systems, and the challenges and problems of intrusion detection systems.

Chapter 3: Security Vulnerabilities and Threats in Computer Networks. This part of the report is dedicated to network security in general and to issues with computer networks. The types of denial-of-service attacks are then described in general, with a brief description of each attack.

Chapter 4: Data Source. The data sets collected and uploaded by CAIDA on their web site are not in a format that can be processed straight away. This chapter describes in detail how to obtain those data sets.
It then covers all the necessary steps carried out on the data sets to convert the trace into a format understood by MATLAB for final analysis. It also includes the problems faced during the pre-processing of the data sets, as there is not enough material available on the Internet about pre-processing these data sets and the applications used during that phase.

Chapter 5: System Model. As the research is based on the TCP/IP protocol, it is vital to discuss TCP and the weak points that allow an attacker to take advantage of it for malicious purposes, what measures could be taken to recognize attacks well before they happen, and how to stop them. In this chapter I will discuss the intrusion detection model, the features of the proposed IDS, and finally the steps in the proposed model.

Chapter 6: ANOVA Statistic and Test Results in the Proposed Model. This is the core chapter of the project. It focuses on statistical tests in intrusion detection systems, particularly the ANOVA statistic. First, the existing statistical techniques for intrusion detection are analysed. The ANOVA calculation, its deployment in the intrusion detection system, the backscatter-2008 data set distribution, and other category-wise distributions are then explained. Finally, the chapter includes graphs of the data sets and of the ANOVA and F statistics.

Chapter 7: Discussion and Conclusion. Finally, I sum up the project in this chapter. It includes the conclusions of the research, the personal improvements made during the project (experiences gained along the way proved helpful in other areas), and the goals achieved through the entire project.

Summary

This chapter enables the reader to understand the general overview of the research. First, the different research questions are identified.
Then the objectives of the research are described, which include both core and advanced objectives. The nature of the research and the methods used in it are then outlined, and the topic's overall background information is provided. Furthermore, an explanation of the report structure and a brief description of all the chapters are also included in this chapter.

Chapter 2 Research Background

Introduction. The focus of this chapter is to explain what an intrusion and an intrusion detection system are, and why we need intrusion detection systems. This chapter also discusses the types and techniques used in intrusion detection systems. The goals, challenges, and problems of intrusion detection systems are also explained.

Intrusion Detection System (IDS). A computer intrusion is a series of events that breaches the security of a system. Such events must be detected proactively in order to guarantee the confidentiality, integrity, and availability of the resources of a computer system. An intrusion into an information system is a malicious activity that compromises its security (e.g. integrity, confidentiality, and availability) through a series of events in the information system. For example, an intrusion may compromise the integrity and confidentiality of an information system by gaining root-level access and then modifying or stealing information. Another type of intrusion is a denial-of-service intrusion, which compromises the availability of an information system by flooding a server with an overwhelming number of service requests over a short period of time, making services unavailable to legitimate users. According to D. Yang, A. Usynin and W.
Hines, any action that a user is not legally allowed to take towards an information system is called an intrusion, and intrusion detection is the process of detecting and tracing inappropriate, incorrect, or anomalous activity targeted at computing and networking resources.

Why We Need Intrusion Detection Systems. To guarantee the integrity, confidentiality, and availability of computer system resources, we need a system that supervises the events, processes, and actions within an information system [1]. The limitations of traditional methods, misconfigured access control policies and misconfigured firewall policies in computer systems and network security systems (the basic motivation being to prevent security failures), along with the increasing number of exploitable bugs in network software, have made it necessary to design security-oriented monitoring systems that supervise system events in the context of security violations [1]. Traditional systems do not notify the system administrator about misuse or anomalous events in the system. We therefore need a system that makes proactive decisions about misuse or anomalous events, and for the last two decades the importance of intrusion detection systems has been growing day by day. Nowadays an intrusion detection system plays a vital role in an organisation's computer security infrastructure.

Types of Intrusion Detection System. An intrusion detection system is a technique that supervises computers or networks for unauthorised logins, events, activity, or file deletions or modifications [1]. An intrusion detection system can also be designed to monitor network traffic, so that it can detect denial-of-service attacks such as SYN, RST, and ICMP attacks. Typically intrusion detection systems are classified into two types [1].
Host-Based Intrusion Detection System (HIDS) and Network-Based Intrusion Detection System (NIDS). Each of these two types of intrusion detection system has its own approach to supervising, monitoring, and securing data, and each has distinct merits and demerits. In short, host-based intrusion detection systems analyse activity occurring on individual computers, while network-based IDSs examine the traffic of the whole computer network.

Host-Based Intrusion Detection System. A host-based intrusion detection system gathers and analyses audit records from a computer that provides services such as password services, DHCP, web services, etc. [1]. Host-based intrusion detection systems (HIDS) are mostly platform dependent, because each platform's audit records differ from those of other platforms. A HIDS includes an agent on a host which detects intrusions by examining system audit records; audit records may be system calls, application logs, file-system modifications (access control list database modifications, password file modifications), and other system or user events or actions on the system. Intrusion detection systems were first developed and implemented as host based [1]. In host-based intrusion detection systems, once the audit records are aggregated for a specific computer, they can be sent to a central machine for analysis, or they can be examined on the local machine. These types of intrusion detection systems are highly effective for detecting insider intrusions. Unauthorised modification, access, and retrieval of files can be detected effectively by a host-based intrusion detection system. One issue with host-based intrusion detection systems is that collecting audit records for thousands of computers may be insufficient or ineffective.
Windows NT/2000 security event logs, RDBMS audit sources, UNIX Syslog, and enterprise management system audit data (such as Tivoli) are possible sources for a host-based intrusion detection system.

Network-Based Intrusion Detection System. A network-based intrusion detection system (NIDS) is a completely platform-independent intrusion detection system which detects intrusions by analysing network traffic such as frames, packets, and TCP segments (network addresses, port numbers, protocols, TCP headers, TCP flags, etc.) as well as network bandwidth. The NIDS examines and compares the captured packets with already-analysed data to recognise anomalous or malicious activity. A NIDS supervises the whole network, so it should be more distributed than a HIDS. A NIDS does not examine information that originates from a computer, but uses special techniques like packet sniffing to extract data from TCP/IP or other protocols travelling along the computer network [1]. HIDS and NIDS can also be used in combination. My project focuses on network-based intrusion detection systems; in this project we analyse TCP flags to detect intrusions.

Techniques Used in Existing IDSs. In the section above we discussed the general existing types of intrusion detection system. Now the question arises of how these intrusion detection systems detect intrusions. There are two major techniques used in each type of intrusion detection system to detect intruders: signature detection (or misuse detection) and anomaly detection.

Signature Detection or Misuse Detection. This technique, commonly called signature detection, first derives a pattern for each known intrusive scenario, which is then stored in a database [3]. These patterns are called signatures. A signature can be as simple as three failed logins, a pattern that matches a specific portion of network traffic, or a sequence of strings or bits [1].
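As a rough sketch of the idea (this code is an illustration, not part of the original report; the patterns are invented placeholders, not real attack strings), signature detection amounts to looking up captured payloads in a database of known byte patterns:

```python
# Hypothetical sketch of signature (misuse) detection: match traffic
# payloads against a database of known attack patterns. The signatures
# below are invented placeholders for illustration only.
SIGNATURES = {
    b"GET /../../etc/passwd": "path traversal attempt",
    b"\x90\x90\x90\x90":      "NOP sled (possible shellcode)",
}

def match_signatures(payload: bytes):
    """Return the names of all known signatures found in the payload."""
    return [name for pattern, name in SIGNATURES.items() if pattern in payload]

alerts = match_signatures(b"GET /../../etc/passwd HTTP/1.1")
print(alerts)  # ['path traversal attempt']
```

A scheme like this can only flag patterns already present in its database, which is exactly the limitation discussed next.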
The technique then tests the current behaviour of the subject against the stored signature database and signals an intrusion when a pattern matches. The main limitation of this technique is that it cannot detect new attacks whose signatures are unknown.

Anomaly Detection. In this technique the IDS develops a profile of the subject's normal behaviour (the norm profile), or a baseline of normal usage patterns. The subject of interest may be a host system, user, privileged program, file, computer network, etc. The technique then compares the observed behaviour of the subject with its norm profile and signals an intrusion when the subject's observed activity departs from its norm profile [3]. For this comparison, anomaly detection methods use statistical techniques, e.g. ANOVA, K-means, standard deviations, linear regression, etc. [2]. In my project, I am using the ANOVA statistic for anomaly detection. The anomaly detection technique can detect both known and new intrusions in an information system, provided there is a departure between the norm and observed profiles [3]. For example, in a denial-of-service attack, where intrusion occurs by flooding a server, the rate of events at the server is much higher than the event rate under normal operating conditions [3].

Issues and Challenges in IDSs. An intrusion detection system should recognise a substantial percentage of intrusions while keeping the false alarm rate at an acceptable level [4]. The major challenge for IDSs is the base-rate fallacy. The base-rate fallacy can be explained in terms of false positives and false negatives. A false positive occurs when there is no intrusion but the IDS reports one; a false negative occurs when there is an intrusion but the IDS does not detect it. Unfortunately, given the probabilistic nature of the problem and the overlap between the observed and training data, it is very difficult to maintain a high detection rate together with a low false alarm rate [4].
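The anomaly-detection comparison described above can be sketched in code. The example below is an illustration only, not the report's actual MATLAB implementation, and the per-second SYN counts are invented numbers; it computes the one-way ANOVA F statistic over three traffic windows:

```python
# Illustrative sketch: one-way ANOVA (F statistic) over per-second SYN
# counts from three traffic windows. Counts are made-up sample data.

def f_statistic(groups):
    """Return the one-way ANOVA F statistic for a list of sample groups."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (variation of group means around grand mean)
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (variation inside each group)
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))

baseline1 = [12, 15, 11, 14, 13, 12]   # normal traffic, window 1
baseline2 = [13, 12, 14, 15, 11, 13]   # normal traffic, window 2
suspect   = [45, 52, 48, 60, 55, 49]   # window under test

f = f_statistic([baseline1, baseline2, suspect])
print(f"F statistic: {f:.1f}")
```

When the observed window departs from the norm profile, the between-group variance dominates and F becomes large; an IDS would compare it against a critical value of the F distribution to decide whether to raise an alarm.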
According to a study of current intrusion detection systems, existing intrusion detection systems have not solved the problem of the base-rate fallacy [4].

Summary. An intrusion into an information system compromises the security of that system. A system called an intrusion detection system is used to detect such intrusions. The two major types of IDS are HIDS and NIDS. A host-based intrusion detection system mostly monitors events on the host computer, while a NIDS monitors the activity of the computer network. There are two approaches to intrusion detection in an IDS: anomaly detection and signature detection. Anomaly detection uses statistical methods to detect anomalies in the observed behaviour, while signature detection checks for known patterns in it. The base-rate fallacy is the major challenge for IDSs.

Chapter 3 Security Vulnerabilities and Threats in Networks

Introduction. In this chapter we discuss computer and network security. For computer security, related terms such as vulnerability, exploit, and threat are discussed as well. The chapter then focuses on the denial-of-service attack, the most dominant attack in the wild, and covers all aspects of it.

Computer Security. Since the early days of the internet, network attacks have been a difficult problem. As the economy, business, banks, organisations, and society become more dependent on the internet, network attacks pose a problem of huge significance. Computer security prevents attackers from achieving their objectives through unauthorised use of computers and networks [5]. According to Robert C. Seacord, security has developmental and operational elements [5]. Developmental security means developing secure software with a secure design and flawless implementation [5]. Operational security means securing the deployed systems and networks from attacks.
In computer security the following terms are most commonly used [5]. Security policy: a set of rules and practices, typically implemented by the network or system administrator, to protect a system or network from attacks. Security flaw: a software defect that poses a potential security risk. Vulnerability: a set of conditions through which a malicious user can implicitly or explicitly violate a security policy. Exploit: a set of tools, software, or techniques that takes advantage of a security vulnerability to breach an implicit or explicit security policy [5].

The terms information security and network security are often used interchangeably. However, this project focuses on intrusions in computer networks, so we discuss network security: the techniques used to protect data travelling on computer networks from attackers.

Network Security Issues. There are many issues involved in network security, but the following are the most common. Known vulnerabilities are too numerous, and new vulnerabilities are being discovered every day. In a denial-of-service attack, when a malicious user attacks the resources of a remote server, there is no typical way to distinguish bad requests from good ones. And there are vulnerabilities in the TCP/IP protocols themselves.

Denial-of-Service Attacks. A denial-of-service or distributed denial-of-service attack is an attempt to exhaust, disable, or make unavailable computer resources to their legitimate users. These resources may be network bandwidth, computing power, computer services, or operating system data structures. When the attack is launched from a single machine or network node it is called a denial-of-service attack. But nowadays the most serious threat in the wild is the distributed denial-of-service attack [4].
In a distributed denial-of-service attack, the attacker first gains access to a number of hosts throughout the internet, then uses these victims as launch pads, simultaneously or in a coordinated fashion, to launch the attack upon the target. There are two basic classes of DoS attacks: logic attacks and resource attacks. The Ping-of-Death, which exploits software flaws to degrade or crash a remote server, is an example of a logic attack. In resource attacks, on the other hand, the victim's CPU, memory, or network resources are overwhelmed by a large number of bogus requests. Because the remote server cannot differentiate bad requests from good ones, defending against attacks on resources is not possible. Various denial-of-service attacks have some special characteristics; Oleksii Ignatenko describes the characteristics of denial-of-service attacks as in Figure 1.

Figure 1: Denial-of-service attack characteristics

Attack type: a denial of service can be distributed (when it comes from many sources) or non-distributed (when it comes from only one source). Attack direction: the target may be network or system resources. Attack scheme: the attack can come directly from the malicious user's source, be reflected off other victims' systems, or be hidden. Attack method: the method is the vulnerability that allows the attack. Targeted attacks utilise vulnerabilities in protocols, software, and services, while the consumption method consumes all possible resources. Exploitive attacks take advantage of defects in the operating system.
Methods for Implementing Denial-of-Service Attacks. A denial-of-service attack can be implemented in many ways; the following are the most common implementation techniques: an attempt to flood a network, thereby stopping legitimate network traffic; an attempt to interrupt connections between two systems, thereby precluding access to a service; and an attempt to prevent a specific user from accessing a service. The flood method can be deployed in many ways, but the following are well known: the TCP-SYN flood, the ICMP flood, and the RST attack.

TCP-SYN Flood: To carry out a TCP-SYN flood, the attacker tries to establish connections to the server. Normally a client establishes a connection to a server through a three-way handshake. In the three-way handshake: (1) the client, or any sender, sends a TCP packet with the SYN flag set; (2) when the server, or receiver, receives the TCP packet, it sends a TCP packet with both the SYN and ACK bits set; (3) the client receives the SYN-ACK packet and sends an ACK packet to the server. The three-way handshake is illustrated in Figure 2.

Figure 2: Three-way handshake between client and server

This is called the three-way handshake of TCP connection establishment. In a SYN flood, the attacker sends a SYN packet to the server and the server responds with a SYN-ACK packet, but the attacker never sends the ACK packet. If the server does not receive the ACK packet from the client, it resends the SYN-ACK packet after waiting 3 seconds. If the ACK still does not arrive, the server sends another SYN-ACK after 6 seconds.
This doubling of the timeout continues for a total of 4 to 6 attempts (the exact number depends on the implementation of the TCP protocol on the server side) [8]. So in a SYN flood the attacker installs zombies on internet hosts and sends a huge number of SYN requests from spoofed IP addresses to the server, or to any host on the internet, consuming all of the server's memory and connection data structures. In this way the server becomes busy and is unable to accept requests from, or respond to, legitimate users.
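The retransmission timing described above can be sketched numerically. In this illustrative fragment (the 3-second initial timeout and five retransmissions are assumed values; real TCP stacks vary), each half-open connection occupies a backlog slot for the sum of the doubling timeouts:

```python
# Illustrative sketch: how long a half-open connection lingers when the
# final ACK never arrives. Initial timeout and retry count are assumed
# values; real TCP implementations differ.
INITIAL_TIMEOUT = 3   # seconds before the first SYN-ACK retransmission
RETRIES = 5           # number of SYN-ACK retransmissions

def half_open_lifetime(initial=INITIAL_TIMEOUT, retries=RETRIES):
    """Total seconds a half-open connection stays in the backlog,
    with the timeout doubling after each retransmission."""
    timeouts = [initial * 2 ** i for i in range(retries)]  # 3, 6, 12, 24, 48
    return sum(timeouts)

print(half_open_lifetime())  # 3 + 6 + 12 + 24 + 48 = 93 seconds
```

Under these assumptions each spoofed SYN holds a backlog entry for over a minute, so even a modest rate of forged requests can keep the listen queue full and deny service to legitimate clients.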

Thursday, September 19, 2019

Botticelli Essay example -- Biography

He used his paintbrush like a pen or a pencil to outline. He was more interested in making his paintings beautiful in a fantasy type of way. He died a lonely man, having done little or no painting in the last ten years of his life. Who was this famous artist? Botticelli. Thoughtful and clever, Botticelli painted many famous masterpieces. Botticelli's real name was Alessandro Filipepi. He was born in 1445 in Florence, Italy. This was the time of the Renaissance. Botticelli was the youngest of five children. He got his nickname when working with a goldsmith. The goldsmith named him Botticelli, meaning "little barrel". Many other people of the Renaissance said he had deep-set eyes and flowing locks. But they also said he was a jokester and a prankster to his friends ("WebMuseum" par 2). By the time he was 15, he had his own workshop to show off his work ("Historylink" par 2). In addition, when he was 15 years old he was already training with a very popular painter of the Renaissance. His name was Fra Filippo Lippi (Historylink). Fra Filippo Lippi taught him how to mix colors and how to paint pictures. In 1465 Botticelli made his own studio ("WebMuseum" par 3). In comparison, Botticelli and Fra Filippo Lippi are very similar: they both painted a picture beginning with "The Adoration of the…". Botticelli's picture, The Adoration of the Magi, is a painting of the birth of Christ. Lippi's picture, The Adoration of the Kings, is a picture of the Kings ("FactMonster" par 1). Botticelli spent most of his life in Florence. He painted many pictures of mythology. His most famous masterpiece was the Birth of Venus ("Artchive" par 2). He was devoted to painting pictures of mythological beings instead of religious subjects. That's what he was... ...elli made a big difference in Florence, Italy. He worked for the famous Medici family. The Medici family was very important in the Renaissance. They controlled the city of Florence and were very wealthy.
They valued him very much. Since Botticelli's paintings were known for their poetic feeling, they either told a story or showed a famous scene from a mythological or religious subject. The masterpieces never had anything to do with science or nature. Not all of the characters were real; they just had to stand for a purpose in the painting. Botticelli's master Fra Filippo Lippi impacted his life by getting him to start painting pictures. Without his assistance he would never have learned to paint any of his famous Renaissance masterpieces. He learned about mythological subjects and how to use decorative details. Lippi made him the gifted artist he was.

Wednesday, September 18, 2019

The Devlopment of Modern Africa Essay -- essays research papers fc

The Development of Modern Africa. There are over 40 countries in sub-Saharan Africa, and the wealth of natural resources and the prevalence of wealth in the northern segments of Africa have led many to speculate about equity and economic development in the sub-Sahara. Unfortunately, the progression of economic, political and social factors in this region has done little to improve overall conditions, and has instead demonstrated a consistent bias towards the government and the social elites that has impacted the chances of successful development in the region. Since the end of World War II, changes in the infrastructure, the political forces, and the capacity for collective action in many of these countries have underscored what some have described as the "Africa crisis" (Stryker, 1986). One of the major issues that still remains in this region is the history of development in the sub-Sahara, generally traced back to the history of British rule and the relinquishing of colonial control, which led to greater regionalization. But there was little in place in terms of expansion planning or economic development in the period following the end of the Second World War, and it can be argued that the struggle for economic development is linked to existing and maintained inequities, based both on social conditioning and political control, that have weakened the agrarian force and impacted the development of industrialization. During the 1980s, when many countries throughout the world were experiencing a successful pull away from years of recession, the countries of the African sub-Sahara were not impacted by this positive transformation, and instead it was posited that the decline in economic conditions would result in years of continued recession (Stryker, 1986).
A number of theorists have attributed this crisis to different components of the politics, the economic base, and the social perspectives, as well as to basic problems like the lowest worldwide life expectancy, the lowest nutritional and literacy rates, lack of access to medical care, safe water supplies, and support services, and high population growth coupled with the highest infant mortality rates in the world (Stryker, 1986). It has been recognized that of the 40-50 poorest countries of the world, most (2/3) are located in the sub-Saha... ...ility, the perception that reforms could somehow promote a major transformation within the varied communities of the sub-Sahara placed too great an emphasis on the process of development and too little emphasis on the impact that the division itself would have on existing communities.

Bibliography

Berry, Sara (1992, Summer). Hegemony on a shoestring: indirect rule and access to agricultural land. Africa, v62 n3, pp. 327(29).
Gyimah-Brempong, Kwabena (1992, May). Do African governments favor defense in budgeting? Journal of Peace Research, v29 n2, pp. 191(16).
Jaycox, Edward (1993, March). Structural adjustment spurs African development. Africa News, v38 n2-3, pp. 14(1).
Lonsdale, J.M. (1970). Nationalism and Traditionalism in East Africa. In Collins, R., Ed. Problems in the History of Colonial Africa, 1860-1960. Englewood Cliffs, NJ: Prentice-Hall, Inc.
Seitz, Steven (1991, January-April). The military in black African politics. Journal of Asian and African Studies, v26 n1-2, pp. 61(15).
Stryker, Richard (1986). Poverty, inequality, and development choices in contemporary Africa. In Martin, P. and O'Meara, P., Eds. Africa. Bloomington, IN: Indiana University Press.

Tuesday, September 17, 2019

Music Firms Want EU to Cut Off Pirates

The plan, backed by French President Sarkozy, asks Internet service providers to disconnect users who illegally download copyrighted music

By Leigh Phillips

With sales of compact discs across Europe in free-fall, the record industry has called on the EU to follow French president Nicolas Sarkozy's lead and force internet service providers to disconnect customers who illegally download music. "Up until now, ISPs have allowed copyright theft to run rampant on their networks, causing a massive devaluation of copyrighted music," said John Kennedy, the CEO of the International Federation of the Phonographic Industry (IFPI), the record industry trade association. "The time for action is now, from the EU and other governments." The IFPI believes the mood of indulging ISPs and their downloading customers is coming to an end. "2007 was the year ISP responsibility started to become an accepted principle," he said. "2008 must be the year it becomes reality." Last November, president Sarkozy backed an initiative in partnership with the record industry and internet providers that would see ISPs automatically disconnect customers who illegally download copyrighted material. "More than anyone else in 2007, our industry has to thank French President Nicolas Sarkozy and the chairman of FNAC [the France-based chain of record and electronics superstores], Denis Olivennes, for the change of mood," said Mr Kennedy. The Sarkozy agreement, announced in November, is the most significant milestone yet in the task of curbing piracy on the internet. The French president's move requires ISPs to disconnect customers using an automated system and to test filtering technologies. Mr Kennedy made his comments in an IFPI report on the state of the sector. Although there was a 40 percent increase in digital sales globally in 2007, according to the report, there was a 10 percent decline in sales of compact discs last year.
The report also praised government moves against illegal downloading in Sweden, Belgium, the UK, the US and Asia. Provided by EUobserver—For the latest EU related news BusinessWeek Europe January 28, 2008 1:04PM EST

Monday, September 16, 2019

Marco Polo’s Influence on Christopher Columbus Essay

Marco Polo's Travels formulated in the Europe of the fourteenth and fifteenth centuries a new perception of the Eastern world, a world just as advanced and sophisticated as that of the West. Yet another two centuries were needed for a significant change to take place; this was Christopher Columbus' voyage. For Christopher Columbus, Marco Polo's travelogue was a valuable and solid resource that contained the necessary details of the East. The geographical descriptions in his writing generated a basis for Columbus' scientific calculations for his expedition, and the explicit depictions of the luxury of Cipangu and Cathay, flawed though they were, created a strong motivation for Columbus. On the 12th of May 1492, Christopher Columbus, accompanied by the writings of Marco Polo, set sail to change history forever. Marco Polo's travelogue was the only written account to have enlightened the European world with details of the Eastern world. In the year 1254, when Marco Polo was born into a noble family of Venice, public knowledge of the East was close to nothing. Ever since the years of Alexander the Great, Europe had had scarce information about its neighboring civilizations. Although basic trade routes were present along the Silk Road, "no one in the West seems to have had any notion of the country from which it had come or those through which it had passed." Islamic countries that surrounded Europe, along with the Atlantic Ocean, created a natural barrier, isolating the Europeans from the rest of the world. Even the vigorous merchants of Venice, Genoa, and Constantinople could not penetrate beyond the Mediterranean and the Black Sea. "The religion and commerce of Islam were flourishing throughout that continent" after the first Crusades.
Due to this strong "Islamic curtain", the Europeans were unaware of the existence of the Mongol empire, gradually rising as one of the world's superpowers, until Marco Polo came back with fascinating stories after his service under the Great Khan. The seventeen years of service under Kublai Khan safely and conveniently provided Marco with a wide range of experiences on the Asian continent. The Polo brothers, Maffeo and Nicolo Polo (father and uncle of Marco Polo), had initially met with the Khan some years before they took Marco on their second journey to China. "Let me tell you next of the personal appearance of the Great Lord of Lords whose name is Kubilai Khan. He is a man of good stature, neither short nor tall but of moderate height. His limbs are well fleshed out and modeled in due proportion. His complexion is fair and ruddy like a rose, the eyes black and handsome, the nose shapely and set squarely in place." The Khan was a wise and brave man, and Marco being a master of four languages, and young and healthy as he was, the Khan appointed him to a high post in the administration. Marco was given a golden tablet in the shape of a tiger's head, "which granted Ch'ang Ch'un a free pass and the right of assistance everywhere in the Mongol Realm." With sufficient access, Marco was able to visit various places in Asia and gained an abundance of experience with its culture. He illustrates the geography, climate, people, and religions of the East in depth, even mentioning the recipe for Mongolian dried milk. Marco Polo's achievements were only completed after his return home, when he encountered Rustichello of Pisa, a romance writer who became his collaborator in putting his stories into a book. Two years before the death of Kublai Khan, the Polos were assigned their last mission: to escort the Mongol princess Kokachin to marry the Persian prince, and then to return home. Painstakingly, they accomplished their mission and arrived home in the winter of 1295.
Marco began a new life with the jewels and gold acquired on his journey. When a war between Venice and Genoa broke out, Marco was captured and imprisoned for a year in a Genoese prison. Here he met Rustichello, to whom Marco told the stories of his great journey. After his return home Marco, although Rustichello did most of the work, published his travelogue, Marco Polo's Travels. Marco's book remained more for entertainment purposes until the 1450s and 60s, when Johann Gutenberg invented the letterpress and catalyzed its spread. At first, many people were skeptical about his book. His writing contained many mentions of legends and myths that seemed quite exaggerated. Neither did Marco include any description of the Great Wall. Regardless of these controversies, his book became one of the first books to be massively published through Gutenberg's letterpress. Travels spread through Europe in no time. By the time all of Europe was shocked by his book, Marco was approaching his death, leaving the last words: "I have only told the half of what I saw!" Whether or not Marco's words were reliable was not an issue at this point. In the years following Marco's death, immense changes occurred in the minds of Europeans, including the perception of world geography, directly affecting Columbus' preparations. The TO map best represents the medieval understanding of the world. (Diagram attached at the back.) The circle, O, represents the world, and the branches of the T, the Don and the Nile. Asia fills the upper semicircle, and to the left and right of the upright section of the T, which represents the Mediterranean, lie Africa and Europe. In the center is Jerusalem, and at the top is the Earthly Paradise of Adam and Eve, believed at the time to be the source of great rivers such as the Tigris and the Euphrates. Images of Noah's Ark, the Tower of Babel, and others from the Bible can be found on the map.
As presented, the TO map signifies the primitive form of the world map before the years of Marco Polo. The world map evolved rapidly from the publishing of Marco's book to the time of Columbus, and the impact of Marco Polo's work is displayed in these maps. Among the numerous versions of different maps, "the first maps known to us… strongly influenced by Marco's book and which still remain to… [is]… the Catalan Atlas," drawn up by the Majorcan Jew Abraham Cresques at around 1380. Here is introduced for the first time India, in the form of a peninsula, along with images and lands of the Great Khan. The map also includes images of traditional legends of the area, and great resemblance can be found between them and those of Marco's book. Representations of the world grew bigger and wider until finally even the notion of a path westward to Asia was brought up. When the impact of Marco Polo started to take place, a physician of Florence by the name of Paolo Toscanelli played the role of transforming the ideas of Marco Polo into the scientific inspirations for Christopher Columbus. Toscanelli was one of Marco Polo's believers, who held Marco Polo's estimate of the length of Asia to be correct. He argued that, according to his calculations, "a voyage of 3000 miles from Lisbon to Cipangu and 500 miles from Lisbon to Quinsay" was possible. With this calculation, he urged men that an expedition in search of Japan, described as "most fertile in gold," should be organized. Among these men was the young and ambitious Christopher Columbus. The theories of Toscanelli stimulated the intellectual interest of Columbus, and soon Columbus was determined to find out more. Columbus wrote to Toscanelli asking him for more comprehensive information. Toscanelli replied with encouragement of Columbus' aspirations and a chart of calculations, which Columbus carried with him on his voyage. By this time, Columbus was determined to put his thoughts into action.
Although Columbus' calculations were carefully made, most of them were erroneous. One of his major errors was his misconception of the degree: he took the length of a degree to be 56⅔ Italian nautical miles ("the Italian nautical mile used by Columbus contained 1480 meters"). This was not his own idea, but that of the general public of his time. According to Henry Vignaud, he obtained his results "because he knew in advance what he wanted to find." Based on his degree and other elements, including the calculations of Toscanelli, Columbus' conclusion came out far from the truth: it put Tokyo on the meridian that runs through western Cuba, Chattanooga, Grand Rapids, and western Ontario. In other words, "he underestimated the size of the world by 25 percent." Yet, until his actual departure, he had no clue whatsoever of his mistakes or of the American continent. Marco Polo had provided Columbus with crucial information about the East, but Columbus did not know how much more there was to consider, such as the existence of another world in the West.

Fifteenth-century Europe was an age of exploration and discovery; interest in the Eastern world was increasing rapidly every day. Trade with the Indies, which referred to most of eastern Asia, flourished during the time of Columbus, especially in Portugal and Spain, where he spent most of his life. "The account of Polo's travels told how to buy spices from the East," and other goods such as silk, gold, silver, and perfumes were also carried by caravans across Asia to Constantinople and then redistributed through Europe. Although prices were high because of the long and burdensome process of shipping and handling, demand for these merchandises continued to rise as Europe's luxury and wealth also increased. Thus, the need for a new and shorter route for importing these valuables soon became evident, and repeated attempts were made to get around Africa to India.
Columbus, however, "decided that the African route was the hard way to the Indies." He was thinking of an easier and quicker way to reach the East: he proposed to travel west. His rather rash plan satisfied his contemporaries' desire for expansion. After Columbus made up his mind, his next task was to convince wealthy princes to provide the necessary equipment and money for his expedition. Unfortunately, Columbus was turned down by the Portuguese committee, even though he had gained a certain level of respect there as a merchant. He then moved to Spain and began his six years of persuasion. At first it was hard for Columbus to support his requests with solid evidence, so he turned to Marco Polo. Columbus used the tempting descriptions of Cipangu, today's Japan, in his first argument to the princes. By the time of Columbus, "The Travels of Marco Polo became one of the best-known tales in western Europe." One of the biggest issues with Marco Polo's book was whether or not its magnificent portrayal of Japan's luxury was true. According to Marco Polo, the wealth of no other civilization matched that of the Japanese:

They have gold in great abundance, because it is found there in measureless quantities… so much indeed that I can report to you in sober truth a veritable marvel concerning a certain palace of the ruler of the island. You may take it for a fact that he has a very large palace entirely roofed with fine gold. Just as we roof our houses or churches with lead, so this palace is roofed with fine gold.

Even the most stubborn princes gazed open-mouthed at the thought of such luxury. Certainly, the search for Cipangu sounded much more convincing after such descriptions. Another part of Columbus' argument was based on religious reasoning. The failure of the Crusades was a huge disgrace for the Christian ruling class of Europe, and many attempts were made to regain control of the Holy Land, which was then occupied by the Turks.
The Mongol empire, which Europeans still believed to exist long after its actual downfall, sounded like a strategically profitable partner. Horrific impressions faded away as Marco Polo's book offered benevolent descriptions of Kublai Khan and his subjects:

Now let me tell you something of the bounties that the Great Khan confers upon his subjects. For all his thoughts are directed towards helping the people who are subject to him, so that they may live and labor and increase their wealth.

Likewise, Europeans were shocked at the remarkably civilized qualities of the Mongols they had previously considered barbaric. In 1492, after six years of tenacious persuasion, King Ferdinand and Queen Isabella of Spain finally accepted Columbus' proposal. The end of Columbus' persuasion of princes only brought about the beginning of an arduous journey of exploration and a new world. Marco Polo's Travels acted as a basis for Christopher Columbus' achievement and the Age of Discovery. Columbus may have formulated a flawed theory of the world, but it was convincing enough for the princes who bought into it, and this surely could not have been done without the evidence found in Marco Polo's book. Without Marco Polo, there would have been no Columbus, and, furthermore, no America. Marco Polo's possibly false information brought about one of the biggest changes in history.