Sunday, August 23, 2020

Case Study of a Philosophical Argument of Francis Bacon Assignment

Case Study of a Philosophical Argument of Francis Bacon - Assignment Example

The basic structure of Bacon's theory can be summed up in his insistence that a good scientist should be neither an ant, mindlessly gathering data, nor a spider, spinning empty theories. A good scientist lies somewhere between the two, gathering data and then forming hypotheses and scientific facts from these observations of nature. This idea of Bacon's has been credited with helping to set in motion the vast scientific advances of the seventeenth century, essentially because the old methods did not rest on observation and reasoning. Bacon believed, as scientists do today, that science should follow definite systems and procedures. Experimentation is key, since it leads people towards the truth rather than towards something that merely reflects their own ideas and wishes. Truth is ultimately what we aim for in science today, and it seems odd that this should have been an original thought in the seventeenth century, yet Bacon's philosophy was one of the first to propose objective induction as a scientific method.

Even though this may sound obvious to the modern reader, philosophical arguments have been made both for and against Bacon's ideas. For example, the scientific method depends on observation, but there is also the point that the senses themselves are unreliable and can introduce bias, whether or not we free our minds from idols. It is hard to know whether our observations of nature and science are true, because of how the senses work; optical illusions are a genuine example of an argument against scientific inquiry of this kind. Nevertheless, it has to be said that there is no way of making any scientific observation other than through the senses, since they are all we have. All experiments rely on measurements, images or results that must be perceived in order to be recorded and turned into hypotheses. There is little else a scientist can do when it

Friday, August 21, 2020

How Does George Eliot bring about our sympathy with Silas Essay Example

How Does George Eliot Bring About Our Sympathy with Silas - Essay Example

What came with the loss may go when it is found. There are various elements to the pity Silas Marner receives, but in the day Silas Marner was written the world was a harsh place that turned on religion and was surrounded by no doubts about religion. Personally, I do not feel sorry for Silas Marner, and as we read through the book I became increasingly aware that I disliked it and was bored out of my mind. I had to make constant notes because I could not remember any of the book, I disliked it so much.

Thursday, August 13, 2020

The Lost Secret of Narcissism Essay Topics

The Lost Secret of Narcissism Essay Topics

The simplest way to make sense of this kind of paper is to understand the author's point of view. The arguments given to prove your point must be strong and convincing. With the help of good examples you will see the ideal structure of the essay, the right way of writing, and the author's ability to persuade the reader. Put simply, you will need time and practice to learn the finer points of structure and wording that impress the reader.

The War Against Narcissism Essay Topics

Writing an argumentative essay can be an art, in the sense that it requires a thorough understanding of the subject as well as skill. Students need to write their papers according to the teacher's instructions or in their preferred writing style. The college essay is among the most important parts of a college application. Writing an argumentative paper is complex work, as it demands many skills at the same time.

Occasionally, choosing a strong argumentative essay topic will be quite difficult. Some sample topics are given below. It is therefore essential to consider different college essay topics carefully. Given that many persuasive essays concern controversial subjects, before writing you may want to sit down and work out what your opinion on the topic really is.

In an argumentative essay you should present the arguments on both sides, so review the significant events and court decisions relevant to the subjects you are discussing. Inspiration for your own advertising or media argumentative essay topics is not hard to find. Illustration essays are written to explain a subject and supply interesting, vivid descriptions. It is important to choose debatable argumentative essay topics, because you need opposing points that you can counter with your own.

The Tried and True Method for Narcissism Essay Topics in Step by Step Detail

You can also attempt the essays offered in the first section of each of the tests in the Official Study Guide. In such a situation it is more helpful to find ready-made essays and use them as examples. It is certainly better to look into the subject of your essay yourself. There are several interesting and challenging Shakespeare essay topics to choose from.

If your paragraphs do not connect to one another, the whole idea of your essay may be lost. Most people tend to run away from politics, and from politics essays too, so it is very important to hold the reader's interest until the end of the essay, which I know can be difficult. Then, in order to write a good paper, one should read a free sample essay on Freud's narcissism on the web. If you wish to succeed, the first thing to do is to pick a suitable topic for your paper.

Don't worry: get a free full essay, which can serve as a guide for finishing your assignments. Stephen's essay is fairly effective. Interview papers are written on the basis of an interview carried out by the author.

Writing a great persuasive essay is not a simple task, but it is achievable. A writer cannot simply fill the space; they need to stick to distinct points. Let it feel as though you are genuinely enthusiastic about what you are writing about. Therefore, if you are someone who is confused about essay writing, it is advisable that before you begin writing any essay you go through the guidance.

Bridget's essay is very strong, but there are still a few small things that could be improved. The introduction is one of the essential elements of the essay, as it makes the first impression needed to hold the reader's interest over the course of the essay. There are plenty of good topics for illustration essays to choose from.

Some researchers believe that narcissism is self-inflicted, a kind of private self-esteem. Over time, narcissism has been conceptualized in various ways, depending partly on the instruments used to measure it. Primary narcissism leads to a sense of completeness along with the need to master tasks. Secondary narcissism, on the other hand, stems from the absence of love from caregivers.

Tuesday, August 4, 2020

Essay on War Samples - Writing an Interesting Essay

Essay on War Samples - Writing an Interesting Essay

Do you want to write an essay about war samples? If so, you will need to produce one that is interesting and informative, yet avoids trying to say everything about the war samples. This may not be easy to do if you want to keep your essay interesting.

It would be a good idea to start by reading up on the topic. You can use Google to look for samples and research the subject of the war sample. Once you have learned more about the subject, you can get an idea of how to go about writing an essay on war samples. This way, you can avoid saying too much and instead include the facts and figures that matter most to the issue at hand.

However, you may still find yourself needing to include everything you have learned about the war samples. This is where it helps to find an essay on war samples that has been written before. That way, you can familiarise yourself with the material and eventually work out what information is necessary. Then you can edit your essay so that it contains only the facts and figures from the war samples that you have read.

In addition, you can compile a list of sources that are relevant to your essay. To do this, use a search engine to look for relevant websites. It will take a little time, but once you have finished you will know more about the sources that are relevant to your essay.

Next, you should be able to write your essay without all the clutter. When writing an essay on war samples, avoid repeating information, as this can crowd out information that you actually need. Instead, focus on using facts and figures in your essay.

Another thing you can do is draw on research papers and questionnaires in your essay. Doing this can give your essay a more professional tone, and it will help you learn more about the topic and therefore write a better essay about war samples.

Lastly, make sure you state and summarise the most important points. To do this, first summarise the purpose of your essay in the introduction, then summarise your main points; the rest of the essay will follow this form.

If you follow these tips, you will be able to write an essay on war samples that is interesting and informative. You will then be able to earn a good grade, because you know what to say and what to leave out. So get started now and write an interesting essay.

Tuesday, July 21, 2020

Who Can Provide Business Writing Help?

Who Can Provide Business Writing Help?

Business writing help comes in various forms. Sometimes it comes from an outside source; at other times it is available as part of a full-service programme provided by an established company.

One of the main sources of business writing help is professional editors who use their expertise to write and edit business documents. Many people have had business writing help with their resumes, even if only to polish the content and grammar. It is important to get good practice in business writing, especially if you expect to do the same for a variety of projects in the future.

Another source of business writing help is other writers who can assist with the wide range of documents required for different projects, including white papers, letters, and brochures. It is sometimes helpful to hire people who specialise in such things, particularly if you have already done the creative work on the material before bringing in others.

It is also possible to do your own research to find the right person to work with. You may need to find professional references, talk to friends and family who have worked with that person, or look at online directories geared specifically towards business writing help. There are websites that let you post your own information and see who appears in the search results. Be sure to set aside some time to compare these different options and find the best one for you.

More formal types of business writing help are available from third parties, companies, and agencies dedicated to giving you the best of their experience and expertise. You can find these by contacting them directly. You should be able to get a quote for the job, and there may be discounts for bulk orders, which can give you large savings on your project.

When looking for someone to handle your particular piece of business writing, remember that your needs are unique. In some cases you will be asked to deal with a specific subject in the business world, while in others you may need to work with a group of people who are looking for solutions to their own particular problems. Make sure you know what you are getting into, and be sure to follow the right steps when choosing someone to work with.

One thing that can help separate the good providers from the rest is the company you choose to work with. Look for one with a good reputation and a record of solid results. A provider with an established reputation in the field is likely to have the best materials for your needs, but if it is not what you require for your project, you may want to consider taking your work elsewhere.

As you can see, business writing help comes in many different forms. A good business writing help provider can supply everything you need, from marketing strategies to website content. While many organisations have a wide range of needs, understand that they all share the same goal of increasing profit and satisfaction for their clients.

Saturday, July 11, 2020

Introduction and Conclusion - Tips for Writing Them

Introduction and Conclusion - Tips for Writing Them

Introductions and conclusions are both vital parts of a paper. A good introduction can be the difference between a strong finish and a poor one: it sets the mood and style for the rest of the paper. A good conclusion should tie everything together and give the reader a short summary of what has been said.

There are several reasons why these two sections matter. The introduction sets the tone for the rest of the paper, and it gives the writer a sense of the content that will guide them towards a satisfying ending. The last thing you want is to end the paper with something you do not agree with, and this is especially true of introductions and conclusions. It is therefore best to prepare for these sections by writing them well in advance.

This will tell you where you are going and what you are going to cover, and this is where an introduction can really help to get you on track. If you start the paper well and give it a proper introduction, a good conclusion will follow to round the work out. Many people who write regularly feel a little unsettled when they first start a new task, and that is exactly when they should prepare their introductions and conclusions.

Introductions carry the overall structure and the main ideas of the paper. They help set the theme and draw readers in by giving them an overview of what is being discussed.

Introductions need to cover the main topics, but they should also be complete with relevant facts, examples, and an appropriate source for each section. You will want to tell readers why the topics in the paper matter and why it is worth their time to read it. People who have not done the research may find the introduction of a paper irritating; with the facts at hand, the writer should be able to guide readers through the paper, help them reach the conclusion, and leave them feeling as if they have read an enjoyable article.

Writing a conclusion is another important step. Even if you have planned the paper well in advance, you should know how to conclude it. Many people also feel a little overwhelmed by this step, which is why you should give yourself enough time to prepare. When writing the conclusion, the best approach is to tie everything together so that the paper flows from one idea to the next. A good ending should also be able to stand on its own.

As you proceed through the essay, you should work on different levels. When writing introductions and conclusions, stay at the higher levels: give your readers something of value, and then move to the levels where you link things together in a well-written and cohesive way. You should keep doing this throughout the entire essay.

Thursday, July 9, 2020

The functions of an Insurance Firm - Free Essay Example

An insurance firm's core functions are to create insurance products and to remain profitable by charging premiums that exceed the firm's overall expenses and by making sound investment decisions that maximise returns under varied risk conditions. The premium charged depends on various underlying factors such as the number of policyholders, the number of claims, the amount of claims, and the health, age and gender of the policyholder. Some of these factors, such as aggregate loss claims and human mortality rates, strongly affect the premium calculation required to remain solvent. These factors therefore need to be modelled using large amounts of data, many simulations and complex algorithms in order to determine and manage risk.

In this dissertation we consider two important factors affecting premiums: aggregate loss claims and human mortality. We use theoretical simulations in R and the Danish loss insurance data to model aggregate claims. The Human Mortality Database (HMD)1 is used, and smoothed human mortality rates are computed to price life insurance products. In Chapter 2 we examine the concept of compound distributions for modelling aggregate claims and perform simulations of compound distributions using R packages such as MASS and actuar; we then analyse the Danish loss insurance data from 1980 to 1990 and fit appropriate distributions using customised, generically implemented R methods. In Chapter 3 we briefly explain the concepts of graduation, generalised linear models and smoothing techniques using P-splines. We obtain death and exposure data from the Human Mortality Database for two selected countries, Sweden and Scotland, and smooth the mortality rates using the MortalitySmooth package. We compare mortality rates across groups, such as males and females within a country, or total mortality across countries such as Sweden and Scotland, over a given range of ages or years. In Chapter 4 we look at various life insurance and pension products widely used in the insurance industry and construct life tables and commutation functions to compute annuity values. Finally, Chapter 5 provides the concluding comments of this dissertation.

Chapter 2 Aggregate Claim Distribution

2.1 Background

Insurance companies use numerous techniques to evaluate, on a day-to-day basis, the risk underlying their assets, products and liabilities. Purposes include:

Computation of premiums
Initial reserving to cover the cost of future liabilities
Maintaining solvency
Reinsurance agreements to protect against large claims

In general, the occurrence of claims is highly uncertain and affects each of the above, so modelling total claims is of great importance in assessing risk. In this chapter we define claim distributions and aggregate claim distributions and discuss some probability distributions that fit the model. We also perform simulations and goodness-of-fit tests on the data, and conclude the chapter by fitting an aggregate claim distribution to the Danish fire loss insurance data.

2.2 Modelling Aggregate Claims

The dynamics of the insurance industry affect the number of claims and the amount of claims differently. For instance, an expanding insurance business produces a proportional increase in the number of claims but has negligible or no impact on claim amounts.
Conversely, cost-control initiatives and technological innovations affect the amount of claims but have no effect on the number of claims. Consequently, the aggregate claim is modelled under the assumption that the number of claims and the amounts of the individual claims can be modelled independently.

2.2.1 Compound distribution model

We define the compound distribution as follows:

S - random variable denoting the total claims occurring in a fixed period of time.
X_i - the claim amount of the i-th claim.
N - non-negative random variable, independent of the X_i, denoting the number of claims occurring in the period.

Further, X_1, X_2, ... is a sequence of i.i.d. random variables with probability density function f(x) and cumulative distribution function F(x), with P(X_i > 0) = 1 for 1 <= i <= N. The aggregate claims2 S are then

S = X_1 + X_2 + ... + X_N (with S = 0 when N = 0),

with expectation and variance

E[S] = E[N] E[X_1],
Var(S) = E[N] Var(X_1) + Var(N) (E[X_1])^2.

Thus S, the aggregate claim, is computed using the Collective Risk Model3 and follows a compound distribution.

2.3 Compound Distributions for Aggregate Claims

As discussed in Section 2.2, S follows a compound distribution, where the number of claims N is the primary distribution and the claim amount X is the secondary distribution. In this section we describe the three main compound distributions widely used to model aggregate claims. The primary distribution is modelled with a non-negative integer-valued distribution such as the Poisson, binomial or negative binomial; the choice depends on the case at hand.

2.3.1 Compound Poisson distribution

The Poisson distribution describes the occurrence of rare events; the number of accidents per person, the number of claims per insurance policy and the number of defects found in manufacturing are real-world examples. Here the primary distribution N has a Poisson distribution with parameter λ, denoted N ~ P(λ). The probability density function, expectation and variance are

P(N = x) = e^(-λ) λ^x / x!, for x = 0, 1, 2, ...,
E[N] = λ, Var(N) = λ.

Then S has a compound Poisson distribution with parameters λ and F, denoted S ~ CP(λ, F), with

E[S] = λ E[X_1], Var(S) = λ E[X_1^2].

2.3.2 Compound Binomial distribution

The binomial distribution describes the number of successes in a fixed number of trials; the number of males in a company or the number of defective components in a random sample from a production process are examples. The compound binomial distribution is a natural choice for modelling aggregate claims when there is an upper limit on the number of claims in a given time period. Here the primary distribution N has a binomial distribution with parameters n and p, denoted N ~ B(n, p). The probability density function, expectation and variance are

P(N = x) = C(n, x) p^x (1 - p)^(n - x), for x = 0, 1, 2, ..., n,
E[N] = np, Var(N) = np(1 - p).

Then S has a compound binomial distribution with parameters n, p and F, denoted S ~ CB(n, p, F).

2.3.3 Compound Negative Binomial distribution

The compound negative binomial distribution also models aggregate claims. The variance of the negative binomial is greater than its mean, so the negative binomial can be preferred over the Poisson distribution when the data show greater variance than mean; in that case it provides a better fit. Here the primary distribution N has a negative binomial distribution with parameters k and p, denoted N ~ NB(k, p), with k > 0 and 0 < p < 1. The probability density function, expectation and variance are

P(N = x) = C(k + x - 1, x) p^k (1 - p)^x, for x = 0, 1, 2, ...,
E[N] = k(1 - p)/p, Var(N) = k(1 - p)/p^2.
Then S has a compound negative binomial distribution with parameters k, p and F, denoted S ~ CNB(k, p, F).

2.4 Secondary Distributions: Claim Amount Distributions

Section 2.3 defined the three compound distributions widely used for claim numbers. In this section we define the distributions generally used for the secondary distribution, the claim amounts. We use positively skewed distributions, including the Weibull distribution used frequently in engineering applications, as well as distributions such as the Pareto and lognormal which are widely used to study loss distributions.

2.4.1 Pareto distribution

The distribution is named after Vilfredo Pareto4, who used it to model the distribution of economic welfare; it is still used today to model income distributions in economics. The random variable X has a Pareto distribution with parameters α and λ, where α > 0 and λ > 0, denoted X ~ Pareto(α, λ). The probability density function, expectation and variance are

f(x) = α λ^α / (λ + x)^(α + 1), for x > 0,
E[X] = λ / (α - 1) for α > 1, Var(X) = α λ^2 / ((α - 1)^2 (α - 2)) for α > 2.

2.4.2 Lognormal distribution

The random variable X has a lognormal distribution with parameters μ and σ^2, where σ > 0, denoted X ~ LN(μ, σ^2); μ and σ^2 are the mean and variance of log(X). The lognormal distribution is positively skewed and is a very good distribution for modelling claim amounts. The probability density function, expectation and variance are

f(x) = 1 / (x σ sqrt(2π)) exp(-(log x - μ)^2 / (2σ^2)), for x > 0,
E[X] = exp(μ + σ^2/2), Var(X) = exp(2μ + σ^2)(exp(σ^2) - 1).

2.4.3 Gamma distribution

The gamma distribution is very useful for modelling claim amounts. It has shape parameter α and rate parameter λ. The random variable X has a gamma distribution with parameters α and λ, where α > 0 and λ > 0, denoted X ~ Gamma(α, λ). The probability density function, expectation and variance are

f(x) = λ^α x^(α - 1) e^(-λx) / Γ(α), for x > 0,
E[X] = α / λ, Var(X) = α / λ^2.

2.4.4 Weibull distribution

The Weibull distribution is an extreme-value distribution; because of the form of its survival function it is widely used for modelling lifetimes. The random variable X has a Weibull distribution with parameters c and λ, where c > 0 and λ > 0, denoted X ~ W(c, λ). The probability density function, expectation and variance are

f(x) = c λ x^(c - 1) exp(-λ x^c), for x > 0,
E[X] = λ^(-1/c) Γ(1 + 1/c), Var(X) = λ^(-2/c) (Γ(1 + 2/c) - Γ(1 + 1/c)^2).

2.5 Simulation of Aggregate Claims Using R

Section 2.3 discussed aggregate claims and the compound distributions used to model them. In this section we perform random simulation using R.

2.5.1 Simulation using R

The simulation of aggregate claims is implemented using packages such as actuar and MASS5. The generic R code in Programs/Aggregate_Claims_Methods.r, listed in Appendix 1, simulates randomly generated aggregate claims for any compound distribution. The following R code generates simulated aggregate claim data for a compound Poisson distribution with gamma claim amounts, CP(10, Gamma(1, 1)):

require(actuar)
require(MASS)
source("Programs/Aggregate_Claims_Methods.r")
Sim.Sample = SimulateAggregateClaims(ClaimNo.Dist = "pois",
    ClaimNo.Param = list(lambda = 10),
    ClaimAmount.Dist = "gamma",
    ClaimAmount.Param = list(shape = 1, rate = 1),
    No.Samples = 2000)
names(Sim.Sample)

The SimulateAggregateClaims method in Programs/Aggregate_Claims_Methods.r generates and returns the simulated aggregate samples along with their expected and observed moments. The simulated data can then be used for various tests, comparisons and plots.
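Programs/Aggregate_Claims_Methods.r itself is not reproduced in the text, so the following is only a minimal sketch of what a SimulateAggregateClaims-style function might look like; the name, argument list and return values mirror the call above but are assumptions rather than the author's implementation, and only the observed moments are returned.

# Hypothetical sketch, not the author's code: simulate S = X1 + ... + XN for any
# claim-number and claim-amount distributions available as r<dist>() in base R/actuar.
SimulateAggregateClaimsSketch <- function(ClaimNo.Dist = "pois",
                                          ClaimNo.Param = list(lambda = 10),
                                          ClaimAmount.Dist = "gamma",
                                          ClaimAmount.Param = list(shape = 1, rate = 1),
                                          No.Samples = 2000) {
  r.count  <- get(paste0("r", ClaimNo.Dist))      # e.g. rpois, rbinom, rnbinom
  r.amount <- get(paste0("r", ClaimAmount.Dist))  # e.g. rgamma, rlnorm, rweibull
  agg <- replicate(No.Samples, {
    n <- do.call(r.count, c(list(1), ClaimNo.Param))   # draw the number of claims N
    if (n == 0) 0 else sum(do.call(r.amount, c(list(n), ClaimAmount.Param)))  # S = X1+...+XN
  })
  list(AggregateClaims = agg, Obs.Mean = mean(agg), Obs.Variance = var(agg))
}

Sim.Sketch <- SimulateAggregateClaimsSketch(No.Samples = 2000)
Sim.Sketch$Obs.Mean; Sim.Sketch$Obs.Variance

For the CP(10, Gamma(1, 1)) example the theoretical moments are E[S] = λE[X_1] = 10 and Var(S) = λE[X_1^2] = 20, which is what Table 2.1 below reports.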
2.5.2 Comparison of Moments

The expected and observed moments are compared to check the correctness of the simulated data. The following R code returns the expected and observed mean and variance of the simulated data:

Sim.Sample$Exp.Mean; Sim.Sample$Exp.Variance
Sim.Sample$Obs.Mean; Sim.Sample$Obs.Variance

Table 2.1 below shows the simulated values for different sample sizes. Clearly the observed and expected moments are similar, and the difference between them shrinks as the number of samples increases.

Sample size         100       1000      10000     100000
Observed Mean       10.431    09.953    10.008    09.986
Expected Mean       10        10        10        10
Observed Variance   20.72481  19.692    20.275    19.810
Expected Variance   20        20        20        20

Table 2.1 Comparison of observed and expected moments for different sample sizes.

2.5.3 Histogram with fitted distribution curves

Histograms provide useful information on skewness, extreme points in the data and outliers, and can be compared graphically with the shapes of standard distributions. Figure 2.1 below shows the histogram of the simulated data compared with standard distributions, namely the Weibull, normal, lognormal and gamma. The function PlotAggregateClaimsData(Agg.Claims) plots the histogram along with the fitted standard distributions. The histogram is plotted with 50 breaks. The simulated data are then fitted using the fitdistr() function in the MASS package for various distributions: normal, lognormal, gamma and Weibull. The following R code shows how fitdistr() is used to compute the gamma parameters and plot the corresponding curve in Figure 2.1:

gamma = fitdistr(Agg.Claims, "gamma")
Shape = gamma$estimate[1]
Rate = gamma$estimate[2]
Scale = 1 / Rate
Left = min(Agg.Claims)
Right = max(Agg.Claims)
Seq = seq(Left, Right, by = 0.01)
lines(Seq, dgamma(Seq, shape = Shape, rate = Rate), col = "blue")

Figure 2.1 Histogram of simulated aggregate claims with fitted standard distribution curves.

2.5.4 Goodness of fit

A goodness-of-fit test compares the closeness of expected and observed values to decide whether it is reasonable to accept that the random sample comes from a standard distribution. It is a type of hypothesis test with hypotheses defined as follows:

H0: the data follow the standard distribution
H1: the data do not follow the standard distribution

The chi-square test is one way to test goodness of fit6. The test uses the histogram and compares it with the fitted density. The data are grouped into k intervals (breaks), with the breaks computed using quantiles. The observed frequency O_i for each interval is the histogram count, and the expected frequency E_i is the product of the sample size and the difference of the fitted c.d.f. at the interval endpoints. The test statistic is

X^2 = sum_i (O_i - E_i)^2 / E_i,

where O_i is the observed frequency and E_i the expected frequency for each of the k breaks. For the simulation we use 100 breaks to split the data into 100 equal cells and use histogram counts to group the data by observed value. Large values of X^2 lead to rejecting the null hypothesis. The test statistic follows a chi-square distribution with k - p - 1 degrees of freedom, where p is the number of parameters of the fitted standard distribution. The p-value is computed using 1 - pchisq(), and the distribution is accepted if the p-value is greater than the significance level α.
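PerformChiSquareTest() is used below but not listed in the text; as an illustration of the procedure just described, a minimal standalone version for a gamma fit might look like the following sketch (the function name and the 100 quantile breaks follow the description above, but the code is not the author's).

library(MASS)   # for fitdistr()

# Hypothetical sketch of the chi-square goodness-of-fit step for a gamma fit.
ChiSquareGammaSketch <- function(x, n.breaks = 100) {
  fit    <- fitdistr(x, "gamma")                             # fit the candidate distribution
  breaks <- quantile(x, probs = seq(0, 1, length.out = n.breaks + 1))
  breaks[1] <- -Inf; breaks[length(breaks)] <- Inf           # cover the full support
  obs <- as.vector(table(cut(x, breaks)))                    # observed cell frequencies O_i
  p   <- diff(pgamma(breaks, shape = fit$estimate["shape"],
                     rate  = fit$estimate["rate"]))          # fitted cell probabilities
  exp.freq <- length(x) * p                                  # expected frequencies E_i
  X2 <- sum((obs - exp.freq)^2 / exp.freq)                   # chi-square statistic
  df <- n.breaks - length(fit$estimate) - 1                  # k - p - 1 degrees of freedom
  list(X2 = X2, p.value = 1 - pchisq(X2, df), estimate = fit$estimate)
}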
The following R code computes the chi-square test using the generic PerformChiSquareTest() function:

Test.ChiSq = PerformChiSquareTest(Samples.Claims = Sim.Sample$AggregateClaims, No.Samples = N.Samples)
Test.ChiSq$DistName
Test.ChiSq$X2Val; Test.ChiSq$pvalue
Test.ChiSq$Est1; Test.ChiSq$Est2

Test statistic   Gamma     Normal     Lognormal   Weibull
X^2              125.466   160.2884   439         91
p-value          5.609*    0

Table 2.2 Chi-square statistics and p-values for the compound Poisson distribution.

The highest p-value signifies the best fit of the data to a standard distribution. In the simulation above, Table 2.2 shows that the Weibull distribution provides the best fit, with parameters shape = 2.348 and scale = 11.32; eye-balling the histogram confirms the same.

2.6 Fitting Danish Data

2.6.1 The Danish data source of information

In this section we use a statistical model and fit a compound distribution to compute aggregate claims from historical data. Fitting data to a probability distribution in R is an interesting exercise, and it is worth quoting "All models are wrong, some models are useful" - George E. P. Box; Norman R. Draper (1987). The previous sections explained fitting distributions, comparison of moments and goodness of fit for simulated data. The data source used here is the Danish data7, compiled from Copenhagen Reinsurance, which contains over 2000 fire loss claims recorded between 1980 and 1990. The data are adjusted for inflation to reflect 1985 values and are expressed in millions of Danish kroner (DKK). There are 2167 rows of data over 11 years. Grouping the data by year would give only 11 aggregate samples, which is insufficient to fit and plot a distribution, so the data are grouped month-wise, giving 132 samples. Figure 2.2 shows the time series of the monthly aggregate claims from 1980 to 1990, including the extreme loss values and their times of occurrence. There are no seasonal effects in the data: a two-sample t-test comparing summer and winter data shows no difference, so we conclude there is no seasonal variation.

Figure 2.2 Time series plot of the Danish fire loss insurance data, month-wise, 1980-1990.

The expectation and variance of the aggregate claims are 55.572 and 1440.7 respectively, and the expectation and variance of the claim numbers are 16.41667 and 28.2. As discussed in Section 2.3.3, the negative binomial distribution is a natural choice for modelling the claim numbers, since the variance is greater than the mean. The data are plotted and fitted to a histogram using the fitdistr() function in the MASS package.

2.6.2 Analysis of Danish data

We take the following steps to analyse and fit the Danish loss insurance data:

Obtain the claim numbers and aggregate loss claims month-wise.
As discussed in Section 2.6.1, choose the negative binomial as the primary distribution and use fitdistr() to obtain its parameters.
Conduct a chi-square goodness-of-fit test for the claim distribution on the aggregate claims and obtain the necessary parameters.
Simulate 1000 samples as in Section 2.5.1 and plot the histogram along with the fitted standard distributions as described in Section 2.5.3.
Perform the chi-square test to identify the best fit and obtain the distribution parameters.

2.6.3 R program implementation

We implement the Danish data fitting in R as follows; a sketch of the month-wise aggregation helper is given below, followed by the driver code.
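The helper ComputeAggClaimsFromData() lives in Programs/Aggregate_Claims_Methods.r and is not listed in the text; the following is only a hypothetical sketch of the month-wise aggregation it is described as performing, assuming Data/DanishData.txt holds one claim per row with a date column and a loss column (the column names Date and Loss are assumptions).

# Hypothetical sketch of the month-wise aggregation described in Section 2.6.1;
# not the author's ComputeAggClaimsFromData(). Assumes columns Date (YYYY-MM-DD)
# and Loss (claim size in millions of DKK), one claim per row.
ComputeAggClaimsSketch <- function(file) {
  danish <- read.table(file, header = TRUE, stringsAsFactors = FALSE)
  month  <- format(as.Date(danish$Date), "%Y-%m")                        # e.g. "1980-01"
  list(Agg.ClaimData = as.vector(tapply(danish$Loss, month, sum)),       # monthly aggregate claims
       Agg.ClaimNos  = as.vector(tapply(danish$Loss, month, length)))    # monthly claim counts
}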
The following R code reads the Danish data in Data/DanishData.txt, aggregates the claims month-wise, calculates the sample means and variances, and plots the histogram with the fitted standard distributions:

require(MASS)
source("Programs/Aggregate_Claims_Methods.r")
Danish.Data = ComputeAggClaimsFromData("Data/DanishData.txt")
Danish.Data$Agg.ClaimData = round(Danish.Data$Agg.ClaimData, digits = 0)
mean(Danish.Data$Agg.ClaimData)
var(Danish.Data$Agg.ClaimData)
Danish.Data$Agg.ClaimData
mean(Danish.Data$Agg.ClaimNos)
var(Danish.Data$Agg.ClaimNos)

Figure 2.3 Actual Danish fire loss data fitted with standard distributions, 132 samples.

In this case N has a negative binomial distribution with parameters k = 25.32 and p = 0.6067.

Test statistic   Gamma    Normal    Lognormal   Weibull
X^2              95.273   142.243   99.818      118
p-value          .53061   .0019     .40199      .072427

Table 2.3 Chi-square statistics and p-values for the Danish fire loss insurance data.

Based on the chi-square goodness-of-fit test in Table 2.3, we take the secondary distribution to be a gamma distribution with parameters shape = 3.6559 and scale = 15.21363. We then simulate 1000 aggregate claim samples as in Section 2.5.1. The plot and chi-square test values are given below; the generic function PerformChiSquareTest(), discussed in Section 2.5.4, is used to compute the X^2 values and p-values.

Figure 2.4 Histogram of simulated samples of the Danish data fitted with standard distributions.

Figure 2.4 above shows the simulated samples of the Danish data for a sample size of 1000, together with the different distribution curves fitted to the simulated data. The chi-square values are tabulated in Table 2.4 below.

Test statistic   Normal    Gamma    Lognormal   Weibull
X^2              123.32    84.595   125.75      115.50
p-value          .036844   .8115    .02641      .09699

Table 2.4 Chi-square statistics and p-values for the compound negative binomial distribution fitted to the Danish insurance loss data.

The results in Table 2.4 suggest that the best choice of model is the gamma distribution with parameters shape = 8.446 and rate = .00931.

Chapter 3 Survival Models and Graduation

In Chapter 2 we discussed aggregate claims and how they can be modelled and simulated using R. In this chapter we discuss one of the important factors behind the occurrence of claims: human mortality. Life insurance companies use this factor to model the risk arising from claims. We analyse and investigate the crude data in the Human Mortality Database for two countries, Scotland and Sweden, and use statistical techniques to smooth the data. The MortalitySmooth package is used for smoothing, with the Bayesian information criterion (BIC) used to determine the smoothing parameter, and we plot the data. Finally, we compare the mortality of the two countries over time.

3.1 Introduction

Mortality data are, in simple terms, records of the deaths of the members of a defined group. The data can be classified by variables such as sex, age, year, geographical location and species. In this chapter we use human data grouped by country population, sex, age and year. Human mortality in developed nations has improved significantly over the past few centuries.
This is attributed largely to improved standards of living and national health services, but in recent decades there has also been tremendous improvement in health care, which has strong demographic and actuarial implications. Here we use human mortality data to analyse mortality trends, compute life tables and price annuity products.

3.2 Sources of Data

The Human Mortality Database (HMD)1 is used to extract the deaths and exposure data, which are collected from national statistical offices. In this dissertation we look at two countries, Sweden and Scotland, for specific ages and years. The deaths and exposure data are downloaded from the HMD:

Sweden
Deaths: https://www.mortality.org/hmd/SWE/STATS/Deaths_1x1.txt
Exposure: https://www.mortality.org/hmd/SWE/STATS/Exposures_1x1.txt

Scotland
Deaths: https://www.mortality.org/hmd/GBR_SCO/STATS/Deaths_1x1.txt
Exposure: https://www.mortality.org/hmd/GBR_SCO/STATS/Exposures_1x1.txt

They are downloaded and saved as .txt files in the directory as Data/Countryname_deaths.txt and Data/Countryname_exposures.txt respectively. In general, data availability and formats vary over countries and time. The female and male death and exposure data come from the raw data; the total column in the data source is calculated as a weighted average based on the relative sizes of the male and female groups at a given time.

3.3 P-Spline Techniques for Smoothing Data

A well-known actuary, Benjamin Gompertz, observed that over a long stretch of human life the force of mortality increases geometrically with age. This was modelled for single years of life, and the Gompertz model is linear on the log scale. The Gompertz law8 states that the mortality rate increases in geometric progression, so the death rates can be written

mu_x = A * B^x, with A > 0 and B > 1,

and the linear model is fitted by taking logs on both sides:

log(mu_x) = a + bx, where a = log(A) and b = log(B).

The corresponding quadratic model is

log(mu_x) = a + bx + cx^2.

3.3.1 Generalised linear models and P-splines for smoothing data

Generalised linear models (GLMs) are an extension of linear models that allow models to be fitted to data following probability distributions such as the Poisson, binomial, and so on. If D_x is the number of deaths at age x and E_x is the central exposed to risk, then the maximum likelihood estimate of the force of mortality is D_x / E_x, and under the GLM, D_x follows a Poisson distribution, D_x ~ Poisson(E_x * mu_x), with log(mu_x) = a + bx.

We use P-spline techniques9 to smooth the data. As mentioned above, in the GLM the number of deaths follows a Poisson distribution, and we fit a quadratic regression using the exposure as the offset parameter. Splines are piecewise polynomials, usually cubic, joined so that their second derivatives agree at the join points; these joints are called knots, and a B-spline regression matrix is used to fit the data. A penalty function of linear, quadratic or cubic order penalises irregular behaviour of the fit through a difference penalty, and this penalty enters the log-likelihood together with a smoothing parameter λ; the penalised likelihood is then maximised to obtain the smoothed fit. The larger the value of λ, the smoother the function but the greater the deviance, so an optimal value of λ is chosen to balance deviance against model complexity. λ is selected using techniques such as the Bayesian information criterion (BIC) and the Akaike information criterion (AIC).
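As a concrete illustration of the Poisson GLM just described, the following base-R sketch fits the Gompertz model to simulated inputs; the values of A, B and the exposures are assumed for illustration only, and the dissertation's actual smoothing is done with the MortalitySmooth package rather than glm().

# Illustrative only: D_x ~ Poisson(E_x * mu_x) with log(mu_x) = a + b*x (Gompertz).
set.seed(1)
age <- 30:90
A <- 5e-5; B <- 1.09                        # assumed Gompertz parameters
Ex <- rep(1e4, length(age))                 # assumed central exposures
Dx <- rpois(length(age), Ex * A * B^age)    # simulated death counts

gompertz.fit <- glm(Dx ~ age, family = poisson(link = "log"), offset = log(Ex))
coef(gompertz.fit)        # estimates of a = log(A) and b = log(B)
exp(coef(gompertz.fit))   # back on the A, B scale

# The quadratic model adds an x^2 term: log(mu_x) = a + b*x + c*x^2
quad.fit <- glm(Dx ~ age + I(age^2), family = poisson(link = "log"), offset = log(Ex))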
The MortalitySmooth package in R implements the techniques described above. There are several choices when smoothing data with P-splines: the number of knots ndx, the degree of the P-spline (linear, quadratic or cubic) bdeg, and the smoothing parameter lambda. The methods in the MortalitySmooth package fit a P-spline model with equally spaced B-splines along the x axis. There are four possible methods in the package for choosing the amount of smoothing; BIC is the default used by MortalitySmooth. AIC minimisation is also available, but BIC gives better outcomes for large counts. In this dissertation we smooth the data using the default BIC option and, for comparison, a fixed lambda value.

3.4 The MortalitySmooth Package: R Implementation

This section describes the generic R implementation used to read the deaths and exposure data from the Human Mortality Database and smooth them with P-splines via the MortalitySmooth10 package. The following code loads the required objects:

require(MortalitySmooth)
source("Programs/Graduation_Methods.r")
Age <- 30:90; Year <- 1959:1999
country <- "scotland"; Sex <- "Males"
death = LoadHMDData(country, Age, Year, "Deaths", Sex)
exposure = LoadHMDData(country, Age, Year, "Exposures", Sex)
FilParam.Val <- 40
Hmd.SmoothData = SmoothedHMDDataset(Age, Year, death, exposure)
XAxis <- Year
YAxis <- log(fitted(Hmd.SmoothData$Smoothfit.BIC)[Age == FilParam.Val, ] / exposure[Age == FilParam.Val, ])
PlotHMDDataset(XAxis, log(death[Age == FilParam.Val, ] / exposure[Age == FilParam.Val, ]), MainDesc, Xlab, Ylab, legend.loc)
DrawlineHMDDataset(XAxis, YAxis)

The MortalitySmooth package is loaded, and the generic methods for graduation smoothing are available in Programs/Graduation_Methods.r. A step-by-step description of the code follows.

Step 1: Load Human Mortality Database data

Method name: LoadHMDData
Description: Returns a matrix of dimension m x n, with m the number of ages and n the number of years, formatted for use with the Mortality2Dsmooth function.
Implementation: LoadHMDData(Country, Age, Year, Type, Sex)
Arguments:
Country - name of the country for which data are loaded. If the country is Denmark, Sweden, Switzerland or Japan, the SelectHMDData function of the MortalitySmooth package is called internally.
Age - vector giving the rows of the matrix object. There must be at least one value.
Year - vector giving the columns of the matrix object. There must be at least one value.
Type - specifies the type of data to be loaded from the Human Mortality Database; either "Deaths" or "Exposures".
Sex - optional filter used when loading the data into the matrix; one of "Males", "Females" or "Total", with default "Total".
Details: The method LoadHMDData in Programs/Graduation_Methods.r reads the files available in the Data directory and loads the deaths or exposures for the given parameters. The data can be filtered by country, age, year, type ("Deaths" or "Exposures") and sex.

Figure 3.1 Format of the matrix objects death and exposure for Scotland, with ages 30 to 90 and years 1959 to 1999.

Figure 3.1 shows the format of the death and exposure objects: a matrix with ages in rows and years in columns. The MortalitySmooth package functions only work for the specific countries listed in the package, namely Denmark, Switzerland, Sweden and Japan.
The data for these four countries can be loaded directly with the SelectHMDData() function available in the MortalitySmooth R package. LoadHMDData checks the value of the country variable: if it is one of the four, SelectHMDData() is used; otherwise the customised generic function is called to return the data objects. The format of the returned matrix object is exactly the same in both cases.

Step 2: Smooth the HMD dataset

Method name: SmoothedHMDDataset
Description: Returns a list of smoothed objects, based on BIC and on a fixed lambda, each a matrix of dimension m x n with m the number of ages and n the number of years. These objects are formatted for use with the Mortality2Dsmooth() function and are customised for mortality data only. The Smoothfit.BIC and Smoothfit.fitLAM objects are returned along with the fitted fitBIC.Data values.
Implementation: SmoothedHMDDataset(Xaxis, YAxis, ZAxis, Offset.Param)
Arguments:
Xaxis - vector of abscissa values passed to Mortality2Dsmooth in the MortalitySmooth package; here, the age vector.
Yaxis - vector of ordinate values passed to Mortality2Dsmooth; here, the year vector.
ZAxis - matrix of count responses passed to Mortality2Dsmooth; here, the death matrix, whose dimensions must correspond to the lengths of XAxis and YAxis.
Offset.Param - matrix of prior known values to be included in the linear predictor when fitting the 2D data.
Details: The method SmoothedHMDDataset in Programs/Graduation_Methods.r smooths the data based on the death and exposure objects loaded in Step 1. Age, year and death are passed as the x axis, y axis and z axis respectively, with exposure as the offset parameter. These parameters are fitted internally by the Mortality2Dsmooth function of the MortalitySmooth package.

Step 3: Plot the smoothed data

Method name: PlotHMDDataset
Description: Plots the smoothed object with user-supplied information such as axes, legend, axis scales and main description.
Implementation: PlotHMDDataset(Xaxis, YAxis, MainDesc, Xlab, Ylab, legend.loc, legend.Val, Plot.Type, Ylim)
Arguments:
Xaxis - vector of x-axis values; here, age or year depending on the request.
Yaxis - vector of y-axis values; here, smoothed log mortality values filtered for a particular age or year.
MainDesc - main caption describing the plot.
Xlab - x-axis label.
Ylab - y-axis label.
legend.loc - customised legend location; can take values "topright" or "topleft".
legend.Val - customised legend descriptions; a vector of strings.
Plot.Type - optional value to change the plot type; the default is the standard plot type, and if the value equals 1 a line figure is plotted.
Ylim - optional value setting the height of the y axis; by default the maximum of the y values.
Details: The generic method PlotHMDDataset in Programs/Graduation_Methods.r plots the smoothed fitted mortality values with options customised by the user. The generic method DrawlineHMDDataset in Programs/Graduation_Methods.r draws the fitted line and is usually called after PlotHMDDataset.

3.5 Graphical Representation of the Smoothed Mortality Data

In this section we look at graphical representations of the mortality data for the selected countries, Scotland and Sweden.
The generic program discussed in Section 3.4 is used to produce the plots from user inputs.

Log mortality of smoothed data vs. actual fit for Sweden

Figure 3.3 Left panel: Year vs. log(mortality) for Sweden at age 40, years 1945 to 2005. The points represent the real data; the red and blue curves are the smoothed fits for BIC and lambda = 10000 respectively. Right panel: Age vs. log(mortality) for Sweden in 1995, ages 30 to 90. The points represent the real data; the red and blue curves are the smoothed fits for BIC and lambda = 10000 respectively.

Figure 3.3 shows the smoothed mortality plotted against the actual data for Sweden, by age and by year. The actual data are shown as points, and the red and blue curves are the BIC and fixed-lambda smooths; the MortalitySmooth package smooths the data in these two ways, with BIC as the default and lambda = 10000 as the fixed value.

Log mortality of smoothed data vs. actual fit for Scotland

Figure 3.4 Left panel: Year vs. log(mortality) for Scotland at age 40, years 1945 to 2005. The points represent the real data; the red and blue curves are the smoothed fits for BIC and lambda = 10000 respectively. Right panel: Age vs. log(mortality) for Scotland in 1995, ages 30 to 90. The points represent the real data; the red and blue curves are the smoothed fits for BIC and lambda = 10000 respectively.

Figure 3.4 shows the smoothed mortality plotted against the actual data for Scotland, by age and by year. The actual data are shown as points, and the red and blue curves are the BIC and lambda = 10000 smooths.

Log mortality of females vs. males for Sweden

Figure 3.5 below shows the mortality rates for males and females in Sweden, by age and by year. The left panel reveals that male mortality has been higher than female mortality over the years, with a sharp increase in male mortality from the mid 1960s until the late 1970s. Life expectancy for Swedish males in 1960 was 71.24 years versus 74.92 for women, and over the following decade it rose to 77.06 for women but only 72.2 for men, which explains the trend11. The right panel shows that male mortality exceeds female mortality in 1995; the male-to-female sex ratio is 1.06 at birth, falling to 1.03 at ages 15-64 and 0.79 at 65 and above, which is consistent with mortality increasing more for Swedish males than females12.

Figure 3.5 Left panel: Year vs. log(mortality) for Sweden at age 40, years 1945 to 2005. The red and blue points represent the real data for males and females respectively; the red and blue curves are the BIC smooths for males and females. Right panel: Age vs. log(mortality) for Sweden in 2000, ages 25 to 90. The red and blue points represent the real data for males and females respectively; the red and blue curves are the BIC smooths for males and females.

Log mortality of females vs. males for Scotland

The left panel of Figure 3.6 shows a consistent fall in mortality rates, but male mortality at age 40 has exceeded female mortality by a steadily widening margin since the mid 1950s.
The right panel of Figure 3.6 shows that male mortality exceeds female mortality in 1995; the male-to-female sex ratio is 1.04 at birth, falling to 0.94 at ages 15-64 and 0.88 at 65 and above, which is consistent with mortality increasing more for Scottish males than females13.

Figure 3.6 Left panel: Year vs. log(mortality) for Scotland at age 40, years 1945 to 2005. The red and blue points represent the real data for males and females respectively; the red and blue curves are the BIC smooths for males and females. Right panel: Age vs. log(mortality) for Scotland in 2000, ages 25 to 90. The red and blue points represent the real data for males and females respectively; the red and blue curves are the BIC smooths for males and females.

Log mortality of Scotland vs. Sweden

The left panel of Figure 3.7 shows that mortality rates in Scotland are higher than in Sweden. Swedish mortality has fallen consistently since the mid 1970s, whereas Scottish mortality, after falling for a period, has begun to trend upward again, which could be attributed to changes in living conditions.

Figure 3.7 Left panel: Year vs. log(mortality) for Sweden and Scotland at age 40, years 1945 to 2005. The red and blue points represent the real data for Sweden and Scotland respectively; the red and blue curves are the BIC smooths for Sweden and Scotland. Right panel: Age vs. log(mortality) for Sweden and Scotland in 2000, ages 25 to 90. The red and blue points represent the real data for Sweden and Scotland respectively; the red and blue curves are the BIC smooths for Sweden and Scotland.
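For reference, the crude (unsmoothed) points behind a male/female comparison like Figure 3.6 can be reproduced with the LoadHMDData() loader described in Step 1; the sketch below is an illustration under the assumption that LoadHMDData returns the age-by-year matrices shown in Figure 3.1, and the colours and legend text are arbitrary choices rather than those used in the dissertation's figures.

# Crude log mortality at age 40 for Scotland, males vs. females (illustrative sketch).
Age <- 30:90; Year <- 1959:1999
d.m <- LoadHMDData("scotland", Age, Year, "Deaths", "Males")
e.m <- LoadHMDData("scotland", Age, Year, "Exposures", "Males")
d.f <- LoadHMDData("scotland", Age, Year, "Deaths", "Females")
e.f <- LoadHMDData("scotland", Age, Year, "Exposures", "Females")
sel <- Age == 40                                   # follow age 40 across calendar years
plot(Year, log(d.m[sel, ] / e.m[sel, ]), col = "red", pch = 16,
     xlab = "Year", ylab = "log(mortality)",
     main = "Scotland, age 40: crude log mortality by sex")
points(Year, log(d.f[sel, ] / e.f[sel, ]), col = "blue", pch = 16)
legend("topright", legend = c("Males", "Females"), col = c("red", "blue"), pch = 16)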