Sunday, January 26, 2020

The Reinsurance Expected Loss Cost Formula

The reinsurance expected loss cost is given by:

Formula: Reinsurance Expected Loss Cost
RELC = ELCF x PCPLR x RCF x PCP

where
ELCF is the excess loss cost factor (as a percentage of the total loss cost),
PCP is the primary company (subject) premium,
PCPLR is the primary company permissible loss ratio (including any loss adjustment expenses covered as part of loss), and
RCF is the rate correction factor, the reinsurer's adjustment for the estimated adequacy or inadequacy of the primary rates.

Given that the coverage of this treaty is per occurrence, the manual excess rates must also be loaded for the clash exposure. In order to determine the reinsurer's excess share, ALAE is added to each claim, so claims from policy limits below the attachment point can still enter the excess layer. The reinsurer may have its own data describing the bivariate distribution of indemnity and ALAE, or such information can be obtained from ISO or, outside the United States, from a similar organization. With these data the reinsurer can construct increased limits tables with ALAE added to the loss instead of residing entirely in the basic limits coverage. A simpler alternative is to adjust the manual increased limits factors so that they account for the addition of ALAE to the loss. A basic way of doing this is to assume that the ALAE for each claim is a deterministic function of the claim's indemnity amount, which means adding exactly γ% to each claim value for the range of claim sizes near the layer of interest. This γ factor is smaller than the overall ratio of ALAE to ground-up indemnity loss, because much of the total ALAE relates to small claims or claims closed with no indemnity.

Assumption: when ALAE is added to loss, every claim with ground-up indemnity greater than $500,000 = $600,000/(1+γ) enters the layer $1,400,000 excess of $600,000, and the loss amount in the layer reaches $1,400,000 when the ground-up indemnity reaches $1,666,667 = $2,000,000/(1+γ). From this, the standard increased limits factors can be modified to account for ALAE added to the loss. In this liability context, the RELC formula can be used with PCP as the basic limits premium and PCPLR as the primary company permissible basic limits loss ratio.

Assumption: given the clash exposure, an overall loss loading of δ% is sufficient to adjust the loss cost for this layer predicted from the stand-alone policies. ELCF then expresses the excess loss in the layer $1,400,000 excess of $600,000 arising from each policy limit, plus its contribution to the clash losses, as a percentage of the basic limits loss arising from the same policy limit. The formula for ELCF evaluated at a policy limit Lim is as follows:

Formula: Liability ELCF for ALAE Added to Indemnity Loss
ELCF(Lim) = 0, if (1+γ) x Lim <= AP
ELCF(Lim) = (1+δ) x (1+γ) x [ILF(Lim) - ILF(AP/(1+γ))], if (1+γ) x Lim > AP

where
Attachment point AP = $600,000
Reinsurance limit RLim = $1,400,000
Clash loading δ = 5%
Excess ALAE loading γ = 20%
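As a quick illustration, here is a minimal Python sketch of this ELCF calculation (an illustration only, not part of the original presentation); it simply applies the piecewise formula above to the hypothetical ILF values shown in Table 2 below.

```python
# Sketch: excess loss cost factors (ELCF) when ALAE is added to indemnity loss.
# The ILF values are the hypothetical factors from Table 2 (no risk load, no ALAE);
# the piecewise form follows the formula above and is illustrative only.

AP, RLIM = 600_000, 1_400_000        # attachment point and reinsurance limit
GAMMA, DELTA = 0.20, 0.05            # excess ALAE add-on and clash loading

ILF_TABLE = {                        # policy limit -> ILF (w/o risk load, w/o ALAE)
    200_000: 1.0000,
    500_000: 1.2486,
    600_000: 1.2942,
    1_000_000: 1.4094,
    1_666_666: 1.5273,
    2_000_000: 1.5687,
}

def elcf(lim: int) -> float:
    """Excess loss cost factor for a policy limit, per the piecewise formula."""
    if (1 + GAMMA) * lim <= AP:      # the claim plus ALAE cannot reach the layer
        return 0.0
    base = ILF_TABLE[500_000]        # ILF at AP / (1 + gamma)
    return (1 + DELTA) * (1 + GAMMA) * (ILF_TABLE[lim] - base)

for lim in (600_000, 1_000_000, 2_000_000):
    print(f"ELCF({lim:>9,}) = {elcf(lim):.4f}")
# Reproduces the 0.0575, 0.2026 and 0.4033 factors quoted in the text.
```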
Table 2 displays this method for part of Allstate's exposure, using hypothetical increased limits factors (with both ALAE and risk load excluded) to calculate the excess loss cost factors.

Table 2: Excess Loss Cost Factors with ALAE Added to Indemnity Loss at a 20% Add-on and a Clash Loading of 5%

(1) Policy Limit in $    (2) ILF w/o risk load and w/o ALAE    (3) ELCF
200,000                  1.0000                                0
500,000                  1.2486                                0
600,000                  1.2942                                0.0575
1,000,000                1.4094                                0.2026
1,666,666                1.5273                                0.3512
2,000,000 or more        1.5687                                0.4033

Source: own calculation based on Patrik (2001)

Using this formula, ELCF($600,000) = 1.20 x 1.05 x (1.2942 - 1.2486) = 0.0575 and ELCF($2,000,000) = 1.20 x 1.05 x (1.5687 - 1.2486) = 0.4033.

Assumption 1: for this exposure, Allstate's permissible basic limits loss ratio is PCPLR = 70%.
Assumption 2: the reinsurer's evaluation indicates that the cedant's rates and offsets are adequate, so RCF = 1.00.

The reinsurer can now calculate the exposure rate RELC, the reinsurer's undiscounted estimate of the loss cost in the excess layer, as shown in Table 3.

Table 3: Reinsurance Expected Loss Cost (undiscounted)
Columns: (1) Policy Limit in $; (2) Estimated Subject Premium Year 2009 in $; (3) Manual ILF; (4) Estimated Basic Limit Loss Cost = 0.70 x (2)/(3); (5) ELCF; (6) RELC in $ = (4) x (5)

(1)                  (2)          (3)           (4)             (5)      (6)
Below 600,000        2,000,000    1.10 (avg.)   1,272,727.27    0        0
600,000              2,000,000    1.35          1,037,037.04    0.0575   59,629.63
1,000,000            2,000,000    1.50          933,333.33      0.2026   189,093.33
2,000,000 or more    4,000,000    1.75 (avg.)   1,600,000.00    0.3512   562,920.00
Total                10,000,000   n.a.          4,843,197.64    n.a.     811,642.96

Source: own calculation based on Patrik (2001)

An exposure loss cost can also be estimated using probability models of the claim size distributions. This directly gives the reinsurer the claim count and claim severity information that can be used in the simple risk theoretic model for the aggregate loss.

Assumption: the indemnity loss distribution underlying Table 2 is Pareto with q = 1.1 and b = 5,000. Then the simple model of adding 20% ALAE to the indemnity per occurrence changes the Pareto indemnity distribution to a new Pareto with q = 1.1 and b = 5,000 x 1.20 = 6,000. The reinsurer has to adjust the layer severity for clash, which can be done by multiplying by 1+δ = 1.05. The reinsurer can therefore calculate the expected excess claim size for each policy limit, and by dividing the RELC for each limit by the corresponding expected claim size, the reinsurer obtains estimates of the expected claim counts. This is done in Table 4.

The expected claim size is calculated as follows: first, the expected excess claim severity over the attachment point, for a policy limit λ, has to be calculated. This is done for λ = 600,000, for λ = 1,000,000 and for λ = 2,000,000 using the limited expected values of the ALAE-loaded Pareto distribution above the attachment point, loaded by 1+δ for clash; a sketch of this calculation follows Table 4. The reinsurer is then able to calculate the expected claim counts, as shown in Table 4.

Table 4: Excess Expected Loss, Claim Severity and Claim Count

(1) Policy Limit in $    (2) RELC in $    (3) Expected Claim Size in $    (4) Expected Claim Count (2)/(3)
600,000                  59,629.63        113,928                         0.523
1,000,000                189,093.33       423,164                         0.447
2,000,000 or more        562,920.00       819,557                         0.687
Total                    811,642.96       1,356,649                       1.68

Source: own calculation based on Patrik (2001)

The total excess expected claim size for this exposure is $1,356,649.
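The following Python sketch shows one way to carry out the severity calculation referred to above. It assumes the claim size model just described (ALAE-loaded Pareto with q = 1.1 and b = 6,000, with each claim censored at (1+γ) times its policy limit) and takes the RELC figures from Table 3. It approximately reproduces the severities and claim counts in Table 4, but it is an illustration under those assumptions rather than the original worked calculation.

```python
# Sketch: expected excess claim severity and claim count per policy limit.
# Assumes the ALAE-loaded claim size is Pareto with q = 1.1, b = 6,000 and that
# each claim is censored at (1 + gamma) * policy limit, as described above.

Q, B = 1.1, 6_000                    # ALAE-loaded Pareto parameters
AP = 600_000                         # attachment point
GAMMA, DELTA = 0.20, 0.05            # ALAE add-on and clash loading

def lev(m: float) -> float:
    """Limited expected value E[Y; m] of the Pareto(Q, B)."""
    return B / (Q - 1) * (1 - (B / (B + m)) ** (Q - 1))

def survival(x: float) -> float:
    """P(Y > x) for the Pareto(Q, B)."""
    return (B / (B + x)) ** Q

def layer_severity(policy_limit: float) -> float:
    """Expected occurrence size in the layer, given that the layer is hit."""
    cap = (1 + GAMMA) * policy_limit
    return (1 + DELTA) * (lev(cap) - lev(AP)) / survival(AP)

relc = {600_000: 59_629.63, 1_000_000: 189_093.33, 2_000_000: 562_920.00}

for lim, loss_cost in relc.items():
    sev = layer_severity(lim)
    print(f"{lim:>9,}: severity ~ {sev:,.0f}, expected claim count ~ {loss_cost / sev:.3f}")
# Approximately reproduces 113,928 / 423,164 / 819,557 and 0.523 / 0.447 / 0.687.
```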
If independence of claim events across all of the exposures can be assumed, the reinsurer can also obtain overall estimates of the excess expected occurrence (claim) size and the expected occurrence (claim) count. We now turn to the experience rating.

Step 3: Gather and reconcile primary claims data segregated by major rating class groups.

As in the example of the property quota share treaties, the reinsurer needs the claims data separated in the same way as the exposure data, and the reinsurer also wants some history of the individual large claims. The reinsurer usually receives information on all claims greater than one-half of the proposed attachment point, but it is important to receive as much data as possible.

Assumption: a claims review has been performed and the reinsurer received a detailed history for each known claim larger than $100,000 occurring in 2000-2010, evaluated at 12/31/00, 12/31/01, ..., 12/31/09 and 6/30/10.

Step 4: Filter the major catastrophic claims out of the claims data.

The reinsurer wants to identify clash claims and significant mass tort claims. By separating out the clash claims, the reinsurer can estimate their size and frequency and how they relate to the non-clash claims. These statistics should be compared to the values the reinsurer knows from other cedants, giving a better approximation for the δ loading.

Step 5: Trend the claims data to the rating period.

As with the example for the property quota share treaties, the trending should allow for inflation and also for other changes in the exposure (e.g. higher policy limits) that may affect the loss potential; unlike with proportional coverage, however, this step cannot be skipped. The reason is the leveraged effect that inflation has upon excess claims: a constant inflation rate increases the aggregate loss above any attachment point faster than the aggregate loss below it, because claims grow into the excess layer while their value below is capped at the attachment point. Each ground-up claim value at each evaluation, including ALAE, is trended from the year of occurrence to 2011. For example, consider the treatment of a 2003 claim in Table 5.

Table 5: Trending an Accident Year 2003 Claim
Columns: (1) Evaluation Date; (2) Value at Evaluation in $; (3) Trend Factor; (4) 2011-Level Value in $; (5) Excess Amount in $

(1)         (2)        (3)     (4)        (5)
12/31/03    0          1.62    0          0
12/31/04    0          1.62    0          0
12/31/05    250,000    1.62    405,000    0
12/31/06    250,000    1.62    405,000    0
12/31/07    300,000    1.62    486,000    0
12/31/08    400,000    1.62    648,000    48,000
12/31/09    400,000    1.62    648,000    48,000
06/30/10    400,000    1.62    648,000    48,000

Source: own calculation based on Patrik (2001)

A single trend factor is used in this example because the trend affects claim values according to the accident date, not the evaluation date. Trending the policy limits is a delicate issue: if a 2003 claim on a policy with a limit below $500,000 inflates to above $600,000 (plus ALAE), will the policy limit sold in 2011 be greater than $500,000? Over long periods of time, policy limits do appear to change with inflation, so the reinsurer should over time, if possible, receive information on Allstate's policy limit distributions.
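A small Python sketch of this trending step, using only the figures from Table 5, shows how the trended values translate into amounts in the layer:

```python
# Sketch: trending historical claim evaluations to the 2011 rating level and
# computing the amount in the layer $1,400,000 xs $600,000 (values from Table 5).

AP, RLIM = 600_000, 1_400_000
TREND = 1.62                                   # single accident-year 2003 trend factor

evaluations = {                                # evaluation date -> reported value
    "12/31/05": 250_000, "12/31/07": 300_000, "12/31/08": 400_000, "06/30/10": 400_000,
}

for date, value in evaluations.items():
    trended = value * TREND
    excess = min(max(trended - AP, 0), RLIM)   # layer loss, capped at the reinsurance limit
    print(f"{date}: trended {trended:,.0f}, excess {excess:,.0f}")
# 400,000 trends to 648,000, of which 48,000 falls in the excess layer.
```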
Step 6: Develop the claims data to settlement values.

The next step is to construct historical accident year development triangles for each type of large claim from the data produced in column (5) of Table 5. Typically, all claims are combined by major line of business. The loss development factors are then estimated and applied to the excess claims data using standard methods. To check for reasonableness, the development patterns estimated from Allstate's data should also be compared with the reinsurer's own expectations for comparable coverages, based on its own historical data. Considering the claim in Table 5, only $48,000 is over the attachment point, and only at the fifth development point.

Assumption: our triangle looks like Table 6.

Table 6: Trended Historical Claims in the Layer $1,400,000 Excess of $600,000 (in $1,000s)

Acc. Year        Age 1     Age 2    Age 3    ...    Age 9    Age 10          Age 10.5
2000             0         90       264      ...    259      351             351
2001             0         0        154      ...    763      798
...              ...       ...      ...      ...    ...      ...             ...
2008             77        117      256
2009             0         0
2010             0
ATA              4.336     1.573    1.166    ...    1.349    n.a.            n.a.
ATU              15.036    3.547    2.345    ...    1.401    1.050 = tail
Smoothed Lags    11.9%     28.7%    47.7%    ...    93.1%    95.3%           96.7%

Source: own calculation based on Patrik (2001)

where
ATA is the age-to-age development factor,
ATU is the age-to-ultimate development factor, and
Lag(t) is the percentage of loss reported at time t.

The selection of the tail factor of 1.05 is based on general information about the development of this type of exposure beyond ten years. By inverting the age-to-ultimate factors into the time lags of claim dollar reporting, the loss reporting view is transformed into that of a cumulative distribution function (CDF) on [0, ∞), which gives a better picture of the loss development pattern. It also allows the average (expected) lag and other moments to be considered and measured, and these are comparable to the moments of loss development patterns from other exposures. Given the chaotic development of excess claims, it is important to employ a smoothing technique: correctly estimated smoothed factors give more credible loss development estimates and allow the function Lag( ) to be evaluated at every positive time. The smoothing introduced in the last row of Table 6 is based on a Gamma distribution with a mean of 4 (years) and a standard deviation of 3.

It is also usually useful to analyze the large-claim paid data, if possible, both to estimate the payment patterns of the excess claims and to supplement the ultimate estimates based only on the reported claims used above. Sometimes the only data available are aggregate excess claims data, which here would be a historical accident year by development year aggregate loss triangle for the layer $1,400,000 excess of $600,000. Pricing without specific information about the large claims in such a situation is very risky, but it is occasionally done.
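As a rough illustration of this smoothing (not the exact fit used above, since the mapping of development ages to evaluation times is not spelled out in the text), a Gamma CDF with mean 4 years and standard deviation 3 can be evaluated directly:

```python
# Sketch: smoothing the reporting lag pattern with a Gamma CDF, as described in
# the text (mean 4 years, standard deviation 3). The development ages used below
# are an assumption, so the values only roughly track the smoothed lags in Table 6.
from scipy.stats import gamma

MEAN, SD = 4.0, 3.0
shape = (MEAN / SD) ** 2          # ~1.78
scale = SD ** 2 / MEAN            # 2.25, so shape * scale = MEAN

def lag(t: float) -> float:
    """Smoothed percentage of ultimate loss reported by time t (years)."""
    return gamma.cdf(t, shape, scale=scale)

for age in (1, 2, 3, 9, 10, 10.5):
    print(f"Lag({age:>4}) = {lag(age):.1%}")
# Age-to-ultimate factors then follow as ATU(t) = 1 / Lag(t).
```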
Step 7: Estimate the catastrophic loss potential.

Mass tort claims, such as pollution clean-up claims, distort the historical data and therefore need special treatment. As with the property coverage, the analysis of Allstate's exposures may allow us to predict a suitable loading for the future mass tort claim potential. As noted in Step 4, the reinsurer needs to identify the clash claims. With the clash claims separated out, the various parts of each occurrence are added together before the occurrence loss amount is compared with the attachment point and the reinsurance limit. If it is not possible to identify the clash claims, then the experience estimate of RELC has to include a clash loading based on judgment about the general type of exposure.

Step 8: Adjust the historical exposures to the rating period.

As in the example on the property quota share treaties, the historical exposure (premium) data has to be adjusted so that it is reasonably relevant to the rating period; the adjustment should therefore account for primary rate changes, underwriting changes and other changes in exposure that may affect the loss potential of the treaty.

Step 9: Estimate an experience expected loss cost, PVRELC, and, if desirable, a loss cost rate, PVRELC/PCP.

Assumption: we have trended and developed excess losses for all classes of Allstate's casualty exposure. The standard practice is to add the pieces up, as shown in Table 7.

Table 7: Allstate Insurance Company Casualty Business
Columns: (1) Accident Year; (2) On-level PCP in $; (3) Trended and Developed Excess Loss (estimated RELC) in $; (4) Estimated Loss Cost Rate in % = (3)/(2)

(1)               (2)          (3)       (4)
2002              171,694      6,714     3.91
2003              175,906      9,288     5.28
2004              178,152      13,522    7.59
2005              185,894      10,820    5.82
2006              188,344      9,134     4.85
2007              191,348      6,658     3.48
2008              197,122      8,536     4.33
2009              198,452      12,840    6.47
2010              99,500       2,826     2.84
Total             1,586,412    80,336    5.06
Total w/o 2010    1,486,912    77,510    5.21

Source: own calculation based on Patrik (2001)

The average loss cost rate for the eight years is 5.21%, where the data from 2010 was eliminated as it is still too green (undeveloped), and there does not seem to be a particular trend from year to year. Table 7 therefore gives us the experience-based estimate RELC/PCP = 5.21%, but this estimate has to be loaded for the existing mass tort exposure, and also for the clash claims if there was insufficient information on the clash claims in the claims data.
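A short Python sketch of this step, using the Table 7 figures, computes the loss cost rate by year and the eight-year average with the immature 2010 year excluded:

```python
# Sketch: experience loss cost rates by accident year (figures from Table 7) and
# the eight-year average excluding the too-green 2010 year.

onlevel_pcp = {2002: 171_694, 2003: 175_906, 2004: 178_152, 2005: 185_894,
               2006: 188_344, 2007: 191_348, 2008: 197_122, 2009: 198_452, 2010: 99_500}
excess_loss = {2002: 6_714, 2003: 9_288, 2004: 13_522, 2005: 10_820,
               2006: 9_134, 2007: 6_658, 2008: 8_536, 2009: 12_840, 2010: 2_826}

for year in onlevel_pcp:
    print(f"{year}: {excess_loss[year] / onlevel_pcp[year]:.2%}")

mature = [y for y in onlevel_pcp if y != 2010]          # drop the undeveloped year
rate = sum(excess_loss[y] for y in mature) / sum(onlevel_pcp[y] for y in mature)
print(f"Experience loss cost rate (2002-2009): {rate:.2%}")   # ~5.21%
```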
Step 10: Estimate a credibility loss cost or loss cost rate from the exposure and experience loss costs or loss cost rates.

The experience loss cost rate has to be weighed against the exposure loss cost rate calculated earlier. If there are several differing answers that cannot be further reconciled, the final answers for the $1,400,000 excess of $600,000 claim count and severity may be based on a credibility balancing of these separate estimates. The differences should not be ignored, however; they should be reflected in the estimates of parameter (and model) uncertainty, giving rise to more realistic measures of the variances and of the risk.

Assumption: a simple situation in which only the experience loss cost estimate and the exposure loss cost estimate are weighed together.

The six considerations for deciding how much weight should be given to the exposure loss cost estimate are:
1. the accuracy of the estimate of RCF, the primary rate correction factor, and thus the accuracy of the primary expected loss cost or loss ratio;
2. the accuracy of the predicted distribution of subject premium by line of business;
3. for excess coverage, the accuracy of the predicted distribution of subject premium by increased limits table for liability, by state for workers compensation, or by type of insured for property, within a line of business;
4. for excess coverage, the accuracy of the predicted distribution of subject premium by policy limit within increased limits table for liability, by hazard group for workers compensation, or by amount insured for property;
5. for excess coverage, the accuracy of the excess loss cost factors for coverage above the attachment point;
6. for excess coverage, the degree of potential exposure not contemplated by the excess loss cost factors.

The credibility of the exposure loss cost estimate decreases if there are problems with any of these six items.

Likewise, the six considerations for deciding how much weight can be given to the experience loss cost estimate are:
1. the accuracy of the estimates of claims cost inflation;
2. the accuracy of the estimates of loss development;
3. the accuracy of the subject premium on-level factors;
4. the stability of the loss cost, or loss cost rate, over time;
5. the possibility of changes in the underlying exposure over time;
6. for excess coverage, the possibility of changes in the distribution of policy limits over time.

The credibility of the experience loss cost estimate likewise decreases if there are problems with any of these six items.

Assumption: the credibility loss cost rate is RELC/PCP = 5.75%.

For each exposure category, a loss discount factor is estimated, based on the expected loss payment pattern for the exposure in the layer $1,400,000 excess of $600,000 and on a chosen investment yield. Most actuaries support the use of a risk-free yield, such as U.S. Treasuries for U.S. business, with a maturity approximating the average claim payment lag. Discounting is significant only for longer-tailed business; as a practical matter, for bond maturities between five and ten years it is better to use a single, constant fixed rate.

Assumption: the overall discount factor for the loss cost rate of 5.75% is RDF = 75%, which gives PVRELC/PCP = RDF x RELC/PCP = 0.75 x 5.75% = 4.31%, or PVRELC = 4.31% x $200,000,000 = $8,620,000.

In this example, Steps 11 and 12 are taken in reverse order.

Step 12: Specify values for RCR, RIXL, and RTER.

Assumption: the standard guidelines for this size and type of contract and this type of exposure specify RIXL = 5% and RTER = 15%. The reinsurance pure premium can then be calculated as RPP = PVRELC/(1-RTER) = $8,620,000/0.85 = $10,141,176, with an expected profit of RPP - PVRELC = $10,141,176 - $8,620,000 = $1,521,176 for the risk transfer. As RCR = 0%, the technical reinsurance premium is RP = RPP/(1-RIXL) = $10,141,176/0.95 = $10,674,922.
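The chain of calculations in Steps 9 through 12 can be summarized in a few lines of Python; this sketch simply reproduces the arithmetic above with the stated factors:

```python
# Sketch: from the credibility loss cost rate to the technical reinsurance
# premium, reproducing the arithmetic in the text.

PCP = 200_000_000        # subject premium
CRED_RATE = 0.0575       # credibility loss cost rate RELC/PCP
RDF = 0.75               # loss discount factor
RTER = 0.15              # per the text
RIXL = 0.05              # per the text

pv_rate = 0.0431                       # RDF * CRED_RATE = 0.75 * 5.75% ~ 4.31%
pvrelc = pv_rate * PCP                 # present-value expected loss cost
rpp = pvrelc / (1 - RTER)              # reinsurance pure premium
rp = round(rpp) / (1 - RIXL)           # technical premium (RCR = 0%), from rounded RPP

print(f"PVRELC = {pvrelc:,.0f}")       # 8,620,000
print(f"RPP    = {rpp:,.0f}")          # 10,141,176
print(f"Profit = {rpp - pvrelc:,.0f}") # 1,521,176
print(f"RP     = {rp:,.0f}")           # 10,674,922
```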
This technical premium is therefore above the maximum of $10,000,000 specified by the Allstate Insurance Company. If there is nothing wrong with the technical calculations, the reinsurer has two options. The first is to accept the expected reinsurance premium of $10,000,000 at a rate of 5%, with the expected profit reduced to $10,000,000 - $8,620,000 = $1,380,000. The second is to propose a variable rate contract, with the reinsurance rate varying with the reinsurance loss experience, which in this case means a retrospectively rated contract. As the Allstate Insurance Company is asking for a retrospectively rated contract, we select the second possibility. To construct a fair and balanced rating plan, the distribution of the reinsurance aggregate loss has to be estimated, so we proceed with Step 11.

Step 11: Estimate the probability distribution of the aggregate reinsurance loss if desirable, and perhaps other distributions such as for claims payment timing.

In this step a Gamma distribution approximation will be used. As our example is a low (excess) claim frequency situation, the standard risk theoretic model for aggregate losses will be used, together with the first two moments of the claim count and claim severity distributions, to approximate the distribution of the aggregate reinsurance loss. In the standard model, the aggregate loss is written as the sum of the individual claims, as follows.

Formula: Aggregate Loss
L = X1 + X2 + ... + XN

with
L a random variable (rv) for the aggregate loss,
N an rv for the number of claims (events, occurrences), and
Xi an rv for the dollar size of the ith claim.

Here N and the Xi refer to the excess number of claims and to the amount of the ith excess claim. To see how the standard risk theoretic model relates to the distributions of L, N and the Xi, see Patrik (2001). We work with the assumption that the Xi are identically and independently distributed and also independent of N; the kth moment of L is then determined completely by the first k moments of N and the Xi. The following relationships hold.

Formula: First Two Central Moments of the Distribution of Aggregate Loss under the Standard Risk Theoretic Model
E[L] = E[N] x E[X]
Var[L] = E[N] x E[X^2] + (Var[N] - E[N]) x E[X]^2

Assumption: E[L] = RELC = 5.75% x $200,000,000 = $11,500,000 (undiscounted). We assume, simplistically, that the excess claim sizes are independent and identically distributed and independent of the excess claim (occurrence) count; usually this is a reasonable assumption. For our layer $1,400,000 excess of $600,000, our modeling assumptions and results are shown in the formula below.

Formula: Allstate $1,400,000 Excess of $600,000 Aggregate Loss Modeling Assumptions and Results
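The modeling assumptions and results themselves are not reproduced in this excerpt. Purely to illustrate how the two-moment formulas above feed a Gamma approximation of the aggregate loss, here is a sketch with placeholder claim count and severity moments (chosen only so that E[L] matches the $11,500,000 above; they are not the treaty's actual figures):

```python
# Sketch: moments of the aggregate reinsurance loss under the standard risk
# theoretic model, and a method-of-moments Gamma approximation. The count and
# severity moments below are hypothetical placeholders, not the treaty's values.
from scipy.stats import gamma

e_n, var_n = 23.0, 35.0             # hypothetical excess occurrence count moments
e_x = 500_000.0                     # hypothetical mean occurrence size in the layer
e_x2 = 2.0 * e_x ** 2               # hypothetical second moment of occurrence size

e_l = e_n * e_x                                       # E[L] = 11,500,000
var_l = e_n * e_x2 + (var_n - e_n) * e_x ** 2         # Var[L]

# Gamma approximation matched to the first two moments of L.
shape = e_l ** 2 / var_l
scale = var_l / e_l
print(f"E[L] = {e_l:,.0f}, SD[L] = {var_l ** 0.5:,.0f}")
print(f"P(L > 20,000,000) ~ {gamma.sf(20_000_000, shape, scale=scale):.1%}")
```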

Saturday, January 18, 2020

Case Study: Active Data Warehousing

1. Describe "active" data warehousing as it is applied at Continental Airlines. Does Continental apply active or real-time warehousing differently than this concept is normally described?

Active data warehousing (ADW) is a data warehouse implementation that supports near-time or near-real-time decision making. It is characterized by event-driven actions triggered by a continuous stream of queries, generated by people or by applications, against a broad, deep, granular set of enterprise data about the organization. Continental uses active data warehousing to keep track of the company's daily progress and performance. Continental's management team holds an operations meeting every morning to discuss how the company is performing with respect to the data collected by its active data warehouse. The management team believes that "you can't manage what you can't measure," so they use active data warehousing to keep track of their customers' experience while using Continental Airlines. The information the management team uses to analyze the company's customer relationships includes on-time arrivals, on-time departures, baggage handling, and other key performance indicators. Continental also uses active data warehousing for revenue management, revenue accounting, flight operations, fraud detection, and airline security. Continental restructured its goals to try to become its customers' "favorite" airline. It uses active data warehousing to gain as much information as possible about the company's performance as well as the customers' experience, and it uses this real-time warehousing capability to interpret the information provided and make changes that improve the customer experience and help Continental better fit its business to its customers' needs.

2. In what ways does real-time data warehousing fit with the Continental strategy and plans?

Continental Airlines decided to shift its strategy once it went from "worst to first." The new goal it wanted to achieve was making the move from "first to favorite." Continental's new strategy and plan of becoming its customers' favorite airline could only be achieved by using real-time data warehousing. Continental made plans to become the "favorite" airline, and its strategy involved making business decisions based on information received from real-time data warehousing, such as on-time arrivals, on-time departures, baggage handling, and other key performance indicators. This information gives the Continental management team what it needs to make corrections or changes in order to improve the customer experience. Continental's strategy and plans to become the "favorite" airline would be much harder to accomplish without real-time data warehousing: the company needs this information in order to see which parts of the business must be adjusted to keep customers happy. Without real-time data warehousing, Continental wouldn't be able to achieve its goal of moving from "first to favorite."

3. Describe the benefits of real-time data warehousing at Continental.

Real-time data warehousing has allowed Continental to make significant changes to its business in a variety of ways. According to Continental's president and COO Larry Kellner, "Real-time BI is critical to the accomplishment of our business strategy and has created significant business benefits."
There is a wide range of benefits that Continental has gained from real-time or "active" data warehousing in the categories of marketing, corporate security, IT, and revenue management. One key benefit in the marketing field is the average increase in travel among Continental's most valuable customers, approximately $800 per customer (35,000 customers). A central benefit is that all employees have the ability to access important facts and information about the customers and the business in its entirety. This in turn allowed Continental to check passenger reservations and flight manifests by cross-referencing them with the FBI's "watch" list only hours after the 9/11 attacks, in deciding whether it was safe to fly. Above all, Continental has recognized over $500 million of cost savings and revenue generation (tracking and forecasting, fare design and analysis, and full reservation analysis) due to the advantages of business intelligence.

4. What elements of the data warehousing environment at Continental are necessary to support the extensive end-user business intelligence application development that occurs?

There are numerous elements of the data warehousing environment at Continental that are necessary to support the extensive end-user BI application development that takes place. Two important elements are the system's scalability and data security. Since the real-time data warehouse never gets rid of information, the amount of data grows continually over time. Additionally, with the development of BI applications, the number of users will also increase. To deal with the amount of usage and data, the data warehouse at Continental must be scalable, allowing it to expand the available disk space and throughput; the Continental design team took this into account when working through the architecture design of the warehouse. The other element that is important to take care of is data security. Data security is extremely important when a company handles customer information and personal data. Continental's warehouse stores all of the customer information that can be accessed by other users in order to obtain the data they need. Customers can rest assured knowing that their personal information (i.e. social security numbers and credit card numbers) is protected from being opened by any users who are not authorized to view this sensitive information.

5. What special issues about data warehouse management (e.g., data capture and loading for the data warehouse (ETL processes) and query workload balancing) does this case suggest occur for real-time data warehousing? How has Continental addressed these issues?

Real-time data warehousing creates some special issues that need to be solved by data warehouse management. These can create problems because of the extensive technical work involved, not only in planning the system but also in managing problems as they arise. Two aspects of the BI system that need to be organized in order to avoid technical problems are the architecture design and query workload balancing. Architecture design is important because, as a company progressively receives business and different aspects of customers' usage of the company change, the warehouse needs to be updated frequently. Continental planned for the company to use real-time data warehousing, so it structured the design to accommodate the demand for real-time information.
This made it easier to update the warehouse in a timely manner. Query workload balancing is another important aspect of the warehouse that needed to be addressed in order to fulfill Continental's need to use the warehouse for both tactical and strategic purposes. Continental would run into backed-up query processing in its warehouse if query workload balancing weren't introduced: the queries would be processed in a "first in, first out" fashion, which would essentially cause backups. Continental resolved this issue by making the warehouse process queries according to query type. It set up the warehouse to process the specific queries that access single records first, marking them with high priority, and then prioritized other queries as medium or low priority depending on what information they ask for and why it is needed. Continental's use of prioritized groupings of queries has allowed it to process information in a timely manner that is most convenient for the person trying to access the information.

(Information regarding the case as well; not sure which interpretation was wanted, so both are included.) There are two issues that the case study suggests in terms of data management. The first is to recognize that some data cannot and should not be real-time, for three reasons. The first is that Continental knows that real-time data feeds are hard to administer because the constant flow of transaction data must always be supervised. The second reason is the need for extra hardware. The last reason Continental is extremely cautious with the movement of additional data is that real-time data feeds are extremely costly to bring about. The second issue Continental deals with when it comes to data warehouse management is having the right people in the right positions. Any individual who handles any aspect of a real-time warehouse must be highly qualified and knowledgeable in both technology and business. According to the case study, "At Continental, data warehouse staff members in the more technical positions (e.g., design of ETL processes) have degrees in computer science. Some of them previously built and maintained reservation systems before they joined the warehouse team. Consequently, they have experience with transaction-oriented, real-time systems, which serves them well for real-time BI and data warehousing. The warehouse team members who work closely with the business units have previous work experience in the business areas they now support."
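As a loose illustration of the query prioritization described in question 5 (the case does not show Continental's actual workload-management configuration, so the classes and priorities below are hypothetical), a priority-based scheduler might look like this:

```python
# Sketch: a generic priority-based query scheduler illustrating the idea of
# processing single-record (tactical) queries ahead of heavier analytical ones.
# This is not Continental's actual implementation, which the case does not show.
import heapq
import itertools

PRIORITY = {"single_record": 0, "tactical_report": 1, "strategic_analysis": 2}

class QueryScheduler:
    def __init__(self):
        self._heap = []
        self._order = itertools.count()   # tie-breaker keeps FIFO order within a priority

    def submit(self, query_type: str, sql: str) -> None:
        heapq.heappush(self._heap, (PRIORITY[query_type], next(self._order), sql))

    def next_query(self) -> str:
        return heapq.heappop(self._heap)[2]

scheduler = QueryScheduler()
scheduler.submit("strategic_analysis", "SELECT ... /* yearly revenue trends */")
scheduler.submit("single_record", "SELECT ... /* one passenger's reservation */")
scheduler.submit("tactical_report", "SELECT ... /* today's flight manifest */")

print(scheduler.next_query())   # the single-record lookup runs first
```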

Friday, January 10, 2020

Fearless

â€Å"Fearless† and â€Å"courageous†; two common words that have been used to describe soldiers for centuries. There is, however, a big difference between being courageous and being fearless. Courage is one of the best terms used to describe a soldier: that no matter how hard the circumstances are and how scared he/she is, they keep pressing on. It is truly amazing how courageous these men and women are. Humans were made to fear, it is humanly impossible to be without fear, there is no soldier that is truly â€Å"fearless†.Soldiers at war often put up a front that they are unafraid, the reason they put up this front is to be trusted by fellow soldiers, to fit into society's view of a soldier and to maintain their sanity. The world often perceives fear as a sign of weakness. The word fear is defined as a â€Å"feeling of anxiety or agitation caused by the presence or nearness of danger, evil, or pain. Extreme fear is terror which applies to an overwhelming often paralyzing fear† (Fear†). In the heat of battle, a soldier's senses are heightened to the danger that surrounds him.Any civilian in these circumstances would choose to run and hide or escape, but the soldier has been trained not to selfishly retreat, but to heed orders and advance. He/she may know his/her life is in grave danger, yet for the sake of courage and duty to his/her country he continues on. The courage that a soldier surrounds themself with is the quality of spirit which enables one to face danger or pain without showing fear. If a soldier falters he or she may be dismissed but will certainly not be trusted by his/her fellow soldiers.Each soldier desires trust, loyalty, and respect and each of their lives depend on it. The soldier makes a choice to lay his or her individual fears and emotions aside to be courageous. In the short story, â€Å"The Things They Carried†, by Tim O'Brien, not only were the physical items that they carried described but also the emotional burdens they carried. â€Å"They carried shameful memories. They carried the common secret of cowardice barely restrained, the instinct to run or freeze or hide. They carried their reputations.They carried the soldier's greatest fear, which was the fear of blushing† (O'Brien). Soldiers care about what fellow soldiers think of them. They need to be trusted. The face of fearlessness is formed to avoid being made a coward or being put to shame, and to keep a good reputation. Their fellow soldiers are all they have at war. In general, there is a disconnect between society’s view on war and a soldier’s view on war. Citizens are easily influenced and swayed by different means of communication in society.Society often portrays war and soldiers to be something that they are not: fearless. According to platoon leader, Paul Stanley, â€Å"soldiers realize the cost and effort required to be willing to fight and what it means to be in life or death situations , society thinks they understand but they don't† (Paul Stanley). He said his view is totally different than that of an everyday citizen. Lieutenant Stanley also commented on society’s negative view of soldiers; â€Å"Society believes soldiers are more like weapons instead of people. Sometimes society depicts soldiers in a negative light, as being merciless fighting machines, which is not the case either. Lieutenant Stanley said that his view of his country was better after war; he appreciates life more and is a better person. 
How society depicts soldiers shapes what we believe a soldier to be, which often means supernatural beings or war machines. In "The Things They Carried," First Lieutenant Jimmy Cross had gotten sidetracked during battle, and because of it one of his men was killed. From then on Jimmy chose to put all thoughts outside of war aside in the hope that it would never happen again. He wouldn't show his emotion toward anything; in other words, he would act fearless (O'Brien 232-239). The feelings and emotions are still there; they are just hidden behind a wall, making it easier to keep himself and others safe. Soldiers often need to block out emotions and images to maintain their sanity. A first lieutenant in the Desert Storm War, Paul Stanley, said, "especially as an officer, you can't act scared. Everyone looks to you, so you have to be strong for them." Soldiers have to have confidence in themselves and the team around them and also trust that they received the proper training. They learn to subdue their fear and cope with emotion. A common disorder that soldiers are diagnosed with after war, because of their suppression of emotional burdens, is Post-Traumatic Stress Disorder, or PTSD. Post-Traumatic Stress Disorder occurs when a person experiences a severe trauma or life-threatening event. If soldiers were fearless, they wouldn't be affected by this disorder. Soldiers are heroes: courageous, noble, self-sacrificing, brave, and so much more, but they are still human; they are not fearless. The appearance of being fearless plays a large role in keeping a soldier alive and sane. Their fearless attitude is the way soldiers are able to fit into society's view of a soldier, be trusted by their team, and maintain their sanity.

Thursday, January 2, 2020

America's First Narcotics War

Around the 1900's, the United States was being flooded with multiple feelings, which created an ongoing battle between tension and morals. These conflicts contributed to what is known as the "noble experiment," which involved alcoholic products. These continuing conflicts left the population feeling unstable. Instead of dealing with these problems at hand, our nation decided to turn to the state for a helping hand. Struggling with a mass increase in immigration and the rise of industrialism and capitalism was hard enough on our own, but we also had to somehow stabilize the nation's social order to prevent further social conflicts. Having somewhat failed at stabilizing our social order, our nation's society decided to campaign against alcohol and start the nation's first narcotics war. By doing this, it was believed that the overall rate of corruption, violence, and crime would decrease, solving our social problems. Looking back on history, the way things occurred shows that this time it was more than a slight fail.

An era of striving to become ideally perfect swept across the nation due to religious practices. Around 1820 and 1830, massive changes were taking place. Motions for slavery to be abolished were made, and support for a temperance movement increased. Massachusetts was one of the first states to pass a temperance law. This law banned not only the sale of spirits, but also imposed size limits. Massachusetts' temperance law was actually repealed two years
According to research, â€Å"The oldest relic of human history is hemp fabric dated to 8,000 BCE.† In 1492, Christopher Columbus brought hemp as a rope to America. His ships were full of hemp fabric used for uniforms, parachutes, ropes, sails, baggage, shoes and many more military uses. To free the AmericanRead MoreThe Drug Of Drug Abuse1143 Words   |  5 PagesAmerica has been fighting drug abuse for over a century. Four Presidents have waged a â€Å"War on Drugs† and unfortunately, this war continues to be lost at an alarming magnitude. Drug abusers continue to fill our courts, hospitals, and prisons. The drug trade causes violent crime that ravages our neighborhoods. Children of drug abusers are neglected, abused, and even abandoned. The current methods of dealing with this issue are not working. Our society needs to implement new and effective laws and programsRead MoreHeroin Use And Misuse Of Drugs796 Words   |  4 Pag esHeroin use and misuse are certainly nothing new to America, although most people probably could not cite its true origin or history, knowing only what is portrayed on television and movies. Heroin invokes images of dirty needles and equally dirty individuals, barely conscious, and lying in their own filth amongst hollowed, abandoned and dilapidated buildings. These are the images portrayed in movies and promoted among mass media, these are the images conjured when one speaks of heroin addictionRead MoreShould Marijuana Be Legalized?1602 Words   |  7 Pagesvilified in America over the past 70+ years. Despite it’s many practical uses, medicinal and industrial, our Federal government insists on maintaining the status quo that the growth, possession and use of marijuana is criminal despite the evidence that the legalization of marijuana would have a positive influence on America. In this paper I will discuss the history of marijuana, the industrial uses of hemp, the prohibition of marijuana, the economical impact prohibition has on America, the effectsRead MoreThe Legalization Of Marijuan Marijuana1743 Words   |  7 Pagesevidence that the legalization of marijuana would have a positive influence on America. In this paper I will talk about the history of marijuana, the industrial uses of hemp, the ban of marijuana, the economical impact prohibition has on America, the effects of cannabis use on the brain and the physical structure, marijuana for medical purpose, and how legalization of marijuana would have a positive influence on America. Although I defend the legalization of marijuana I do not endorse the legalizationRead MoreThe American War On Drugs1598 Words   |  7 PagesThe American â€Å"War on Drugs† war created to keep an exorbitant amount of people behind bars, and in a subservient status. First, America has a storied history when it comes to marijuana use. However, within the last 50 years legislation pertaining to drug use and punishment has increased significantly. In the modern era, especially hard times have hit minority communities thanks to these drug laws. While being unfairly targeted by drug laws and law enforcement, minorities in America are having aRead MoreMexico’s War on Drugs Essay1252 Words   |  6 Pagesthese weapons. A lot of this attention goes to the U.S. because many of the weapons utilized in the â€Å"drug war† are U.S. made and is interfering with trading relations amongst both the U.S. and Mexico. With this current violent situation in Mexico this has transformed the flow of weapons to an even larger scale. 
During the mid-2000’s former President of Mexico Felipe Calderon announced his war on the cartels and led to a crackdown against these organizations, along with assistance with the U.S. Ever