Statistics Question

Student Name: ________________________________

Instructions:

Please answer each of the following questions and show your work/calculations. Leave sufficient space in the Word document. Insert your answers using a highlight color (blue, green, or red).

Please show your calculations and copy and paste outputs from SPSS and/or Excel into this document.

Please note: to consult a Chi-Square (χ²) distribution table, use Appendix B in the electronic textbook (p. 308) or Google a table.

Good luck.

Question 1: Key Definitions and Concepts Part I (10 points)

  1. Explain and define the differences between independent and dependent variables. Give examples. (2 points)
  2. Explain and define the differences between the statistical relationships “association” [correlation] and “causation” [causality]. Give examples. (2 points)
  3. Explain the difference between an experimental research design and a quasi-experimental research design. Which is the gold standard? (2 points)
  4. What is sampling error? (2 points)
  5. What is the Central Limit Theorem and what is its significance in the field of statistics? (2 points)

Question 2: True False (10 points)

  1. Are mean, median, mode, and frequency distribution (or count) statistics of central tendency? True___ False_____ (2 points)
  2. Are variance and standard deviation statistics of dispersion? True___ False___ (2 points)
  3. Will the sampling distribution of an infinite number of relatively large samples be normally distributed? True____ False____ (2 points)
  4. Pearson’s Correlation Coefficient is the same as the Spearman Correlation Coefficient. True____ False______ (2 points)
  5. Ethics in statistics is not as important as ethics in law and medicine. The latter areas deal with people and the former just with numbers. True_____ False _____ (2 points)

Question 3: Key Definitions and Concepts Part II (10 points).

  1. Explain the Kolmogorov-Smirnov test: what does it test for, and how do you interpret the computer output?
  2. What is R-Square?
  3. What are the assumptions that must hold for a t-test?
  4. What are the assumptions that must hold for ANOVA?
  5. What are the Chi-square assumptions?
  6. How do you interpret the F Statistic in ANOVA analysis? If the F statistic is significant what does it mean? What does the F Statistic in regression analysis mean?
  7. With what types of variables are t-tests used, and with what types of variables are Chi-Square tests used?
  8. What are the five steps of hypothesis testing?
  9. When do you use the student t distribution?
  10. What is the difference between the observed and predicted values of the dependent variable? What is the key assumption made about the error term in regression?

Question 4: Chi-Square (10 points)

Assume that a citizen survey yielding 1,034 responses has been completed. We, as statistical analysts, want to check for over-sampling or under-sampling with respect to US Census data. We want to know whether the age distribution of the survey respondents is consistent with the age distribution in the decennial US Census at the 5 percent level of significance.

H0: The age distribution of the sample is consistent with (the same as) that of the population.

HA: The age distribution of the sample is inconsistent with (different from) that of the population.

Table: US Census and Survey Response by Age Groups

| Age Group | US Census (Percent) | Survey Sample (Percent) |
|-----------|---------------------|-------------------------|
| 18-45     | 62.3                | 62.8                    |
| 46-65     | 24.1                | 26.8                    |
| 66+       | 13.6                | 10.4                    |

Write the formula, show your calculations, determine the Chi-Square test value, identify the degrees of freedom, identify the critical value, and state your conclusion.
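For reference, a minimal Python sketch of the goodness-of-fit calculation, offered only as a cross-check on the hand work. It assumes the observed and expected counts are obtained by applying the listed percentages to the 1,034 responses:

```python
from scipy.stats import chi2

n = 1034
census_pct = [62.3, 24.1, 13.6]   # expected distribution (US Census)
sample_pct = [62.8, 26.8, 10.4]   # observed distribution (survey)

observed = [p / 100 * n for p in sample_pct]
expected = [p / 100 * n for p in census_pct]

# Chi-square goodness-of-fit statistic: sum of (O - E)^2 / E
chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(census_pct) - 1              # number of categories minus 1
critical = chi2.ppf(0.95, df)         # critical value at the 5% level

print(f"chi-square = {chi_sq:.2f}, df = {df}, critical value = {critical:.2f}")
print("Reject H0" if chi_sq > critical else "Fail to reject H0")
```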

Question 5: Confidence Interval (10 points)

In a sample of 1,000 persons, 15.4 percent of the respondents report personal income at or below the poverty line, whereas 84.6 percent of the respondents report personal income above the poverty line. Please calculate confidence intervals at the 95% and 99% levels for this proportion of people who live in poverty. Write the appropriate formula, apply the formula for proportions, and calculate the upper and lower bounds. Show step-by-step calculations.

  1. 95% confidence interval
  2. 99% confidence interval
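A short Python sketch of the standard large-sample interval for a proportion, p̂ ± z·√(p̂(1−p̂)/n), provided as a check on the hand calculation:

```python
from math import sqrt
from scipy.stats import norm

n = 1000
p_hat = 0.154  # proportion at or below the poverty line

for conf in (0.95, 0.99):
    z = norm.ppf(1 - (1 - conf) / 2)          # 1.96 for 95%, 2.576 for 99%
    margin = z * sqrt(p_hat * (1 - p_hat) / n)
    print(f"{conf:.0%} CI: ({p_hat - margin:.4f}, {p_hat + margin:.4f})")
```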

Question 6: One Sample T-Test (10 points)

A psychosocial functional score (PFS) is used to assess school-age children’s psychosocial behaviors. A score of 25 points or above is considered normal. A sample of 15 students is tested and their PFS scores are as follows:

| Case ID | Psychosocial Functional Score (PFS) |
|---------|-------------------------------------|
| 1       | 29                                  |
| 2       | 32                                  |
| 3       | 18                                  |
| 4       | 23                                  |
| 5       | 27                                  |
| 6       | 19                                  |
| 7       | 34                                  |
| 8       | 32                                  |
| 9       | 27                                  |
| 10      | 23                                  |
| 11      | 26                                  |
| 12      | 32                                  |
| 13      | 29                                  |
| 14      | 15                                  |

  1. Input the data into SPSS (create new dataset, label Variable PFS_Score).
  2. Evaluate whether the students’ average PFS score is greater than 25 using a one-sample t-test. Write a brief explanation of the results of your t-test for a non-technical program manager.

What is the mean? ___

What is the standard deviation? ___

What are the t statistic and the 2-tailed significance value (is it less than .05?), and what is the mean difference?

Clip and paste SPSS output
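The test itself is meant to be run in SPSS, but a brief Python sketch (using the scores recoverable from the table above) can serve as a cross-check:

```python
from statistics import mean, stdev
from scipy import stats

# PFS scores from the table above (Case IDs 1-14)
pfs_scores = [29, 32, 18, 23, 27, 19, 34, 32, 27, 23, 26, 32, 29, 15]

t_stat, p_two_tailed = stats.ttest_1samp(pfs_scores, popmean=25)

print(f"mean = {mean(pfs_scores):.2f}, sd = {stdev(pfs_scores):.2f}")
print(f"t = {t_stat:.3f}, two-tailed p = {p_two_tailed:.3f}")
# For the one-sided question "is the average greater than 25?", halve the
# two-tailed p-value when the t statistic is positive.
```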

Question 7: Paired T-Test (10 points)

Students at a school are given a test before beginning a special program of instruction and then a test after. Use a paired-samples t-test to determine whether there is evidence of improvement (95% level of confidence).

| Student | Before Test Score | After Test Score |
|---------|-------------------|------------------|
| 1       | 4.5               | 6.9              |
| 2       | 3.2               | 4.8              |
| 3       | 5.8               | 5.2              |
| 4       | 3.9               | 4.3              |
| 5       | 4.2               | 5.0              |
| 6       | 3.9               | 4.8              |
| 7       | 2.6               | 3.2              |
| 8       | 5.2               | 4.8              |
| 9       | 4.5               | 4.5              |
| 10      | 3.9               | 4.1              |
| 11      | 3.8               | 3.6              |
| 12      | 4.2               | 5.9              |

Clip and paste SPSS output

Report the mean difference and the 95% confidence interval.

What is the test statistic? ____

What is the p value? ____

How does the p value compare to .05 level of significance?
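A small Python sketch of the same paired comparison, useful for checking the SPSS output:

```python
from statistics import mean, stdev
from scipy import stats

before = [4.5, 3.2, 5.8, 3.9, 4.2, 3.9, 2.6, 5.2, 4.5, 3.9, 3.8, 4.2]
after  = [6.9, 4.8, 5.2, 4.3, 5.0, 4.8, 3.2, 4.8, 4.5, 4.1, 3.6, 5.9]

t_stat, p_value = stats.ttest_rel(after, before)       # paired-samples t-test

diffs = [a - b for a, b in zip(after, before)]
mean_diff = mean(diffs)
sem = stdev(diffs) / len(diffs) ** 0.5                  # standard error of the mean difference
t_crit = stats.t.ppf(0.975, df=len(diffs) - 1)          # two-sided 95% critical value
ci = (mean_diff - t_crit * sem, mean_diff + t_crit * sem)

print(f"mean difference = {mean_diff:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```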

Question 8: Descriptive Statistics and Graphical Exercise (10 points)

  1. For the graduation data presented in the table below, please calculate the mean, variance, and standard deviation for each of the schools. (3 points)
  2. Produce a line chart of graduation rates for each school. Label the graph with a title, axes, and legend. (3 points)
  3. What is your interpretation of these data and this chart? If you were the School Superintendent for the district that includes these two schools, what sorts of questions would this chart trigger in your mind? What additional investigation would you like to undertake? (4 points)

Graduation Rates (Number of Graduates per Teacher) at Two Different Schools

| Year               | School A | School B |
|--------------------|----------|----------|
| 1                  | 19       | 33       |
| 2                  | 43       | 25       |
| 3                  | 26       | 32       |
| 4                  | 47       | 32       |
| 5                  | 19       | 32       |
| Mean               |          |          |
| Variance           |          |          |
| Standard Deviation |          |          |
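A minimal Python sketch of the requested descriptive statistics (this uses the sample formulas with n − 1 in the denominator; switch to the population formulas if your course requires them):

```python
from statistics import mean, variance, stdev

schools = {
    "School A": [19, 43, 26, 47, 19],
    "School B": [33, 25, 32, 32, 32],
}

for name, rates in schools.items():
    print(f"{name}: mean = {mean(rates):.1f}, "
          f"variance = {variance(rates):.1f}, "     # sample variance (n - 1)
          f"std dev = {stdev(rates):.1f}")
```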

Question 9: Simple Linear Regression —Use Excel

[Data → Data Analysis → select Regression from the dialog box, filling in the boxes with the appropriate cell ranges]

  1. Please enter the following data into an Excel spreadsheet. Note: make sure that you have the Data Analysis ToolPak installed (Tools → Insert → Excel Add-ins → click to select Data Analysis ToolPak) for Microsoft 365 subscription versions. If you have an older version of Excel, see page 108 of Chapter 19, Excel User’s Guide, section “Loading the Data Analysis ToolPak”; if that does not work, try Help → “How to install Add-ins”.
  2. Run a simple linear regression with DV= EnvironSpend and IV= PopDensity
  3. Interpret results.

Data: Environmental Spending

| City ID | EnvironSpend | PopDensity |
|---------|--------------|------------|
| 1       | 0.11         | 149        |
| 2       | 0.04         | 44         |
| 3       | 0.26         | 459        |
| 4       | 0.07         | 97         |
| 5       | 0.17         | 345        |
| 6       | 0.13         | 523        |
| 7       | 0.08         | 24         |
| 8       | 0.22         | 275        |
| 9       | 0.11         | 183        |
| 10      | 0.10         | 287        |
| 11      | 0.20         | 137        |
| 12      | 0.11         | 86         |
| 13      | 0.18         | 300        |
| 14      | 0.20         | 260        |
| 15      | 0.15         | 380        |

Variable definitions:

City ID = identification code for the city

EnvironSpend = Environmental Spending = percent of annual total spending on environmental protection/concerns

PopDensity = population density = number of people per square mile

Paste output here:

R-Square____ Interpretation______________

Which coefficients are statistically significant and different from zero at the 5% level?

Intercept___________________________

Coefficient for PopDensity______________________

What report would you give a non-technical policy maker about the relationship you discovered with your regression analysis? What is the sign of the relationship between population density and environmental spending as a share of total spending? If population density were to increase in a particular jurisdiction, what would you tell policy makers to prepare for in future budgets?
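The assignment asks for Excel’s Regression tool, but if you want an independent check of the ToolPak output, a rough Python equivalent using SciPy’s linregress looks like this:

```python
from scipy import stats

environ_spend = [0.11, 0.04, 0.26, 0.07, 0.17, 0.13, 0.08, 0.22,
                 0.11, 0.10, 0.20, 0.11, 0.18, 0.20, 0.15]   # DV
pop_density   = [149, 44, 459, 97, 345, 523, 24, 275,
                 183, 287, 137, 86, 300, 260, 380]            # IV

result = stats.linregress(pop_density, environ_spend)

print(f"intercept = {result.intercept:.4f}")
print(f"slope (PopDensity coefficient) = {result.slope:.6f}")
print(f"R-square = {result.rvalue ** 2:.3f}")
print(f"p-value for the slope = {result.pvalue:.4f}")
```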

Question 10: Multiple Linear Regression — Use SPSS (10 points)

Use the SPSS Public Perceptions formatted database (.sav)

  1. Run a Linear Regression with an intercept
  2. Interpret the sign and meaning of each of the coefficients significantly different from zero. What “economic theory or narrative” could you develop based on the regression findings to explain why people live for long periods in Orange County (Hint: Explain the possible contribution of each significant coefficient/variable)?

Dependent Variable: Yearsorc (Years Lived in Orange County)

Independent Variables:

About what is your household income?

Age

Race/ethnicity

Gender

Zipcode of residence

Do you rent or own

How much formal education do you have?

Do you think property taxes are too….

Report the R-square, the coefficient for each of the independent variables, and the significance levels.

(Clip and Paste SPSS output).
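The regression itself is run in SPSS, but for readers who prefer code, a hypothetical statsmodels sketch is shown below. The file and column names are placeholders, since the actual variable names live in the .sav database:

```python
import pandas as pd                      # reading .sav requires the pyreadstat package
import statsmodels.formula.api as smf

# Hypothetical file and column names; replace them with the actual names
# in the Public Perceptions .sav file.
df = pd.read_spss("Public_Perceptions.sav")

model = smf.ols(
    "Yearsorc ~ Income + Age + C(Race) + C(Gender) + C(Zipcode)"
    " + C(RentOwn) + Education + PropTaxView",
    data=df,
).fit()

print(model.summary())   # R-square, coefficients, and significance levels
```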


Statistics Question

095 Elementary Statistics

SJSU

Personalized Course Schedule

Please read these instructions carefully when preparing your schedule. The goal of this activity is to have you plan a schedule of activities required to succeed in this course. Planning is essential to your success! You will also get one more opportunity to practice uploading and submitting documents through Canvas.

Format: You may use bullet points or outline format or provide an actual calendar, if you like. Be as specific as possible. In other words, don’t just write something like “Week 1: will study.”

What is needed: You will need a copy of the current course syllabus. You will also need access to Canvas.

When finished: Upload the entire document (as a PDF file) to Canvas.

Building Your Personalized Course Schedule

Use the course syllabus to make a personalized schedule of activities and due dates. You want to list the dates and times for the various events and activities over the next semester (either 16 weeks, 10 week summer session, or 5 week summer session). Please note that it is better (and easier) to spread out your activities, working a little bit at a time, rather than trying to do everything at the last minute. Don’t try to “cram” for the assignments. Doing so will likely result in poor learning and low grades. Try to keep a steady pace throughout the course.

Goal: To create a course schedule based on our course requirements and schedule that fits your personal schedule so you can succeed in this course.

Questions to consider when creating your personalized schedule (you do not have to answer these questions directly; you just have to think about them when making your schedule):

  1. Based on the syllabus, what are the critical deadlines each week? In other words, when are all of your assignments due?
  2. When and how often will you watch the lessons each week?
  3. How much time will you schedule for the quizzes and Problem Sets?
  4. When and how often will you schedule times to study for the exams?
  5. When will you start and finish the project reports?
  6. Will there be times when you do not have access to a computer or the Internet? What is your plan to complete the assignments before you lack access?

DO NOT just copy the schedule in the syllabus. I want you to address all of the points listed above.

Instructions for uploading papers to CANVAS

  1. Return to the Modules page in Canvas (https://sjsu.instructure.com/)
  2. Click the “Submit Assignment” link to the right of the “Your 16-week Schedule”
  3. Click on “Choose File,” find, and select the document on your computer with your written answers. Click “Open.”
  4. Click “Submit Assignments” when all the files to be uploaded have been selected.
  5. You should see that the submission status in the upper right side of the page indicates “Turned In!”


Statistics Question

The Youth Risk Behavior Survey (YRBSS) is a national survey monitoring health behavior among youth and young adults. It is administered by the Centers for Disease Control and Prevention (CDC). For this assignment, the “Youth Risk Behavior Surveillance System Dataset” is provided to practice calculating and interpreting the t-test. Refer to the instructional videos in the topic resources and the Using and Interpreting Statistics: A Practical Text for the Behavioral, Social, and Health Sciences textbook as a guide.

Youth Risk Surveillance System (helpful resources): https://www.cdc.gov/healthyYouth/data/yrbs/index.htm

YRBSS Data and Documentation (helpful resources): https://www.cdc.gov/healthyyouth/data/yrbs/data.htm

2015 YRBS Data User’s Guide (helpful resources): https://www.cdc.gov/healthyyouth/data/yrbs/pdf/2015/2015_yrbs-data-users_guide_smy_combined.pdf


2015 State and Local Youth Risk Behavior Survey (helpful resources): https://www.cdc.gov/healthyyouth/data/yrbs/questionnaires.htm

Videos: https://www.gcumedia.com/lms-resources/student-success-center/?mediaElement=6616F929-5B03-EB11-9111-005056BDE9D6

Digital textbook: http://www.gcumedia.com/digital-resources/macmillan-learning/2016/using-and-interpreting-statistics_a-practical-text-for-the-behavioral-social-and-health-sciences_ebook_3e.php


Part 1

Refer to the topic resources to review the documentation, questionnaires, and general information pertaining to the YRBSS and YRBS. Then use the 2015 “Youth Risk Behavior Surveillance System Dataset” and conduct a two-sample t-test to determine if weight (in kg) differs by sex. Submit the SPSS output for the t-test.
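The comparison is meant to be run in SPSS, but a hypothetical Python sketch of the same independent-samples test is given below; the file and column names are placeholders for the weight and sex variables in the dataset:

```python
import pandas as pd
from scipy import stats

# Placeholder file and column names; use the actual names in the
# "Youth Risk Behavior Surveillance System Dataset".
df = pd.read_excel("Youth Risk Behavior Surveillance System Dataset.xlsx")

male_wt = df.loc[df["sex"] == "Male", "weight_kg"].dropna()
female_wt = df.loc[df["sex"] == "Female", "weight_kg"].dropna()

# Two-sample (independent) t-test; equal_var=False gives Welch's version
t_stat, p_value = stats.ttest_ind(male_wt, female_wt, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```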

Part 2

Create an 8-10 slide PowerPoint presentation to discuss the findings of the t-test. Create speaker notes to further explain the slides displaying the results and findings. Include an additional slide for references at the end.

Include the following:

  1. Identify which of the three t-tests was selected, explain why this is the best statistical test, and state whether the assumptions were met.
  2. What are the null and alternative hypotheses?
  3. What is the decision rule?
  4. What is the test statistic and p-value?
  5. Interpretation of the t-test results (What was done? What was found? What does it mean? What suggestions are there for the creation of a health promotion intervention?)

Side note (please read before accepting this question; it is a warning about how to get the data file into SPSS): When you begin with “Youth Risk Behavior Surveillance System Dataset.xlsx”, you can convert this file on your end into a “.sav” file in the SPSS software on Citrix Workspace. Studypool does not accept me dragging and dropping “.sav” files, so I changed it on my end to an “.xlsx” in order to provide the variables needed for the t-tests. When you open blank SPSS software, click File, then Import Data; Excel will be the first option under Database, and you can open your saved copy of “Youth Risk Behavior Surveillance System Dataset.xlsx” to convert it. You may need to make a separate folder under your username on the computer (on either Mac or Windows) to find the file more easily when the “Open Data” box appears: click “Users”, click the name of the computer, then click the folder you made for this (e.g., “SPSS”), and the file will be readily available to access. This is very helpful since I can then send in “.sav” files. I hope this helps!

General Requirements

Submit the SPSS exported output and the PowerPoint.

APA style is not required, but solid academic writing is expected.

This uses a rubric. Please review the rubric prior to beginning the assignment to become familiar with the expectations for successful completion.


Statistics Question

Whether in a scholarly or practitioner setting, good research and data analysis should have the benefit of peer feedback. For this Discussion, you will perform an article critique on t tests. Be sure and remember that the goal is to obtain constructive feedback to improve the research and its interpretation, so please view this as an opportunity to learn from one another.

To prepare for this Discussion:

  • Review the Learning Resources and the media programs related to t tests.
  • Search for and select a quantitative article specific to your discipline and related to t tests. Help with this task may be found in the Course guide and assignment help linked in this week’s Learning Resources. Also, you can use as a guide the Research Design Alignment Table located in this week’s Learning Resources

Write a 3- to 5-paragraph critique of the article. In your critique, include responses to the following:

  • What research design did the authors use?
  • Why did the authors use this t test?
  • Do you think it’s the most appropriate choice? Why or why not?
  • Did the authors display the data?
  • Do the results stand alone? Why or why not?
  • Did the authors report effect size? If yes, is this meaningful?

Be sure to support your Main Post and Response Post with reference to the week’s Learning Resources and other scholarly evidence in APA Style.

Learning Resources

Required Readings

Frankfort-Nachmias, C., Leon-Guerrero, A., & Davis, G. (2020). Social statistics for a diverse society (9th ed.). Thousand Oaks, CA: Sage Publications.

  • Chapter 8, “Testing Hypothesis” (pp. 243-279)

Wagner, III, W. E. (2020). Using IBM® SPSS® statistics for research methods and social science statistics (7th ed.). Thousand Oaks, CA: Sage Publications.

  • Chapter 6, “Testing Hypotheses Using Means and Cross-Tabulation” (previously read in Week 5)
  • Chapter 11, “Editing Output” (previously read in Week 2, 3, and 4)

Walden University Library. (n.d.). Course Guide and Assignment Help for RSCH 8210. Retrieved from http://academicguides.waldenu.edu/rsch8210

For help with this week’s research, see this Course Guide and related weekly assignment resources.

Fox, J. (1991). Discrete data. In Regression diagnostics (pp. 62-66). SAGE Publications, Inc., https://www-doi-org.ezp.waldenulibrary.org/10.4135/9781412985604

Fox, J. (1991). Regression diagnostics. SAGE Publications, Inc. https://www-doi-org.ezp.waldenulibrary.org/10.4135/9781412985604

Fox, J. (1991). Non-normally distributed errors. In Regression diagnostics (pp. 41-48). SAGE Publications, Inc., https://www-doi-org.ezp.waldenulibrary.org/10.4135/9781412985604

Fox, J. (1991). Nonconstant error variance. In Regression diagnostics (pp. 49-53). SAGE Publications, Inc., https://www-doi-org.ezp.waldenulibrary.org/10.4135/9781412985604

Fox, J. (1991). Nonlinearity. In Regression diagnostics (pp. 54-61). SAGE Publications, Inc., https://www-doi-org.ezp.waldenulibrary.org/10.4135/9781412985604

Fox, J. (1991). Outlying and influential data. In Regression diagnostics (pp. 22-40). SAGE Publications, Inc., https://www-doi-org.ezp.waldenulibrary.org/10.4135/9781412985604

Document: Week 6 t test Scenarios (PDF)

Use these scenarios to complete this week’s Assignment.

Document: Walden University: Research Design Alignment Table

Datasets

Your instructor will post the datasets for the course in the Doc Sharing section and in an Announcement. Your instructor may also recommend using a different dataset from the ones provided here.

Required Media

Laureate Education (Producer). (2016l). The t test for independent samples [Video file]. Baltimore, MD: Author.

Note: The approximate length of this media piece is 5 minutes.

In this media program, Dr. Matt Jones demonstrates the t test for independent samples in SPSS.


Laureate Education (Producer). (2016m). The t test for related samples [Video file]. Baltimore, MD: Author.

Note: The approximate length of this media piece is 5 minutes.

In this media program, Dr. Matt Jones demonstrates the t test for related samples in SPSS.


Optional Resources

Klingenberg, B. (2016). Inference for comparing two population means. Retrieved from https://istats.shinyapps.io/2sample_mean/

Use the following app/weblink to enter your own data and obtain an interactive visual display.

Skill Builders:

  • Research Design and Statistical Design
  • Hypothesis Testing for Independent Samples t-test

To access these Skill Builders, navigate back to your Blackboard Course Home page, and locate “Skill Builders” in the left navigation pane. From there, click on the relevant Skill Builder link for this week.

You are encouraged to click through these and all Skill Builders to gain additional practice with these concepts. Doing so will bolster your knowledge of the concepts you’re learning this week and throughout the course.


Statistics Question

A marketing company based out of New York City is doing well and is looking to expand internationally. The CEO and VP of Operations decide to enlist the help of a consulting firm that you work for, to help collect data and analyze market trends.

You work for Mercer Human Resources. The Mercer Human Resource Consulting website lists prices of certain items in selected cities around the world. They also report an overall cost-of-living index for each city compared to the costs of hundreds of items in New York City (NYC). For example, London at 88.33 is 11.67% less expensive than NYC.

More specifically, if you choose to explore the website further you will find a lot of fun and interesting data. You can explore the website more on your own after the course concludes.

https://mobilityexchange.mercer.com/Insights/cost-of-living-rankings#rankings

Assignment Guidance:

In the Excel document, you will find the 2018 data for 17 cities in the data set Cost of Living. Included are the 2018 cost of living index, cost of a 3-bedroom apartment (per month), price of monthly transportation pass, price of a mid-range bottle of wine, price of a loaf of bread (1 lb.), the price of a gallon of milk and price for a 12 oz. cup of black coffee. All prices are in U.S. dollars.

You use this information to run a Multiple Linear Regression to predict cost of living, along with calculating various descriptive statistics. This is given in the Excel output (that is, the MLR has already been calculated; your task is to interpret the data).

Based on this information, in which city should you open a second office? You must justify your answer. If you want to recommend 2 or 3 different cities and rank them based on the data and your findings, this is fine as well.

Deliverable Requirements:

This should be ¾ to 1 page, no more than 1 single-spaced page in length, using 12-point Times New Roman font. You do not need to do any calculations, but you do need to pick a city in which to open a second location and justify your answer based upon the provided results of the Multiple Linear Regression.

The format of this assignment will be an Executive Summary. Think of this assignment as the first page of a much longer report, known as an Executive Summary, that essentially summarizes your findings briefly and at a high level. This needs to be written up neatly and professionally. This would be something you would present at a board meeting in a corporate environment. If you are unsure of an Executive Summary, this resource can help with an overview. What is an Executive Summary?

Things to Consider:

To help you make this decision here are some things to consider:

  • Based on the MLR output, what variable(s) is/are significant?
  • From the significant predictors, review the mean, median, min, max, Q1, and Q3 values.
    • It might be a good idea to compare these values to what the New York value is for that variable. Remember New York is the baseline as that is where headquarters are located.
  • Based on the descriptive statistics, for the significant predictors, what city has the best potential?
    • What city or cities fall below the median?
    • What city or cities are in the upper 3rd quartile?


Statistics Question

Step by step solution required, and a report of at least 200 words.

  1. Convert the (FIT.txt) and (test.txt) datasets.
    1. For this assignment, you will be building a classification model, rather than a numeric prediction model. Therefore it is necessary to convert the last column (FAULTS) of the FIT and TEST data sets from a numeric value to a class value.
    2. To do this, replace the FAULTS column with a new column called CLASS. The values in this column should all be either “fp” or “nfp” depending on the number of FAULTS for that instance. For any instance where the number of FAULTS is 2 or more, the class label should be “fp”, for any instance where the number of FAULTS is less than 2, the class label should be “nfp”.
    3. When you are done, your data set should still have 9 columns. It will be identical to the original dataset except the FAULTS column will be gone, and in its place there will be a CLASS column.
    4. You will also have to change the header in the arff file. Where it used to say “@attribute FAULTS numeric”, it should now say “@attribute CLASS {fp,nfp}”.
  1. Build a Logistic Regression model using the FIT data and the Weka data mining tool.
    1. Open Weka and use the new FIT data set (made in step 1)
    2. Build your model like in the first assignment, but choose “Logistic” instead of “Linear Regression”
    3. The Weka tool will provide details about the model created and give statistical results. Weka gives the parameters β0 (intercept) and β1, β2, …, βk (the coefficients) of the Logistic Regression equation:

       p = 1 / (1 + e^−(β0 + β1·x1 + β2·x2 + … + βk·xk))

At this point, you are DONE with Weka. You will not use Weka for the following steps!

  1. Use the General Classification Rule to classify instances
    1. If you haven’t already done so, read Reference 02. Specifically, sections 2.1 through 2.4 will be very valuable in performing this experiment.
    2. Use the above Logistic Regression equation to calculate p for each instance. (Using a spreadsheet program like Excel will be very helpful in this step).
    3. Use the General Classification rule (as described in class) to classify each instance as either “fp” or “nfp” for different values of c. Use the following values for c: 0.1, 0.5, 1, 2, 3, 4, 5, 6, 10, 15, 20, 30, 40, 50
    4. Select the value of c which generates the most appropriate model. Your goal is to find a value of c where the Type 1 and Type 2 (false-positive and false-negative) error rates are balanced.
    5. Report results for both the FIT and TEST data sets.
    6. Don’t forget to include the following in your report:
       i. Introductory information (what are we doing, and how are we doing it?)
       ii. Tables and graphs summarizing your results.
       iii. The model you selected and the justification for that selection.
       iv. In-depth discussion of the results, and meaningful conclusions.

NOTES:

  1. The datasets you create in step 1 should NOT have both a FAULTS and a CLASS column. You are replacing the FAULTS column, not adding an additional column.
  2. Remember, you are ONLY using Weka for step 2. Once you have the coefficients and intercept of the Logistic Regression model, you are done with Weka.
  3. The use of a spreadsheet program (for example, Excel) will help you greatly in completing step 3.
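The notes suggest a spreadsheet for step 3; an equivalent Python sketch is shown below. The coefficients, the example instances, and the exact form of the general classification rule (here assumed to choose “fp” when p/(1 − p) exceeds 1/c) are placeholders — copy the real intercept/coefficients from the Weka output and use the rule as described in class:

```python
import math

# Placeholder values: copy the intercept (beta0) and coefficients (beta1..betak)
# from the Weka "Logistic" output for the FIT data.
beta0 = -1.2
betas = [0.4, 0.03, -0.7, 1.1, 0.2, 0.05, -0.3, 0.6]

def predicted_p(features):
    """Logistic Regression probability p for one instance."""
    z = beta0 + sum(b * x for b, x in zip(betas, features))
    return 1.0 / (1.0 + math.exp(-z))

def classify(p, c):
    """Sketch of the general classification rule: 'fp' when p/(1-p) > 1/c.
    Adjust the comparison to the exact rule given in class."""
    return "fp" if p / (1.0 - p) > 1.0 / c else "nfp"

# Placeholder rows (8 metric columns, CLASS column excluded); replace with
# the real FIT/TEST instances.
instances = [
    [1.0, 2.0, 0.5, 3.0, 1.2, 0.8, 2.2, 0.3],
    [0.2, 1.5, 0.9, 2.1, 0.7, 1.1, 0.4, 1.8],
]

for c in [0.1, 0.5, 1, 2, 3, 4, 5, 6, 10, 15, 20, 30, 40, 50]:
    labels = [classify(predicted_p(x), c) for x in instances]
    print(c, labels.count("fp"), "fp,", labels.count("nfp"), "nfp")
```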


Statistics Question

Scenario: Imagine you are a researcher who is interested in studying whether sleep deprivation leads to increased reaction times (i.e., being slower) when driving. You randomly select a sample of 30 licensed drivers. Fifteen participants are randomly assigned to get 5 hours of sleep for three consecutive nights. The other 15 participants are randomly assigned to get 8 hours of sleep for three consecutive nights. For the purposes of this Assignment, assume that all participants sleep exactly the required amounts. After the third night, all participants take a driving simulation test that measures their reaction times.

You can find the data for this Assignment in the Weekly Data Set forum.

By Day 7

To complete this Assignment, submit a response to each of the following. Use SPSS to determine if amount of sleep is related to reaction time.

  1. Explain whether the researcher should use an independent-samples t-test or a related-samples t-test for this scenario. Provide a rationale for your decision.
  2. Identify the independent variable and dependent variable.
  3. Knowing the researcher believes that people who sleep less will have slower reaction times, state the null hypothesis and alternate hypothesis in words (not formulas).
  4. Explain whether the researcher should use a one-tailed test or two-tailed test and why.
  5. Identify the obtained t value for this data set using SPSS and report it in your answer document.
  6. State the degrees of freedom and explain how you calculated it by hand.
  7. Identify the p value using SPSS and report it in your answer document.
  8. Explain whether the researcher should retain or reject the null hypothesis. Provide a rationale for your decision. Are the results statistically significant?
  9. Explain what the researcher can conclude about the relationship between amount of sleep and reaction times.
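For items 5-7, the test is run in SPSS; the Python sketch below illustrates the same calculation. The reaction-time numbers are made-up placeholders, since the actual data are posted in the Weekly Data Set forum:

```python
from scipy import stats

# Placeholder data: replace with the reaction times from the Weekly Data Set
# forum (15 participants per group).
five_hours  = [310, 295, 330, 305, 320, 315, 340, 300, 325, 335, 312, 318, 328, 322, 308]
eight_hours = [280, 275, 290, 285, 270, 295, 288, 282, 278, 292, 284, 286, 279, 281, 287]

t_stat, p_two_tailed = stats.ttest_ind(five_hours, eight_hours)   # independent samples
df = len(five_hours) + len(eight_hours) - 2                       # n1 + n2 - 2 = 28

print(f"t = {t_stat:.3f}, df = {df}, two-tailed p = {p_two_tailed:.4f}")
# For the directional (one-tailed) hypothesis, halve the p-value when the
# difference is in the predicted direction.
```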

Be sure to fully explain the rationale for your answer to each question, including evidence from the text and Learning Resources.

Provide an APA reference list.

Submit three documents for grading:

  • Your text (Word) document with your answers and explanations to the assignment questions, your SPSS Data file, and your SPSS Output file.

    Assignment 2: Correlation

    As a consumer of research, you know that relationships are of critical importance. You must first know if a relationship exists between two variables before you can determine if one variable may account for another. In this week’s Learning Resources, you learned about correlations, which are used to determine if two variables are related to one another. You also learned that you cannot infer causation from a significant correlation alone. For example, you might find that years of education and salary are related, but that does not tell you if more education causes your salary to increase. In correlational studies, you cannot show that one variable causes a change in another variable. To determine causation, an experimental design is needed. However, correlations can demonstrate that as one variable increases, another tends to increase as well (positive relationship). You also may find that as one variable increases, the other tends to decrease (negative relationship). You may even find that there is no relationship at all between variables.

    Recall the researcher who investigated the relationship between hours of sleep and reaction times in the Week 4 Application. As a follow-up to that study, the researcher wants to conduct a correlation to investigate further if there is a relationship between hours of sleep and reaction time. For this experiment, participants are allowed to sleep as much as they would like (that is, they are not assigned to sleep any specific number of hours). When 20 participants come to their appointment time, they report to the researcher how many hours of sleep they had the previous night. The researcher then tests their reaction times. You can find the data for this Assignment in the Weekly Data Set forum found in the Discussions area of the course navigation menu.

    By Day 7

    To complete this Assignment, submit your answers to the following. Use SPSS to determine if amount of sleep is related to reaction time.

    1. Before computing the correlation, state the null hypothesis and alternative hypothesis in words (not formulas).
    2. Based on the hypotheses you stated, explain whether the researcher should conduct a one-tailed or two-tailed test.
    3. Identify the correlation coefficient (r) for this data set using SPSS and report it in your answer document.
    4. State the degrees of freedom and explain how you calculated it by hand.
    5. Identify the p value using SPSS and report it in your answer document.
    6. Explain whether the researcher should retain or reject the null hypothesis.
    7. Are the results statistically significant? Explain how you know.
    8. Explain what the researcher can conclude about the relationship between amount of sleep and reaction times. Include a description of the direction (positive, negative, or no relationship) and strength (weak, moderate, or strong) of the relationship.
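As with the previous assignment, SPSS produces the required output; the Python sketch below shows the same correlation on made-up placeholder values (the real 20 observations are in the Weekly Data Set forum):

```python
from scipy import stats

# Placeholder data: replace with the 20 participants' values from the
# Weekly Data Set forum.
hours_sleep    = [5, 6, 7, 8, 4, 6, 7, 5, 8, 6, 7, 5, 6, 8, 7, 4, 5, 6, 7, 8]
reaction_times = [330, 310, 300, 280, 345, 315, 295, 335, 275, 305,
                  298, 328, 312, 282, 301, 350, 332, 308, 296, 278]

r, p_value = stats.pearsonr(hours_sleep, reaction_times)   # Pearson correlation
df = len(hours_sleep) - 2                                  # n - 2 = 18

print(f"r = {r:.3f}, df = {df}, p = {p_value:.4f}")
```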

    Be sure to fully explain the rationale for your answer to each question, including evidence from the text and Learning Resources. Provide an APA reference list. Submit three documents for grading:

    • Your text (Word) document with your answers and explanations to the assignment questions, your SPSS Data file, and your SPSS Output file.


Statistics Question

Required Reading/Viewing

The following reading/viewing will provide you with the input you need to successfully complete this assignment.

  • Executives, by House, Layton, Livingston & Moseley, from The Engineering Communication Manual. This brief chapter introduces how to communicate effectively for executive audiences.
  • Professional Tone. This Canvas page describes ‘professional tone’ and writing to accomplish it.

The reading is provided to assist you with revising, editing, and proofreading your document and understanding why these are essential steps for effective communication.

The writing readings required for IDL 1 are hyperlinked here in case you want to review any information from them: revising drafts, flow, editing and proofreading, and reading aloud.

Additional Resource Hyperlinks

The following reading/viewing will provide you with links to other parts of the assignment and additional professional or general writing resources.

Compose Content

Use the template provided for the memorandum. Refer to the assignment overview for the description of the assignment problem.

No quotations and NO plagiarism. Quotations are rarely used in professional engineering and computer science writing; do not get in the habit of using them. The words in your document must represent your own thinking based on the information you have gathered. Citing credible sources from which you got ideas gives you credibility, and it is required in this work. Review the Plagiarism page if you are unclear on what constitutes plagiarism or are from a region of the world that may think about it differently.

Document Structure

To complete this assignment, you must use the provided memorandum template. The memorandum example document provides an example, on a different topic, of the scope and detail expected; pay attention to the appearance of your final memorandum. Use all headings as shown.

Your document will have the following elements:

  • List summarizing your revisions (see detail in the section below)
  • Memorandum Header. Use the header and header information on the provided template until you get to the “Re” line which is what the document is about and is the same as a title. Compose this line to specifically say what your memorandum is about as succinctly as possible.
  • Introduction. As with any professional document, the beginning of the document needs to provide readers with an overview of why it is important to them and what it will address. State what is important to the audience when it comes to agreeing to your request. Include a thesis statement that presents the main idea of the document and why it is being presented. State that you will support this in two ways: 1) the ways in which probability and statistics are used in the work of the [your field] division; and 2) how the probability and statistics proficiency of employees contributes to organizational success.
  • Probability and Statistics Use in [Your Field] Division Services. Using the understanding of probability and statistics in your field you developed in assignment IDL 1, revise your communication of this material to support the arguments you are making to executives. To assist you with this, remember the roles of executives within the organization as described in your reading. Be sure you explain these in terms of their priorities. In-text citations are required in this section.
  • The Importance of Probability and Statistics Literacy for Our Division’s Operational Success. In this section, you will present your argument as a cohesive, well-written section. You will incorporate the data from Analytics and AI-driven enterprises thrive in the Age of With: The culture catalyst as evidence. At least one in-text citation is required in this section; a minimum of two in-text citations are required if you complete the extra credit.
    • Extra Credit Option: If you complete the extra credit option detailed below, it will be part of this section.
  • Conclusion. In this section, reverse the order of your introduction content, maintain your meaning but use different words. Be sure to end with a statement of what you accomplished (your objective for writing it) by providing this memorandum.
  • Reference List. The document will end with an APA-formatted reference list with a minimum of 4 sources. Only include items on the reference list for which you have made an APA in-text citation in your document. You are required to have an in-text citation at any place in your memorandum where you have used the ideas, data, or images of others.
  • Extra Credit: If you complete the extra credit, insert a page break at the end of the assignment content, and paste in the review you completed for your classmate. You will not get credit without actionable feedback being provided.

Completing Review and Revision

Review and Revise Your Document Draft

At least one day after completing your draft and after the review workshop, revisit your draft and review it for:

Revise your work as needed. Even professional writers revise their work to make it better. If you feel you do not need any revision, you are not looking carefully enough. Keep a running list of the revisions you make. This list can be a bullet-pointed list that is a brief but understandable documentation of what type of changes you made and where you did this in the document. For example:

  • Content – I added more detail to the “use of statistics in my field” section, as mine lacked detail.
  • Coherence – I revised sentence structure to make the introduction, use of statistics, and conclusion read more coherently.
  • Clarity – I saw that I used the word “things” frequently and revised the document to use more specific wording in the “use of statistics” and “enhance my employability” sections.

Assignment Deliverable and Submittal

Deliverable: An 800-1400 word handout using the appropriate template, provided via the hyperlink in the Compose Content section of the draft assignment. The bullet-pointed list included before the memorandum and the required reference list do not count toward the word count.

Submittal: All assignments must be uploaded to the appropriate Canvas assignment before the assignment closes. I save the document as a pdf before submitting it, as pdfs are better for maintaining the document format than MS Word. No formats other than pdf and MS Word will be accepted, as they will not be readable for the graders.

Extra Credit

Extra Credit Option (up to 5 points) – The Extra Credit for this assignment is to complete a peer review with a classmate.

To complete the review:

  1. Go to the People tab of your Canvas course page, then click on the “Peer Review” tab, where you can sign up for a group in which to complete a peer review. Do this early, and then communicate through Canvas messages with the person you signed up with to arrange to exchange documents.
  2. Use the Feedback Considerations structure to complete your peer review, and copy and paste it, after a page break, at the end of your document. (Using this format is required so the graders can effectively review your work.) The document you will attach is the review you complete for another student. Use the review you receive to aid you in completing the revision list for your document.


Rubric: IDL 2 21F (1)

Unless noted otherwise, each criterion below is rated A (Well done), B (Needs improvement, but mostly well done), C (Needs significant improvement), D (Poorly done), or F (Missing, 0 pts), with the point values listed.

  • Revision summary provided (4 pts; A = 4, B = 3.5, C = 3, D = 2.5): A bullet-pointed list of revisions was provided, indicating that the student has made an effort to revise their document.
  • Introduction (5 pts; A = 5, B = 4.25, C = 3.75, D = 3): The title/subtitle makes sense for the document. The introduction addresses the audience, and the purpose of the handout is clear after reading it. It is brief (not more than 20% of the handout) and gives a short summary of the document.
  • Use of probability and statistics in the field (6 pts; A = 6, B = 5.2, C = 4.5, D = 3.8): There is a general overview of the uses of statistics/probability in the field. There is a clear connection between statistics and problems in the engineering world, and the language is not too technical for your reader (no formulas or data tables). The discussion is focused on a specific field of engineering (industrial, civil, …) or computer science.
  • Data literacy aspect (8 pts; A = 8, B = 7, C = 6, D = 5): Data from the required sources is included and discussed in the text. After reading this section, the audience understands the lag in data literacy in the American workplace and why it should be improved within your company.
  • Conclusion (5 pts; A = 5, B = 4.25, C = 3.75, D = 3): In this section, you summarize what you discussed about the use of probability and statistics in your field and how teaching this competency will enhance the company’s performance.
  • Formatting (2 pts; A = 2, B = 1.7, C = 1.5, D = 1.2; F = the handout is formatted like an essay or is very unprofessional): The required template was used and consistency was verified.
  • Writing (6 pts; A = 6, B = 5.2, C = 4.5, D = 3.8; F = the document is exceptionally poorly written): The student has proofread the paper for clarity, spelling, and grammatical errors. The tone is well suited for the audience and professional. The handout is well organized with headings and has good flow.
  • References (4 pts; A = 4, B = 3.5, C = 3, D = 2.5): In-text citations are used in the format (Author’s last name, year) anywhere you have used someone else’s ideas, data, or images. 4 sources are used in total. References are in APA format and on a separate page or not taking too much space on the handout.
  • Extra credit (5 pts; scored 0 pts Full Marks / 0 pts No Marks in the rubric): The student submitted a completely executed peer review at the end of the assignment.
  • ANY FORM OF PLAGIARISM will result in a zero for the entire assignment (0 pts Full Marks / 0 pts No Marks).
  • Enhanced Gen Ed Information and Data Literacy (threshold: 1.0 pts): Students will know when there is a need for information and be able to identify, locate, evaluate, and effectively and responsibly use and share information for the problem at hand.
    • 1 pt, Satisfactory: Student satisfactorily demonstrates information and data literacy attributes by using and sharing information to effectively and responsibly address the problem at hand.
    • 0 pts, Unsatisfactory: Student does not satisfactorily demonstrate information and data literacy attributes.
  • Enhanced Gen Ed Communication (threshold: 2.0 pts): Students will produce well-organized, well-developed communications that reflect appropriate use of language to achieve a specific purpose and address specific audiences.
    • 3 pts, Proficient: Student demonstrates a thorough understanding of required context and is able to create a skillfully developed presentation that provides relevant, detailed, and compelling assertions supported by credible and relevant sources.
    • 2 pts, Developing: Student demonstrates an adequate understanding of required context and is able to explore ideas that are fairly well developed in a presentation that is mostly organized and includes basic use of relevant sources to support ideas.
    • 1 pt, Novice: Student is able to develop simple ideas in portions of the assignment, provides a basic, somewhat organized presentation, and attempts to use sources to support ideas with some attention to context.
  • Enhanced Gen Ed Critical & Analytical Thinking (threshold: 2.0 pts): Students will comprehensively explore issues, ideas, artifacts, and events before accepting or formulating opinions or conclusions.
    • 3 pts, Proficient: Student identifies and states problems clearly and completely in understandable terms, carefully and comprehensively evaluates the relevance of assumptions and questions the viewpoints of experts when presenting a position, and formulates conclusions based on a thorough and logical thought process that reflects careful analysis of appropriate assumptions and evidence.
    • 2 pts, Developing: Student identifies problems to be considered critically with some omissions or lack of clarity, gathers mostly appropriate information to develop coherent arguments, and questions some conventional assumptions and often considers opposing viewpoints when formulating conclusions.
    • 1 pt, Novice: Student demonstrates some awareness of assumptions when identifying positions, states problems in simple terms without much clarification, generally accepts viewpoints of experts as fact without question, and routinely reaches conclusions not consistently tied to some of the available information.
  • Enhanced Gen Ed Problem Solving (threshold: 2.0 pts): Students will design, evaluate, and implement a strategy to answer open-ended questions or achieve desired goals.
    • 3 pts, Proficient: Student demonstrates the ability to construct a clear and insightful problem statement and to identify multiple approaches to solve a problem that indicate insightful comprehension of the problem, carefully addresses multiple contextual factors, and thoroughly reviews results relative to the problem with specific consideration of the need for further work.
    • 2 pts, Developing: Student demonstrates the ability to construct a problem statement, proposes multiple solutions that suggest comprehension of the problem, some of which apply within a specific context, and reviews results relative to the problem with some consideration of the need for further work.
    • 1 pt, Novice: Student demonstrates a limited ability to identify a problem statement and approaches for solving the problem, provides vague and cursory solutions that do not directly address a problem, and reviews results superficially with no consideration of the need for further work.

Total Points: 40


Statistics Question

Part One:

Conduct an independent samples t-test (a.k.a., between-subjects design, which is discussed in Ch. 10, and will be covered in Week 9 of this course) comparing two independent groups on any variable of your choice.

Each sample should have at least n=10 individuals (thus, 20 participants in the study total, at minimum)

What to turn in:

  1. State your research question (e.g., Is there a difference in coffee consumption between males and females?).
  2. State the null and alternative hypotheses, and then conduct the hypothesis test using an alpha level of .05.
  3. Calculate and interpret a measure of effect size (estimated Cohen’s d or r2).
  4. Calculate a 95% confidence interval.
  5. Write a conclusion statement, as would appear in a published research report.
  6. Critical analysis (up to 10 points of extra credit): Write a short paragraph (2-3 sentences) explaining why your research question is important and where your findings could be applied. This is very open ended. For example, if one were studying coffee consumption differences between males and females, they could discuss the importance of this information for Starbucks marketing purposes.
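A hedged Python sketch of the calculations in items 2-4 above, using made-up coffee-consumption numbers as a stand-in for whatever variable and groups you choose:

```python
from statistics import mean, stdev
from scipy import stats

# Placeholder data (cups of coffee per day); substitute your own two
# independent groups (n >= 10 each).
group1 = [2, 3, 1, 4, 2, 3, 2, 5, 3, 2]
group2 = [1, 2, 2, 1, 3, 1, 2, 2, 1, 3]

t_stat, p_value = stats.ttest_ind(group1, group2)      # hypothesis test, alpha = .05
df = len(group1) + len(group2) - 2

# Estimated Cohen's d using the pooled standard deviation
sp = (((len(group1) - 1) * stdev(group1) ** 2 +
       (len(group2) - 1) * stdev(group2) ** 2) / df) ** 0.5
cohens_d = (mean(group1) - mean(group2)) / sp

# 95% confidence interval for the difference between means
diff = mean(group1) - mean(group2)
se_diff = sp * (1 / len(group1) + 1 / len(group2)) ** 0.5
t_crit = stats.t.ppf(0.975, df)
ci = (diff - t_crit * se_diff, diff + t_crit * se_diff)

print(f"t({df}) = {t_stat:.3f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
print(f"95% CI for the mean difference: ({ci[0]:.3f}, {ci[1]:.3f})")
```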

Part Two:

Discussion 9

Describe a potential example of an “independent samples t-tests” (see Ch. 10) that could be conducted within the social sciences, and list what you believe the outcome of the research would be for this study. No data or calculations are necessary whatsoever, but you should describe why you developed your chosen hypothesis (e.g., based on your own understanding of current research, real world observations, a wild guess, etc.).

To receive full credit, this first section must be at least 100 words. Once you submit this post, you will then have access to everyone else’s posts, at which point you must then respond to at least one other student by adding to their discussion, asking them a question, or reacting to their post in a meaningful way. For instance, maybe you disagree with their hypotheses and you’d like to offer an alternative perspective.


Statistics Question

1 INTRODUCTION

This project investigates a very common problem in the communication of digital information: the variation of the bit error rate (or probability of bit error) as a function of the signal-to-noise ratio at the detector input. It is complicated, but I’ve tried to break out the steps as clearly as possible. This is a real, and immensely important, application of maximum likelihood detection techniques. Take your time. Do not leave it to the last minute. In revision 5, the changes are in red, so you can find them quickly. There is an additional task in 2.5. I’ve made these changes based on my experience in crafting a solution to Project 4, as well as comments or questions from students.

1.1 Due Date

Project 4 is due on Wednesday, May 5, at 11:59 PM. Project 5 is due on Monday, May 17, at 11:59 PM.

1.2 Background

Binary Amplitude Shift Keying (BASK) is a technique for sending binary information by transmitting one of two signal levels, with one level assigned to a binary 1 and the other assigned to a binary 0. One common mapping is to send a level of +A for one bit value and a level of −A for the other. For this project we will assume a BASK system with the input being a binary random variable B ∈ {0,1}, with Pr[B = 0] = p0 and Pr[B = 1] = 1 − p0, and the mapping b = 0 → m = +A, b = 1 → m = −A. The random variable Mk corresponds to the k-th symbol in a string of symbols, generated by the mapping of the input bit Bk. The values of Bk and Mk for different values of k are independent and identically distributed, with

f_B(b) = { p0, b = 0; 1 − p0, b = 1 },    f_M(m) = { p0, m = +A; 1 − p0, m = −A }.    (1.1)

As in Project 2, this sequence of i.i.d. random values is corrupted by additive white Gaussian noise, where the pdf of the noise is

f_N(n) ~ N(0, σ²) = (1 / √(2πσ²)) e^(−n² / (2σ²)),    −∞ < n < ∞.    (1.2)

Our model for our received signal is

R = M + N,    (1.3)

and we can assume that the random variables M and N are independent.

Hint: This is the same model used in Project 3, except we will be varying the noise variance.

Hint: Notice that we are NOT using M and N as the number of rows and number of columns in your R matrix, as we have done before! Please be careful and don’t confuse them!

We are tasked to design a maximum a posteriori (MAP) detection method that will examine the received random variable Rk at time instant k and estimate the value of the corresponding input binary digit, where bk ∈ {0,1} is the binary digit used to create the BASK value mk, and b̂k is the estimate at the output of the MAP detector. For this simple model, the signal-to-noise ratio, γ, is defined as

γ = A² / σ²,    (1.4)

and is often expressed in decibels, γdB = 10 log10(γ). Throughout your implementation of this project, please assume that A = 1, so that γ = 1/σ² and γdB = −10 log10(σ²) = −20 log10(σ). Figure 1 shows the bits, messages, noise, received values, estimates, and errors. The pmf for the input bits is not changed for any particular set of experiments. The entire process is repeated for various values of γdB, which specify the variance by means of (1.4).

2 PROJECT TASKS

Perform the following tasks, document your results, and submit them in written form in accordance with the instructions in Section 3, below. You may use this document as a format.

2.1 Design the MAP Detector

Design a MAP detector by the following steps.

1) Write expressions for the conditional pdfs f_{R|M}(r | M = +A) and f_{R|M}(r | M = −A).

2) Transform those conditional pdfs into the joint pdfs f_{R,M}(r, +A) and f_{R,M}(r, −A) by application of Bayes’ Rule. The answer will explicitly contain p0.

3) Using these joint pdfs, set up the ratio L = f_{R,M}(r, +A) / f_{R,M}(r, −A) (think through what this means!). You will see that this ratio contains exponentials, the Gaussian constant, and a ratio of terms explicitly containing p0. When L > 1, the numerator is larger than the denominator and thus the input b = 0 is more likely.

4) Carefully take the natural log of the ratio and set the result equal to zero, because log(1) = 0. (Note that there is no derivative necessary here; we’re not formally maximizing, we’re just comparing the two pdfs.) Solve Λ = log(L) = 0 for r = τMAP, the value that makes the equation true.

Hint: The MATLAB routine log takes the natural logarithm; log10 is the logarithm to the base 10.

Your MAP detector uses this voltage threshold, τMAP, to decide if each individual received value rk (in volts) should be interpreted as a binary 1 or a binary 0. That is,

b̂k = 1 if rk < τMAP,    b̂k = 0 if rk ≥ τMAP.    (1.5)

Explain how your value of τMAP does or does not depend on p0. For a noise variance equivalent to a fixed value of γdB = 10 dB (remember the assumption that A = 1), plot the values of your MAP threshold, τMAP, as a function of p0 over the range 0.01 ≤ p0 ≤ 0.99. Discuss why your MAP threshold curve makes sense in terms of the interpretation of p0. Why does the threshold move in one direction or the other based on p0?
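As a check on the algebra only (not a substitute for showing your own steps), one possible derivation under the mapping b = 0 → m = +A, b = 1 → m = −A assumed above is:

```latex
% Sketch of the MAP threshold derivation, assuming b = 0 -> m = +A, b = 1 -> m = -A.
\[
L = \frac{f_{R,M}(r, +A)}{f_{R,M}(r, -A)}
  = \frac{p_0 \, e^{-(r - A)^2 / (2\sigma^2)}}{(1 - p_0)\, e^{-(r + A)^2 / (2\sigma^2)}},
\qquad
\Lambda = \ln L = \ln\frac{p_0}{1 - p_0} + \frac{2 A r}{\sigma^2}.
\]
\[
\Lambda = 0 \;\Rightarrow\;
\tau_{\mathrm{MAP}} = -\frac{\sigma^2}{2A}\,\ln\frac{p_0}{1 - p_0},
\qquad
\tau_{\mathrm{ML}} = 0 \ \text{(when } p_0 = \tfrac{1}{2}\text{)}.
\]
```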

2.2 Investigate the MAP detector

For a value of $p_0 = 0.6$ and a value of $\gamma_{dB} = 10$ dB, generate a large number of i.i.d. samples of the random variable $R$ and plot the histogram corresponding to the pdf $f_R(r)$. Using this value of $p_0$ and the equivalent value of $\sigma^2$, plot the analytical value of $f_R(r)$ on the same figure. Indicate on your plot where the MAP threshold would be for this set of parameters. Look at the relative amplitudes of the "humps" of the Gaussian histograms and think about what they mean.
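As a hedged, non-MATLAB illustration of what this experiment produces, a short Python sketch that draws samples of $R$ for $p_0 = 0.6$ and $\gamma_{dB} = 10$ dB and overlays the two-hump Gaussian-mixture form of $f_R(r)$ implied by the model above (the mixture expression itself is an assumption that follows from (1.1)-(1.3)):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hedged sketch of the 2.2 experiment (the project itself asks for MATLAB):
# draw samples of R = M + N and overlay f_R(r) = p0*N(+A,sigma^2) + (1-p0)*N(-A,sigma^2).
rng = np.random.default_rng(0)
p0, A, gamma_dB, n = 0.6, 1.0, 10.0, 100_000
sigma = np.sqrt(A**2 / 10**(gamma_dB / 10.0))

b = (rng.random(n) >= p0).astype(int)        # P(b=0) = p0, P(b=1) = 1 - p0
m = np.where(b == 0, +A, -A)                 # assumed mapping: 0 -> +A, 1 -> -A
r = m + rng.normal(0.0, sigma, n)

x = np.linspace(-3, 3, 600)
gauss = lambda x, mu: np.exp(-(x - mu)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
f_R = p0 * gauss(x, +A) + (1 - p0) * gauss(x, -A)

plt.hist(r, bins=100, density=True, alpha=0.5, label="samples of R")
plt.plot(x, f_R, label="analytical f_R(r)")
plt.axvline(sigma**2 / (2 * A) * np.log((1 - p0) / p0), ls="--", label="MAP threshold")
plt.legend(); plt.xlabel("r"); plt.show()
```

The relative heights of the two humps track $p_0$ and $1 - p_0$, which is exactly the observation the task asks you to discuss.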

2.3 Evaluate the ML Detector

2.3.1 Find the ML threshold

Modify your MAP detector to be a maximum likelihood (ML) detector by assuming that a binary 1 and a binary 0 are equally likely, and determine the ML threshold, $\tau_{ML}$. Explain why this value makes sense.

2.3.2 Derive the probability of error

Then, using the value of your ML threshold, analytically compute the theoretical probability of bit error, $p_{BT}$, using the following steps:

1) Write the conditional pdf when the input bit is $b = 0$.

2) Using the threshold in (1.5), write the limits of the region where the ML decision selects a 1 instead of a 0.

3) Integrate the conditional pdf over that region to find the conditional probability of error given that the input was a 0. Hint: The Q function will be of use here; consider your change of variables from an early homework.

4) Write the conditional pdf when the input bit is $b = 1$.

5) Using the threshold in (1.5), write the limits of the region where the ML decision selects a 0 instead of a 1.

6) Integrate the conditional pdf over that region to find the conditional probability of error given that the input was a 1.

7) Use the Principle of Total Probability to write an expression for the unconditional probability of error. This is $p_{BT}$. For full credit, combine the terms wherever possible. Hint: Remember the Q function. MATLAB knows about erfc, but not Q.
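The last hint is worth making concrete. Below is a hedged sketch of the Q/erfc relationship (written in Python here; MATLAB's erfc behaves the same way), together with the value the integration commonly reduces to for symmetric $\pm A$ signalling with threshold 0, namely $p_{BT} = Q(A/\sigma) = Q(\sqrt{\gamma})$ with $A = 1$. Treat that closed form as a cross-check under the assumptions stated in this document, not as the assigned derivation.

```python
import numpy as np
from scipy.special import erfc

# Hedged sketch: Q expressed via erfc, and the theoretical ML error probability the
# derivation is expected to reduce to for the symmetric +/-A signalling assumed here.
def Q(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

gamma_dB = 12.6
p_BT = Q(np.sqrt(10**(gamma_dB / 10.0)))    # A = 1, sigma = 1/sqrt(gamma)
print(p_BT)   # roughly 1e-5, consistent with the "1 error in 100,000 trials" hint in 2.3.3
```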

2.3.3 Simulate the ML detector

Evaluate the bit error performance of the ML detector as a function of the signal-to-noise ratio by varying $\gamma_{dB}$ from 1 dB to 8 dB in steps of 0.5 dB, and then from 8.5 dB to 13 dB in steps of 0.25 dB. At each value of $\gamma_{dB}$, generate a very large number of trials of the random variable $R$, apply your ML decoder, and compare $b_k$ with the estimate $\hat{b}_k$. Hint: errors = mod(bk-bkhat,2). Count all of the errors at each value of $\gamma_{dB}$, and compute your experimental probability of bit error,

$$p_{BX} = \frac{N_{error}}{N_{trials}} \qquad (1.6)$$

at each value of $\gamma_{dB}$, where $N_{error}$ is the number of errors counted and $N_{trials}$ is the number of trials. Save the values $(\gamma_{dB}, p_{BX})$ for use in Section 2.5. Plot the simulated results $\log_{10}(p_{BX})$ on the y-axis against $\gamma_{dB}$ on the x-axis. On the same figure plot a smooth curve of $\log_{10}(p_{BT})$ using your theoretical value. Discuss any differences between your simulated values and your analytical curve.

Hint: At $\gamma_{dB} = 12.6$ dB, there should be about 1 error in every 100,000 trials, so you may need several hundred thousand trials for the higher values of $\gamma_{dB}$.

Hint: To plot the analytical values, you can use a much finer step size for $\gamma_{dB}$ to get a smooth curve, because you are just doing the computation, not a simulation, at each point. You might also want to plot the simulated values using only symbols to make clear the exact coordinates of the simulation points.

Hint: Your curves should have a negative slope: as $\gamma_{dB}$ increases, $p_B$ decreases.

Hint: Consider the MATLAB function semilogy for the plots.
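A minimal Python sketch of the Monte Carlo loop described above (the project itself asks for MATLAB; the 0 → +A, 1 → −A mapping and $\tau_{ML} = 0$ are assumptions carried over from the earlier sections):

```python
import numpy as np

# Hedged sketch of the 2.3.3 simulation loop. ML detector: threshold 0,
# mapping b=0 -> +A, b=1 -> -A assumed, p0 = 0.5 for the ML case.
rng = np.random.default_rng(1)
A, p0, n_trials = 1.0, 0.5, 500_000
gamma_dB = np.concatenate([np.arange(1.0, 8.01, 0.5), np.arange(8.5, 13.01, 0.25)])

p_BX = np.empty_like(gamma_dB)
for i, g in enumerate(gamma_dB):
    sigma = A / np.sqrt(10**(g / 10.0))
    b = (rng.random(n_trials) >= p0).astype(int)
    r = np.where(b == 0, +A, -A) + rng.normal(0.0, sigma, n_trials)
    b_hat = (r < 0.0).astype(int)            # tau_ML = 0: decide 1 below the threshold
    p_BX[i] = np.mean(b != b_hat)            # experimental error rate, equation (1.6)

# log10(p_BX) versus gamma_dB would then be plotted against the analytical curve.
```

At the highest SNR values this many trials may produce zero errors on a given run, which is one reason the hint suggests several hundred thousand trials.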

2.4 Evaluate the MAP Detector

2.4.1 Derive the probability of error using the MAP threshold

Following the same steps as 2.3.2, write an expression for the probability of error for the MAP threshold. Your answer should explicitly contain $p_0$.

2.4.2 Simulate the MAP detector

Following the same steps as 2.3.3, simulate the bit error performance of the MAP detector. Remember to use $p_0$ when establishing your random binary bits, $b_k$. Plot the simulated results $\log_{10}(p_{BX})$ on the y-axis against $\gamma_{dB}$ on the x-axis. On the same figure plot a smooth curve of $\log_{10}(p_{BT})$ using your theoretical value. Discuss any differences between your simulated values and your analytical curve.

2.5 Compare the MAP and ML Detector performance

Finally, plot the analytical $\log_{10}(p_{BT})$ for the ML detector (from 2.3) and the analytical $\log_{10}(p_{BT})$ for the MAP detector (from 2.4) against the same x-axis; that is, plot two curves on the same figure. Discuss any differences. Then, on a separate figure, plot the ratio

$$\rho = \frac{p_{BT}\ \text{for}\ p_0 = 0.6\ \text{(MAP)}}{p_{BT}\ \text{for}\ p_0 = 0.5\ \text{(ML)}} \qquad (1.7)$$

on the (linear) y-axis against the value of $\gamma_{dB}$ on the x-axis. How does this ratio compare with a value of 1, which would indicate that the probability of error was the same for MAP and ML? Why does the ratio (compared to 1) make sense? Hint: What does $p_0 \ne 0.5$ tell us?

3 INSTRUCTIONS FOR PROJECT REPORT

3.1 Report Format

The project report shall be in the same form as this document, with an introduction, a simulation and discussion section, and a "what I learned" section. Each section shall contain the content identified in Section 2 and the appropriate Section 3 subsection below. The report shall be in Times New Roman 11 point font. MATLAB pictures shall be pasted in-line in the report (this is a useful skill to know!); they shall be numbered consecutively and appropriately titled; the axes shall be appropriately labeled; and the curves shall be identified by an appropriate legend.

3.2 Section 2 Content

Section 2 of the report shall be titled "Simulation and Discussion" and shall contain the simulation plots and a discussion of each plot. The discussion shall address the points identified in Section 2, and any other interesting observations that occur to you. Remember, I know this stuff: you don't. So take a look at the plots and tell me what you see and what it means to you.

3.3 Section 3 Content

Section 3 of the report shall be titled "What I learned" and shall contain a summary of what information you observed, what insights you gained, etc. Section 3 shall also contain a subsection critiquing the project and suggesting improvements that I could institute for next spring. Finally, Section 3 shall contain an estimate of how much time you spent on the project, including reading, research, programming, writing, and final preparation.

3.4 Questions

I will accept questions regarding the project through the Ask the Professor discussion forum through 6 PM on Tuesday April 13, 2021. Please plan to check the Ask the Professor discussion forum frequently to learn of clarifications and hints (if I give any!). In my opinion, Project 3 is actually easier than Project 2, so I'm less likely to give direct assistance.

3.5 Project Grading

The project will be graded in the following way: 75% of the project score shall depend on the technical, theoretical, and graphical presentations of the tasks set out in Section 2 of this document. 25% of the project score shall be based on an evaluation of the technical writing against the Rubric on Technical Writing, posted on Blackboard, including grammar, clarity, organization, etc.


Statistics Question


Statistics Question

This project reviews a dataset from the CDC on COVID-19 deaths. I need to explore the dataset and provide statistical analyses that support the hypotheses listed below. This project must be done using SAS programming.

I have created the business questions and hypotheses below, and I need to create graphs and statistical tests that reject or support each hypothesis. The requirement is to use and show statistical tests in SAS, and to show any charts or graphs that support those tests. I need some creative ways to do the SAS work that will support my business questions. We may not have all the data needed, but hopefully enough to provide recommendations. The following hypotheses need to be answered and/or supported with statistical analysis in SAS.

Research Hypothesis

  • 1. Were pneumonia deaths a cause of under-reporting of COVID-19 deaths at the start of the pandemic?
    • Null hypothesis – Pneumonia deaths were significantly higher than the normal rates in 2020.
    • Alternative Hypothesis – Pneumonia deaths were at normal levels throughout the U.S. in 2020.
  • 2. Did the political environment of each state during 2020 create a higher death rate of people 50 and older?
    • The majority of the states with the highest COVID19 death rates among people 50 and older had governors from the same political party.
    • The majority of states with the highest COVID19 deaths did not have governors from the same political party.
  • 3. Were COVID19 deaths under-reported at the beginning of the pandemic?
    • COVID19 deaths were under-reported in 2020.
    • COVID19 deaths were not under-reported in 2020.
  • 4. Did the states with the highest COVID19 deaths have political similarities?
    • The states with the highest COVID19 death rates in 2020 were led by the Republican Party.
    • The states with the highest COVID19 death rates in 2020 were led by the Democratic Party.
    • The states with the highest COVID19 death rates in 2020 had no party relationship trend.

Requirements are as follows (use the dataset attached to this post):
  • 1. Provide SAS statistical analysis that includes charts, graphs, or tables for each question, with a two-to-three sentence explanation of the supporting analysis for each question. Provide all SAS code used to create the tests and charts in a separate txt file.

    2. Provide recommendation for further data that would be needed if a question or hypothesis could not be answered. Note: The dataset has a column named “political_Party” this is the political party for the associated state.
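Not a substitute for the required SAS code (e.g., PROC TTEST or PROC FREQ would be the natural tools there), but as a language-agnostic sketch of the kind of comparison behind hypotheses 2 and 4: comparing death rates between states grouped by governor party. The column "covid_death_rate" and its values are made up for illustration; "political_Party" is the column named in the note above.

```python
import pandas as pd
from scipy import stats

# Hedged illustration only: compare mean death rates between governor parties.
df = pd.DataFrame({
    "state": ["A", "B", "C", "D", "E", "F"],
    "political_Party": ["Republican", "Democrat", "Republican",
                        "Democrat", "Republican", "Democrat"],
    "covid_death_rate": [180.0, 150.0, 200.0, 140.0, 175.0, 160.0],  # made-up values
})

rep = df.loc[df["political_Party"] == "Republican", "covid_death_rate"]
dem = df.loc[df["political_Party"] == "Democrat", "covid_death_rate"]
t_stat, p_value = stats.ttest_ind(rep, dem, equal_var=False)   # Welch two-sample t-test
print(t_stat, p_value)
```

The same grouping logic, translated into SAS, would support the charts and tests the assignment asks for.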

      Place this order or similar order and get an amazing discount. USE Discount code “GET20” for 20% discount

      Posted in Uncategorized

      Statistics Question

      Assignment Details

      The below scenario describes a real-world or business application that utilizes statistical analysis to help resolve a business problem: increasing efficiency by decreasing processing time. Prepare an analysis by completing five steps of the hypothesis testing with one sample. The report should be a minimum of 5 pages in length.

      Last week, your manager asked you to analyze staffing needs for the Foreclosure Department. She was so impressed, and she wants you to create another report for her. Her intention is to decrease the processing time per document.

      Based on last week’s report, the average number of processed documents per hour was 15.11, with a standard deviation of 2.666. That is, one document was reviewed in 238.25 seconds. To be objective as much as possible, the manager spoke with an employee whose average was exactly 15 documents per hour. The employee claimed that if she was given a larger monitor, the processing time would be shorter.

      They conducted an experiment with a large monitor and measured processing time. After reviewing 20 documents, the calculated average processing time per document was 190.58 seconds. The manager believes that a bigger monitor helped reduce the processing time for reviewing foreclosure documents. Conduct a hypothesis test using a 95% confidence level, which means that the significance level is α = 0.05.

      Use the 5-step process, and explain each term or concept mentioned in each section in the following.

      Step 1: Set Up Null and Alternative Hypotheses

      Based on the request description, explain if a one-tailed or two-tailed test is needed. If a one-tailed test is needed, is it a left or right-tailed test? Please explain why one alternative is better than the other.

      State both of the following hypotheses:

      Null hypothesis
      Alternative hypothesis
      You will need the following information to progress to Step 2:

      Standard deviation: Explain what standard deviation is. Locate the calculated standard deviation in the assignment description, and enter here.
      Random variable: Explain what a random variable is. Locate it in the assignment description, and enter here.
      Test type: Compare and contrast t-test and z-test. Once done, determine which one is appropriate for the experiment given the fact that the sample size is less than 30.
      Step 2: Decide the Level of Significance

      Explain what the significance level is, and determine whether the one used in the assignment description is high, medium, or low. What does this significance level tell you about this test? Locate the level of significance in the given scenario, and list it in this step.

      Significance level = ?

      Determine the degree of freedom based on the number of reviewed documents in the new experiment (n = 20) and based on the formula Degree of freedom = n – 1.

      Degree of freedom = ?

      Critical value = (You will need to use the t-table and find the intersection point between the degree of freedom and the alpha value that is provided in the assignment description.)

      Step 3: Calculate the Test Statistics

      Calculate the test statistics based on the test type determined in Step 1.

      If the determination was done correctly, you should use the one-sample t-statistic formula, t = (x̄ − μ0) / (s / √n), to calculate the test statistic.

      Test statistics = ?
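As a hedged cross-check of Steps 2 and 3 (the report itself should show the hand calculation), here is a short Python sketch of the degrees of freedom, critical value, and one-sample t statistic. One plausible reading of the scenario converts the new average of 190.58 seconds per document into documents per hour so that it is in the same units as the historical mean (15.11) and standard deviation (2.666); whether that conversion is the intended one is an assumption you should state and justify in your paper.

```python
from scipy import stats

# Hedged sketch of the Step 2-3 computations under the unit-conversion assumption above.
mu0  = 15.11                  # historical mean, documents per hour
s    = 2.666                  # historical standard deviation, documents per hour
n    = 20                     # documents reviewed in the experiment
xbar = 3600.0 / 190.58        # ~18.89 documents per hour with the larger monitor

df    = n - 1                                 # degrees of freedom
t     = (xbar - mu0) / (s / n**0.5)           # one-sample t statistic
crit  = stats.t.ppf(1 - 0.05, df)             # right-tailed critical value at alpha = 0.05
p_val = stats.t.sf(t, df)                     # right-tailed p-value
print(round(t, 2), round(crit, 3), p_val)
```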

      Step 4: Compare the Calculated Test Statistics and the Critical Value

      Construct a bell-shaped diagram showing the critical value and the calculated test statistic.

      Step 5: Reach a Conclusion

      Was the manager’s conclusion correct? Share your conclusions on the assumptions in the scenario using the hypothesis testing that you conducted in the previous four steps.

      Use this template to complete your assignment.

      Submitting your assignment in APA format means, at a minimum, you will need the following:

      Title page: The title should be in all capitals.
      Length: 5 pages minimum
      Body: This begins on the page following the title page and abstract page and must be double-spaced (be careful not to triple- or quadruple-space between paragraphs). The typeface should be 12-pt. Times Roman or 12-pt. Courier in regular black type. Do not use color, bold type, or italics, except as required for APA-level headings and references. The deliverable length of the body of your paper for this assignment is 5 pages. In-body academic citations to support your decisions and analysis are required. A variety of academic sources is encouraged.
      Reference page: References that align with your in-body academic sources are listed on the final page of your paper. The references must be in APA format using appropriate spacing, hanging indent, italics, and uppercase and lowercase usage as appropriate for the type of resource used. Remember, the Reference page is not a bibliography but a further listing of the abbreviated in-body citations used in the paper. Every referenced item must have a corresponding in-body citation.

      Place this order or similar order and get an amazing discount. USE Discount code “GET20” for 20% discount

      Posted in Uncategorized

      Statistics Question

      Scenario

      You have been hired by the Regional Real Estate Company to help them analyze real estate data. One of the company’s Pacific region salespeople just returned to the office with a newly designed advertisement. It states that the average cost per square foot of his home sales is above the average cost per square foot in the Pacific region. He wants you to make sure he can make that statement before approving the use of the advertisement. The average cost per square foot of his home sales is $275. In order to test his claim, you collect a sample of 1,001 home sales for the Pacific region.

      Prompt

      Design a hypothesis test and interpret the results using significance level α = .05.

      Use the House Listing Price by Region document to help support your work on this assignment. You may also use the Descriptive Statistics in Excel and Creating Histograms in Excel tutorials for support.

      Specifically, you must address the following rubric criteria, using the Module Five Assignment Template.

      • Setup: Define your population parameter, including hypothesis statements, and specify the appropriate test.
        • Define your population parameter.
        • Write the null and alternative hypotheses. Note: Remember, the salesperson believes that his sales are higher.
        • Specify the name of the test you will use.
          • Identify whether it is a left-tailed, right-tailed, or two-tailed test.
        • Identify your significance level.
      • Data Analysis Preparations: Describe sample summary statistics, provide a histogram and summary, check assumptions, and find the test statistic and significance level.
        • Provide the descriptive statistics (sample size, mean, median, and standard deviation).
        • Provide a histogram of your sample.
        • Describe your sample by writing a sentence describing the shape, center, and spread of your sample.
        • Determine whether the conditions to perform your identified test have been met.
      • Calculations: Calculate the p value, describe the p value and test statistic in regard to the normal curve graph, discuss how the p value relates to the significance level, and compare the p value to the significance level to reject or fail to reject the null hypothesis.
        • Determine the appropriate test statistic, then calculate the test statistic.
          Note: This calculation is (mean – target)/standard error. In this case, the mean is your regional mean (Pacific), and the target is 275.
        • Calculate the p value.
          Note: For a right-tailed test, use the T.DIST.RT function in Excel; for a left-tailed test, use T.DIST; and for a two-tailed test, use T.DIST.2T. The degrees of freedom are calculated by subtracting 1 from your sample size. (A Python equivalent of the right-tailed case is sketched after this list.)
          Choose your test from the following:
          =T.DIST.RT([test statistic], [degree of freedom])
          =T.DIST([test statistic], [degree of freedom], 1)
          =T.DIST.2T([test statistic], [degree of freedom])
        • Using the normal curve graph as a reference, describe where the p value and test statistic would be placed.
      • Test Decision: Discuss the relationship between the p value and the significance level, including a comparison between the two, and decide to reject or fail to reject the null hypothesis.
        • Discuss how the p value relates to the significance level.
        • Compare the p value and significance level, and make a decision to reject or fail to reject the null hypothesis.
      • Conclusion: Discuss how your test relates to the hypothesis and discuss the statistical significance.
        • Explain in one paragraph how your test decision relates to your hypothesis and whether your conclusions are statistically significant.
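If it helps to sanity-check the Excel functions listed above, here is a hedged Python equivalent of the right-tailed calculation. The synthetic numbers are placeholders only; in practice the array would hold the 1,001 cost-per-square-foot values from your Pacific-region sample.

```python
import numpy as np
from scipy import stats

# Hedged Python mirror of the Excel steps (=T.DIST.RT), using placeholder data.
rng = np.random.default_rng(42)
pacific = rng.normal(280, 60, 1001)          # placeholder values, not the real sample

n      = len(pacific)
xbar   = pacific.mean()
se     = pacific.std(ddof=1) / np.sqrt(n)    # standard error
t_stat = (xbar - 275) / se                   # (mean - target) / standard error
p_val  = stats.t.sf(t_stat, df=n - 1)        # right-tailed p-value, like =T.DIST.RT
print(t_stat, p_val)
```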

      Place this order or similar order and get an amazing discount. USE Discount code “GET20” for 20% discount

      Posted in Uncategorized

      Statistics Question

      Assignment #1: Quantitative Analysis

      For this assignment, students should choose data from the quantitative analysis options below and analyze it using Excel or RStudio (BONUS points).

      Data set:

      Minnesota Healthcare Database.xlsx

      Medicare National Data by County

      MN Hospital Report Data by Care Unit FY2013

      MN HCCIS Imaging Procedures 2013

      MEPS Dental Files

      MEPS Inpatient Stay Database

      Students will develop an analysis report, in five main sections, including introduction, research method (research questions/objective, data set, research method, and analysis), results, conclusion and health policy recommendations. This is a 5-6 page individual project report.

      Here are the main steps for this assignment.

      TOPIC: – Comparing average annual percent of diabetic Medicare enrollees age 65-75 having hemoglobin A1c between B and W (#1)

      Step 2: Develop the research question and hypothesis.

      Step 3: Run the analysis using EXCEL (RStudio for BONUS points) and report the findings using the assignment instruction.

      The Report Structure:

      Start with the

      1.Cover page (1 page, including running head).

      Please look at the example http://www.apastyle.org/manual/related/sample-experiment-paper-1.pdf (you can download the file from the class) and http://www.umgc.edu/library/libhow/apa_tutorial.cfm to learn more about the APA style.

      In the title page include:

      • Title, this is the approved topic by your instructor.
      • Student name
      • Class name
      • Instructor name
      • Date

      2.Introduction

      Introduce the problem or topic being investigated. Include relevant background information, for example;

      • Indicates why this is an issue or topic worth researching;
      • Highlight how others have researched this topic or issue (whether quantitatively or qualitatively), and
      • Specify how others have operationalized this concept and measured these phenomena

      Note: Introduction should not be more than one or two paragraphs.

      Literature Review

      There is no need for a literature review in this assignment

      3.Research Question or Research Hypothesis

      What is the Research Question or Research Hypothesis?

      ***Just in time information: Here are a few points for Research Question or Research Hypothesis

      There are basically two kinds of research questions: testable and non-testable. Neither is better than the other, and both have a place in applied research.

      Examples of non-testable questions are:

      How do managers feel about the reorganization?

      What do residents feel are the most important problems facing the community?

      Respondents’ answers to these questions could be summarized in descriptive tables and the results might be extremely valuable to administrators and planners. Business and social science researchers often ask non-testable research questions. The shortcoming with these types of questions is that they do not provide objective cut-off points for decision-makers.

      In order to overcome this problem, researchers often seek to answer one or more testable research questions. Nearly all testable research questions begin with one of the following two phrases:

      Is there a significant difference between …?

      Is there a significant relationship between …?

      For example:

      Is there a significant relationship between the age of managers and their attitudes towards the reorganization?

      A research hypothesis is a testable statement of opinion. It is created from the research question by replacing the words “Is there” with the words “There is,” and also replacing the question mark with a period. The hypotheses for the two sample research questions would be:

      There is a significant relationship between the age of managers and their attitudes towards the reorganization.

      It is not possible to test a hypothesis directly. Instead, you must turn the hypothesis into a null hypothesis. The null hypothesis is created from the hypothesis by adding the words “no” or “not” to the statement. For example, the null hypotheses for the two examples would be:

      There is no significant relationship between the age of managers and their attitudes towards the reorganization.

      There is no significant difference between white and minority residents with respect to what they feel are the most important problems facing the community.

      All statistical testing is done on the null hypothesis…never the hypothesis. The result of a statistical test will enable you to either:

      1) reject the null hypothesis, or

      2) fail to reject the null hypothesis. Never use the words “accept the null hypothesis.”

      *Source: StatPac for Windows Tutorial. (2017). User’s Guide; Formulating Hypotheses from Research Questions. Retrieved May 17, 2019 from https://statpac.com/manual/index.htm?turl=formulatinghypothesesfromresearchquestions.htm

      What does significance really mean?

      “Significance is a statistical term that tells how sure you are that a difference or relationship exists. To say that a significant difference or relationship exists only tells half the story. We might be very sure that a relationship exists, but is it a strong, moderate, or weak relationship? After finding a significant relationship, it is important to evaluate its strength. Significant relationships can be strong or weak. Significant differences can be large or small. It just depends on your sample size.

      To determine whether the observed difference is statistically significant, we look at two outputs of our statistical test:

      P-value: The primary output of statistical tests is the p-value (probability value). It indicates the probability of observing the difference if no difference exists.

      Example of Welch Two Sample T-test from Exercise 1

      The p-value from the above example, 0.9926, indicates that we'd expect to see a meaningless (random) difference in 'hospital beds' of this size or larger about 993 times in 1000 if there really is no difference (0.9926 * 1000 = 992.6 ≈ 993), so the data provide essentially no evidence against the null hypothesis.

      Note: This is an example from the week1 exercise.

      An example from Exercise 1

      The p-value from above example, 0.0001, indicates that we’d expect to see a meaningless (random) ‘number of the employees on payer’ difference of 5% or more only about 0.1 times in 1000 (0.0001 * 1000=0.1).

      CI around Difference: A confidence interval around a difference that does not cross zero also indicates statistical significance. The graph below shows the 95% confidence interval around the difference between hospital beds in 2011 and 2012 (CI: [-40.82 ; 40.44]):

      Confidence Interval Example

      CI around Difference: A confidence interval around a difference that does not cross zero also indicates statistical significance. The graph below shows the 95% confidence interval around the difference between hospital beds in 2011 and 2012 (CI: [-382.16 ; 125.53]):

      Confidence Interval Example

      The boundaries of this confidence interval around the difference also show the upper [40.44] and lower [-40.82] bounds.
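As a hedged illustration of these two ideas (the assignment itself asks for Excel or RStudio), a short Python sketch of a Welch two-sample t-test plus an approximate 95% confidence interval around the difference in means, using placeholder arrays in place of the MN hospital-bed data:

```python
import numpy as np
from scipy import stats

# Hedged sketch: Welch two-sample t-test and an approximate 95% CI around the
# difference in means. "beds_2011" and "beds_2012" are placeholder arrays.
rng = np.random.default_rng(0)
beds_2011 = rng.normal(250, 80, 140)
beds_2012 = rng.normal(248, 82, 140)

t_stat, p_value = stats.ttest_ind(beds_2011, beds_2012, equal_var=False)  # Welch test

diff = beds_2011.mean() - beds_2012.mean()
se   = np.sqrt(beds_2011.var(ddof=1)/len(beds_2011) + beds_2012.var(ddof=1)/len(beds_2012))
dof  = min(len(beds_2011), len(beds_2012)) - 1      # conservative df for this sketch
ci   = (diff - stats.t.ppf(0.975, dof)*se, diff + stats.t.ppf(0.975, dof)*se)
print(p_value, ci)    # if the CI crosses zero, the difference is not statistically significant
```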

      As a summary:

      “Statistically significant means a result is unlikely due to chance.

      The p-value is the probability of obtaining the difference we saw from a sample (or a larger one) if there really isn’t a difference for all users.

      Statistical significance doesn’t mean practical significance. Only by considering context can we determine whether a difference is practically significant; that is, whether it requires action.

      The confidence interval around the difference also indicates statistical significance if the interval does not cross zero. It also provides likely boundaries for any improvement to aide in determining if a difference really is noteworthy.

      With large sample sizes, you’re virtually certain to see statistically significant results, in such situations, it’s important to interpret the size of the difference”(“Measuring U”, 2019).

      *Resource

      Measuring U. (2019). Statistically significant. Retrieved May 17, 2019 from: https://measuringu.com/statistically-significant/

      Small sample sizes often do not yield statistical significance; when they do, the differences themselves tend also to be practically significant; that is, meaningful enough to warrant action.

      4.Research Method

      Discuss the Research Methodology (in general). Describe the variable or variables that are being analyzed. Identify the statistical test you will select to analyze these data and explain why you chose this test. Summarize your statistical alternative hypothesis. This section includes the following sub-sections:

      a)Describe the Dataset

      Example: The primary source of data will be HOSPITAL COMPARE MEDICARE DATA (citation). This dataset provides information on hospital characteristics, such as: Number of staffed beds, ownership, system membership, staffing by nurses and non-clinical staff, teaching status, percentage of discharge for Medicare and Medicaid patients, and information regarding the availability of specialty and high-tech services, as well as Electronic Medical Record (EMR) use (Describe dataset in 2-3 lines, Google the dataset and find the related website to find more information about the data).

      Also, describe the sample size; for example, “The writer is using Medicare data-2013, this data includes 3000 obs. for all of the hospitals in the US.”

      b)Describe Variables

      Next, review the database you selected and select a variable or variables that are a “best-fit.” That is, choose a variable that quantitatively measures the concept or concepts articulated in your research question or hypothesis.

      Return to your previously stated Research Question or Hypothesis and evaluate it considering the variables you have selected. (See the sample Table 1).

      Table 1. List of variables used for the analysis

      Variable            | Definition                                                                 | Description of code | Source  | Year
      Total Hospital Beds | Total facility beds set up and staffed at the end of the reporting period | Numeric             | MN Data | 2013
      ….                  | …..                                                                        |                     |         |

      Source: UMGC, 2019

      ***Just in time information:

      To cite a dataset, you can go with two approaches:

      First, look at the note in the dataset for example;

      Medicare National Data by County. (2012). Dartmouth Atlas of Health Care, A

      Second, use the online citation, for example:

      Zare, H., (2019, May). MN Hospital Report Data. Data posted in University of Maryland University College HMGT 400 online classroom, archived at: http://campus.umgc.edu

      See two examples describing the variables from Minnesota Data:

      Table 2. Definition of variables used in the analysis

      Variable      | Definition                                                                 | Description of code | Source  | Year
      hospital_beds | Total facility beds set up and staffed at the end of the reporting period | Numeric             | MN data | 2013
      year          | FY                                                                         | Categorical         | MN data | 2013

      Source: UMGC, 2019

      c)Describe the Research Method for Analysis

      First, describe the research method as a general (e.g., this is a quantitative method and then explain about this method in about one paragraph. If you have this part in the introduction, you do not need to add here).

      Then, explain the statistical method you plan to use for your analysis (Refer to content in week 3 on Biostatistics for information on various statistical methods you can choose from).

      Example:

      Hypothesis: AZ hospitals are more likely to have lower readmission rates for PN compared to CA.

      Research Method: To determine whether Arizona hospitals are more likely to have lower readmission rates than California hospitals, we will use a t-test to determine whether differences across hospital types are statistically significant. (You can change the test depending on your analysis.)

      d)Describe statistical package

      Add one paragraph for the statistical package, e.g., Excel or RStudio.

      5. Results

      Discuss your findings considering the following tips:

      ▪ Why you needed to see the distribution of data before any analysis (e.g., to check for outliers and to find the best-fit test; for example, if the data do not have a normal distribution, you cannot use a parametric test, etc.; just add 1 or 2 sentences).

      ▪ Did you eliminate outliers? (Please write 1 or 2 sentences, if applicable).

      ▪ How many observations do you have in your database and how many for selected variables, report % of missing.

      ▪ When you are finished with this, go for the next steps:

      Present the results of your statistical analysis; include any relevant statistical information (summary tables, including N, mean, std. dev.). Make sure to completely and correctly name all your columns, rows, tables, and variables. For this part you should have at least 1-2 tables and 1-2 figures (depending on your variables: a bar chart, pie chart, or scatter plot); you can use a table like this:

      Table 3. Descriptive analysis to compare % of BL in Medicare beneficiary, MD vs. VA - 2013

      Variable           | Obs. | Mean  | SD   | P-value
      Per of Lipid in MD | 24   | 83.20 | 2.32 | 0.4064
      Per of Lipid in VA | 124  | 82.69 | 4.41 |

      Source: UMGC, 2019

      When you have tables and plots ready, think about your finding and state the statistical conclusion. That is, do the results present evidence in favor or the null hypothesis or evidence that contradicts the null hypothesis?

      6.Conclusion and Discussion

      Review your research questions or hypothesis.

      How has your analysis informed this question or hypothesis? Present your conclusion(s) from the results (presented above) and discuss the meaning of this conclusion(s) considering the research question or hypothesis presented in your introduction.

      At the end of this section, add one or two sentences and discuss the limitations (including biases) associated with this analysis and any other statements you think are important in understanding the results of this analysis.

      References

      Include a reference page listing the bibliographic information for all sources cited in this report. This information should be consistent with the requirements specified in the American Psychological Association (APA) format and style guide.

      Place this order or similar order and get an amazing discount. USE Discount code “GET20” for 20% discount

      Posted in Uncategorized

      Statistics Question

      One of the most crucial components of this course is developing a research project from conceptualizing a research problem and developing a number of measurement and statistical analysis approaches to bring evidence to bear on the problem. Throughout the class, you created a research study based on publicly available data from the General Social Survey (GSS). You chose data which were representative of your interests and satisfied your research question and hypotheses.

      This Assignment meets these course objectives:

      CO1: Describe and apply the concepts and logic of elementary statistics.
      CO2: Conduct statistical analysis in SPSS (Statistical Package for the Social Sciences).
      CO3: Compare and contrast different types of data and the statistics that can be used to analyze them
      CO4: Examine the differences between descriptive and inferential statistics and their use in the social sciences.
      CO5: Complete and interpret descriptive and inferential statistical data analysis.
      CO6: Develop a research project from conceptualizing a research problem and develop a number of complementary design, measurement, and data collection approaches to bring evidence to bear on the problem.
      CO7: Form critical interpretations of quantitative research literature in sociology and other social sciences, critically evaluating the quality of research design and evidence in published social research.

      Instructions

      The Final Portfolio Assignment is where you pull together the research you have been working on for the first seven weeks of class. Using your weekly Discussion posts, and the feedback from your classmates and instructor on your posts, construct a 6+ page paper that fully explores your research topic in a way that provides the context and explanation surrounding the analyses provided in the paper. Your project should show that you understand what you are writing about holistically, not that you are simply going through the motions.

      Citing literature about your research topic, be sure to set the stage for the data and analyses that you present. Briefly describe the General Social Survey as your survey instrument. Provide the questions, verbatim, that were asked in the survey which became the variables you chose to use. You will also need to include the answer choices for each of them. This portion can be a table if you choose. Share and explain frequency table(s) and histograms or graphs to describe your data. Using the statistical tests you ran each week in class (crosstabs, tests of significance, measures of association), present the tests and your findings. Clearly identify and explain your hypothesis and the five steps of hypothesis testing as they apply to your paper. Explain the results of the statistical tests and pull in some literature to provide context, demonstrating how your results and research fit into the larger body of literature on this topic. Be sure to use proper APA formatting for citations and references. However, you do not need to include an abstract or table of contents. You can find guidance on APA by clicking here to access the Purdue Online Writing Lab.
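Purely as a hedged illustration of the weekly analyses mentioned above (the course itself uses SPSS), a minimal Python sketch of a crosstab, a chi-square test of significance, and Cramér's V as a measure of association, using made-up variables rather than real GSS items:

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hedged illustration only: crosstab two hypothetical categorical variables,
# test for significance with chi-square, and compute Cramer's V for strength.
df = pd.DataFrame({
    "degree": ["HS", "HS", "College", "College", "HS", "College", "HS", "College"],
    "happy":  ["Yes", "No", "Yes", "Yes", "No", "Yes", "Yes", "No"],
})

crosstab = pd.crosstab(df["degree"], df["happy"])
chi2, p, dof, expected = chi2_contingency(crosstab)

n = crosstab.to_numpy().sum()
cramers_v = (chi2 / (n * (min(crosstab.shape) - 1))) ** 0.5   # measure of association
print(crosstab, chi2, p, cramers_v, sep="\n")
```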

      Because the project is a formal paper, you should include a title page and reference page. You may organize the paper based on the following headings:

      Introduction – Introduce the topic based in current literature (briefly – show why it is important to study). Discuss why you chose the topic and what the purpose of the paper is. Give a brief overview of what you will cover.

      Literature Review – Review 3-4 peer-reviewed sources that provide a background on your topic. These sources don’t have to specifically address the relationship between your IV and DV, but should address the topic and be somewhat related to your variables.

      Methods – Briefly discuss the GSS (information you included in Assignment 1) as your data source. Identify and describe your specific variables, including the name, question, and responses (categories). You may state your hypothesis here, but do not go through the hypothesis testing steps until the next section.

      Findings – Begin with a discussion of each variable individually, utilizing your frequency tables and charts/graphs. Then discuss your other analyses in logical order. Crosstabs are your first look at a potential relationship. Next, discuss the steps of hypothesis testing. Include the table of your significance test. Last, discuss the strength and direction of the relationship using measures of association. (Be sure you are thorough here. Include all of your analyses done in the discussions!)

      Discussion – Discuss what you learned from the various analyses and draw any conclusions you found. Talk about any further research you think may be needed on your topic.

      General requirements:

      • Submissions should be typed, double-spaced, 1″ margins, times new roman 12 pt font, and saved as .doc, .docx, .pdf.
      • Use APA format for citations and references
      • View the grading rubric so you understand how you will be assessed on this Assignment.
      • Disclaimer- Originality of attachments will be verified by Turnitin. Both you and your instructor will receive the results.
      • This course has “Resubmission” status enabled to help you if you realized you submitted an incorrect or blank file, or if you need to submit multiple documents as part of your Assignment. Resubmission of an Assignment after it is graded, to attempt a better grade, is not permitted.

      Place this order or similar order and get an amazing discount. USE Discount code “GET20” for 20% discount

      Posted in Uncategorized

      Statistics Question

      For this Introduction to Quantitative Analysis Assignment, you will explore how to visually display data for optimal use.

      To prepare for this Assignment:

      • Review this week’s Learning Resources and consider visual displays of data.
      • For additional support, review the Skill Builder: Unit of Analysis and the Skill Builder: Levels of Measurement, which you can find by navigating back to your Blackboard Course Home Page. From there, locate the Skill Builder link in the left navigation pane.
      • Using the SPSS software, open the Afrobarometer dataset or the High School Longitudinal Study dataset (whichever you choose) found in this week’s Learning Resources.
      • From the dataset you chose, choose one categorical and one continuous variable and perform the appropriate visual display for each variable.
      • Once you visually display each variable, review Chapter 11 of the Wagner text to understand how to copy and paste your output into your Word document.

      For this Assignment:

      Write a 2- to 3-paragraph analysis of your results and include a copy and paste of the appropriate visual display of the data into your document.

      Based on the results of your data, provide a brief explanation of what the implications for social change might be. Early in your Assignment, when you relate which dataset you analyzed, please include the mean of the following variables. If you are using the Afrobarometer Dataset, report the mean of Q1 (Age). If you are using the HS Long Survey Dataset, report the mean of X1SES.

      Use appropriate APA format. Refer to the APA manual for appropriate citation.

      Learning Resources

      Required Readings

      Frankfort-Nachmias, C., Leon-Guerrero, A., & Davis, G. (2020). Social statistics for a diverse society (9th ed.). Thousand Oaks, CA: Sage Publications.

      • Chapter 2, “The Organization and Graphic Presentation of Data” (pp. 27-74)

      Wagner, III, W. E. (2020). Using IBM® SPSS® statistics for research methods and social science statistics (7th ed.). Thousand Oaks, CA: Sage Publications.

      • Chapter 5, “Charts and Graphs”
      • Chapter 11, “Editing Output”

      Walden University Writing Center. (n.d.). General guidance on data displays. Retrieved from http://waldenwritingcenter.blogspot.com/2013/02/general-guidance-on-data-displays.html

      Use this website to guide you as you provide appropriate APA formatting and citations for data displays.

      Datasets

      Your instructor will post the datasets for the course in the Doc Sharing section and in an Announcement. Your instructor may also recommend using a different dataset from the ones provided here.

      Required Media

      Laureate Education (Producer). (2016j). Visual displays of data [Video file]. Baltimore, MD: Author.

      Note: The approximate length of this media piece is 9 minutes.

      In this media program, Dr. Matt Jones discusses frequency distributions. Focus on how his explanation might support your analysis in this week’s Assignment.

      Accessible player

      Optional Resources

      Skill Builders:

      • Visual Displays for Continuous Variables
      • Visual Displays for Categorical Variables

      To access these Skill Builders, navigate back to your Blackboard Course Home page, and locate “Skill Builders” in the left navigation pane. From there, click on the relevant Skill Builder link for this week.

      You are encouraged to click through these and all Skill Builders to gain additional practice with these concepts. Doing so will bolster your knowledge of the concepts you’re learning this week and throughout the course.


      Statistics Question

      STAT 3401 Wk2 Assignment: Assessing Product with System Usability Scale

      The System Usability Scale (SUS) is commonly employed for usability testing. This ten-question survey applies to various technology products, including graphical user interfaces, websites, and hardware devices.

      Review the articles An Empirical Evaluation of the System Usability Scale (Bangor, Kortum, & Miller, 2008) and Determining What Individual SUS Scores Mean: Adding An Adjective Rating Scale (Bangor, Kortum, & Miller, 2009). Then, respond to the following questions:

      1. Page 578 of the first article, An Empirical Evaluation of the System Usability Scale (Bangor, Kortum, & Miller, 2008), shows the distributions of scores for individual surveys and overall studies. Describe the distribution for individual surveys. Describe the distribution for overall studies. Explain using the Central Limit Theorem why the data for the overall studies is approximately normally distributed when that of the individual surveys is not.
2. You are assessing a new user interface for a point-of-sale web system. Forty randomly selected individuals completed the SUS after using the interface. The data file SUSpointofsaletest.CSV gives the scores for these surveys. Import this data into Excel and find the average score for your product. What is the average score? What percentage of systems score higher than this average? Assuming a normal distribution, what is the z-statistic for this value? (Specifically, identify the average and standard deviation of the distribution you are using to calculate z.) What mark would it receive on the grade scale? What adjective would best describe it? Please explain and justify all of your answers. (A brief calculation sketch follows the question list below.)

      https://class.content.laureate.net/03380d6fb031d9d69c0a7bf3fdb52180.zip

      • Explain why the authors want to develop consistent adjective ratings when there are already accepted numeric scores and grade equivalents for the SUS. Do you agree that the adjective ratings are necessary or valuable? Why or why not?
• Pages 118-119 of the second article, Determining What Individual SUS Scores Mean: Adding an Adjective Rating Scale (Bangor, Kortum, & Miller, 2009), provide descriptive statistics for the adjective ratings and show a chart of the averages with standard error bars. In the descriptive statistics, “Best Imaginable” has a standard deviation smaller than the standard deviation for “OK.” However, on the chart, the standard error bars are much larger for “Best Imaginable” than for “OK.” Explain what standard error means, and use this definition to explain why the adjective with the smaller standard deviation has the larger standard error. Use the descriptive statistics given to calculate the standard error for “Best Imaginable” and for “OK” (show your work) to demonstrate that the chart is accurate.
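A minimal calculation sketch for question 2 and the standard-error item above, assuming the scores are read from the CSV file in the linked zip archive; the column name is a hypothetical placeholder, and the benchmark mean and standard deviation, group sizes, and group standard deviations are placeholders you must fill in from the two articles:

import pandas as pd
from scipy.stats import norm

# Hypothetical column name; adjust to match the actual CSV in the zip archive.
scores = pd.read_csv("SUSpointofsaletest.csv")["score"]

sample_mean = scores.mean()
print("Average SUS score for the product:", round(sample_mean, 2))

# Benchmark distribution of SUS scores; FILL IN the mean and standard
# deviation reported in Bangor, Kortum, & Miller (2008) before using this.
benchmark_mean = 0.0   # placeholder - take from the article
benchmark_sd = 1.0     # placeholder - take from the article

z = (sample_mean - benchmark_mean) / benchmark_sd
pct_higher = norm.sf(z) * 100  # share of systems above the product, assuming normality
print(f"z = {z:.2f}; about {pct_higher:.1f}% of systems score higher")

# Standard error = standard deviation / sqrt(n); FILL IN values from the 2009 article.
def standard_error(sd, n):
    return sd / n ** 0.5

print("SE, Best Imaginable:", standard_error(sd=1.0, n=1))  # placeholders
print("SE, OK:", standard_error(sd=1.0, n=1))               # placeholders

The point of the standard-error lines is that a small group size in the denominator can make the standard error large even when the standard deviation is small, which is what the question asks you to demonstrate.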

Bangor, A., Kortum, P. T., & Miller, J. T. (2008). An empirical evaluation of the System Usability Scale. International Journal of Human-Computer Interaction, 24(6), 574-594. https://doi.org/10.1080/10447310802205776

      Submission and Grading Information

      To submit your completed Assignment for review and grading, do the following:

      • Please save your Assignment using the naming convention “UN2Assgn+last name+first initial. (extension)” as the name.


      Statistics Question

      Description

In this final information and data literacy assignment, you will persuade a non-technical audience, a jury, that the statistical analysis you have done provides adequate evidence for the case they must decide. You must explain to a jury (drawn from the general population) the case, how probability and statistics can inform it, and what you found by doing your analysis. You must present this in a way, and using language, that they will understand. Part of this will be explaining why this approach can be used with confidence to address this question.

You will also demonstrate your ability to edit a professional document to ensure it is clear, complete, and concise, following the direction provided in the Legal Memorandum Guide (file Uploaded). Proofread and edit your pre-writing document for grammar and spelling PRIOR TO TURNING IT IN (you may use an external grammar tool to help with this). The Legal Memorandum Template (file Uploaded) is provided to make formatting the document easier.

      Assignment Preparation

• You will be expected to discuss the elements of this assignment accurately. There are several areas in which students have had difficulty in the past; you may want to make sure you are familiar with them by doing some research.
  • Be sure you clearly understand the specific car part you are discussing. If this is not a familiar product, take the time to look up some information or videos on it before writing about it so you can speak precisely.
        • Evaluate the information you have been given carefully, using the method we practiced in assignment IDL 3.
        • Be sure you are clear on the difference between the data and your evaluation of it.
  • Be sure you understand the difference between negligence and liability.
  • Be sure you understand what a class-action lawsuit is.

      Problem Description

A series of seemingly related recent automobile failures has resulted in a class-action lawsuit, Saraki v. Real Car Parts Corporation, being filed. The ‘class’ in this case is claiming that Real Car Parts manufactured a part that does not comply with mandated performance standards in place to assure the safe operation of vehicles. The class was established because this part has been used by many manufacturers in many models of vehicles for a decade and therefore involves hundreds of thousands of vehicles. They are claiming the failure of this part has led to serious engine failures resulting in needed vehicle repair or replacement, for which Real Car Parts should be financially responsible.

The part in question is a shaft with a copper-lead bearing surface that is manufactured for use in fuel pumps. The use of copper-lead as a bearing surface was considered a revolution in the performance capability of this part when it was introduced. However, after a decade in operation, the ability of this design to meet the performance standards established for the safe operation of vehicles is being questioned. The established standards require that these parts wear no more than 3.5 microns over a useful life of 250,000 miles of vehicle operation in a fuel pump using this shaft. This standard was established because wear of the shaft bearing surface exceeding this amount can result in catastrophic fuel pump failure in extreme weather conditions.

Due to the significance of this litigation, the judge involved with the case, The Honorable Fairuza Habibi, has brought you in as an expert witness, independent of either side of the litigation, to provide the jury with the clarity that an engineering review can bring to the question. At your request, she is providing you with data from a random sample of 45 shafts of this design from a recent manufacturing run, which have been put through a test simulating 250,000 miles of wear. She requires that you complete your work with a high level of confidence; therefore, you have suggested using a significance level of 0.01 (a 99% confidence level). You are to state, in a written document that makes the determination clear and believable to the jury, which side of the litigation you would support and why. The document you provide is to follow the court’s template and its Legal Memorandum Guide. You will also copy the document to Milton Jackson, Clerk of the Courts; Julius Gubenko, Plaintiffs’ Counsel; and Calvin Tjader, Defendant’s Counsel.

The results of this test are provided in: Data from Sample Testing (file Uploaded)
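A minimal sketch of one reasonable analysis, assuming the uploaded sample data are in a CSV file with a single column of measured wear in microns; the file and column names are hypothetical placeholders, and the choice of a one-sample, one-sided t-test against the 3.5-micron standard at the 0.01 significance level is one defensible approach rather than the only one:

import pandas as pd
from scipy import stats

# Hypothetical file/column names; replace with the uploaded "Data from Sample Testing" file.
wear = pd.read_csv("shaft_wear_sample.csv")["wear_microns"]

n = len(wear)                  # should be 45 shafts
mean_wear = wear.mean()
sd_wear = wear.std(ddof=1)     # sample standard deviation

# H0: mean wear <= 3.5 microns (the part complies); HA: mean wear > 3.5 microns.
t_stat, p_value = stats.ttest_1samp(wear, popmean=3.5, alternative="greater")

alpha = 0.01
print(f"n = {n}, mean = {mean_wear:.3f}, sd = {sd_wear:.3f}")
print(f"t = {t_stat:.3f}, one-sided p = {p_value:.4f}")
print("Reject H0: wear exceeds the standard" if p_value < alpha else "Fail to reject H0")

A one-sided test is sketched because the standard is a maximum allowable wear; if you interpret the standard differently (for example, as a limit on the proportion of shafts exceeding 3.5 microns), a different test would be appropriate, and either way the memorandum should translate the hypotheses, test statistic, p-value, and conclusion into language a jury can follow.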

      Assignment Completion

This assignment will be completed as a modified version of a legal memorandum as documented by the CUNY School of Law. (You should not reference their document, as the information you need to address is covered here.) Each element of the document has an attached section that describes the purpose of the section, how to think about or structure it for this assignment, and what I will be looking for when grading it. You may also reference or download this information as one document in the Legal Memorandum Guide (file Uploaded). The appearance of the document must match that of the attached exemplar (file Uploaded), for which you may use the attached template (file Uploaded). This document will emulate a professional document and as such will be completed in full sentences and paragraphs.


      Statistics Question

AMIND 110, Quiz 4

MATH 160: all of the items below are due on April 25.

All items below are in 2021SP-MATH-160-1542 – Elementary Statistics and are due Sunday, April 25, 2021, at 11:59 PM. Each is worth 10 points (no point value is listed for item 17).

1. Module 26 – Correlation is not Causation (21 of 21)
2. Module 26 – Linear Relationships (10 of 21)
3. Module 26 – Linear Relationships (13 of 21)
4. Module 26 – Linear Relationships (15 of 21)
5. Module 26 – Linear Relationships (19 of 21)
6. Module 26 – Scatterplots (2 of 21)
7. Module 26 – Scatterplots (5 of 21)
8. Module 26 – Scatterplots (7 of 21)
9. Module 26 Checkpoint: Scatterplots, Linear Relationships, and Correlation
10. Module 27 – Linear Regression (2 of 13)
11. Module 27 – Linear Regression (4 of 13)
12. Module 27 – Linear Regression (6 of 13)
13. Quiz: Module 27 – Linear Regression (8 of 13)
14. Quiz: Module 27 – Linear Regression (10 of 13)
15. Quiz: Module 27 – Linear Regression (13 of 13)
16. Quiz: Module 27 – Linear Regression Lab (11 of 13)
17. Quiz: Module 27 Checkpoint: “Fitting a Line”
