IGNOU MBA MMPC-005 Previous Solved Question Paper - June 2023
MMPC-005 MANAGEMENT PROGRAMME (MP)
MASTER OF BUSINESS ADMINISTRATION (MBA) Term-End Examination
June, 2023
1. Why is forecasting so important in business ? Identify the application of forecasting for short-term and medium-term decisions.
Forecasting is crucial in business for several reasons, as it helps organizations make informed decisions, allocate resources efficiently, and plan for the future. Here are some key reasons why forecasting is important in business:
Planning and Budgeting: Forecasting helps businesses create realistic budgets and allocate resources effectively. It allows companies to set financial targets and plan for expenses, ensuring they have the necessary resources to meet their goals.
Risk Management: Forecasting can identify potential risks and uncertainties in the business environment. By anticipating changes in demand, economic conditions, or market trends, companies can develop strategies to mitigate risks and adapt to unforeseen events.
Strategic Decision-Making: Forecasting guides strategic decisions such as market expansion, product development, and investment in new technologies. It provides data to support these choices, making them more likely to be successful.
Inventory Management: Short-term forecasting is essential for managing inventory levels. It helps companies avoid overstocking or understocking products, reducing carrying costs and ensuring products are available to meet customer demand.
Sales and Revenue Projections: Sales forecasts enable businesses to set sales targets, evaluate sales team performance, and plan marketing and promotional activities to achieve revenue goals.
Production and Capacity Planning: Forecasting helps manufacturers determine production levels and capacity requirements. It ensures that they produce enough to meet demand without overproducing, which can lead to excess inventory and increased costs.
Now, let’s identify the applications of forecasting for short-term and medium-term decisions:
Short-Term Forecasting (0–12 months):
Demand Forecasting: Businesses use short-term forecasting to predict demand for their products or services in the immediate future, allowing them to adjust inventory and staffing levels accordingly.
Cash Flow Management: Short-term cash flow forecasting helps organizations manage their liquidity by predicting cash inflows and outflows in the short term, ensuring they can meet their financial obligations.
Staffing and Scheduling: Short-term workforce forecasting assists in scheduling employees based on anticipated workloads, seasonal fluctuations, and special events.
Promotional Campaigns: Short-term forecasts guide marketing and promotional campaigns, helping businesses plan when and how to market their products or services to boost sales.
Medium-Term Forecasting (1–5 years):
Production and Capacity Planning: Medium-term forecasting aids in determining production capacity requirements, enabling businesses to make decisions regarding equipment upgrades, facility expansion, or new locations.
Product Development: Companies use medium-term forecasts to identify opportunities for new product development, ensuring that the products will meet anticipated market demand upon launch.
Supply Chain Management: Forecasting helps in medium-term supply chain decisions, such as selecting suppliers, negotiating contracts, and ensuring a reliable supply of materials.
Capital Investment: Medium-term forecasts are essential for planning capital investments in areas like infrastructure, technology, and research and development.
In summary, forecasting is essential for both short-term and medium-term decisions in business, helping organizations manage their operations, resources, and strategies effectively in a dynamic and ever-changing business environment.
2. The overall % of failure in a certain examination is 40. What is the probability that out of a group of 6 candidates at least 4 passed the examination ?
To find the probability that out of a group of 6 candidates at least 4 passed the examination when the overall failure rate is 40%, you can use the binomial probability formula. In this case, you’re looking for the probability of having 4, 5, or 6 candidates passing the exam.
The binomial probability formula is:
P(X = k) = (n choose k) * p^k * (1 - p)^(n - k)
Where:
P(X = k) is the probability of exactly k successes.
n is the number of trials (in this case, the number of candidates, which is 6).
k is the number of successes you want (4, 5, or 6 in this case).
p is the probability of a single success (the overall pass rate, which is 1 - 0.40 = 0.60).
Let’s calculate the probabilities for each case:
Probability of 4 candidates passing:
P(X = 4) = (6 choose 4) * (0.60)^4 * (0.40)^(6-4)
Probability of 5 candidates passing:
P(X = 5) = (6 choose 5) * (0.60)^5 * (0.40)^(6-5)
Probability of 6 candidates passing:
P(X = 6) = (6 choose 6) * (0.60)^6 * (0.40)^(6-6)
You can calculate these probabilities separately and then add them together to find the total probability that at least 4 candidates passed:
P(at least 4 passing) = P(X = 4) + P(X = 5) + P(X = 6)
Calculating these probabilities:
P(X = 4) = (6 choose 4) * (0.60)^4 * (0.40)^2
P(X = 5) = (6 choose 5) * (0.60)^5 * (0.40)^1
P(X = 6) = (6 choose 6) * (0.60)^6 * (0.40)^0
Now, you can calculate each of these probabilities and sum them to find the final result.
P(X = 4) = (15) * (0.1296) * (0.16) = 0.31104
P(X = 5) = (6) * (0.07776) * (0.40) = 0.186624
P(X = 6) = (1) * (0.046656) * (1) = 0.046656
Now, add these probabilities together:
P(at least 4 passing) = 0.31104 + 0.186624 + 0.046656 = 0.54432
So, the probability that at least 4 out of 6 candidates passed the examination is approximately 0.5443, or about 54.43%. A short computational check of this result is given below.
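Below is a minimal Python sketch (an illustrative addition, not part of the original answer) that reproduces this binomial calculation; the helper name binom_pmf is hypothetical.

```python
from math import comb

# P(X >= 4) for n = 6 candidates with pass probability p = 0.60.
n, p = 6, 0.60

def binom_pmf(k, n, p):
    """Binomial probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

prob_at_least_4 = sum(binom_pmf(k, n, p) for k in (4, 5, 6))
print(round(prob_at_least_4, 5))  # 0.54432
```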
3. Calculate Bowley’s measure of skewness from the following data:
Payment of Commission (in Rs.)    No. of Salesmen
1,000–1,200 4
1,200–1,400 10
1,400–1,600 16
1,600–1,800 29
1,800–2,000 52
2,000–2,200 80
2,200–2,400 32
2,400–2,600 23
2,600–2,800 17
2,800–3,000 7
Bowley’s measure of skewness is calculated using the following formula:
Bowley’s Skewness = (Q1 + Q3 - 2*Median) / (Q3 - Q1)
Where:
Q1 is the first quartile (25th percentile).
Q3 is the third quartile (75th percentile).
Median is the 50th percentile.
To calculate Bowley’s measure of skewness, you first need to find Q1, the median, and Q3 from the grouped frequency distribution. Here’s how you can do it step by step:
Step 1: Build the cumulative frequency table. The total number of salesmen is N = 4 + 10 + 16 + 29 + 52 + 80 + 32 + 23 + 17 + 7 = 270.
Class (Rs.)        Frequency    Cumulative Frequency
1,000–1,200            4              4
1,200–1,400           10             14
1,400–1,600           16             30
1,600–1,800           29             59
1,800–2,000           52            111
2,000–2,200           80            191
2,200–2,400           32            223
2,400–2,600           23            246
2,600–2,800           17            263
2,800–3,000            7            270
For grouped data, each quartile is found by interpolation within the class that contains it:
Q = L + ((position - cf) / f) * h
where L is the lower boundary of that class, cf is the cumulative frequency before it, f is its frequency, and h is the class width (200).
Step 2: Find the median (the N/2 = 135th value). The cumulative frequency first reaches 135 in the class 2,000–2,200 (cf = 111, f = 80).
Median = 2,000 + ((135 - 111) / 80) * 200 = 2,000 + 60 = 2,060
Step 3: Find Q1 (the N/4 = 67.5th value). It lies in the class 1,800–2,000 (cf = 59, f = 52).
Q1 = 1,800 + ((67.5 - 59) / 52) * 200 = 1,800 + 32.69 = 1,832.69
Step 4: Find Q3 (the 3N/4 = 202.5th value). It lies in the class 2,200–2,400 (cf = 191, f = 32).
Q3 = 2,200 + ((202.5 - 191) / 32) * 200 = 2,200 + 71.88 = 2,271.88
Now, you can calculate Bowley’s Skewness using the formula:
Bowley’s Skewness = (Q1 + Q3 - 2*Median) / (Q3 - Q1)
Bowley’s Skewness = (1,832.69 + 2,271.88 - 2*2,060) / (2,271.88 - 1,832.69) = (4,104.57 - 4,120) / 439.19 = -15.43 / 439.19 ≈ -0.035
Bowley’s measure of skewness for the given data is approximately -0.035. This indicates a slight negative skew (left-skewed) in the distribution. A computational sketch of this interpolation follows.
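The following minimal Python sketch (an illustrative addition, not part of the standard textbook solution) reproduces the grouped-data interpolation above; the helper name quartile is hypothetical.

```python
# Class intervals, frequencies, and the interpolation
# Q = L + ((position - cum_before) / f) * h, as in the worked solution.
classes = [(1000, 1200, 4), (1200, 1400, 10), (1400, 1600, 16), (1600, 1800, 29),
           (1800, 2000, 52), (2000, 2200, 80), (2200, 2400, 32), (2400, 2600, 23),
           (2600, 2800, 17), (2800, 3000, 7)]
N = sum(f for _, _, f in classes)  # 270

def quartile(position):
    """Interpolate the value at a given cumulative position within its class."""
    cum = 0
    for lower, upper, f in classes:
        if cum + f >= position:
            return lower + (position - cum) / f * (upper - lower)
        cum += f

q1, median, q3 = quartile(N / 4), quartile(N / 2), quartile(3 * N / 4)
skewness = (q3 + q1 - 2 * median) / (q3 - q1)
print(round(q1, 2), median, round(q3, 2), round(skewness, 3))
# 1832.69 2060.0 2271.88 -0.035
```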
4. “The success of collecting data through a questionnaire depends mainly on how skillfully and imaginatively the questionnaire has been designed.” In view of this statement, explain the designing of a questionnaire.
The statement, “The success of collecting data through a questionnaire depends mainly on how skillfully and imaginatively the questionnaire has been designed,” underscores the critical role that questionnaire design plays in the data collection process. Let’s break down this statement and explain its significance:
Success of Data Collection: The effectiveness of a questionnaire is directly linked to the success of data collection. The primary goal of a questionnaire is to gather accurate and relevant information from respondents. Therefore, a well-designed questionnaire is essential for achieving this objective.
Skillful Design: Skillful questionnaire design involves careful consideration of various factors, including the research objectives, target audience, and the nature of the data you intend to collect. Skilled designers understand how to formulate questions that are clear, concise, and unbiased. They also know how to structure the questionnaire in a logical and coherent manner.
Imaginative Design: Imaginative design implies creativity in crafting questions that can capture nuanced or complex information. It means thinking beyond simple yes/no questions and exploring innovative ways to elicit valuable responses. Creative question design can lead to richer and more insightful data.
Key Aspects of Questionnaire Design:
Clarity: Questions should be clear and unambiguous, so respondents understand what is being asked.
Relevance: Questions should be relevant to the research objectives, ensuring that the data collected is useful.
Conciseness: Long, convoluted questions can confuse respondents. Concise questions are more likely to yield accurate responses.
Bias Avoidance: Questions should be formulated in a way that doesn’t lead or bias respondents’ answers. This requires careful wording and avoiding loaded language.
Logical Flow: The order of questions should make sense and flow logically to maintain respondents’ engagement.
Response Format: The questionnaire designer must also consider the format of responses, which can include multiple-choice, open-ended, rating scales, or other options. The choice of response format can significantly impact the quality and quantity of data collected.
Pilot Testing: Before deploying a questionnaire, it is essential to conduct pilot testing. This involves testing the questionnaire with a small group of respondents to identify any issues with clarity, wording, or question order. Pilot testing helps in refining the questionnaire.
Adaptability: The questionnaire should be adaptable to the characteristics and preferences of the target audience. This may involve using language and terminology that resonates with respondents and aligns with their cultural or social context.
Continuous Improvement: Questionnaire design is an iterative process. Designers should be open to feedback and willing to make improvements based on the results obtained in the field.
In conclusion, the success of data collection through a questionnaire is heavily dependent on how well the questionnaire is designed. Skillful and imaginative questionnaire design is crucial for obtaining high-quality, relevant, and unbiased data. Researchers should invest time and effort into crafting well-structured and thoughtful questionnaires to maximize the effectiveness of their data collection efforts.
5. A sample of 100 students is taken from a large population. The mean height of these students is 64 inches and S.D. of 4 inches. Can it be reasonably regarded that the population mean height is 66 inches ? (Assume: at α = 0.05, the table cut-off value of Z is -1.96.)
To determine whether it can be reasonably regarded that the population mean height is 66 inches based on the sample data, you can perform a hypothesis test. The null hypothesis (H0) is that the population mean height is 66 inches, and the alternative hypothesis (H1) is that the population mean height is not 66 inches.
Here are the steps to conduct a hypothesis test:
Set up the null and alternative hypotheses:
Null Hypothesis (H0): The population mean height (μ) is 66 inches.
Alternative Hypothesis (H1): The population mean height (μ) is not 66 inches.
Determine the significance level (α), which is given as α = 0.05.
Calculate the standard error of the sample mean. The formula for standard error (SE) is:
SE = Standard Deviation (SD) / √(Sample Size)
SE = 4 inches / √(100) = 4 / 10 = 0.4 inches
Calculate the test statistic (Z-score) using the formula:
Z = (Sample Mean - Population Mean) / SE
Z = (64 inches - 66 inches) / 0.4 inches = -5
Find the critical value for the given significance level (α) from the Z-table. The given cut-off value is -1.96 (i.e., ±1.96 for a two-tailed test). Since the test statistic Z is -5, it is far more extreme than -1.96.
Compare the calculated Z-score with the critical value:
If the absolute value of the Z-score is greater than the critical value, you can reject the null hypothesis.
In this case, |Z| = 5 > 1.96.
Make a decision:
Since the calculated Z-score is more extreme than the critical value, you can reject the null hypothesis.
Draw a conclusion:
Based on the sample data, there is enough evidence to conclude that the population mean height is not 66 inches. The data suggests that the population mean height is significantly lower than 66 inches.
In summary, you can reasonably conclude that the population mean height is likely not 66 inches based on the sample data and the given significance level.
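A short Python sketch of this Z test is given below; it simply reproduces the arithmetic above under the stated assumptions (n = 100, S.D. = 4, critical value 1.96) and is an illustrative addition.

```python
import math

# One-sample Z test for H0: population mean = 66 inches.
sample_mean, pop_mean = 64, 66
sd, n = 4, 100
critical_value = 1.96  # two-tailed cut-off at alpha = 0.05

se = sd / math.sqrt(n)             # standard error = 0.4
z = (sample_mean - pop_mean) / se  # test statistic = -5

reject_h0 = abs(z) > critical_value
print(round(z, 2), reject_h0)  # -5.0 True -> reject H0
```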
6. Find the coefficient of correlation between the following given data :
X Y
1 12
2 11
3 13
4 15
5 14
6 17
7 16
8 19
9 18
To find the coefficient of correlation (Pearson’s correlation coefficient, r) between the given data sets X and Y, you can follow these steps:
Calculate the mean of X and the mean of Y: mean of X = 45/9 = 5 and mean of Y = 135/9 = 15.
Take the deviations x = X - 5 and y = Y - 15 for each pair, then compute the sums Σxy, Σx² and Σy².
Σxy = (-4)(-3) + (-3)(-4) + (-2)(-2) + (-1)(0) + (0)(-1) + (1)(2) + (2)(1) + (3)(4) + (4)(3) = 56
Σx² = 16 + 9 + 4 + 1 + 0 + 1 + 4 + 9 + 16 = 60
Σy² = 9 + 16 + 4 + 0 + 1 + 4 + 1 + 16 + 9 = 60
Finally, calculate the coefficient of correlation using the formula:
r = Σxy / √(Σx² × Σy²) = 56 / √(60 × 60) = 56 / 60 ≈ 0.93
The coefficient of correlation between X and Y is approximately +0.93, indicating a strong positive linear relationship. A short computational sketch follows.
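Below is a small Python sketch (an illustrative addition) that reproduces the deviation-method calculation for the given X and Y series.

```python
# Pearson correlation by the deviation method for the given series.
X = [1, 2, 3, 4, 5, 6, 7, 8, 9]
Y = [12, 11, 13, 15, 14, 17, 16, 19, 18]

mean_x = sum(X) / len(X)  # 5
mean_y = sum(Y) / len(Y)  # 15

sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(X, Y))  # 56
sxx = sum((x - mean_x) ** 2 for x in X)                       # 60
syy = sum((y - mean_y) ** 2 for y in Y)                       # 60

r = sxy / (sxx * syy) ** 0.5
print(round(r, 3))  # 0.933
```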
7. Write short notes on any three of the following :
(a) Mathematical properties of Arithmetic Mean
(b) Bernoulli’s process
(c) Decision Tree Approach
(d) Non-probability sampling
(e) Least square criterion
(a) Mathematical properties of Arithmetic Mean:
The arithmetic mean, often simply referred to as the mean, is a fundamental statistical measure that represents the average of a set of values. It possesses several important mathematical properties:
Linearity: The arithmetic mean is a linear function. This means that if you multiply all data points by a constant or add a constant to all data points, the mean will also be multiplied by the constant or have the constant added to it.
Sum of Deviations: The algebraic sum of the deviations of the observations from their arithmetic mean is always zero, i.e., Σ(xᵢ - x̄) = 0. This is why the mean acts as the balancing point of the data.
Sensitivity to Outliers: The arithmetic mean is sensitive to outliers. Even a single extremely high or low value can significantly affect the mean, potentially distorting the representation of the central tendency of the data.
Minimization of the Sum of Squared Differences: The arithmetic mean minimizes the sum of squared differences between the data points and the mean itself. This property is the basis for the least squares criterion in regression analysis.
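As a quick illustration of the last two properties, the following Python sketch (an illustrative addition using a small hypothetical data set) checks that deviations from the mean sum to zero and that the mean gives a smaller sum of squared differences than nearby values.

```python
data = [4, 7, 9, 10, 15]  # hypothetical values
mean = sum(data) / len(data)  # 9.0

def sse(c):
    """Sum of squared differences of the data from a constant c."""
    return sum((x - c) ** 2 for x in data)

sum_dev = sum(x - mean for x in data)  # deviations from the mean sum to zero
print(round(sum_dev, 10))                                     # 0.0
print(sse(mean) < sse(mean + 1), sse(mean) < sse(mean - 1))   # True True
```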
(b) Bernoulli’s Process:
The Bernoulli process, named after the Swiss mathematician Jacob Bernoulli, is a sequence of random experiments or trials in which each trial has two possible outcomes: success (usually denoted by “1”) and failure (usually denoted by “0”). These trials are independent and identically distributed, which means that the probability of success (p) and the probability of failure (q = 1 - p) remain the same from trial to trial, and the outcome of one trial does not affect the outcome of another.
Some key properties of the Bernoulli process include:
Independence: Each trial is independent of previous and future trials. The outcome of one trial does not affect the outcome of other trials.
Identical Distribution: The probability of success (p) and the probability of failure (q) remain the same for all trials.
Binary Outcomes: The outcomes are binary, typically denoted as “1” for success and “0” for failure.
The Bernoulli process is the simplest form of a random process and serves as the foundation for more complex processes in probability theory and statistics.
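A tiny simulation sketch (illustrative only, with a hypothetical success probability p = 0.3) of a Bernoulli process in Python:

```python
import random

p = 0.3        # hypothetical probability of success on each trial
n_trials = 10

# Each trial is independent and yields 1 (success) with probability p, else 0.
trials = [1 if random.random() < p else 0 for _ in range(n_trials)]
print(trials, sum(trials))  # e.g. [0, 1, 0, ...] and the total number of successes
```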
(c) Decision Tree Approach:
The decision tree approach is a widely used method in machine learning and data analysis for making decisions based on a series of conditions or choices. It is a visual representation of a decision-making process in which each internal node represents a decision or a test on an attribute, each branch represents an outcome of the test, and each leaf node represents a decision or the final outcome. Here are some key points about the decision tree approach:
Decision-Making: Decision trees are used for both classification and regression tasks. In classification, decision trees are used to assign data points to different categories or classes based on the conditions in the tree. In regression, they are used to predict a continuous value.
Splitting Criteria: Decision trees split the data based on criteria that maximize information gain or minimize impurity, such as Gini impurity or entropy, at each decision point.
Interpretability: Decision trees are known for their interpretability, as the logic for making decisions is readily understandable from the tree’s structure.
Overfitting: Decision trees can be prone to overfitting, where they capture noise in the data. Techniques like pruning and using random forests are employed to mitigate this issue.
Decision trees are a versatile tool for solving various problems, from customer segmentation to medical diagnosis, and are often used in ensemble methods like random forests and gradient boosting.
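For illustration (assuming scikit-learn is available), the following sketch fits a shallow decision tree classifier to the bundled iris dataset and prints its learned rules; the dataset choice and parameters are illustrative additions, not part of the original note.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
# A shallow tree (max_depth=2) keeps the model interpretable and limits overfitting.
clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(iris.data, iris.target)

# Each internal node tests one attribute; each leaf assigns a class.
print(export_text(clf, feature_names=iris.feature_names))
```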
(d) Non-Probability Sampling:
Non-probability sampling, also known as non-random or judgmental sampling, is a method of selecting a subset of individuals or items from a larger population for the purpose of research or surveying. Unlike probability sampling methods, non-probability sampling does not involve random selection, and the likelihood of any particular member of the population being included in the sample cannot be determined. Instead, individuals are chosen based on the judgment of the researcher or some other non-random method. Here are some key points about non-probability sampling:
Convenience Sampling: Convenience sampling is a common form of non-probability sampling. In this approach, researchers select individuals or items that are easily accessible or convenient, which may not be representative of the entire population.
Purposive Sampling: In purposive sampling, researchers intentionally select individuals or items that meet specific criteria or characteristics of interest, often based on their knowledge and judgment.
Snowball Sampling: Snowball sampling is a method often used in cases where the population is hard to reach. It involves selecting an initial set of individuals and then asking them to refer others who meet the criteria.
Quota Sampling: Quota sampling is a non-probability method in which researchers select a sample to match predefined proportions or quotas based on certain characteristics, such as age, gender, or income.
Non-probability sampling can be useful in certain situations, especially when random sampling is not feasible or when the researcher has specific criteria in mind. However, it may lead to sampling bias and may not provide results that are generalizable to the entire population.
(e) Least Square Criterion:
The least squares criterion is a mathematical principle used in various fields, including statistics, mathematics, and data analysis. It is primarily associated with the method of least squares, which is used to find the best-fitting model or estimate by minimizing the sum of the squared differences (or “residuals”) between observed data points and the corresponding values predicted by the model. Here are some key points about the least squares criterion:
Minimization of Residuals: The least squares criterion aims to find the model parameters (coefficients) that minimize the sum of the squared differences between observed data points and the values predicted by the model. For linear models this minimization has a closed-form solution (the normal equations); for nonlinear models the parameters are adjusted iteratively.
Linear Regression: In linear regression, the least squares criterion is used to find the line (or hyperplane in multiple dimensions) that best fits a set of data points by minimizing the sum of squared vertical deviations between the data points and the line.
Curve Fitting: The least squares criterion is also applied in nonlinear regression and curve fitting, where it is used to find the best-fitting curve to a set of data points, such as exponential, polynomial, or logistic curves.
Applications: The least squares criterion is widely used in various fields, including economics, engineering, physics, and data analysis. It plays a crucial role in estimating model parameters and making predictions based on data.
Overall, the least squares criterion is a powerful and widely used method for finding the best-fitting models in data analysis, and it forms the foundation for many statistical and regression techniques.
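To make the criterion concrete, here is a short Python sketch (an illustrative addition using hypothetical data) that fits a straight line y = a + b*x by minimizing the sum of squared residuals via the normal equations.

```python
X = [1, 2, 3, 4, 5]            # hypothetical predictor values
Y = [2.1, 4.0, 6.2, 7.9, 10.1]  # hypothetical observed values

n = len(X)
sx, sy = sum(X), sum(Y)
sxx = sum(x * x for x in X)
sxy = sum(x * y for x, y in zip(X, Y))

# Normal equations for simple linear regression (least squares solution).
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
a = (sy - b * sx) / n                          # intercept

residual_ss = sum((y - (a + b * x)) ** 2 for x, y in zip(X, Y))
print(round(a, 3), round(b, 3), round(residual_ss, 4))  # approx. 0.09 1.99 0.051
```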