Factor Analysis Of IPAD

Authors:
1. Samaya Rayaprolu
2. Shatakshi
3. Sneha Yadav
4. Paridhi Gangrade

Introduction:
Apple recently conducted a survey of iPad users, aiming to gather insights into preferences across key features such as display quality, battery life, processing power, camera performance, storage capacity, Apple Pencil support, and security options such as Touch ID and Face ID. This survey reflects Apple's commitment to understanding user priorities and identifying areas for future enhancement of the iPad lineup.
Objective:
To understand customer preferences regarding the iPad.
Data Collection:
The data were collected through a Google Form; 54 responses were received.
KMO and Bartlett's Test
Kaiser-Meyer-Olkin Measure of Sampling Adequacy .754
Bartlett's Test of Sphericity Approx. Chi-Square 155.947
df 45
Sig. <.001
1. KMO and Bartlett's Test:
 Kaiser-Meyer-Olkin Measure of Sampling Adequacy (KMO): 0.754
 Interpretation: The KMO value of 0.754 indicates that the sample is adequate for factor
analysis. A value above 0.7 is considered acceptable.
 Bartlett's Test of Sphericity
 Approx. Chi-Square: 155.947
 Degrees of Freedom (df): 45
 Significance (Sig.): < 0.001
 Interpretation: The significant Bartlett's test suggests that the variables are correlated and factor
analysis is appropriate.
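Neither statistic requires specialized software; both can be computed directly from the correlation matrix. The sketch below shows the standard formulas in NumPy — KMO as the ratio of squared correlations to squared correlations plus squared partial (anti-image) correlations, and Bartlett's chi-square from the determinant of the correlation matrix. Since the raw survey responses are not reproduced in this report, it runs on synthetic data with the same shape (54 respondents, 10 items); only the degrees of freedom are expected to match the table.

```python
import numpy as np

def kmo_and_bartlett(X):
    """KMO measure and Bartlett's test of sphericity for a data
    matrix X (rows = respondents, columns = items)."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)          # item correlation matrix
    R_inv = np.linalg.inv(R)
    # Anti-image (partial) correlations: a_ij = -R^-1_ij / sqrt(R^-1_ii R^-1_jj)
    d = np.sqrt(np.outer(np.diag(R_inv), np.diag(R_inv)))
    A = -R_inv / d
    off = ~np.eye(p, dtype=bool)              # off-diagonal mask
    # KMO: squared correlations vs. squared correlations + squared partials
    kmo = np.sum(R[off] ** 2) / (np.sum(R[off] ** 2) + np.sum(A[off] ** 2))
    # Bartlett: chi2 = -(n - 1 - (2p + 5)/6) * ln|R|, df = p(p - 1)/2
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return kmo, chi2, df

# Synthetic stand-in: 54 respondents, 10 items sharing one common factor
rng = np.random.default_rng(0)
f = rng.normal(size=(54, 1))
X = f @ rng.uniform(0.5, 1.0, size=(1, 10)) + rng.normal(scale=0.8, size=(54, 10))
kmo, chi2, df = kmo_and_bartlett(X)
print(kmo, chi2, df)  # df is 45 for 10 items, matching the table
```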
Communalities
Initial Extraction
Display 1.000 .595
Operating System 1.000 .668
Battery Life 1.000 .667
Processor 1.000 .649
Camera 1.000 .416
Storage 1.000 .701
Connectivity 1.000 .538
Apple Pencil Support 1.000 .606
Face ID/ Touch ID 1.000 .724
Apple Ecosystem 1.000 .658
Extraction Method: Principal Component Analysis.
Communalities
 This table shows how much of the variance in each variable is explained by the extracted
components.
 Example:
 Display: 59.5% of the variance is explained by the components.
 Operating System: 66.8% of the variance is explained.
 Interpretation: Higher values indicate stronger relationships with the extracted components.
Total Variance Explained
          Initial Eigenvalues                Extraction Sums of Squared Loadings   Rotation Sums of Squared Loadingsa
Component Total  % of Variance  Cumulative %  Total  % of Variance  Cumulative %   Total
1 3.969 39.694 39.694 3.969 39.694 39.694 2.946
2 1.242 12.424 52.118 1.242 12.424 52.118 1.982
3 1.011 10.106 62.224 1.011 10.106 62.224 3.001
4 .918 9.179 71.404
5 .690 6.898 78.302
6 .615 6.150 84.452
7 .592 5.923 90.375
8 .380 3.804 94.179
9 .344 3.445 97.623
10 .238 2.377 100.000
Extraction Method: Principal Component Analysis.
a. When components are correlated, sums of squared loadings cannot be added to obtain a total variance.
Three components explain 62.224% of the total variance.
 Component 1: 39.694%
 Component 2: 12.424%
 Component 3: 10.106%
 Interpretation: These three components summarize the majority of the information contained in the original variables, meaning they explain a significant portion of the variance.
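Each component's share of variance follows directly from its eigenvalue: with 10 standardized variables the total variance is 10, so % of variance = eigenvalue / 10 × 100. A small sketch using the eigenvalues from the table (already rounded, so the result differs from SPSS's 39.694 only in the last decimals):

```python
import numpy as np

# Eigenvalues of the three retained components (Total Variance Explained table)
eigenvalues = np.array([3.969, 1.242, 1.011])
n_variables = 10  # in PCA on standardized data, total variance = number of variables

pct_variance = eigenvalues / n_variables * 100
cumulative = np.cumsum(pct_variance)
print(pct_variance)  # approx. 39.69, 12.42, 10.11
print(cumulative)    # cumulative approx. 62.22% for three components
```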
Component Matrixa
Component
1 2 3
Display .608 .026 -.474
Operating System .694 .343 -.262
Battery Life .374 .715 .126
Processor .636 .431 .244
Camera .630 -.134 .037
Storage .687 -.153 -.453
Connectivity .557 -.418 -.231
Apple Pencil Support .585 -.345 .380
Face ID/ Touch ID .667 -.301 .435
Apple Ecosystem .780 .038 .220
Extraction Method: Principal Component Analysis.
a. 3 components extracted.
This table shows the loadings of each variable on the three components.
 Interpretation: Loadings indicate how strongly a variable correlates with a component.
 For instance, Apple Ecosystem loads highly on Component 1 (0.780), while Battery Life loads highly on Component 2 (0.715).
 Variables can have mixed loadings across components.
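A useful consistency check: summing the squared loadings down a column of the Component Matrix recovers that component's eigenvalue from the Total Variance Explained table. For Component 1:

```python
import numpy as np

# First-component loadings for all ten variables (Component Matrix, column 1)
col1 = np.array([0.608, 0.694, 0.374, 0.636, 0.630,
                 0.687, 0.557, 0.585, 0.667, 0.780])

# The column sum of squared loadings equals that component's eigenvalue
eigenvalue_1 = np.sum(col1 ** 2)
print(round(eigenvalue_1, 2))  # approx. 3.97, matching the eigenvalue 3.969
```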
Pattern Matrixa
Component
1 2 3
Display -.104 .089 -.786
Operating System -.034 .473 -.578
Battery Life -.051 .829 .022
Processor .327 .670 -.029
Camera .426 .078 -.303
Storage .045 -.051 -.828
Connectivity .298 -.286 -.569
Apple Pencil Support .801 -.048 .039
Face ID/ Touch ID .869 .033 .058
Apple Ecosystem .576 .334 -.176
Extraction Method: Principal Component Analysis.
Rotation Method: Oblimin with Kaiser Normalization.
a. Rotation converged in 8 iterations.
Provides factor loadings after rotation, which makes them easier to interpret.
 Interpretation: Variables are better aligned with the components. For example:
 Face ID/ Touch ID strongly loads on Component 1 (0.869).
 Battery Life loads on Component 2 (0.829).
Structure Matrix
Component
1 2 3
Display .260 .261 -.761
Operating System .304 .606 -.677
Battery Life .082 .814 -.155
Processor .455 .733 -.336
Camera .574 .224 -.510
Storage .403 .156 -.835
Connectivity .502 -.097 -.633
Apple Pencil Support .776 .080 -.305
Face ID/ Touch ID .849 .169 -.336
Apple Ecosystem .711 .476 -.512
Extraction Method: Principal Component Analysis.
Rotation Method: Oblimin with Kaiser Normalization.
 Displays the correlation between the variables and components after rotation.
 Interpretation: Similar to the pattern matrix, but based on correlations.
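For an oblique rotation, the two matrices are linked algebraically: the Structure Matrix equals the Pattern Matrix post-multiplied by the component correlation matrix (Phi, reported in the Component Correlation Matrix). Using the Display row as a check:

```python
import numpy as np

# Pattern-matrix row for Display and the component correlation matrix (Phi)
pattern_display = np.array([-0.104, 0.089, -0.786])
phi = np.array([[1.000, 0.172, -0.443],
                [0.172, 1.000, -0.241],
                [-0.443, -0.241, 1.000]])

# For oblique rotations: Structure = Pattern @ Phi
structure_display = pattern_display @ phi
print(structure_display)  # approx. [0.260, 0.261, -0.761], the Structure Matrix row
```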
Component Correlation Matrix
Component 1 2 3
1 1.000 .172 -.443
2 .172 1.000 -.241
3 -.443 -.241 1.000
Extraction Method: Principal Component Analysis.
Rotation Method: Oblimin with Kaiser Normalization.
 Shows correlations between the extracted components.
 Interpretation: Component 1 and Component 2 have a low positive correlation (0.172), while Component 1 and Component 3 have a moderate negative correlation (-0.443). This indicates that the components are relatively distinct.
Initial Cluster Centers
Cluster
1 2
Display 5 1
Operating System 5 1
Battery Life 5 1
Processor 5 1
Camera 5 3
Storage 5 1
Connectivity 5 1
Apple Pencil Support 5 2
Face ID/ Touch ID 5 1
Apple Ecosystem 5 1
 Displays the initial values used to form the clusters.
 Example:
 Cluster 1 has a high initial value for most features (5 for Display, OS, etc.).
 Cluster 2 starts with lower values.
 Interpretation: The initial cluster centers serve as the starting points for grouping similar cases.
Iteration Historya
Change in Cluster Centers
Iteration 1 2
1 4.007 4.777
2 .214 .353
3 .186 .287
4 .181 .263
5 .000 .000
a. Convergence achieved due to no or small change in cluster centers. The maximum absolute coordinate change for any center is .000. The current iteration is 5. The minimum distance between initial centers is 11.874.
 Shows how the cluster centers changed across iterations.
 Interpretation: The clustering converged in 5 iterations, meaning the centers stabilized after this point and no further reassignment of cases changed them.
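SPSS K-Means follows the same assign-then-update loop (Lloyd's algorithm) until the centers stop moving. Below is a minimal NumPy sketch of that loop on synthetic two-group rating data; the actual survey responses are not reproduced here, so the group means (4 and 2.5) and spreads are illustrative assumptions, not values from the report.

```python
import numpy as np

def kmeans(X, centers, max_iter=10, tol=1e-4):
    """Minimal Lloyd's algorithm: assign cases to the nearest center,
    then recompute centers as cluster means, until they stop moving."""
    for _ in range(max_iter):
        # Assign each case to its nearest cluster center (Euclidean distance)
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean of its assigned cases
        new_centers = np.array([X[labels == k].mean(axis=0)
                                for k in range(len(centers))])
        if np.max(np.abs(new_centers - centers)) < tol:  # convergence check
            return new_centers, labels
        centers = new_centers
    return centers, labels

# Synthetic two-group ratings on 10 features (stand-in for the survey data)
rng = np.random.default_rng(1)
group1 = np.clip(rng.normal(4.0, 0.7, size=(30, 10)), 1, 5)
group2 = np.clip(rng.normal(2.5, 0.6, size=(22, 10)), 1, 5)
X = np.vstack([group1, group2])
init = np.vstack([np.full(10, 5.0), np.full(10, 1.0)])  # high/low initial centers
centers, labels = kmeans(X, init)
print(centers.round(1))  # two centers near the two group means
```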
Cluster Membership
Case Number Cluster Distance
1 1 2.356
2 1 3.043
3 1 3.588
4 1 2.443
5 1 2.688
6 1 2.410
7 1 3.142
8 1 3.157
9 1 2.924
10 1 2.984
11 1 3.100
12 1 2.918
13 2 3.341
14 2 4.020
15 2 3.501
16 1 3.005
17 1 1.984
18 1 2.913
19 2 3.391
20 2 2.502
21 1 2.118
22 2 1.335
23 1 1.694
24 1 1.787
25 2 3.355
26 2 2.064
27 1 3.588
28 2 2.539
29 1 2.430
30 1 3.172
31 . .
32 1 4.856
33 2 5.457
34 2 2.262
35 2 2.972
36 1 2.048
37 2 3.112
38 1 2.578
39 1 2.890
40 1 5.218
41 1 3.238
42 2 3.232
43 2 2.005
44 1 2.534
45 2 3.035
46 2 2.187
47 2 3.667
48 1 2.736
49 1 3.641
50 2 2.219
51 2 2.595
52 2 4.527
53 1 2.724
 Each case (respondent) is assigned to a cluster based on similarity. For instance, Case 1 belongs to Cluster 1 with a distance of 2.356. Case 31 has missing data and was not assigned.
 Interpretation: This helps in understanding how individual cases are grouped into clusters based on their characteristics.
Final Cluster Centers
Cluster
1 2
Display 4 3
Operating System 4 3
Battery Life 4 4
Processor 4 3
Camera 4 2
Storage 4 3
Connectivity 4 3
Apple Pencil Support 4 3
Face ID/ Touch ID 4 3
Apple Ecosystem 4 3
 After the iterations, the final cluster centers are listed.
 Interpretation: For instance, both clusters rate Battery Life similarly (4), but they differ on Camera (Cluster 1 = 4, Cluster 2 = 2). This shows the different characteristics of the two clusters.
Distances between Final Cluster
Centers
Cluster 1 2
1 3.476
2 3.476
 The distance between the two final clusters is 3.476.
 Interpretation: The larger the distance, the more distinct the clusters are from each other.
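The reported value is the Euclidean distance between the two final center vectors. Recomputing it from the rounded centers in the table above gives nearly the same number (SPSS uses the unrounded centers, hence the small gap):

```python
import numpy as np

# Final cluster centers as reported (rounded to integers in the SPSS output),
# in table order: Display, OS, Battery, Processor, Camera, Storage,
# Connectivity, Pencil, Face ID/Touch ID, Ecosystem
cluster1 = np.array([4, 4, 4, 4, 4, 4, 4, 4, 4, 4], dtype=float)
cluster2 = np.array([3, 3, 4, 3, 2, 3, 3, 3, 3, 3], dtype=float)

# Euclidean distance between the two final centers
distance = np.linalg.norm(cluster1 - cluster2)
print(round(distance, 3))  # approx. 3.464; SPSS reports 3.476 from unrounded centers
```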

ANOVA
                     Cluster             Error
                     Mean Square   df    Mean Square   df    F       Sig.
Display 17.880 1 1.324 50 13.505 <.001
Operating System 7.534 1 .826 50 9.122 .004
Battery Life 1.000 1 .882 50 1.135 .292
Processor 12.829 1 1.225 50 10.476 .002
Camera 19.048 1 .833 50 22.880 <.001
Storage 17.152 1 .742 50 23.130 <.001
Connectivity 17.152 1 .902 50 19.025 <.001
Apple Pencil Support 7.804 1 .978 50 7.982 .007
Face ID/ Touch ID 21.747 1 1.305 50 16.663 <.001
Apple Ecosystem 29.128 1 .717 50 40.622 <.001
The F tests should be used only for descriptive purposes because the clusters have been chosen to maximize the differences among cases in different clusters. The observed significance levels are not corrected for this and thus cannot be interpreted as tests of the hypothesis that the cluster means are equal.
 This table compares the between-cluster and within-cluster variance for each variable.
 Significant results (e.g., Display: F = 13.505, Sig. < 0.001) indicate that the variable significantly differs between clusters. (Battery Life, Sig. = 0.292, is the exception.)
 Interpretation: The significant F-values for most variables show that the clusters are distinct, meaning these variables capture meaningful differences between the groups.
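Each F statistic in the table is simply the between-cluster (Cluster) mean square divided by the within-cluster (Error) mean square. For Display:

```python
# F statistic = between-cluster mean square / within-cluster mean square
display_cluster_ms = 17.880  # Cluster Mean Square for Display (ANOVA table)
display_error_ms = 1.324     # Error Mean Square for Display

F = display_cluster_ms / display_error_ms
print(round(F, 3))  # 13.505, matching the reported F for Display
```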
Number of Cases in each Cluster
Cluster 1 31.000
2 21.000
Valid 52.000
Missing 1.000
 Cluster 1 has 31 cases, while Cluster 2 has 21.
 Interpretation: This helps in understanding the distribution of cases across clusters.
1. KMO and Bartlett's Test:
 KMO Measure of Sampling Adequacy (.754): The Kaiser-Meyer-Olkin (KMO) value of 0.754 indicates that the sampling is adequate for factor analysis. A value closer to 1 is considered ideal, suggesting that the variables have enough in common for the analysis.
 Bartlett's Test of Sphericity (Chi-Square 155.947, Sig. < .001): This test checks whether the correlation matrix is an identity matrix, which would imply that factor analysis is inappropriate. A significant result (p < .001) means that factor analysis is suitable, as the variables are correlated.
2. ANOVA Table:
 Cluster ANOVA: The ANOVA table compares the means of variables across clusters. Significant F-values for variables like Display, Operating System, Processor, and Camera (p < .05) indicate that these features vary significantly between clusters. For instance, the F-value for Display (13.505, Sig. < .001) suggests that the Display variable significantly differs between the identified clusters.
However, the F-tests are descriptive, as clusters were selected to maximize differences, so they should not be interpreted as hypothesis tests for mean equality.
3. Cluster Analysis:
 Initial and Final Cluster Centers: The initial cluster centers represent the starting point for the clustering process, while the final cluster centers show the final average scores of each variable for the clusters after the iterations. For example, in the final clusters, both clusters rated Battery Life at 4, indicating it is seen as roughly equally important by both groups.
 Distances between Final Cluster Centers: The distance between the clusters is 3.476, which represents how distinct the two clusters are from each other. The larger the distance, the more dissimilar the clusters are.
 Cluster Membership: Each case is assigned to a specific cluster, with 31 cases in Cluster 1 and 21 in Cluster 2, helping you interpret how different cases (respondents) were grouped based on their feature preferences.
4. Component Rotation:
 Principal Component Analysis (PCA) and Oblimin Rotation: Component rotation helps make the output more interpretable by redistributing variance across the components. The Oblimin rotation with Kaiser Normalization allows the factors to be correlated, as confirmed by the component correlation matrix, where correlations between components are not zero (e.g., the correlation between Component 1 and 2 is .172). This method simplifies the factor structure, making it easier to identify clusters of related features. For example, Component 1 groups high loadings for variables like Apple Pencil Support and Face ID/Touch ID, implying these variables are closely related.
