Popular posts from this blog

The kernel trick: the most powerful technique of SVM

The kernel trick is the most important and powerful technique of SVM.

Linear vs Non-Linear dataset

[Figure: a linearly separable dataset vs a non-linear dataset]

Problem Statement

So far we have learned how to apply the SVM algorithm to linear datasets, but what if we have a non-linear dataset?

Solution

The solution is the kernel trick.

Kernel Trick

The kernel trick implicitly maps the data into a higher-dimensional feature space using a kernel function (for example a polynomial or RBF kernel), so that data which is not linearly separable in the original space can be separated by a linear boundary in the new space. In scikit-learn we can use the SVC class, which applies the RBF kernel by default.

Implementation

To implement it, follow the code given below:

from sklearn.svm import SVC

svc = SVC()                # default kernel='rbf' handles non-linear data
svc.fit(X_train, y_train)  # X_train and y_train are assumed to be defined already
svc.score(X_test, y_test)  # mean accuracy on the held-out test set
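As a runnable sketch of why the kernel matters (not from the original post; the dataset and split are illustrative assumptions), here is a comparison of a linear kernel against the default RBF kernel on scikit-learn's two-moons dataset, which no straight line can separate:

from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two interleaving half-moons: a non-linear dataset
X, y = make_moons(n_samples=500, noise=0.2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

for kernel in ("linear", "rbf"):
    svc = SVC(kernel=kernel)
    svc.fit(X_train, y_train)
    print(kernel, svc.score(X_test, y_test))

The RBF kernel typically scores noticeably higher here, because it can bend the decision boundary around the moons while the linear kernel cannot.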

Loss function

A loss function is a quantity that measures how badly our model is doing: the higher the loss, the worse the model performs; the lower the loss, the better. Today we learned about loss functions for regression; loss functions for classification will be discussed in another class.

There are several loss functions for regression:

MSE (Mean Squared Error)
MAE (Mean Absolute Error)
RMSE (Root Mean Squared Error)
R2 Score

MSE

MSE = (1/n) Σ (yᵢ - pᵢ)²

Note: MSE is strongly affected by outliers, because the errors are squared.

MAE

MAE = (1/n) Σ |yᵢ - pᵢ|

RMSE

RMSE = √( (1/n) Σ (yᵢ - pᵢ)² )

R2 Score

R2 Score = 1 - RSS/TSS

RSS = residual sum of squares = Σ (yᵢ - pᵢ)²
TSS = total sum of squares = Σ (yᵢ - ȳ)²

where yᵢ is the true value, pᵢ is the model's prediction, and ȳ is the mean of the true values.
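As a minimal sketch (not from the original post; the numbers are made up for illustration), all four metrics can be computed with scikit-learn and NumPy:

import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_true = np.array([3.0, 5.0, 7.5, 10.0])  # hypothetical true values
y_pred = np.array([2.5, 5.5, 7.0, 11.0])  # hypothetical model predictions

mse = mean_squared_error(y_true, y_pred)
mae = mean_absolute_error(y_true, y_pred)
rmse = np.sqrt(mse)                        # RMSE is just the square root of MSE
r2 = r2_score(y_true, y_pred)

print(f"MSE={mse:.3f} MAE={mae:.3f} RMSE={rmse:.3f} R2={r2:.3f}")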

Overfitting

Overfitting is a famous term in machine learning. It means that accuracy on our training data is good, but on the testing data and on new data from users our model performs badly.

[Figure: a good model vs an overfit model on the training data]

To understand this, we have to study the bias-variance tradeoff. Bias and variance are both kinds of loss: roughly speaking, bias is the loss on the training data, while variance is the loss on the testing data. They have an inverse relationship:

B ∝ 1/V

In overfitting, bias is very low but variance is very high.

Prevention

If this is happening to you, one simple remedy is to use ensemble techniques.
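As a minimal sketch (not from the original post; the synthetic dataset is an illustrative assumption), an unconstrained decision tree shows the classic overfitting signature, which a bagged ensemble such as a random forest then reduces:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unconstrained tree memorizes the training set: low bias, high variance
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("tree   train/test:", tree.score(X_train, y_train), tree.score(X_test, y_test))

# Averaging many randomized trees (bagging) keeps bias low while cutting variance
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("forest train/test:", forest.score(X_train, y_train), forest.score(X_test, y_test))

The tree typically scores near 100% on the training set but noticeably lower on the test set; the forest narrows that gap.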