Disclaimer for Machine Learning

If you require any more information or have any questions about our site's disclaimer, please feel free to contact us by email at aarushdixit73@gmail.com. Our Disclaimer was generated with the help of the Free Disclaimer Generator.


All the information on this website - https://machinelearninghero.blogspot.com/ - is published in good faith and for general information purposes only. Machine Learning does not make any warranties about the completeness, reliability, or accuracy of this information. Any action you take upon the information you find on this website (Machine Learning) is strictly at your own risk. Machine Learning will not be liable for any losses and/or damages in connection with the use of our website.

From our website, you can visit other websites by following hyperlinks to such external sites. While we strive to provide only quality links to useful and ethical websites, we have no control over the content and nature of these sites. These links to other websites do not imply a recommendation for all the content found on these sites. Site owners and content may change without notice, and this may occur before we have the opportunity to remove a link which may have gone 'bad'.

Please also be aware that when you leave our website, other sites may have different privacy policies and terms which are beyond our control. Please be sure to check the Privacy Policies of these sites, as well as their "Terms of Service", before engaging in any business or uploading any information.

Consent

By using our website, you hereby consent to our disclaimer and agree to its terms.

Update

Should we update, amend or make any changes to this document, those changes will be prominently posted here.

Popular posts from this blog

The most powerful kernel trick of SVM

The kernel trick is the most important and powerful technique of SVM.

Figure: a linear vs. a non-linear dataset.

Problem Statement: So far we have learned how to apply the SVM algorithm to linear datasets, but what if we have a non-linear dataset?

Solution: the kernel trick. A kernel function (such as the polynomial or RBF kernel) computes inner products as if the data had been mapped into a higher-dimensional space, without ever performing that mapping explicitly. In that space the classes can become linearly separable, so the SVM finds a linear boundary there that corresponds to a non-linear boundary in the original space. In scikit-learn, the kernel is chosen through the kernel parameter of the SVC class.

Implementation: follow the code given below -

from sklearn.svm import SVC
svc = SVC(kernel="rbf")  # "rbf" is also the default kernel
svc.fit(X_train, y_train)
svc.score(X_test, y_test)
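To see the kernel trick in action, here is a minimal sketch comparing a linear kernel with the RBF kernel on a non-linear dataset. The dataset (make_moons), noise level, and random seeds are illustrative choices, not from the original post:

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# A toy non-linear (moon-shaped) dataset - illustrative choice
X, y = make_moons(n_samples=500, noise=0.2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# A linear kernel can only draw a straight boundary...
linear_svc = SVC(kernel="linear").fit(X_train, y_train)

# ...while the RBF kernel implicitly works in a higher-dimensional
# space, where a linear separator corresponds to a curved boundary here.
rbf_svc = SVC(kernel="rbf").fit(X_train, y_train)

print("linear kernel accuracy:", linear_svc.score(X_test, y_test))
print("RBF kernel accuracy:   ", rbf_svc.score(X_test, y_test))
```

On data like this, the RBF kernel typically scores noticeably higher than the linear one.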

Loss function

A loss function is a quantity that measures how badly our model is performing: the greater the loss, the poorer the model performs; the smaller the loss, the better. Today we learn about loss functions in regression; loss functions in classification are discussed in another class.

There are several loss functions in regression:

MSE (Mean Squared Error)
MAE (Mean Absolute Error)
RMSE (Root Mean Squared Error)
R2 Score

With yᵢ the true values, pᵢ the predictions, ȳ the mean of the true values, and n the number of samples:

MSE = (1/n) Σ (yᵢ − pᵢ)²
Note - MSE is strongly affected by outliers, because errors are squared.

MAE = (1/n) Σ |yᵢ − pᵢ|

RMSE = √MSE = √((1/n) Σ (yᵢ − pᵢ)²)

R2 Score = 1 − RSS/TSS
RSS = Residual Sum of Squares = Σ (yᵢ − pᵢ)²
TSS = Total Sum of Squares = Σ (yᵢ − ȳ)²
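The four metrics above can be computed in a few lines of NumPy. The y and p arrays below are made-up illustrative values, not data from the post:

```python
import numpy as np

y = np.array([3.0, 5.0, 2.5, 7.0])  # true values (illustrative)
p = np.array([2.5, 5.0, 4.0, 8.0])  # predictions (illustrative)

mse = np.mean((y - p) ** 2)          # (1/n) * sum of squared errors
mae = np.mean(np.abs(y - p))         # (1/n) * sum of absolute errors
rmse = np.sqrt(mse)                  # square root of MSE

rss = np.sum((y - p) ** 2)           # residual sum of squares
tss = np.sum((y - np.mean(y)) ** 2)  # total sum of squares
r2 = 1 - rss / tss

print(f"MSE={mse:.4f} MAE={mae:.4f} RMSE={rmse:.4f} R2={r2:.4f}")
```

scikit-learn also provides these directly as mean_squared_error, mean_absolute_error, and r2_score in sklearn.metrics.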

Overfitting

Overfitting is a famous term in machine learning: it means that accuracy on our training data is good, but on the test data, and on new data from users, the model performs badly.

Figure: a good model vs. an overfit model on training data.

To understand this we study the bias-variance tradeoff. Bias is the error on the training data, while variance describes how much performance degrades when moving from training data to test data. They have an inverse relationship: as bias goes down, variance tends to go up (B is inversely proportional to V). In overfitting, bias is very low but variance is very high.

Prevention: If this is happening to you, one simple remedy is to use ensemble techniques, which combine many models and thereby reduce variance.
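The gap between training and test accuracy can be seen directly by comparing a single unpruned decision tree (which overfits) with a bagged ensemble of trees (a random forest). The dataset and seeds below are illustrative assumptions, not from the original post:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# A synthetic classification dataset - illustrative choice
X, y = make_classification(n_samples=600, n_features=20,
                           n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unpruned tree memorises the training set: train accuracy is
# perfect, but the train/test gap (the variance) is large.
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# A random forest averages many trees, shrinking that gap.
forest = RandomForestClassifier(random_state=0).fit(X_train, y_train)

print("tree   train/test:", tree.score(X_train, y_train),
      tree.score(X_test, y_test))
print("forest train/test:", forest.score(X_train, y_train),
      forest.score(X_test, y_test))
```

The single tree typically reaches 100% training accuracy while the forest generalises better on the held-out test set.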