Jack's Notebook

In me the tiger sniffs the rose.



The principle of XGBoost

Posted on 2017-09-20

Xgboost

Introduction to XGBoost

XGBoost is short for “Extreme Gradient Boosting”; this post is a tutorial on gradient boosted trees, following the introduction by the author of XGBoost. XGBoost is used for supervised learning problems, where we use the training data (with multiple features) $x_i$ to predict a target variable $y_i$.
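As a quick illustration of this supervised-learning setup, here is a minimal sketch using the xgboost Python package together with scikit-learn; the synthetic dataset and the hyperparameter values are illustrative only, not taken from the post.

```python
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split

# Illustrative synthetic data: features x_i and a target y_i to predict.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Gradient boosted trees; these hyperparameters are placeholder values.
model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X_train, y_train)
print("test R^2:", model.score(X_test, y_test))
```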

Read more »

How to implement a neural network

Posted on 2017-09-19

Neural Network Basics

Derive the gradient of the sigmoid function.

The sigmoid function is:

$$ \mathrm{sigmoid}(x) = \sigma(x) = \frac{1}{1 + e^{-x}} $$

$$ \sigma'(x) = -\frac{1}{(1+e^{-x})^2} \cdot (-e^{-x}) $$
$$ = \frac{1}{1+e^{-x}} \cdot \frac{e^{-x}}{1+e^{-x}} $$
$$ = \frac{1}{1+e^{-x}} \cdot \left(1 - \frac{1}{1+e^{-x}}\right) $$
$$ = \sigma(x) \cdot (1-\sigma(x)) $$
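As a sanity check (not part of the original derivation), the identity can be verified numerically by comparing $\sigma(x)(1-\sigma(x))$ with a central finite-difference estimate of the derivative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-5.0, 5.0, 11)
h = 1e-5
analytic = sigmoid(x) * (1.0 - sigmoid(x))               # sigma'(x) from the derivation
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2.0 * h)  # finite-difference estimate
print(np.max(np.abs(analytic - numeric)))                # ~1e-11: the two agree
```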

Read more »

How to understand the softmax function

Posted on 2017-09-19

Softmax

A property of the softmax function

(a) Prove that softmax is invariant to constant offsets in the input, that is, for any input vector $x$ and any constant $c$,
$$softmax(x + c) = softmax(x)$$
where $x + c$ means adding the constant $c$ to every dimension of $x$.

First of all, write out the softmax function with the offset applied:

$$ softmax(x + c)_i = \frac{e^{x_i + c}}{\sum_j e^{x_j + c}} $$
$$ = \frac{e^{x_i} \cdot e^{c}}{e^{c} \cdot \sum_j e^{x_j}} $$
$$ = \frac{e^{x_i}}{\sum_j e^{x_j}} $$
$$ = softmax(x)_i $$
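A quick numerical check of this invariance (illustrative code, not from the post); the same identity with $c = -\max_j x_j$ is the standard trick for computing softmax without overflow:

```python
import numpy as np

def softmax(x):
    shifted = x - np.max(x)   # uses the invariance itself for numerical stability
    e = np.exp(shifted)
    return e / e.sum()

x = np.array([1.0, 2.0, 3.0])
c = 100.0
print(softmax(x))
print(softmax(x + c))                           # same values as softmax(x)
print(np.allclose(softmax(x), softmax(x + c)))  # True
```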

Read more »

ZHANG Bo

Stay hungry. Stay foolish.
