Let's build a decision tree on the entire data set to predict whether a student is on task. In the data set, you'll see that the on-task variable has already been coded by a human observer. To evaluate the model, we can use Cohen's kappa as a measure of agreement between the human coder and the machine, or more simply, whether the machine's predictions align with how the human observed and coded students as being on task. What is the kappa? Please round to 2 decimal places.
You will need to use the following packages:
tree from sklearn
cohen_kappa_score from sklearn.metrics
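A minimal sketch of one way to do this is below. The file name student_data.csv and the column name ontask are placeholders for illustration; substitute the file and on-task label actually used in your data set.

```python
import pandas as pd
from sklearn import tree
from sklearn.metrics import cohen_kappa_score

# Hypothetical file and column names -- replace with your own.
df = pd.read_csv("student_data.csv")

# The human-coded on-task label (already present in the data set).
y = df["ontask"]
X = df.drop(columns=["ontask"])

# Fit a decision tree on the entire data set (no train/test split here).
clf = tree.DecisionTreeClassifier()
clf.fit(X, y)

# Machine predictions for every observation.
y_pred = clf.predict(X)

# Cohen's kappa: agreement between the human coder and the model.
kappa = cohen_kappa_score(y, y_pred)
print(f"Kappa: {kappa:.2f}")
```

Because the tree is both trained and evaluated on the same data, the kappa here measures in-sample agreement only; it will overstate how well the model would code unseen students.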