A user's gaze on a computer screen is widely used in human-computer interaction, virtual reality, the computer game industry, usability testing, improving the quality of life of physically disabled people, and drowsiness detection. In this work, gaze image data and the computer screen are spatially mapped using appearance-based video oculography methods, and the gaze directions left, right, down, up, center, as well as the closed-eye state, are detected. To this end, a new gaze database is created and the color space channels that best describe the appearance of the gaze images are selected. The user's gaze direction is detected using Principal Component Analysis (PCA) for feature extraction together with various machine learning approaches, and the best approach is selected by comparing the classification accuracies of the applied methods. Ultimately, the combination of PCA and an Artificial Neural Network (ANN) achieves 95.36% estimation accuracy over the five gaze directions and the closed-eye state, and 98.0% average accuracy for the left/right and up/down directions, which is comparable to results reported in the literature.
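The pipeline described above can be sketched as follows. This is a minimal, hypothetical illustration only: it uses synthetic 8x8 "eye images" in place of the real gaze database, PCA via SVD for feature extraction, and a nearest-centroid classifier as a simple stand-in for the ANN used in the paper; all array sizes and class names are assumptions for the example.

```python
import numpy as np

# Synthetic stand-in for the gaze database: six classes (five gaze
# directions plus the closed-eye state), each with its own mean
# appearance, so the classification task is learnable.
rng = np.random.default_rng(0)
classes = ["left", "right", "down", "up", "center", "closed"]
n_per_class, dim = 40, 64                # 8x8 images, flattened

means = rng.normal(0.0, 3.0, size=(len(classes), dim))
X = np.vstack([means[c] + rng.normal(0.0, 1.0, size=(n_per_class, dim))
               for c in range(len(classes))])
y = np.repeat(np.arange(len(classes)), n_per_class)

# PCA feature extraction via SVD: center the data, then project
# onto the top-k principal components.
k = 10
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:k].T                        # PCA feature vectors

# Simple classifier in PCA space: assign each sample to the nearest
# class centroid (the paper trains an ANN instead).
centroids = np.array([Z[y == c].mean(axis=0) for c in range(len(classes))])
dists = ((Z[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
pred = dists.argmin(axis=1)
accuracy = (pred == y).mean()
print(f"training accuracy on synthetic data: {accuracy:.2f}")
```

On this well-separated synthetic data the classifier is near perfect; the reported 95.36% figure refers to the paper's real gaze images and ANN classifier, not to this sketch.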