Method and apparatus for rating applications. Execution of an application on a computing device is monitored to determine usage information for a user. Embodiments capture a plurality of images and, for each of the plurality of images, extract from the respective image a set of user facial features and determine a user emotional state corresponding to the respective set of user facial features by applying a model that correlates a set of predefined emotional states with corresponding predefined facial features. A trend of user emotional states across a plurality of executions of the application on the computing device is determined. Embodiments calculate a rating for the application based on the usage information, the user emotional states, and the trend of the user emotional states. The rating is sent to a server over a network connection for use in an aggregate rating of the application.
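The rating computation described above can be sketched in Python. This is a minimal illustration only, not the claimed implementation: the emotion classifier is replaced by a toy threshold on a single hypothetical "smile intensity" feature (the abstract specifies a model over full facial-feature sets), and all score values and weights below are assumptions introduced for the example.

```python
# Toy stand-in for the model correlating facial features with
# predefined emotional states; here a single threshold feature.
def classify_emotion(smile_intensity):
    if smile_intensity > 0.6:
        return "happy"
    if smile_intensity < 0.3:
        return "frustrated"
    return "neutral"

# Assumed numeric scores per predefined emotional state.
STATE_SCORE = {"happy": 5.0, "neutral": 3.0, "frustrated": 1.0}

def session_rating(smile_intensities):
    """Average emotion score over images captured in one execution."""
    states = [classify_emotion(s) for s in smile_intensities]
    return sum(STATE_SCORE[s] for s in states) / len(states)

def app_rating(sessions, usage_minutes):
    """Combine per-execution emotion scores, their trend across
    executions, and usage time into one rating (weights assumed)."""
    scores = [session_rating(s) for s in sessions]
    mean = sum(scores) / len(scores)
    # Trend across executions: positive if later sessions score higher.
    trend = scores[-1] - scores[0]
    usage_bonus = min(usage_minutes / 60.0, 1.0)  # cap usage credit
    return round(mean + 0.5 * trend + usage_bonus, 2)
```

In a full system the resulting value would be transmitted to a server and combined with other users' ratings into the aggregate rating.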