Owing to the difficulties posed by its distinctive bipedal gait, there is currently no reliable odometry scheme for the humanoid robot. We propose a real-time, low-cost odometry algorithm for humanoid robots that uses objects in the environment as landmarks. The algorithm comprises four modules: visual measurement, kinematic odometry, filtering correction, and model correction. In the visual measurement module, the monocular camera image is segmented and processed with morphological operations to identify and locate the landmarks, from which the robot pose is estimated using prior information. In the kinematic odometry module, the robot posture is computed from the joint angle data via robot kinematics, and the odometry increment is obtained by differencing successive postures. In the filtering correction module, an unscented Kalman filter corrects the kinematic odometry using multiple sets of visual landmark measurements. In the model correction module, the correction data serve as a training dataset: whenever the visual measurement is judged invalid, a generalized regression neural network corrects the kinematic odometry instead. We verified the proposed algorithm on the NAO robot. In the experiments, the average localization error was less than 3 cm, the attitude-angle error was approximately 2°, and the average program execution time was 7.94 ms. The proposed algorithm thus achieves high localization accuracy and excellent real-time performance.
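The kinematic odometry module obtains the odometry increment by differencing successive robot postures. A minimal sketch of that differencing step in SE(2) is shown below; the `(x, y, yaw)` pose layout and the convention of expressing the increment in the previous body frame are our assumptions, not details from the paper.

```python
import numpy as np

def odometry_increment(prev_pose, curr_pose):
    """Difference of two planar poses (x, y, yaw), expressed in the
    previous body frame, as a stand-in for the kinematic odometry step."""
    dx, dy = curr_pose[:2] - prev_pose[:2]
    c, s = np.cos(prev_pose[2]), np.sin(prev_pose[2])
    # rotate the world-frame displacement into the previous body frame
    local_dx = c * dx + s * dy
    local_dy = -s * dx + c * dy
    # wrap the heading difference to (-pi, pi]
    dyaw = np.arctan2(np.sin(curr_pose[2] - prev_pose[2]),
                      np.cos(curr_pose[2] - prev_pose[2]))
    return np.array([local_dx, local_dy, dyaw])
```

For example, a robot facing +y (yaw = π/2) that moves from (0, 0) to (0, 1) yields the body-frame increment (1, 0, 0), i.e. pure forward motion.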
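The model correction module falls back on a generalized regression neural network (GRNN) when the visual measurement is invalid. A GRNN prediction is a Gaussian-kernel-weighted average of the training targets; the sketch below shows that core computation. The training pairs, the smoothing parameter `sigma`, and the one-dimensional input layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.5):
    """GRNN prediction: Gaussian-kernel-weighted average of the
    training targets, with smoothing parameter sigma (assumed value)."""
    d2 = np.sum((X_train - x) ** 2, axis=1)   # squared distances to x
    w = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian kernel weights
    return w @ y_train / np.sum(w)            # weighted average of targets
```

In this scheme, the pairs (kinematic increment, filter correction) collected while vision is valid would form `X_train`/`y_train`, and `grnn_predict` would supply the correction when vision drops out.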