Knowledge preparation

  • Before starting this case, you should be able to read and write basic Python code; otherwise you may not follow the logic of the programs and end up spending twice the effort for half the result. You should also understand the general process of controlling the robotic arm with pymycobot. If you have not used pymycobot before, you can refer to the following tutorials: pymycobot API reference documents, pymycobot course
  • Since this case is implemented on ROS, you should briefly understand how ROS works before starting the tutorial. You can refer to the tutorial: ROS Official introduction


  1. The password of the virtual machine image is 123.

  2. The vision.launch file must be kept running while the programs are running.

  3. The device name of the robotic arm in the virtual machine image should match the port value in the launch/vision.launch file.

  4. For the ArUco marker cases, the offset may need to be adjusted to suit your actual setup.

  5. The objects that the image recognition model can recognize are listed in: ~/catkin_mycobot/src/mycobot/mycobot_ai/scripts/labels.json

  6. If the computer has a built-in camera, you need to adjust the camera-index parameter cap_num in the program; it can be set to 0 or 1.
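Note 4 above amounts to adding a fixed correction to the coordinates detected from the ArUco marker. A minimal sketch follows; the offset names and values are assumptions for illustration, not the actual variables used in the demo scripts:

```python
# Hypothetical offsets (in metres) between where the camera sees the
# marker and where the arm should actually grasp; tune these to your
# own setup, as note 4 advises.
X_OFFSET = 0.5
Y_OFFSET = -0.5

def apply_offset(x, y, dx=X_OFFSET, dy=Y_OFFSET):
    """Shift a detected (x, y) coordinate by the fixed offsets."""
    return x + dx, y + dy
```

Start with offsets of 0, run the grasping demo, measure how far the gripper lands from the target, and fold that error back into the offsets.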
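To inspect the labels file from note 5, a small script like the following works. The sample entries below are made up for illustration, since the real schema of labels.json is not shown here; point the load at the actual path instead of the temporary sample:

```python
import json
import os
import tempfile

# Made-up sample standing in for the real labels.json
# (~/catkin_mycobot/src/mycobot/mycobot_ai/scripts/labels.json).
sample = {"0": "apple", "1": "banana"}

path = os.path.join(tempfile.mkdtemp(), "labels.json")
with open(path, "w") as f:
    json.dump(sample, f)

# Load and print every label the recognition model knows about.
with open(path) as f:
    labels = json.load(f)

for idx, name in sorted(labels.items()):
    print(idx, name)
```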
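Note 6 boils down to probing camera indices: the demos open the camera with OpenCV's cv2.VideoCapture(cap_num). The helper below is a sketch with hypothetical names; it only encodes the "try 0, then 1" rule without touching hardware, so you can see the logic in isolation:

```python
def choose_cap_num(available_indices, preferred=(0, 1)):
    """Return the first preferred camera index that is present.

    On a laptop with a built-in webcam, index 0 is usually the built-in
    camera and index 1 the external USB camera (or the other way round),
    which is why note 6 says to try cap_num = 0 or 1.  In the real demo
    you would pass the chosen value to cv2.VideoCapture(cap_num).
    """
    for cap_num in preferred:
        if cap_num in available_indices:
            return cap_num
    raise RuntimeError("no camera found among indices %r" % (preferred,))
```

If the external camera's image does not appear, swap cap_num between 0 and 1 and rerun the program.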

The color recognition and picture recognition cases are modified as follows:


The aruco code identification case is modified as follows:

