RESEARCH ON DYNAMIC TRACKING METHOD OF ASSISTED PUNCTURE ROBOT BASED ON POSITION VISION

Guanwu Jiang, Keqiang Bai, Shiliang Nie, Ran Wang, and Abdelaziz Omar

1. Introduction

References [7] and [8] study puncture surgery under CT and ultrasound guidance. In [9], a new value iteration-based path planning method called the capability iteration network (CIN) is proposed; it achieves higher accuracy, faster convergence and lower sensitivity to the random seed than previous value-iteration-based models. Reference [10] presents an augmented reality surgical navigation system based on multi-view virtual and real registration, addressing the fact that traditional spinal puncture navigation systems cannot monitor the surgical process in real time and their navigation accuracy is limited; its positioning accuracy is 1.70 ± 0.25 mm. Reference [11] studies three-dimensional motion tracking of dynamic objects through monocular vision and inertial sensing technology. Guo et al. [12] proposed a vision-based leader–follower tracking control system using two autonomous mobile robots and model predictive control (MPC).

This paper studies a dynamic measurement method based on position vision. For the workflow of the auxiliary precision puncture robot system, accurate extraction of marker points based on binocular vision positioning is adopted. For the puncture operation process, the micro motion error of the patient's body is acquired: with the help of the spatial transformation of the marker points in the optical coordinate system, the change of the puncture path set by the doctor after the patient's body moves slightly is solved, and the difference is sent to the data processing computer for error compensation during robot puncture positioning, so as to realise error elimination and accurate positioning.

2. System Design Description

2.1 Model Principle of Spatial Mapping

To address the clinical problems faced by traditional puncture surgery, an auxiliary precision puncture robot system based on medical image guidance is built [13].
The auxiliary puncture robot finally completes the puncture positioning process with accurate position and posture according to the instructions sent by the data processing computer [14].
The hardware composition of the auxiliary precision puncture robot system is shown in Fig. 1.

In the medical puncture surgery performed by the auxiliary puncture robot, the spatial mapping of the marker-based auxiliary precision puncture robot system mainly involves four coordinate systems: the computer image coordinate system {V}, the patient coordinate system {P}, the optical coordinate system {m} and the robot coordinate system {r}. The spatial mapping relationship is shown in Fig. 2.

Figure 2. Spatial mapping process.

The binocular camera recognises the image coordinates of the circular mark points in the binocular image planes and, through image processing, accurately deduces the coordinates of the circle centres of the mark points in the visual space coordinate system. The binocular camera also recognises the coordinates of the circular mark points on the robot body and deduces the coordinates of the robot in the visual space coordinate system [16]. The mapping relationship between image space and robot space is determined by the coordinate positions of the circular marker points in image space and robot space.

Figure 3. Comparison of marked points between the 3D model and the CT image 3D reconstruction model.

2.2 Solution of Spatial Mapping Process

In the assisted precision puncture robot system based on medical images [17], the three-dimensional model of the medical image is scanned by CT and mapped to the computer image coordinate system as shown in Fig. 3. The marked points pasted in the patient coordinate system {P} of the diseased tissue are mapped to the computer image coordinate system {m} and the binocular visual optical coordinate system {C}. The marked points pasted in the patient coordinate system {P} and the marked points in the robot coordinate system {r} are mapped to the binocular visual optical coordinate system {C}. Through this series of mapping transformations [18], any point P in the computer image coordinate system {m} can be mapped to the robot coordinate system {r}, so as to realise the accurate positioning function of the auxiliary puncture robot system.

For any point ${}^{m}P$ in the computer image coordinate system {m}, rotation and translation map it to the corresponding point ${}^{r}P$ in the robot coordinate system {r}, which can be expressed as:

$$^{r}P = T \cdot {}^{m}P \tag{1}$$

where $T = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix}$ is the mapping transformation matrix composed of the rotation and translation parts, $R$ is a $3 \times 3$ rotation matrix and $t$ is a $3 \times 1$ translation vector.
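As a minimal numerical sketch of eq. (1), the rotation and translation can be assembled into a homogeneous matrix and applied to a point; the rotation, translation and point below are illustrative values, not parameters of the actual system:

```python
import numpy as np

# Illustrative rotation (30 degrees about z) and translation in millimetres.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([10.0, -5.0, 120.0])

# Homogeneous mapping matrix T = [[R, t], [0, 1]] of eq. (1).
T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = t

p_image = np.array([25.0, 40.0, 60.0, 1.0])   # point in {m}, homogeneous coordinates
p_robot = T @ p_image                         # corresponding point in {r}
print(p_robot[:3])
```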
The corresponding features in the two workspaces are extracted, and the transformation matrix T is obtained to realise the mapping from the computer image coordinate system {m} to the robot coordinate system {r}. Based on the binocular vision positioning system, there are five coordinate systems: the computer image coordinate system, the optical coordinate system, the robot coordinate system, the 3D model affine coordinate system and the robot affine coordinate system. The mapping process is solved as follows.

(1) Image coordinate system – 3D model affine coordinate system

Paste more than four marker points on the 3D model to establish a 3D model affine coordinate system, and use four of them to establish a reference coordinate system [19]. Select point M0 as the origin of the reference coordinate system. The 3D model affine coordinate system has a translation and rotation relative to the CT image coordinate system; the translation distances along the x, y and z axes are $x_{m0}$, $y_{m0}$ and $z_{m0}$, respectively, and its rotation about the x, y and z axes is assumed to be 0. The other three marker points M1 $(x_{m1}, y_{m1}, z_{m1})$, M2 $(x_{m2}, y_{m2}, z_{m2})$ and M3 $(x_{m3}, y_{m3}, z_{m3})$ establish the affine relations. The mapping matrix from the 3D model affine coordinate system to the image coordinate system can be expressed as:

$$T_1 = \begin{bmatrix} x_{m1}-x_{m0} & x_{m2}-x_{m0} & x_{m3}-x_{m0} & x_{m0} \\ y_{m1}-y_{m0} & y_{m2}-y_{m0} & y_{m3}-y_{m0} & y_{m0} \\ z_{m1}-z_{m0} & z_{m2}-z_{m0} & z_{m3}-z_{m0} & z_{m0} \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{2}$$

If the affine coordinate system of the 3D model establishes a corresponding relationship with the image coordinate system, a point P $(x_{mp}, y_{mp}, z_{mp})$ in the image coordinate system has a unique corresponding point on the 3D model, and the conversion can be expressed as:

$$P_{\mathrm{3D\,affine}} = T_1^{-1} P_{\mathrm{image}}, \qquad P_{\mathrm{image}} = T_1 P_{\mathrm{3D\,affine}} \tag{3}$$

(2) 3D model affine coordinate system – binocular camera optical coordinate system

Through the optical positioning calculation of the binocular camera, the coordinates of the four marked points in the optical coordinate system are obtained, and the conversion matrix between the camera optical coordinate system and the 3D model affine coordinate system is established:

$$T_2 = \begin{bmatrix} x_{c1}-x_{c0} & x_{c2}-x_{c0} & x_{c3}-x_{c0} & x_{c0} \\ y_{c1}-y_{c0} & y_{c2}-y_{c0} & y_{c3}-y_{c0} & y_{c0} \\ z_{c1}-z_{c0} & z_{c2}-z_{c0} & z_{c3}-z_{c0} & z_{c0} \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{4}$$

In this way, the binocular camera optical coordinate system and the 3D model affine coordinate system establish a corresponding relationship, so every point on the 3D model has a unique corresponding point in the binocular camera optical coordinate system, and the conversion can be expressed as:

$$P_{\mathrm{optical}} = T_2^{-1} P_{\mathrm{3D\,affine}}, \qquad P_{\mathrm{3D\,affine}} = T_2 P_{\mathrm{optical}} \tag{5}$$

Combining (3) and (5), the mapping relationship between points in the image coordinate system and the camera optical coordinate system is:

$$P_{\mathrm{optical}} = T_2^{-1} T_1^{-1} P_{\mathrm{image}} = T_3 P_{\mathrm{image}} \tag{6}$$

where $T_3 = T_2^{-1} T_1^{-1}$ is the mapping matrix from the image coordinate system to the camera optical coordinate system.

(3) Robot coordinate system – robot affine coordinate system

Similarly, by selecting four marker points at the end of the robot, the conversion matrix between the robot coordinate system and the robot affine coordinate system can be established:

$$T_4 = \begin{bmatrix} x_{r1}-x_{r0} & x_{r2}-x_{r0} & x_{r3}-x_{r0} & x_{r0} \\ y_{r1}-y_{r0} & y_{r2}-y_{r0} & y_{r3}-y_{r0} & y_{r0} \\ z_{r1}-z_{r0} & z_{r2}-z_{r0} & z_{r3}-z_{r0} & z_{r0} \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{7}$$

If a corresponding relationship is established between the robot coordinate system and the robot affine coordinate system, any point of the robot has a unique corresponding point in the robot affine coordinate system, and the conversion can be expressed as:

$$P_{\mathrm{robot\,affine}} = T_4^{-1} P_{\mathrm{robot}}, \qquad P_{\mathrm{robot}} = T_4 P_{\mathrm{robot\,affine}} \tag{8}$$

(4) Robot affine coordinate system – binocular camera optical coordinate system

The optical positioning calculation is carried out with the binocular camera, and the conversion matrix between the camera optical coordinate system and the robot affine coordinate system is established from the coordinates of the four marked points at the end of the robot in the optical coordinate system:

$$T_5 = \begin{bmatrix} x_{cr1}-x_{cr0} & x_{cr2}-x_{cr0} & x_{cr3}-x_{cr0} & x_{cr0} \\ y_{cr1}-y_{cr0} & y_{cr2}-y_{cr0} & y_{cr3}-y_{cr0} & y_{cr0} \\ z_{cr1}-z_{cr0} & z_{cr2}-z_{cr0} & z_{cr3}-z_{cr0} & z_{cr0} \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{9}$$

The conversion can be expressed as:

$$P_{\mathrm{robot\,affine}} = T_5^{-1} P_{\mathrm{optical}}, \qquad P_{\mathrm{optical}} = T_5 P_{\mathrm{robot\,affine}} \tag{10}$$

Combining (8) and (10), the mapping relationship between points in the robot coordinate system and the camera optical coordinate system is:

$$P_{\mathrm{robot}} = T_4 T_5^{-1} P_{\mathrm{optical}} = T_6 P_{\mathrm{optical}} \tag{11}$$

where $T_6 = T_4 T_5^{-1}$ is the mapping matrix from the optical coordinate system to the robot coordinate system.

(5) Image coordinate system – robot coordinate system

Through the above derivation, the CT image coordinate system, the 3D model coordinate system and the robot coordinate system have been unified in the visual coordinate system. A CT image point is mapped from the image coordinate system to the optical coordinate system, and then from the optical coordinate system to the robot coordinate system; that is, the mapping between CT image space and robot surgical space is realised:

$$P_{\mathrm{robot}} = T_6 T_3 P_{\mathrm{image}} = T_7 P_{\mathrm{image}} \tag{12}$$

where $T_7 = T_6 T_3$ is the transformation matrix from the image coordinate system to the robot coordinate system, which can map the position coordinates of any point on the 3D model into the robot coordinate system.
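The affine-basis construction of eqs. (2) and (3) is straightforward to prototype. The sketch below builds T1 from four marker points and moves a point between the image frame and the 3D model affine frame; the helper name and the marker coordinates are illustrative assumptions, not values from the system:

```python
import numpy as np

def marker_basis(m0, m1, m2, m3):
    """Affine basis matrix of eq. (2): columns are M1-M0, M2-M0, M3-M0 and the origin M0."""
    m0, m1, m2, m3 = (np.asarray(m, dtype=float) for m in (m0, m1, m2, m3))
    T = np.eye(4)
    T[:3, 0] = m1 - m0
    T[:3, 1] = m2 - m0
    T[:3, 2] = m3 - m0
    T[:3, 3] = m0
    return T

# Hypothetical marker coordinates (mm) measured in the CT image space.
M0, M1, M2, M3 = (10.0, 20.0, 5.0), (50.0, 22.0, 6.0), (12.0, 60.0, 4.0), (11.0, 21.0, 45.0)
T1 = marker_basis(M0, M1, M2, M3)

# Eq. (3): convert a planned point between image space and the 3D model affine frame.
P_image = np.array([30.0, 40.0, 15.0, 1.0])     # homogeneous coordinates
P_affine = np.linalg.inv(T1) @ P_image          # image -> 3D model affine
print(np.allclose(T1 @ P_affine, P_image))      # affine -> image recovers the point: True

# Sanity check: marker M1 maps to the affine basis point (1, 0, 0).
print(np.allclose(np.linalg.inv(T1) @ np.append(M1, 1.0), [1, 0, 0, 1]))   # True
```

The matrices T2, T4 and T5 of eqs. (4), (7) and (9) share the same structure; they are built from the model markers expressed in the optical frame and from the robot-end markers expressed in the robot and optical frames.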
3. Algorithm Research

3.1 Principle of Dynamic Tracking Measurement Based on Position Vision

The principle of dynamic tracking measurement of the patient's body micro motion based on binocular position vision mainly includes camera calibration, image acquisition and calibration [20]. The design scheme of dynamic tracking measurement marker points based on position vision mainly completes the following processes:

(1) The binocular camera quickly collects, analyses and processes the patient image, identifies and stereo-matches the marked points, and deduces the spatial three-dimensional coordinates of the marked points in the camera coordinate system;

(2) When the patient's body moves slightly, the circular mark points deviate from their original positions; the binocular camera collects the image information again and analyses it. After identifying and stereo-matching the mark points, the new spatial three-dimensional coordinates of the circular mark points in the camera coordinate system are deduced;

(3) From the difference of the spatial three-dimensional coordinates of the circular mark points, the change of the corresponding spatial position and attitude of the mark points after the micro motion is solved, and error compensation is applied while the precise puncture robot is guided and positioned, so that the puncture guide tube is accurately positioned on the puncture point after the micro motion of the patient's body. The design principle of the visual dynamic tracking measurement process is shown in Fig. 4.

Figure 4. Schematic diagram of dynamic tracking measurement based on position vision.

Through the transformation from the optical coordinate system to the robot affine coordinate system, the puncture path information is mapped to the robot surgical setting coordinate system. During the puncture operation, the patient's body moves slightly, the positions of the marked points deviate, and the puncture path deviates accordingly. The coordinates of the spatial marked points are reacquired by the binocular camera, the error value is accurately calculated, and the error is compensated during puncture positioning. The structure of the dynamic tracking measurement system based on position vision is shown in Fig. 5.

Figure 5. Structure diagram of dynamic tracking measurement system based on position vision.

3.2 Dynamic Tracking Algorithm Based on Position Vision

At present, mainstream surgical navigation systems complete the registration from surgical image space to robot optical space through the identification of marker points [20]. The registration relationship between image space and optical space is established by a mathematical model.
Let the corresponding point sets in optical space and image space be $p = [p_1, p_2, \ldots, p_n]$ and $q = [q_1, q_2, \ldots, q_n]$, respectively, with the correspondence $\{p_i \leftrightarrow q_i,\ i = 1, 2, \ldots, n\}$. There is a rotation matrix R and a translation vector T from the optical coordinate system to the image coordinate system that convert optical space into image space, satisfying:

$$Q = RP + T \tag{13}$$

where T is a 3 × 1 translation vector and R is a 3 × 3 rotation matrix, which can be expressed by Euler angles or by a quaternion.

The geometric interpretation of the Euler angles is to first fix the position of the coordinate origin and then rotate the original coordinate system three times about the coordinate axes to obtain another coordinate space; the rotation angles are φ, θ and ψ, respectively. In terms of the Euler angles, the rotation matrix is:

$$R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} = \begin{bmatrix} \cos\psi\cos\theta & \cos\psi\sin\theta\sin\phi - \sin\psi\cos\phi & \cos\psi\sin\theta\cos\phi + \sin\psi\sin\phi \\ \sin\psi\cos\theta & \sin\psi\sin\theta\sin\phi + \cos\psi\cos\phi & \sin\psi\sin\theta\cos\phi - \cos\psi\sin\phi \\ -\sin\theta & \cos\theta\sin\phi & \cos\theta\cos\phi \end{bmatrix} \tag{14}$$

The rotation matrix R has nine elements but only three degrees of freedom, the spatial rotation angles φ, θ and ψ; conversely, the Euler angles can be recovered from the elements of R.

A quaternion is defined in four-dimensional space and written $q = [q_0, q_1, q_2, q_3]^{\mathrm{T}}$; if $|q| = 1$ it is called a unit quaternion. A rotation of the coordinate system in three-dimensional space about a unit vector $\gamma = (r_x, r_y, r_z)$ by an angle θ is expressed by the unit quaternion:

$$q = \begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{bmatrix} = \begin{bmatrix} \cos(\theta/2) \\ r_x \sin(\theta/2) \\ r_y \sin(\theta/2) \\ r_z \sin(\theta/2) \end{bmatrix} \tag{15}$$

The rotation matrix can also be written in terms of the unit quaternion:

$$R = \begin{bmatrix} q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1q_2-q_0q_3) & 2(q_1q_3+q_0q_2) \\ 2(q_1q_2+q_0q_3) & q_0^2+q_2^2-q_1^2-q_3^2 & 2(q_2q_3-q_0q_1) \\ 2(q_1q_3-q_0q_2) & 2(q_2q_3+q_0q_1) & q_0^2+q_3^2-q_1^2-q_2^2 \end{bmatrix} \tag{16}$$

The marker points on the patient's skin surface are collected by binocular vision, the corresponding four marker points are selected in order, and the spatial three-dimensional coordinates of the marker points in the optical coordinate system are deduced. Registration aligns the affected part with its three-dimensional image model in the optical coordinate system according to the marked points, so as to ensure the accurate delivery of surgical instruments to the affected part.

The quaternion algorithm uses a quaternion to represent the rotation matrix so that $E = \sum_{i=1}^{m} \| q_i - R p_i \|^2$ is minimised. The algorithm proceeds as follows. Calculate the matrix A from the marker coordinates taken relative to the centroids $\bar{P}$ and $\bar{Q}$ of the two point sets:

$$A = \sum_{i=1}^{m} (p_i - \bar{P})(q_i - \bar{Q})^{\mathrm{T}} \tag{17}$$

Calculate the antisymmetric matrix B and the vector Δ:

$$B = A - A^{\mathrm{T}} = \begin{bmatrix} B_{11} & B_{12} & B_{13} \\ B_{21} & B_{22} & B_{23} \\ B_{31} & B_{32} & B_{33} \end{bmatrix} \tag{18}$$

$$\Delta = [B_{23}\ \ B_{31}\ \ B_{12}]^{\mathrm{T}} \tag{19}$$

Calculate the matrix C:

$$C = \begin{bmatrix} \mathrm{tr}(A) & \Delta^{\mathrm{T}} \\ \Delta & A + A^{\mathrm{T}} - \mathrm{tr}(A)\, I_3 \end{bmatrix} \tag{20}$$

where tr(A) is the trace of the matrix A and $I_3$ is the 3 × 3 identity matrix. The quaternion q is the unit eigenvector corresponding to the maximum eigenvalue of C:

$$q = [q_0\ \ q_1\ \ q_2\ \ q_3]^{\mathrm{T}} \tag{21}$$

The rotation matrix R is then obtained from q by (16). Finally, the centroids $\bar{Q}$ and $\bar{P}$ of the two point sets give the translation $T = \bar{Q} - R\bar{P}$, and the quaternion solving process is completed.
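The registration of eqs. (13) and (17)–(21) can be prototyped compactly. The sketch below is a minimal implementation of this quaternion solution, not the authors' code: the function names are illustrative, the marker sets are assumed to be given as (n, 3) arrays of corresponding points, and the closing example uses synthetic data only to check that a known rotation and translation are recovered:

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix of eq. (16) from a unit quaternion q = [q0, q1, q2, q3]."""
    q0, q1, q2, q3 = q
    return np.array([
        [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 - q0*q3),             2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),             q0*q0 + q2*q2 - q1*q1 - q3*q3, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),             2*(q2*q3 + q0*q1),             q0*q0 + q3*q3 - q1*q1 - q2*q2],
    ])

def register_quaternion(P, Q):
    """Rigid registration Q = R P + T from corresponding point sets, eqs. (13), (17)-(21)."""
    P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    A = (P - p_mean).T @ (Q - q_mean)                 # eq. (17), centroid-relative coordinates
    B = A - A.T                                       # eq. (18)
    delta = np.array([B[1, 2], B[2, 0], B[0, 1]])     # eq. (19)
    C = np.empty((4, 4))                              # eq. (20)
    C[0, 0] = np.trace(A)
    C[0, 1:] = delta
    C[1:, 0] = delta
    C[1:, 1:] = A + A.T - np.trace(A) * np.eye(3)
    eigvals, eigvecs = np.linalg.eigh(C)
    q = eigvecs[:, np.argmax(eigvals)]                # eq. (21): eigenvector of the largest eigenvalue
    R = quat_to_rot(q)
    T = q_mean - R @ p_mean                           # translation from the two centroids
    return R, T

# Synthetic check: four hypothetical markers moved by a known rotation about z and a translation.
rng = np.random.default_rng(0)
P = rng.uniform(-50.0, 50.0, (4, 3))
R_true = quat_to_rot(np.array([np.cos(0.2), 0.0, 0.0, np.sin(0.2)]))
t_true = np.array([5.0, -3.0, 10.0])
Q = P @ R_true.T + t_true
R, T = register_quaternion(P, Q)
print(np.allclose(Q, P @ R.T + T))   # True: the known motion is recovered
```

In the dynamic tracking setting, P and Q would be the marker coordinates measured by the binocular camera before and after the patient's micro motion, and the recovered (R, T) is the correction applied to the planned entry and target points during robot positioning.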
4. Experimental Verification

4.1 3D Model Experiment

This section sets up a dynamic tracking measurement experiment based on binocular position vision, addressing the possible change in the position of the puncture target caused by changes in the position of the patient's body between CT scanning and the insertion of the puncture needle. The dynamic transformation of the 3D model and the puncture effect of the auxiliary puncture robot are shown in Fig. 6.

Figure 6. Dynamic transformation of the 3D model and puncture effect of the auxiliary puncture robot: (a) initial state of the 3D model; (b) puncture effect of the auxiliary puncture robot; (c) the 3D model after dynamic transformation; (d) puncture effect after dynamic transformation of the 3D model.

Firstly, four marker points are selected in the CT image to obtain their three-dimensional coordinates in the medical image coordinate system, as shown in Fig. 6(a). After imaging with the binocular camera, the three-dimensional coordinates of the four marker points in the optical coordinate system are obtained, and the puncture needle entry point and the target lesion point are set on the 3D model. When the spatial position of the 3D model does not change, the auxiliary puncture robot realises precise puncture positioning, as shown in Fig. 6(b).

After the dynamic transformation of the 3D model shown in Fig. 6(c), the position and attitude of the initial state are changed. Through the quaternion-based dynamic tracking derivation, the rotation matrix R and translation vector T describing the change of the 3D model in the new state are obtained, the error of the set puncture entry point and target lesion point after the dynamic transformation is calculated, and this error is corrected during the positioning process of the auxiliary puncture robot, so that its puncture guide tube accurately reaches the set position, as shown in Fig. 6(d).

In this experiment, the three-dimensional coordinates of the four marker points on the 3D model in the CT medical image coordinate system, in the optical coordinate system in the initial state, and in the optical coordinate system after the dynamic transformation are listed in Table 1.

Table 1. Corresponding values of the 3D model experiment mark point coordinate systems

              Coordinate value in          Robot positioning mark        Target and skin entry
              CT space (mm)                point coordinates (mm)        point coordinates (mm)
              X        Y        Z          X        Y        Z           X        Y        Z
Mark point 1  −17.91   −67.45   −1225.5    77.608   37.669   803.65      54.098   75.004   789.39
Mark point 2  −13.4    −107.17  −1105.5    −37.568  44.011   847.83      −58.37   52.137   831.73
Mark point 3  −88.75   −121.47  −1114.3    −29.322  −33.809  874.44      −37.175  −23.39   839.09
Mark point 4  −112.63  −222.2   −1360.7    210.39   −48.611  955.56      206.04   −8.459   920.66
Aiming at the problem of body displacement of patients in clinical puncture surgery, this experiment studies a dynamic measurement method based on position vision, and the correctness of the measurement method is verified by the 3D model experiment. After a large number of experiments, the correctness of the algorithm derivation is confirmed. After the dynamic transformation of position and attitude of the 3D model, error compensation is applied in the robot puncture positioning process, and the positioning error of the auxiliary puncture robot is within 2 mm, which meets the clinical accuracy requirement of puncture surgery.

4.2 Live Animal Experiment

In this section, live anaesthetised rabbits are selected for the experiment. A cervical joint puncture operation on a rabbit is carried out to verify the puncture performance of the medical image-based auxiliary precision puncture robot system on live anaesthetised animals, and the possible factors affecting precise positioning in the clinical puncture of living animals are analysed.

(1) Experimental steps

The animals fasted on the day of the experiment, and the rabbits were anaesthetised by intramuscular injection half an hour before the operation to ensure that they remained in a stable state of anaesthesia during the operation. A cervical spine puncture experiment was designed for one rabbit. The rabbit was fixed on the CT scanning table in the lying position, as shown in Fig. 7(a), and mark points were pasted adjacent to the puncture site. Medical image processing was then carried out and the three-dimensional reconstruction of the rabbit CT image was completed, as shown in Fig. 7(b); the optimal puncture path was registered and planned between the medical image space and the surgical space, the robot completed the puncture positioning, as shown in Fig. 7(c), and the doctor completed the manual puncture after the robot positioning, as shown in Fig. 7(d). Finally, the accuracy of the puncture path was judged by a second CT scan to complete the postoperative evaluation.

Figure 7. Experimental process of rabbit cervical puncture: (a) anaesthetised white rabbit; (b) 3D reconstruction of the CT image; (c) manual cervical puncture; (d) imaging comparison.

1) Preoperative planning
a. The anaesthetised rabbit was fixed on the board, the mark points were pasted, and the CT scan was completed.
b. The CT scan images were transmitted to the data processing computer of the auxiliary precision puncture robot system to complete the medical image processing and three-dimensional modelling.
c. The coordinates of the marker points in the medical image coordinate system were determined, and the puncture target and needle entry point were selected to determine the puncture path. The corresponding values of the rabbit experiment marker point coordinate systems are shown in Table 2.

Table 2. Corresponding values of the rabbit experiment marker point coordinate systems

              Coordinate value in          Robot positioning mark        Target and skin entry
              CT space (mm)                point coordinates (mm)        point coordinates (mm)
              X        Y        Z          X        Y        Z           X        Y        Z
Mark point 1  4.2949   194.01   −122.5     −532.46  −480.84  98.812      0.509    187.98   −286.3
Mark point 2  79.367   194.01   −122.5     −534.96  −401.88  96.496      0.509    187.98   −286.3
Mark point 3  79.367   189.59   −136.5     −517.16  −401.2   98.893      21.89    164.36   −286.76
Mark point 4  −10.846  138.49   −179.5     −417.88  −487.17  151.42      121.60   78.39    −234.263

2) Intraoperative execution
a. Select the puncture needle and guide tube, and register the puncture object information.
b. The registration of the medical image coordinate system and the spatial surgical coordinate system is completed by the four-point positioning method and unified in the world coordinate system.
c. The data analysis completes the surgical planning, selects the optimal motion path of the robot, and sends the motion command.
d. After the robot positioning is completed, the doctor manually inserts the puncture needle, and the assisted precision puncture robot withdraws.

3) Postoperative evaluation
a. The anaesthetised rabbit was scanned by CT again after the puncture.
b. The CT-scanned medical images are exported to the workstation of the auxiliary precision puncture robot system. Through the analysis and comparison of the two-dimensional and three-dimensional images, the imaging of the puncture needle is used to judge whether it coincides with the set puncture path, and therefore whether the auxiliary puncture robot can position accurately.

(2) Experimental conclusion

With the medical image-based assisted precision puncture robot system, the puncture operation on the rabbit cervical spine joint was completed. Comparison of the postoperative CT scans, as shown in Fig. 8, shows that the position accuracy of the puncture needle point and the lesion target point is within 2 mm, and the puncture needle imaging basically coincides with the set needle path. This verifies the positioning accuracy, safety and stability of the CT image-based assisted precision puncture robot system and lays a foundation for further research on medical image-based assisted precision puncture robots.

Figure 8. Imaging of manual cervical puncture and postoperative puncture needle in rabbits.

5. Conclusion

The novelty of this paper is to solve the puncture error caused by the patient's body micro motion for the assisted puncture robot in medical puncture. In medical puncture surgery assisted by the puncture robot, micro motion of the patient's body is inevitable. Aiming at the puncture path error caused by this micro motion, this paper studies a dynamic measurement method based on position vision: the coordinate changes of the marked points caused by the patient's body micro motion are collected in real time by the binocular camera, the resulting change of the puncture path is solved, and error compensation is applied in the robot puncture positioning process, realising error elimination and accurate positioning.

The innovation of this paper is to realise the space mapping method in the auxiliary puncture robot. With the help of the binocular camera, the dynamic tracking algorithm of the auxiliary puncture robot based on position vision is studied, and good experimental results are obtained in the 3D model and live animal experiments, which lays a good foundation for the application of robot technology in medical puncture surgery.

Acknowledgement

This research was supported by the Innovation Fund of the Engineering Research Center of the Ministry of Education for the Integration and Application of Digital Learning Technologies (Project No. 1221012) and by the Mianyang Science and Technology Bureau Transfer Payment Fund (2021ZYZF1004).

References

[1] Q. Lin, R. Yang, K. Cai, P. Guan, W. Xiao, and X. Wu, Strategy for accurate liver intervention by an optical tracking system, Biomedical Optics Express, 6(9), 2015, 3287–3302.
[2] Y. Zhang, B. Li, and L. Yuan, Study on the control method and optimization experiment of prostate soft tissue puncture, IEEE Access, 8, 2020, 218621–218643.
[3] J. Engstrand, G. Toporek, P. Harbut, E. Jonas, H. Nilsson, and J. Freedman, Stereotactic CT-guided percutaneous microwave ablation of liver tumors with the use of high-frequency jet ventilation: an accuracy and procedural safety study, American Journal of Roentgenology, 208(1), 2017, 193–200.
[4] L. Zheng, H. Wu, L. Yang, Y. Lao, Q. Lin, and R. Yang, A novel respiratory follow-up robotic system for thoracic-abdominal puncture, IEEE Transactions on Industrial Electronics, 68(3), 2020, 2368–2378.
[5] E. Lima, P.L. Rodrigues, P. Mota, N. Carvalho, E. Dias, J. Correia-Pinto, R. Autorino, and J.L. Vilaça, Ureteroscopy-assisted percutaneous kidney access made easy: first clinical experience with a novel navigation system using electromagnetic guidance, European Urology, 72(4), 2017, 610–616.
[6] G. Zhou, X. Chen, B. Niu, Y. Yan, F. Shao, Y. Fan, and Y. Wang, Intraoperative localization of small pulmonary nodules to assist surgical resection: A novel approach using a surgical navigation puncture robot system, Thoracic Cancer, 11(1), 2019, 72–81.
[7] W. Zhang, P. Xia, S. Liu, X. Huang, X. Zhao, Z. Liu, and G. Niu, A coordinate positioning puncture method under robot-assisted CT-guidance: phantom and animal experiments, Minimally Invasive Therapy & Allied Technologies, 31(2), 2020, 206–215.
[8] S. Chen, F. Wang, Y. Lin, Q. Shi, and Y. Wang, Ultrasound-guided needle insertion robotic system for percutaneous puncture, International Journal of Computer Assisted Radiology and Surgery, 16(3), 2021, 475–484.
[9] B. Nie, Y. Gao, Y. Mei, and F. Gao, Capability iteration network for robot path planning, International Journal of Robotics and Automation, 37(3), 2022, 266–272.
[10] F. Zhang, L. Chen, W. Miao, and L. Sun, Research on accuracy of augmented reality surgical navigation system based on multi-view virtual and real registration technology, IEEE Access, 8, 2020, 122511–122528.
[11] M.A. Khan and B.J. Yi, Design and clinical test of a passive ultrasound probe holder mechanism for arterial puncturing, International Journal of Control Automation and Systems, 18(1), 2020, 29–37.
[12] T. Guo, H. Wang, Y. Liu, M. Li, and Y. Wang, Vision-based mobile robot leader follower control using model predictive control, International Journal of Robotics and Automation, 34(5), 2019, 544–552.
[13] R.F. Hess, B. Thompson, and D.H. Baker, Binocular vision in amblyopia: structure, suppression and plasticity, Ophthalmic and Physiological Optics, 34(2), 2014, 146–162.
[14] D. Spinczyk, Towards the clinical integration of an image-guided navigation system for percutaneous liver tumor ablation using freehand 2D ultrasound images, Computer Aided Surgery, 20(1), 2015, 61–72.
[15] G. Dagnino, I. Georgilas, and S. Morad, Image-guided surgical robotic system for percutaneous reduction of joint fractures, Annals of Biomedical Engineering, 45(11), 2017, 2648–2662.
[16] Z. Luo, K. Zhang, and Z. Wang, 3D pose estimation of large and complicated workpieces based on binocular stereo vision, Applied Optics, 56(24), 2017, 6822–6836.
[17] R. Rugani, G. Vallortigara, and L. Regolin, Mapping number to space in the two hemispheres of the avian brain, Neurobiology of Learning and Memory, 133, 2016, 13–18.
[18] Z. Zhou, B. Wu, J. Duan, X. Zhang, N. Zhang, and Z. Liang, Optical surgical instrument tracking system based on the principle of stereo vision, Journal of Biomedical Optics, 22(6), 2017, 065005.
[19] H. Hofstetter, Z. Yaniv, and L. Joskowicz, Fluoroscopy as an imaging means for computer-assisted surgical navigation, Computer Aided Surgery, 4(2), 1999, 65–76.
[20] G. Jiang, M. Luo, and K. Bai, Optical positioning technology of an assisted puncture robot based on binocular vision, International Journal of Imaging Systems & Technology, 29(2), 2019, 180–190.
