We’ll encapsulate all four of these steps inside panorama.py, where we’ll define a Stitcher class used to construct our panoramas.

The Stitcher class will rely on the imutils Python package, so if you don’t already have it installed on your system, you’ll want to go ahead and do that now:

```
$ pip install imutils
```

Let’s go ahead and get started by reviewing panorama.py:

```python
# import the necessary packages
import numpy as np
import imutils
import cv2

class Stitcher:
    def __init__(self):
        # determine if we are using OpenCV v3.X
        self.isv3 = imutils.is_cv3(or_better=True)
```

We start off on Lines 2-4 by importing our necessary packages. We’ll be using NumPy for matrix/array operations, imutils for a set of OpenCV convenience methods, and finally cv2 for our OpenCV bindings.

From there, we define the Stitcher class on Line 6. The constructor to Stitcher simply checks which version of OpenCV we are using by making a call to the is_cv3 method. Since there are major differences in how OpenCV 2.4 and OpenCV 3 handle keypoint detection and local invariant descriptors, it’s important that we determine the version of OpenCV that we are using.

Next up, let’s start working on the stitch method:

```python
    def stitch(self, images, ratio=0.75, reprojThresh=4.0,
        showMatches=False):
        # unpack the images, then detect keypoints and extract
        # local invariant descriptors from them
        (imageB, imageA) = images
        (kpsA, featuresA) = self.detectAndDescribe(imageA)
        (kpsB, featuresB) = self.detectAndDescribe(imageB)

        # match features between the two images
        M = self.matchKeypoints(kpsA, kpsB,
            featuresA, featuresB, ratio, reprojThresh)

        # if the match is None, then there aren't enough matched
        # keypoints to create a panorama
        if M is None:
            return None
```

The stitch method requires only a single parameter, images, which is the list of (two) images that we are going to stitch together to form the panorama.

We can also optionally supply ratio, used for David Lowe’s ratio test when matching features (more on this ratio test later in the tutorial), reprojThresh, which is the maximum pixel “wiggle room” allowed by the RANSAC algorithm, and finally showMatches, a boolean used to indicate whether the keypoint matches should be visualized or not.

Line 15 unpacks the images list (which again, we presume to contain only two images). The ordering of the images list is important: we expect images to be supplied in left-to-right order. If images are not supplied in this order, then our code will still run, but our output panorama will only contain one image, not both.

Once we have unpacked the images list, we make a call to the detectAndDescribe method on Lines 16 and 17. This method simply detects keypoints and extracts local invariant descriptors (i.e., SIFT) from the two images.

Given the keypoints and features, we use matchKeypoints (Lines 20 and 21) to match the features in the two images. We’ll define this method later in the lesson.

If the returned matches M are None, then not enough keypoints were matched to create a panorama, so we simply return to the calling function (Lines 25 and 26).

Otherwise, we are now ready to apply the perspective transform:

```python
        # otherwise, apply a perspective warp to stitch the images
        # together
        (matches, H, status) = M
        result = cv2.warpPerspective(imageA, H,
            (imageA.shape[1] + imageB.shape[1], imageA.shape[0]))
        result[0:imageB.shape[0], 0:imageB.shape[1]] = imageB

        # check to see if the keypoint matches should be visualized
        if showMatches:
            vis = self.drawMatches(imageA, imageB, kpsA, kpsB, matches,
                status)

            # return a tuple of the stitched image and the
            # visualization
            return (result, vis)

        # return the stitched image
        return result
```

Provided that M is not None, we unpack the tuple on Line 30, giving us a list of keypoint matches, the homography matrix H derived from the RANSAC algorithm, and finally status, a list of indexes indicating which keypoints in matches were successfully spatially verified using RANSAC.

Given our homography matrix H, we are now ready to stitch the two images together. First, we make a call to cv2.warpPerspective, which requires three arguments: the image we want to warp (in this case, the right image), the 3 x 3 transformation matrix H, and finally the shape of the output image. We derive the shape of the output image by taking the sum of the widths of both images and then using the height of the second image.