When a graphical X Server is in use, the X Server maps the keycode once more, this time to a keysym, according to its own keymap rules. Once this mapping is complete, the X Server sends the key character to the window manager (DWM, metacity, i3, and so on), and the window manager in turn delivers the character to the current window. The current window then uses the relevant graphics API to draw the character in the input box.
The parse tree is a tree of DOM element and attribute nodes. DOM is short for Document Object Model; it is the object representation of the HTML document, and also the interface that HTML elements expose to the outside world (such as JavaScript). The root of the tree is the Document object, and the DOM has an almost one-to-one relationship with the HTML document.
The Parsing Algorithm
HTML cannot be parsed using the usual top-down or bottom-up parsers, for the following main reasons:
The forgiving nature of the language itself.
The HTML source may be malformed; browsers need traditional error-tolerance mechanisms to support the common kinds of invalid markup.
The parsing process is reentrant. For other languages, the source doesn't change during parsing, but in HTML, dynamic code (such as a document.write() call inside a script element) can add content to the source; in other words, the parsing process actually modifies its own input.
Because the usual parsing techniques can't be used, browsers implement custom parsers dedicated to HTML. The parsing algorithm is described in detail in the HTML5 specification, and consists of two main stages: tokenization and tree construction.
var a = "10"| 0 ;
console.log("Bitwise Or a is : " +a);
var b = "s1132"|0;
console.log("Bitwise Or b is : " +b);
var c = [1,3,2]&1 ;
console.log("Bitwise And c is : " +c);
vard = [1]|0;
console.log("Bitwise Or d is : " +d);
vare = ~function(){}();
console.log("Bitwise Not e is : " +e);
var f = ({})|0;
console.log("Bitwise Or f is : " +f);
varg = ([1])|0;
console.log("Bitwise Or g is : " +g);
varh = "1ss"^0;
console.log("Bitwise Exclusive Or h is : " +h);
Detecting rectangles in images is often one of the first parts of a computer vision algorithm – whether it be automatic business card interpretation, or road sign processing. Although rectangle detection sounds like it should be a really simple process, as with many problems in computer vision, it’s far harder than you might expect. So it’s great that Apple have implemented an efficient algorithm as part of the CoreImage detectors.
The main class associated with CoreImage detectors is the aptly-named CIDetector. The same class is used for all the different types of detectors, and is instantiated with the CIDetector(ofType:context:options:) initializer. The type argument is a string, which for a rectangle detector is CIDetectorTypeRectangle. The options argument is a dictionary of settings associated with this detector. The following method creates a CIDetector to be used for detecting rectangles:
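A minimal sketch of such a method (the helper name prepareRectangleDetector is an assumption, not taken from the sample project):

func prepareRectangleDetector() -> CIDetector {
    // CIDetectorAccuracyHigh favors accuracy over speed; an aspect ratio of 1.0
    // biases the ranking of candidate rectangles towards squares.
    let options: [String: AnyObject] = [
        CIDetectorAccuracy: CIDetectorAccuracyHigh,
        CIDetectorAspectRatio: 1.0
    ]
    return CIDetector(ofType: CIDetectorTypeRectangle, context: nil, options: options)
}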
You can see that the options here are specifying the accuracy as high, with the CIDetectorAccuracy key, and the CIDetectorAspectRatio key is used to specify that you’re searching for squares. This aspect ratio doesn’t mean that you’re only looking for squares, but it will be used in the ranking of possible rectangles to determine which is the most likely candidate. For example, if you know that you’re going to use the detector for business cards then setting the aspect ratio to 2.0 will likely yield better results.
Once you’ve created a CIDetector it’s actually really simple to use. The method featuresInImage() takes a CIImage and then returns an array of CIFeature objects (well, a subclass of) which represent the detected objects. In the case of a rectangle detector, the CIFeature subclass is CIRectangleFeature, which has CGPoint properties for each of the four corners of the detected rectangle.
The following method demonstrates how you can use the detector to find a rectangle in a supplied CIImage:
func performRectangleDetection(image: CIImage) -> CIImage? {
    var resultImage: CIImage?
    if let detector = detector {
        // Get the detections
        let features = detector.featuresInImage(image)
        for feature in features as! [CIRectangleFeature] {
            resultImage = drawHighlightOverlayForPoints(image, topLeft: feature.topLeft,
                topRight: feature.topRight, bottomLeft: feature.bottomLeft,
                bottomRight: feature.bottomRight)
        }
    }
    return resultImage
}
This unwraps the optional detector, before using the featuresInImage() method to perform the detection itself. At the time of writing, the rectangle detector will only ever detect one rectangle in an image, so the features array will have either exactly one or zero CIRectangleFeature objects in it.
This method returns a new CIImage, which contains a red patch overlaid on the source image over the position of the detected rectangle. This uses the drawHighlightOverlayForPoints() utility method:
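A sketch of what this utility might look like, using CoreImage's CIPerspectiveTransformWithExtent filter (the exact listing isn't reproduced here, so treat details such as the overlay color as illustrative):

func drawHighlightOverlayForPoints(image: CIImage, topLeft: CGPoint, topRight: CGPoint,
        bottomLeft: CGPoint, bottomRight: CGPoint) -> CIImage {
    // A semi-transparent red image covering the whole source extent
    var overlay = CIImage(color: CIColor(red: 1.0, green: 0, blue: 0, alpha: 0.5))
    overlay = overlay.imageByCroppingToRect(image.extent())
    // Perspective-transform the red patch so its corners land on the detected points
    overlay = overlay.imageByApplyingFilter("CIPerspectiveTransformWithExtent",
        withInputParameters: [
            "inputExtent": CIVector(CGRect: image.extent()),
            "inputTopLeft": CIVector(CGPoint: topLeft),
            "inputTopRight": CIVector(CGPoint: topRight),
            "inputBottomLeft": CIVector(CGPoint: bottomLeft),
            "inputBottomRight": CIVector(CGPoint: bottomRight)
        ])
    // Composite the transformed patch over the original image
    return overlay.imageByCompositingOverImage(image)
}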
This method creates a colored image, and then uses the perspective transform filter to map it to the points provided. It then creates a new CIImage by overlaying this colored image with the source image.
In the LiveDetection sample project, the performRectangleDetection() method is used as part of the CoreImageVideoFilter class to run the detector on each of the frames received from the camera, before rendering it on screen. This class is a little more involved than you might expect it to be, and it isn't within the scope of this article to go into much detail, however an overview might be helpful.
An AVFoundation pipeline is created which uses the camera as input, and provides a pixel buffer of each frame to a delegate method. In this delegate method a CIImage is created from this pixel buffer. The provided filter function (which takes an input CIImage and returns a new CIImage, much like a map function) is passed the current image. The image is cropped to match the view size it is to be rendered in. The CIImage is then rendered on the OpenGLES surface using a pre-created CIRenderContext. Rinse and repeat for each frame received from the camera. If you need to do live video processing using CoreImage then it might be worth taking a look at this class in greater detail, but if you don’t then the performRectangleDetection() method just takes a CIImage, which you can create using one of the many constructors.
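To give a flavor of the delegate step described above, here is a condensed, hypothetical sketch; the property names (filter, ciContext, targetRect) are assumptions, and the real CoreImageVideoFilter class does rather more work:

import AVFoundation
import CoreImage
import UIKit

class FrameProcessor: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    var filter: (CIImage -> CIImage?)?   // e.g. performRectangleDetection
    var ciContext: CIContext!            // pre-created render context
    var targetRect = CGRectZero          // on-screen rect to render into

    func captureOutput(captureOutput: AVCaptureOutput!,
            didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
            fromConnection connection: AVCaptureConnection!) {
        // Wrap the frame's pixel buffer in a CIImage
        let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        var image = CIImage(CVPixelBuffer: pixelBuffer)
        // Apply the supplied CIImage -> CIImage? filter function, if any
        if let filtered = filter?(image) {
            image = filtered
        }
        // Render with the pre-created context, cropped to the view's rect
        ciContext.drawImage(image, inRect: targetRect, fromRect: targetRect)
    }
}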
Running this app will kick off the rectangle detection right away, and you'll see results like the following on the screen:
The performance is pretty good for real-time use – certainly the demo app copes well on an iPad 3.