Mirror of https://github.com/miniflux/v2.git

First commit

Frédéric Guillot 2017-11-19 21:10:04 -08:00
commit 8ffb773f43
2121 changed files with 1118910 additions and 0 deletions

vendor/github.com/tdewolff/minify/.gitattributes generated vendored Normal file

@@ -0,0 +1 @@
benchmarks/sample_* linguist-generated=true

vendor/github.com/tdewolff/minify/.gitignore generated vendored Normal file

@@ -0,0 +1,4 @@
dist/
benchmarks/*
!benchmarks/*.go
!benchmarks/sample_*

vendor/github.com/tdewolff/minify/.goreleaser.yml generated vendored Normal file

@@ -0,0 +1,26 @@
builds:
  - binary: minify
    main: ./cmd/minify/
    ldflags: -s -w -X main.Version={{.Version}} -X main.Commit={{.Commit}} -X main.Date={{.Date}}
    goos:
      - windows
      - linux
      - darwin
    goarch:
      - amd64
      - 386
      - arm
      - arm64
archive:
  format: tar.gz
  format_overrides:
    - goos: windows
      format: zip
  name_template: "{{.Binary}}_{{.Version}}_{{.Os}}_{{.Arch}}"
  files:
    - README.md
    - LICENSE.md
snapshot:
  name_template: "devel"
release:
  draft: true

vendor/github.com/tdewolff/minify/.travis.yml generated vendored Normal file

@@ -0,0 +1,5 @@
language: go
before_install:
- go get github.com/mattn/goveralls
script:
- goveralls -v -service travis-ci -repotoken $COVERALLS_TOKEN -ignore=cmd/minify/* || go test -v ./...

vendor/github.com/tdewolff/minify/LICENSE.md generated vendored Normal file

@@ -0,0 +1,22 @@
Copyright (c) 2015 Taco de Wolff

Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following
conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.

vendor/github.com/tdewolff/minify/README.md generated vendored Normal file

@@ -0,0 +1,590 @@
# Minify <a name="minify"></a> [![Build Status](https://travis-ci.org/tdewolff/minify.svg?branch=master)](https://travis-ci.org/tdewolff/minify) [![GoDoc](http://godoc.org/github.com/tdewolff/minify?status.svg)](http://godoc.org/github.com/tdewolff/minify) [![Coverage Status](https://coveralls.io/repos/github/tdewolff/minify/badge.svg?branch=master)](https://coveralls.io/github/tdewolff/minify?branch=master) [![Join the chat at https://gitter.im/tdewolff/minify](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/tdewolff/minify?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
**[Online demo](http://go.tacodewolff.nl/minify) if you need to minify files *now*.**
**[Command line tool](https://github.com/tdewolff/minify/tree/master/cmd/minify) that minifies concurrently and supports watching file changes.**
**[All releases](https://github.com/tdewolff/minify/releases) for various platforms.**
---
Minify is a minifier package written in [Go][1]. It provides HTML5, CSS3, JS, JSON, SVG and XML minifiers and an interface to implement any other minifier. Minification is the process of removing bytes from a file (such as whitespace) without changing its output, thereby shrinking its size, speeding up transmission over the internet, and possibly speeding up parsing. The implemented minifiers are high-performance and streaming, which implies O(n) complexity.
The core functionality associates mimetypes with minification functions, allowing embedded resources (like CSS or JS within HTML files) to be minified as well. Users can add new implementations that are triggered based on a mimetype (or pattern), or redirect to an external command (like ClosureCompiler, UglifyCSS, ...).
#### Table of Contents
- [Minify](#minify)
- [Prologue](#prologue)
- [Installation](#installation)
- [API stability](#api-stability)
- [Testing](#testing)
- [Performance](#performance)
- [HTML](#html)
- [Whitespace removal](#whitespace-removal)
- [CSS](#css)
- [JS](#js)
- [JSON](#json)
- [SVG](#svg)
- [XML](#xml)
- [Usage](#usage)
- [New](#new)
- [From reader](#from-reader)
- [From bytes](#from-bytes)
- [From string](#from-string)
- [To reader](#to-reader)
- [To writer](#to-writer)
- [Middleware](#middleware)
- [Custom minifier](#custom-minifier)
- [Mediatypes](#mediatypes)
- [Examples](#examples)
- [Common minifiers](#common-minifiers)
- [Custom minifier](#custom-minifier-example)
- [ResponseWriter](#responsewriter)
- [Templates](#templates)
- [License](#license)
### Status
* CSS: **fully implemented**
* HTML: **fully implemented**
* JS: improved JSmin implementation
* JSON: **fully implemented**
* SVG: partially implemented; in development
* XML: **fully implemented**
### Roadmap
- [ ] General speed-up of all minifiers (use ASM for whitespace funcs)
- [ ] Improve JS minifiers by shortening variables and proper semicolon omission
- [ ] Speed-up SVG minifier, it is very slow
- [ ] Proper parser error reporting and line number + column information
- [ ] Generation of source maps (uncertain, might slow down parsers too much if it cannot run separately nicely)
- [ ] Look into compression of images, fonts and other web resources (into package `compress`?)
- [ ] Create a cmd to pack webfiles (much like webpack), i.e. merging CSS and JS files, inlining small external files, minification and gzipping. This would work on HTML files.
- [ ] Create a package to format files, much like `gofmt` for Go files
## Prologue
Minifiers or bindings to minifiers exist in almost all programming languages. Some implementations merely use several regular expressions to trim whitespace and comments (even though regex for parsing HTML/XML is ill-advised; for a good read, see [Regular Expressions: Now You Have Two Problems](http://blog.codinghorror.com/regular-expressions-now-you-have-two-problems/)). Some implementations are much more profound, such as the [YUI Compressor](http://yui.github.io/yuicompressor/) and [Google Closure Compiler](https://github.com/google/closure-compiler) for JS. As most existing implementations either use Java or JavaScript and don't focus on performance, they are pretty slow. Additionally, loading the whole file into memory at once is bad for really large files (or impossible for streams).
This minifier aims to be that fast and extensive minifier: one that can handle HTML and any other filetype it may contain (CSS, JS, ...). It streams the input and output and can minify files concurrently.
## Installation
Run the following command:
```
go get github.com/tdewolff/minify
```
or add the following imports and run the project with `go get`
``` go
import (
"github.com/tdewolff/minify"
"github.com/tdewolff/minify/css"
"github.com/tdewolff/minify/html"
"github.com/tdewolff/minify/js"
"github.com/tdewolff/minify/json"
"github.com/tdewolff/minify/svg"
"github.com/tdewolff/minify/xml"
)
```
## API stability
There is no guarantee for absolute stability, but I take issues and bugs seriously and don't take API changes lightly. The library will be maintained in a compatible way unless vital bugs prevent me from doing so. There has been one API change after v1, which added options support, and I took the opportunity to push through some more API cleanup as well. There are no plans whatsoever for future API changes.
## Testing
For all subpackages and the imported `parse` and `buffer` packages, test coverage of 100% is pursued. Besides full coverage, the minifiers are [fuzz tested](https://github.com/tdewolff/fuzz) using [github.com/dvyukov/go-fuzz](http://www.github.com/dvyukov/go-fuzz); see [the wiki](https://github.com/tdewolff/minify/wiki) for the most important bugs found by fuzz testing. Furthermore, I am working on adding visual testing to ensure that minification doesn't change anything visually: by using the WebKit browser to render the original and minified pages, we can check whether any pixel differs.
These tests ensure that everything works as intended, that the code does not crash (whatever the input) and that it doesn't change the final result visually. If you still encounter a bug, please report it [here](https://github.com/tdewolff/minify/issues)!
## Performance
The benchmarks directory contains a number of standardized samples used to compare performance between changes. To give an indication of the speed of this library, I've run the tests on my Thinkpad T460 (i5-6300U quad-core 2.4GHz running Arch Linux) using Go 1.9.2.
```
name time/op
CSS/sample_bootstrap.css-4 3.05ms ± 1%
CSS/sample_gumby.css-4 4.25ms ± 1%
HTML/sample_amazon.html-4 3.33ms ± 0%
HTML/sample_bbc.html-4 1.39ms ± 7%
HTML/sample_blogpost.html-4 222µs ± 1%
HTML/sample_es6.html-4 18.0ms ± 1%
HTML/sample_stackoverflow.html-4 3.08ms ± 1%
HTML/sample_wikipedia.html-4 6.06ms ± 1%
JS/sample_ace.js-4 9.92ms ± 1%
JS/sample_dot.js-4 91.4µs ± 4%
JS/sample_jquery.js-4 4.00ms ± 1%
JS/sample_jqueryui.js-4 7.93ms ± 0%
JS/sample_moment.js-4 1.46ms ± 1%
JSON/sample_large.json-4 5.07ms ± 4%
JSON/sample_testsuite.json-4 2.96ms ± 0%
JSON/sample_twitter.json-4 11.3µs ± 0%
SVG/sample_arctic.svg-4 64.7ms ± 0%
SVG/sample_gopher.svg-4 227µs ± 0%
SVG/sample_usa.svg-4 35.9ms ± 6%
XML/sample_books.xml-4 48.1µs ± 4%
XML/sample_catalog.xml-4 20.2µs ± 0%
XML/sample_omg.xml-4 9.02ms ± 0%
name speed
CSS/sample_bootstrap.css-4 45.0MB/s ± 1%
CSS/sample_gumby.css-4 43.8MB/s ± 1%
HTML/sample_amazon.html-4 142MB/s ± 0%
HTML/sample_bbc.html-4 83.0MB/s ± 7%
HTML/sample_blogpost.html-4 94.5MB/s ± 1%
HTML/sample_es6.html-4 56.8MB/s ± 1%
HTML/sample_stackoverflow.html-4 66.7MB/s ± 1%
HTML/sample_wikipedia.html-4 73.5MB/s ± 1%
JS/sample_ace.js-4 64.9MB/s ± 1%
JS/sample_dot.js-4 56.4MB/s ± 4%
JS/sample_jquery.js-4 61.8MB/s ± 1%
JS/sample_jqueryui.js-4 59.2MB/s ± 0%
JS/sample_moment.js-4 67.8MB/s ± 1%
JSON/sample_large.json-4 150MB/s ± 4%
JSON/sample_testsuite.json-4 233MB/s ± 0%
JSON/sample_twitter.json-4 134MB/s ± 0%
SVG/sample_arctic.svg-4 22.7MB/s ± 0%
SVG/sample_gopher.svg-4 25.6MB/s ± 0%
SVG/sample_usa.svg-4 28.6MB/s ± 6%
XML/sample_books.xml-4 92.1MB/s ± 4%
XML/sample_catalog.xml-4 95.6MB/s ± 0%
```
## HTML
HTML (with JS and CSS) minification typically shaves off about 10%.
The HTML5 minifier uses these minifications:
- strip unnecessary whitespace and otherwise collapse it to one space (or newline if it originally contained a newline)
- strip superfluous quotes, or use single/double quotes, whichever requires fewer escapes
- strip default attribute values and attribute boolean values
- strip some empty attributes
- strip unrequired tags (`html`, `head`, `body`, ...)
- strip unrequired end tags (`tr`, `td`, `li`, ... and often `p`)
- strip default protocols (`http:`, `https:` and `javascript:`)
- strip all comments (including conditional comments; old IE versions are no longer supported by Microsoft)
- shorten `doctype` and `meta` charset
- lowercase tags, attributes and some values to enhance gzip compression
Options:
- `KeepConditionalComments` preserve all IE conditional comments such as `<!--[if IE 6]><![endif]-->` and `<![if IE 6]><![endif]>`, see https://msdn.microsoft.com/en-us/library/ms537512(v=vs.85).aspx#syntax
- `KeepDefaultAttrVals` preserve default attribute values such as `<script type="text/javascript">`
- `KeepDocumentTags` preserve `html`, `head` and `body` tags
- `KeepEndTags` preserve all end tags
- `KeepWhitespace` preserve whitespace between inline tags but still collapse multiple whitespace characters into one
After recent benchmarking and profiling, it became really fast and minifies pages in the 10ms range, making it viable for on-the-fly minification.
However, be careful when doing on-the-fly minification. Minification typically trims off about 10% and does so, at worst, at around 20MB/s. That only pays off for users who download slower than 2MB/s: minifying 1MB takes 50ms at 20MB/s and saves 0.1MB, which is only worth it if downloading those 0.1MB would have taken longer than 50ms. This may or may not apply in your situation. Prefer caching the minified result!
### Whitespace removal
The whitespace removal mechanism collapses all sequences of whitespace (spaces, newlines, tabs) to a single space. If the sequence contained a newline or carriage return, it will collapse into a newline character instead. It trims all text parts (in between tags) depending on whether they were preceded by a space from a previous piece of text and whether they are followed by a block element or an inline element. Around block elements the spaces can be omitted, while for inline elements whitespace has significance.
Make sure your HTML doesn't depend on whitespace between `block` elements that have been changed to `inline` or `inline-block` elements using CSS. Your layout *should not* depend on those whitespaces as the minifier will remove them. An example is a menu consisting of multiple `<li>` that have `display:inline-block` applied and have whitespace in between them. It is bad practice to rely on whitespace for element positioning anyway!
## CSS
Minification typically shaves off about 10%-15%.
The CSS minifier will only use safe minifications:
- remove comments and unnecessary whitespace
- remove trailing semicolons
- optimize `margin`, `padding` and `border-width` number of sides
- shorten numbers by removing unnecessary `+` and zeros and rewriting with/without exponent
- remove dimension and percentage for zero values
- remove quotes for URLs
- remove quotes for font families and make lowercase
- rewrite hex colors to/from color names, or to 3 digit hex
- rewrite `rgb(`, `rgba(`, `hsl(` and `hsla(` colors to hex or name
- replace `normal` and `bold` by numbers for `font-weight` and `font`
- replace `none` &#8594; `0` for `border`, `background` and `outline`
- lowercase all identifiers except classes, IDs and URLs to enhance gzip compression
- shorten MS alpha function
- rewrite data URIs with base64 or ASCII whichever is shorter
- calls the minifier for data URI mediatypes, so you can compress embedded SVG files if that minifier is attached (see the sketch below)
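For the last item, a minimal sketch (assuming the standard registration pattern from this README): once an SVG minifier is attached, SVG files embedded as data URIs in CSS are compressed through it.

``` go
m := minify.New()
m.AddFunc("text/css", css.Minify)
// With this attached, data URIs with an image/svg+xml mediatype
// found inside CSS are minified as well.
m.AddFunc("image/svg+xml", svg.Minify)
```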
It does purposely not use the following techniques:
- (partially) merge rulesets
- (partially) split rulesets
- collapse multiple declarations when the main declaration is already defined within a ruleset (e.g. don't merge `font-weight` into an already existing `font` shorthand; too complex)
- remove overwritten properties within a ruleset (a later declaration does not always overwrite an earlier one, for example with `!important`)
- rewrite properties into one ruleset if possible (like `margin-top`, `margin-right`, `margin-bottom` and `margin-left` &#8594; `margin`)
- put nested ID selector at the front (`body > div#elem p` &#8594; `#elem p`)
- rewrite attribute selectors for IDs and classes (`div[id=a]` &#8594; `div#a`)
- put space after pseudo-selectors (IE6 is old, move on!)
It's great that so many other tools make comparison tables: [CSS Minifier Comparison](http://www.codenothing.com/benchmarks/css-compressor-3.0/full.html), [CSS minifiers comparison](http://www.phpied.com/css-minifiers-comparison/) and [CleanCSS tests](http://goalsmashers.github.io/css-minification-benchmark/). From the last link, this CSS minifier is almost without doubt the fastest and has near-perfect minification rates. It only falls short on the techniques that were purposely left out because they are often unsafe.
Options:
- `Decimals` number of decimals to preserve for numbers, `-1` means no trimming
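A sketch of setting this option, assuming the `css.Minifier` struct exposes the listed option as a field (same registration pattern as the `html.Minifier` example under [Usage](#usage)):

``` go
m.Add("text/css", &css.Minifier{
	Decimals: 2, // keep at most two decimals; -1 would disable trimming
})
```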
## JS
The JS minifier is pretty basic. It removes comments, whitespace and line breaks whenever it can. It employs all the rules that [JSMin](http://www.crockford.com/javascript/jsmin.html) does too, but has additional improvements. For example the prefix-postfix bug is fixed.
Common speeds of PHP and JS implementations are about 100-300kB/s (see [Uglify2](http://lisperator.net/uglifyjs/), [Adventures in PHP web asset minimization](https://www.happyassassin.net/2014/12/29/adventures-in-php-web-asset-minimization/)). This implementation is orders of magnitude faster, at around 50MB/s.
TODO:
- shorten local variables / function parameters names
- precise semicolon and newline omission
## JSON
Minification typically shaves off about 15% of filesize for common indented JSON such as generated by [JSON Generator](http://www.json-generator.com/).
The JSON minifier only removes whitespace, which is the only thing that can be left out.
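A quick sketch of what that means in practice, using the `m.String` helper described under [Usage](#usage) (the input is illustrative):

``` go
out, err := m.String("application/json", "{\n\t\"key\": [1, 2, 3]\n}")
if err != nil {
	panic(err)
}
fmt.Println(out) // Output: {"key":[1,2,3]}
```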
## SVG
The SVG minifier uses these minifications:
- trim and collapse whitespace between all tags
- strip comments, empty `doctype`, XML prelude, `metadata`
- strip SVG version
- strip CDATA sections wherever possible
- collapse tags with no content to a void tag
- collapse empty container tags (`g`, `svg`, ...)
- minify style tag and attributes with the CSS minifier
- minify colors
- shorten lengths and numbers and remove default `px` unit
- shorten `path` data
- convert `rect`, `line`, `polygon`, `polyline` to `path`
- use relative or absolute positions in path data whichever is shorter
TODO:
- convert attributes to style attribute whenever shorter
- merge path data? (same style and no intersection -- the latter is difficult)
Options:
- `Decimals` number of decimals to preserve for numbers, `-1` means no trimming
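Analogous to the CSS option above, a sketch of configuring it (assuming `svg.Minifier` exposes the listed option as a field):

``` go
m.Add("image/svg+xml", &svg.Minifier{
	Decimals: -1, // don't trim any decimals
})
```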
## XML
The XML minifier uses these minifications:
- strip unnecessary whitespace and otherwise collapse it to one space (or newline if it originally contained a newline)
- strip comments
- collapse tags with no content to a void tag
- strip CDATA sections wherever possible
Options:
- `KeepWhitespace` preserve whitespace between inline tags but still collapse multiple whitespace characters into one
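A sketch of enabling this option, registering by pattern as elsewhere in this README (assuming `xml.Minifier` exposes it as a field):

``` go
m.AddRegexp(regexp.MustCompile("[/+]xml$"), &xml.Minifier{
	KeepWhitespace: true,
})
```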
## Usage
Any input stream is buffered by the minification functions; this is how the underlying buffer package works to ensure high performance. The output stream, however, is not buffered. It is wise to preallocate a buffer as big as the input, to which the output is written, or otherwise use `bufio` to buffer writes to a streaming writer, as shown in the sketch below.
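A minimal sketch of that advice, assuming `m` is a configured minifier, `in` an `io.Reader`, and `out` an unbuffered `io.Writer` such as a file:

``` go
w := bufio.NewWriter(out) // buffer writes to the underlying output
if err := m.Minify("text/html", w, in); err != nil {
	panic(err)
}
if err := w.Flush(); err != nil { // write out any remaining buffered bytes
	panic(err)
}
```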
### New
Retrieve a minifier struct which holds a map of mediatype &#8594; minifier functions.
``` go
m := minify.New()
```
The following loads all provided minifiers.
``` go
m := minify.New()
m.AddFunc("text/css", css.Minify)
m.AddFunc("text/html", html.Minify)
m.AddFunc("text/javascript", js.Minify)
m.AddFunc("image/svg+xml", svg.Minify)
m.AddFuncRegexp(regexp.MustCompile("[/+]json$"), json.Minify)
m.AddFuncRegexp(regexp.MustCompile("[/+]xml$"), xml.Minify)
```
You can pass options to several of the minifiers.
``` go
m.Add("text/html", &html.Minifier{
KeepDefaultAttrVals: true,
KeepWhitespace: true,
})
```
### From reader
Minify from an `io.Reader` to an `io.Writer` for a specific mediatype.
``` go
if err := m.Minify(mediatype, w, r); err != nil {
	panic(err)
}
```
### From bytes
Minify from and to a `[]byte` for a specific mediatype.
``` go
b, err = m.Bytes(mediatype, b)
if err != nil {
	panic(err)
}
```
### From string
Minify from and to a `string` for a specific mediatype.
``` go
s, err = m.String(mediatype, s)
if err != nil {
	panic(err)
}
```
### To reader
Get a minifying reader for a specific mediatype.
``` go
mr := m.Reader(mediatype, r)
if _, err := mr.Read(b); err != nil {
	panic(err)
}
```
### To writer
Get a minifying writer for a specific mediatype. Must be explicitly closed because it uses an `io.Pipe` underneath.
``` go
mw := m.Writer(mediatype, w)
if _, err := mw.Write([]byte("input")); err != nil {
	panic(err)
}
if err := mw.Close(); err != nil {
	panic(err)
}
```
### Middleware
Minify resources on the fly using middleware. It passes a wrapped response writer to the handler that removes the Content-Length header. The minifier is chosen based on the Content-Type header or, if the header is empty, by the request URI file extension. This is on-the-fly processing; you should preferably cache the results!
``` go
fs := http.FileServer(http.Dir("www/"))
http.Handle("/", m.Middleware(fs))
```
### Custom minifier
Add a minifier for a specific mimetype.
``` go
type CustomMinifier struct {
	KeepLineBreaks bool
}

func (c *CustomMinifier) Minify(m *minify.M, w io.Writer, r io.Reader, params map[string]string) error {
	// ...
	return nil
}

m.Add(mimetype, &CustomMinifier{KeepLineBreaks: true})
// or
m.AddRegexp(regexp.MustCompile("/x-custom$"), &CustomMinifier{KeepLineBreaks: true})
```
Add a minify function for a specific mimetype.
``` go
m.AddFunc(mimetype, func(m *minify.M, w io.Writer, r io.Reader, params map[string]string) error {
	// ...
	return nil
})

m.AddFuncRegexp(regexp.MustCompile("/x-custom$"), func(m *minify.M, w io.Writer, r io.Reader, params map[string]string) error {
	// ...
	return nil
})
```
Add a command `cmd` with arguments `args` for a specific mimetype.
``` go
m.AddCmd(mimetype, exec.Command(cmd, args...))
m.AddCmdRegexp(regexp.MustCompile("/x-custom$"), exec.Command(cmd, args...))
```
### Mediatypes
Using the `params map[string]string` argument one can pass parameters to the minifier, as seen in mediatypes (`type/subtype; key1=val1; key2=val2`). Examples are the encoding or charset of the data. Calling `Minify` will split the mimetype and parameters for the minifiers for you, but `MinifyMimetype` can be used if you already have them split up.
Minifiers can also be added using a regular expression. For example, a minifier registered with `image/.*` will match any image mimetype.
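For example, parameters can simply be part of the mediatype string passed to `Minify`, which splits them off and hands them to the minifier:

``` go
// Equivalent to minifying with mimetype "text/html" and
// params map[string]string{"charset": "UTF-8"}.
if err := m.Minify("text/html; charset=UTF-8", w, r); err != nil {
	panic(err)
}
```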
## Examples
### Common minifiers
Basic example that minifies from stdin to stdout and loads the default HTML, CSS and JS minifiers. Optionally, one can run `java -jar build/compiler.jar` for JS (for example the [ClosureCompiler](https://code.google.com/p/closure-compiler/)). Note that reading the file into a buffer first and writing to a pre-allocated buffer would be faster (but would disable streaming).
``` go
package main

import (
	"os"
	"regexp"

	"github.com/tdewolff/minify"
	"github.com/tdewolff/minify/css"
	"github.com/tdewolff/minify/html"
	"github.com/tdewolff/minify/js"
	"github.com/tdewolff/minify/json"
	"github.com/tdewolff/minify/svg"
	"github.com/tdewolff/minify/xml"
)

func main() {
	m := minify.New()
	m.AddFunc("text/css", css.Minify)
	m.AddFunc("text/html", html.Minify)
	m.AddFunc("text/javascript", js.Minify)
	m.AddFunc("image/svg+xml", svg.Minify)
	m.AddFuncRegexp(regexp.MustCompile("[/+]json$"), json.Minify)
	m.AddFuncRegexp(regexp.MustCompile("[/+]xml$"), xml.Minify)

	// Or use the following (with "os/exec" imported) for better
	// minification of JS but lower speed:
	// m.AddCmd("text/javascript", exec.Command("java", "-jar", "build/compiler.jar"))

	if err := m.Minify("text/html", os.Stdout, os.Stdin); err != nil {
		panic(err)
	}
}
```
### <a name="custom-minifier-example"></a> Custom minifier
A custom minifier example that implements the minifier function interface. Within a custom minifier, it is possible to call any other minifier recursively (through the `m *minify.M` parameter) when dealing with embedded resources.
``` go
package main

import (
	"bufio"
	"fmt"
	"io"
	"strings"

	"github.com/tdewolff/minify"
)

func main() {
	m := minify.New()
	m.AddFunc("text/plain", func(m *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
		// remove newlines and spaces
		rb := bufio.NewReader(r)
		for {
			line, err := rb.ReadString('\n')
			if err != nil && err != io.EOF {
				return err
			}
			if _, errws := io.WriteString(w, strings.Replace(line, " ", "", -1)); errws != nil {
				return errws
			}
			if err == io.EOF {
				break
			}
		}
		return nil
	})

	in := "Because my coffee was too cold, I heated it in the microwave."
	out, err := m.String("text/plain", in)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
	// Output: Becausemycoffeewastoocold,Iheateditinthemicrowave.
}
```
### ResponseWriter
#### Middleware
``` go
func main() {
	m := minify.New()
	m.AddFunc("text/css", css.Minify)
	m.AddFunc("text/html", html.Minify)
	m.AddFunc("text/javascript", js.Minify)
	m.AddFunc("image/svg+xml", svg.Minify)
	m.AddFuncRegexp(regexp.MustCompile("[/+]json$"), json.Minify)
	m.AddFuncRegexp(regexp.MustCompile("[/+]xml$"), xml.Minify)

	fs := http.FileServer(http.Dir("www/"))
	http.Handle("/", m.Middleware(fs))
	// Start serving; the listen address is illustrative.
	if err := http.ListenAndServe(":8080", nil); err != nil {
		panic(err)
	}
}
```
#### ResponseWriter
``` go
func Serve(w http.ResponseWriter, r *http.Request) {
	mw := m.ResponseWriter(w, r)
	defer mw.Close()
	w = mw

	http.ServeFile(w, r, path.Join("www", r.URL.Path))
}
```
#### Custom response writer
ResponseWriter example which returns a ResponseWriter that minifies the content and then writes to the original ResponseWriter. Any write after applying this filter will be minified.
``` go
type MinifyResponseWriter struct {
	http.ResponseWriter
	io.WriteCloser
}

func (m MinifyResponseWriter) Write(b []byte) (int, error) {
	return m.WriteCloser.Write(b)
}

// MinifyFilter returns a MinifyResponseWriter, which must be closed
// explicitly by the calling site.
func MinifyFilter(mediatype string, res http.ResponseWriter) MinifyResponseWriter {
	m := minify.New()
	// add minifiers

	mw := m.Writer(mediatype, res)
	return MinifyResponseWriter{res, mw}
}
```
``` go
// Usage
func(w http.ResponseWriter, req *http.Request) {
	mw := MinifyFilter("text/html", w)
	if _, err := io.WriteString(mw, "<p class=\"message\"> This HTTP response will be minified. </p>"); err != nil {
		panic(err)
	}
	if err := mw.Close(); err != nil {
		panic(err)
	}
	// Output: <p class=message>This HTTP response will be minified.
}
```
### Templates
Here's an example of a replacement for `template.ParseFiles` from `html/template`, which automatically minifies each template before parsing it.
Be aware that minifying templates will work in most cases, but not all. Because the HTML minifier only works for valid HTML5, your template must itself be valid HTML5. Template tags are parsed as regular text by the minifier.
``` go
func compileTemplates(filenames ...string) (*template.Template, error) {
	m := minify.New()
	m.AddFunc("text/html", html.Minify)

	var tmpl *template.Template
	for _, filename := range filenames {
		name := filepath.Base(filename)
		if tmpl == nil {
			tmpl = template.New(name)
		} else {
			tmpl = tmpl.New(name)
		}

		b, err := ioutil.ReadFile(filename)
		if err != nil {
			return nil, err
		}

		mb, err := m.Bytes("text/html", b)
		if err != nil {
			return nil, err
		}
		if _, err := tmpl.Parse(string(mb)); err != nil {
			return nil, err
		}
	}
	return tmpl, nil
}
```
Example usage:
``` go
templates := template.Must(compileTemplates("view.html", "home.html"))
```
## License
Released under the [MIT license](LICENSE.md).
[1]: http://golang.org/ "Go Language"


@@ -0,0 +1,32 @@
package benchmarks

import (
	"testing"

	"github.com/tdewolff/minify/css"
)

var cssSamples = []string{
	"sample_bootstrap.css",
	"sample_gumby.css",
}

func init() {
	for _, sample := range cssSamples {
		load(sample)
	}
}

func BenchmarkCSS(b *testing.B) {
	for _, sample := range cssSamples {
		b.Run(sample, func(b *testing.B) {
			b.SetBytes(int64(r[sample].Len()))
			for i := 0; i < b.N; i++ {
				r[sample].Reset()
				w[sample].Reset()
				css.Minify(m, w[sample], r[sample], nil)
			}
		})
	}
}


@@ -0,0 +1,36 @@
package benchmarks

import (
	"testing"

	"github.com/tdewolff/minify/html"
)

var htmlSamples = []string{
	"sample_amazon.html",
	"sample_bbc.html",
	"sample_blogpost.html",
	"sample_es6.html",
	"sample_stackoverflow.html",
	"sample_wikipedia.html",
}

func init() {
	for _, sample := range htmlSamples {
		load(sample)
	}
}

func BenchmarkHTML(b *testing.B) {
	for _, sample := range htmlSamples {
		b.Run(sample, func(b *testing.B) {
			b.SetBytes(int64(r[sample].Len()))
			for i := 0; i < b.N; i++ {
				r[sample].Reset()
				w[sample].Reset()
				html.Minify(m, w[sample], r[sample], nil)
			}
		})
	}
}


@@ -0,0 +1,35 @@
package benchmarks

import (
	"testing"

	"github.com/tdewolff/minify/js"
)

var jsSamples = []string{
	"sample_ace.js",
	"sample_dot.js",
	"sample_jquery.js",
	"sample_jqueryui.js",
	"sample_moment.js",
}

func init() {
	for _, sample := range jsSamples {
		load(sample)
	}
}

func BenchmarkJS(b *testing.B) {
	for _, sample := range jsSamples {
		b.Run(sample, func(b *testing.B) {
			b.SetBytes(int64(r[sample].Len()))
			for i := 0; i < b.N; i++ {
				r[sample].Reset()
				w[sample].Reset()
				js.Minify(m, w[sample], r[sample], nil)
			}
		})
	}
}


@@ -0,0 +1,33 @@
package benchmarks

import (
	"testing"

	"github.com/tdewolff/minify/json"
)

var jsonSamples = []string{
	"sample_large.json",
	"sample_testsuite.json",
	"sample_twitter.json",
}

func init() {
	for _, sample := range jsonSamples {
		load(sample)
	}
}

func BenchmarkJSON(b *testing.B) {
	for _, sample := range jsonSamples {
		b.Run(sample, func(b *testing.B) {
			b.SetBytes(int64(r[sample].Len()))
			for i := 0; i < b.N; i++ {
				r[sample].Reset()
				w[sample].Reset()
				json.Minify(m, w[sample], r[sample], nil)
			}
		})
	}
}


@@ -0,0 +1,18 @@
package benchmarks

import (
	"io/ioutil"

	"github.com/tdewolff/minify"
	"github.com/tdewolff/parse/buffer"
)

var m = minify.New()
var r = map[string]*buffer.Reader{}
var w = map[string]*buffer.Writer{}

func load(filename string) {
	sample, _ := ioutil.ReadFile(filename)
	r[filename] = buffer.NewReader(sample)
	w[filename] = buffer.NewWriter(make([]byte, 0, len(sample)))
}

vendor/github.com/tdewolff/minify/benchmarks/sample_ace.js generated vendored Normal file (18655 lines)

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

(Binary image file added; 1.4 MiB. Diff not shown.)

File diff suppressed because one or more lines are too long


@@ -0,0 +1,580 @@
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>research!rsc: My Go Resolutions for 2017</title>
<link rel="alternate" type="application/atom+xml" title="research!rsc - Atom" href="http://research.swtch.com/feed.atom" />
<link href='https://fonts.googleapis.com/css?family=Inconsolata:400,700' rel='stylesheet' type='text/css'>
<script type="text/javascript" src="https://use.typekit.com/skm6yij.js"></script>
<script type="text/javascript">try{Typekit.load();}catch(e){}</script>
<style>
body {
padding: 0;
margin: 0;
font-size: 100%;
}
.header {
height: 1.25em;
background-color: #dff;
margin: 0;
padding: 0.1em 0.1em 0.2em;
border-top: 1px solid black;
border-bottom: 1px solid #8ff;
}
.header h3 {
margin: 0;
padding: 0 2em;
display: inline-block;
padding-right: 2em;
font-style: italic;
font-family: "adobe-text-pro" !important;
font-size: 90%;
}
.rss {
float: right;
padding-top: 0.2em;
padding-right: 2em;
display: none;
}
.toc {
margin-top: 2em;
}
.toc-title {
font-family: "caflisch-script-pro";
font-size: 300%;
line-height: 50%;
}
.toc-subtitle {
display: block;
margin-bottom: 1em;
font-size: 83%;
}
@media only screen and (max-width: 550px) { .toc-subtitle { display: none; } }
.header h3 a {
color: black;
}
.header h4 {
margin: 0;
padding: 0;
display: inline-block;
font-weight: normal;
font-size: 83%;
}
@media only screen and (max-width: 550px) { .header h4 { display: none; } }
.main {
padding: 0 2em;
}
@media only screen and (max-width: 479px) { .article { font-size: 120%; } }
.article h1 {
text-align: center;
}
.article h1, .article h2, .article h3 {
font-family: 'Myriad Pro';
}
.normal {
font-size: medium;
font-weight: normal;
}
.when {
text-align: center;
font-size: 100%;
margin: 0;
padding: 0;
}
.when p {
margin: 0;
padding: 0;
}
.article h2 {
font-size: 100%;
padding-top: 0.25em;
}
pre {
margin-left: 4em;
margin-right: 4em;
}
pre, code {
font-family: 'Inconsolata', monospace;
font-size: 100%;
}
.footer {
margin-top: 10px;
font-size: 83%;
font-family: sans-serif;
}
.comments {
margin-top: 2em;
background-color: #ffe;
border-top: 1px solid #aa4;
border-left: 1px solid #aa4;
border-right: 1px solid #aa4;
}
.comments-header {
padding: 0 5px 0 5px;
}
.comments-header p {
padding: 0;
margin: 3px 0 0 0;
}
.comments-body {
padding: 5px 5px 5px 5px;
}
#plus-comments {
border-bottom: 1px dotted #ccc;
}
.plus-comment {
width: 100%;
font-size: 14px;
border-top: 1px dotted #ccc;
}
.me {
background-color: #eec;
}
.plus-comment ul {
margin: 0;
padding: 0;
list-style: none;
width: 100%;
display: inline-block;
}
.comment-when {
color:#999;
width:auto;
padding:0 5px;
}
.old {
font-size: 83%;
}
.plus-comment ul li {
display: inline-block;
vertical-align: top;
margin-top: 5px;
margin-bottom: 5px;
padding: 0;
}
.plus-icon {
width: 45px;
}
.plus-img {
float: left;
margin: 4px 4px 4px 4px;
width: 32px;
height: 32px;
}
.plus-comment p {
margin: 0;
padding: 0;
}
.plus-clear {
clear: left;
}
.toc-when {
font-size: 83%;
color: #ccc;
}
.toc {
list-style: none;
}
.toc li {
margin-bottom: 0.5em;
}
.toc-head {
margin-bottom: 1em !important;
font-size: 117%;
}
.toc-summary {
margin-left: 2em;
}
.favorite {
font-weight: bold;
}
.article p {
line-height: 144%;
}
sup, sub {
vertical-align: baseline;
position: relative;
font-size: 83%;
}
sup {
bottom: 1ex;
}
sub {
top: 0.8ex;
}
.main {
position: relative;
margin: 0 auto;
padding: 0;
width: 900px;
}
@media only screen and (min-width: 768px) and (max-width: 959px) { .main { width: 708px; } }
@media only screen and (min-width: 640px) and (max-width: 767px) { .main { width: 580px; } }
@media only screen and (min-width: 480px) and (max-width: 639px) { .main { width: 420px; } }
@media only screen and (max-width: 479px) { .main { width: 300px; } }
</style>
</head>
<body>
<div class="header">
<h3><a href="/">research!rsc</a></h3>
<h4>Thoughts and links about programming,
by <a href="https://swtch.com/~rsc/" rel="author">Russ Cox</a> </h4>
<a class="rss" href="/feed.atom"><img src="/feed-icon-14x14.png" /></a>
</div>
<div class="main">
<div class="article">
<h1>My Go Resolutions for 2017
<div class="normal">
<div class="when">
Posted on Wednesday, January 18, 2017.
</div>
</div>
</h1>
<p class=lp>’Tis the season for resolutions,
and I thought it would make sense to write a little
about what I hope to work on this year as far as Go is concerned.</p>
<p class=pp>My goal every year is to <em>help Go developers</em>.
I want to make sure that the work we do on the Go team
has a significant, positive impact on Go developers.
That may sound obvious, but there are a variety of common ways to fail to achieve that:
for example, spending too much time cleaning up or optimizing code that doesn’t need it;
responding only to the most common or recent complaints or requests;
or focusing too much on short-term improvements.
It’s important to step back and make sure we’re focusing
our development work where it does the most good.</p>
<p class=pp>This post outlines a few of my own major focuses for this year.
This is only my personal list, not the Go team’s list.</p>
<p class=pp>One reason for posting this is to gather feedback.
If these spark any ideas or suggestions of your own,
please feel free to comment below or on the linked GitHub issues.</p>
<p class=pp>Another reason is to make clear that I’m aware of these issues as important.
I think too often people interpret lack of action by the Go team
as a signal that we think everything is perfect, when instead
there is simply other, higher priority work to do first.</p>
<h2><a name="alias"></a>Type aliases</h2>
<p class=lp>There is a recurring problem with moving types
from one package to another during large codebase refactorings.
We tried to solve it last year with <a href="https://golang.org/issue/16339">general aliases</a>,
which didn’t work for at least two reasons: we didn’t explain the change well enough,
and we didn’t deliver it on time, so it wasn’t ready for Go 1.8.
Learning from that experience,
I <a href="https://www.youtube.com/watch?v=h6Cw9iCDVcU">gave a talk</a>
and <a href="https://talks.golang.org/2016/refactor.article">wrote an article</a>
about the underlying problem,
and that started a <a href="https://golang.org/issue/18130">productive discussion</a>
on the Go issue tracker about the solution space.
It looks like more limited <a href="https://golang.org/design/18130-type-alias">type aliases</a>
are the right next step.
I want to make sure those land smoothly in Go 1.9. <a href="https://golang.org/issue/18130">#18130</a>.</p>
<h2><a name="package"></a>Package management</h2>
<p class=lp>I designed the Go support for downloading published packages
(“goinstall”, which became “go get”) in February 2010.
A lot has happened since then.
In particular, other language ecosystems have really raised the bar
for what people expect from package management,
and the open source world has mostly agreed on
<a href="http://semver.org/">semantic versioning</a>, which provides a useful base
for inferring version compatibility.
Go needs to do better here, and a group of contributors have been
<a href="https://blog.gopheracademy.com/advent-2016/saga-go-dependency-management/">working on a solution</a>.
I want to make sure these ideas are integrated well
into the standard Go toolchain and to make package management
a reason that people love Go.</p>
<h2><a name="build"></a>Build improvements</h2>
<p class=lp>There are a handful of shortcomings in the design of
the go command’s build system that are overdue to be fixed.
Here are three representative examples that I intend to
address with a bit of a redesign of the internals of the go command.</p>
<p class=pp>Builds can be too slow,
because the go command doesn’t cache build results as aggressively as it should.
Many people don’t realize that <code>go</code> <code>install</code> saves its work while <code>go</code> <code>build</code> does not,
and then they run repeated <code>go</code> <code>build</code> commands that are slow
because the later builds do more work than they should need to.
The same for repeated <code>go</code> <code>test</code> without <code>go</code> <code>test</code> <code>-i</code> when dependencies are modified.
All builds should be as incremental as possible.
<a href="https://golang.org/issue/4719">#4719</a>.</p>
<p class=pp>Test results should be cached too:
if none of the inputs to a test have changed,
then usually there is no need to rerun the test.
This will make it very cheap to run “all tests” when little or nothing has changed.
<a href="https://golang.org/issue/11193">#11193</a>.</p>
<p class=pp>Work outside GOPATH should be supported nearly as well
as work inside GOPATH.
In particular, it should be possible to <code>git</code> <code>clone</code> a repo,
<code>cd</code> into it, and run <code>go</code> commands and have them work fine.
Package management only makes that more important:
you’ll need to be able to work on different versions of a package (say, v1 and v2)
without having entirely separate GOPATHs for them.
<a href="https://golang.org/issue/17271">#17271</a>.</p>
<h2><a name="corpus"></a>Code corpus</h2>
<p class=lp>I think it helped to have concrete examples from real projects
in the talk and article I prepared about codebase refactoring (see <a href="#alias">above</a>).
We&rsquo;ve also defined that <a href="https://golang.org/src/cmd/vet/README">additions to vet</a>
must target problems that happen frequently in real programs.
I&rsquo;d like to see that kind of analysis of actual practice—examining
the effects on and possible improvements to real programs—become a
standard way we discuss and evaluate changes to Go.</p>
<p class=pp>Right now there&rsquo;s not an agreed-upon representative corpus of code to use for
those analyses: everyone must first create their own, which is too much work.
I&rsquo;d like to put together a single, self-contained Git repo people can check out that
contains our official baseline corpus for those analyses.
A possible starting point could be the top 100 Go language repos
on GitHub by stars or forks or both.</p>
<h2><a name="vet"></a>Automatic vet</h2>
<p class=lp>The Go distribution ships with this powerful tool,
<a href="https://golang.org/cmd/vet/"><code>go</code> <code>vet</code></a>,
that points out correctness bugs.
We have a high bar for checks, so that when vet speaks, you should listen.
But everyone has to remember to run it.
It would be better if you didn’t have to remember.
In particular, I think we could probably run vet
in parallel with the final compile and link of the test binary
during <code>go</code> <code>test</code> without slowing the compile-edit-test cycle at all.
If we can do that, and if we limit the enabled vet checks to a subset
that is essentially 100% accurate,
we can make passing vet a precondition for running a test at all.
Then developers don’t need to remember to run <code>go</code> <code>vet</code>.
They run <code>go</code> <code>test</code>,
and once in a while vet speaks up with something important
and avoids a debugging session.
<a href="https://golang.org/issue/18084">#18084</a>,
<a href="https://golang.org/issue/18085">#18085</a>.</p>
<h2><a name="error"></a>Errors &amp; best practices</h2>
<p class=lp>Part of the intended contract for error reporting in Go is that functions
include relevant available context, including the operation being attempted
(such as the function name and its arguments).
For example, this program:</p>
<pre><code>err := os.Remove(&quot;/tmp/nonexist&quot;)
fmt.Println(err)
</code></pre>
<p class=lp>prints this output:</p>
<pre><code>remove /tmp/nonexist: no such file or directory
</code></pre>
<p class=lp>Not enough Go code adds context like <code>os.Remove</code> does. Too much code does only</p>
<pre><code>if err != nil {
return err
}
</code></pre>
<p class=lp>all the way up the call stack,
discarding useful context that should be reported
(like <code>remove</code> <code>/tmp/nonexist:</code> above).
I would like to try to understand whether our expectations
for including context are wrong, or if there is something
we can do to make it easier to write code that returns better errors.</p>
<p class=pp>There are also various discussions in the community about
agreed-upon interfaces for stripping error context.
I would like to try to understand when that makes sense and
whether we should adopt an official recommendation.</p>
<h2><a name="context"></a>Context &amp; best practices</h2>
<p class=lp>We added the new <a href="https://golang.org/pkg/context/">context package</a>
in Go 1.7 for holding request-scoped information like
<a href="https://blog.golang.org/context">timeouts, cancellation state, and credentials</a>.
An individual context is immutable (like an individual string or int):
it is only possible to derive a new, updated context and
pass that context explicitly further down the call stack or
(less commonly) back up to the caller.
The context is now carried through APIs such as
<a href="https://golang.org/pkg/database/sql">database/sql</a>
and
<a href="https://golang.org/pkg/net/http">net/http</a>,
mainly so that those can stop processing a request when the caller
is no longer interested in the result.
Timeout information is appropriate to carry in a context,
but—to use a <a href="https://golang.org/issue/18284">real example we removed</a>—database options
are not, because they are unlikely to apply equally well to all possible
database operations carried out during a request.
What about the current clock source, or logging sink?
Is either of those appropriate to store in a context?
I would like to try to understand and characterize the
criteria for what is and is not an appropriate use of context.</p>
<h2><a name="memory"></a>Memory model</h2>
<p class=lp>Go’s <a href="https://golang.org/ref/mem">memory model</a> is intentionally low-key,
making few promises to users, compared to other languages.
In fact it starts by discouraging people from reading the rest of the document.
At the same time, it demands more of the compiler than other languages:
in particular, a race on an integer value is not sufficient license
for your program to misbehave in arbitrary ways.
But there are some complete gaps, in particular no mention of
the <a href="https://golang.org/pkg/sync/atomic/">sync/atomic package</a>.
I think the core compiler and runtime developers all agree
that the behavior of those atomics should be roughly the same as
C++ seqcst atomics or Java volatiles,
but we still need to write that down carefully in the memory model,
and probably also in a long blog post.
<a href="https://golang.org/issue/5045">#5045</a>,
<a href="https://golang.org/issue/7948">#7948</a>,
<a href="https://golang.org/issue/9442">#9442</a>.</p>
<h2><a name="immutability"></a>Immutability</h2>
<p class=lp>The <a href="https://golang.org/doc/articles/race_detector.html">race detector</a>
is one of Go’s most loved features.
But not having races would be even better.
I would love it if there were some reasonable way to integrate
<a href="https://www.google.com/search?q=%22reference+immutability%22">reference immutability</a> into Go,
so that programmers can make clear, checked assertions about what can and cannot
be written and thereby eliminate certain races at compile time.
Go already has one immutable type, <code>string</code>; it would
be nice to retroactively define that
<code>string</code> is a named type (or type alias) for <code>immutable</code> <code>[]byte</code>.
I don’t think that will happen this year,
but I’d like to understand the solution space better.
Javari, Midori, Pony, and Rust have all staked out interesting points
in the solution space, and there are plenty of research papers
beyond those.</p>
<p class=pp>In the long-term, if we could statically eliminate the possibility of races,
that would eliminate the need for most of the memory model.
That may well be an impossible dream,
but again I’d like to understand the solution space better.</p>
<h2><a name="generics"></a>Generics</h2>
<p class=lp>Nothing sparks more <a href="https://research.swtch.com/dogma">heated arguments</a>
among Go and non-Go developers than the question of whether Go should
have support for generics (or how many years ago that should have happened).
I don’t believe the Go team has ever said “Go does not need generics.”
What we <em>have</em> said is that there are higher-priority issues facing Go.
For example, I believe that better support for package management
would have a much larger immediate positive impact on most Go developers
than adding generics.
But we do certainly understand that for a certain subset of Go use cases,
the lack of parametric polymorphism is a significant hindrance.</p>
<p class=pp>Personally, I would like to be able to write general channel-processing
functions like:</p>
<pre><code>// Join makes all messages received on the input channels
// available for receiving from the returned channel.
func Join(inputs ...&lt;-chan T) &lt;-chan T
// Dup duplicates messages received on c to both c1 and c2.
func Dup(c &lt;-chan T) (c1, c2 &lt;-chan T)
</code></pre>
<p class=lp>I would also like to be able to write
Go support for high-level data processing abstractions,
analogous to
<a href="https://research.google.com/pubs/archive/35650.pdf">FlumeJava</a> or
C#’s <a href="https://en.wikipedia.org/wiki/Language_Integrated_Query">LINQ</a>,
in a way that catches type errors at compile time instead of at run time.
There are also any number of data structures or generic algorithms
that might be written,
but I personally find these broader applications more compelling.</p>
<p class=pp>We’ve <a href="https://research.swtch.com/generic">struggled</a> off and on
<a href="https://golang.org/design/15292-generics">for years</a>
to find the right way to add generics to Go.
At least a few of the past proposals got hung up on trying to design
something that provided both general parametric polymorphism
(like <code>chan</code> <code>T</code>) and also a unification of <code>string</code> and <code>[]byte</code>.
If the latter is handled by parameterization over immutability,
as described in the previous section, then maybe that simplifies
the demands on a design for generics.</p>
<p class=pp>When I first started thinking about generics for Go in 2008,
the main examples to learn from were C#, Java, Haskell, and ML.
None of the approaches in those languages seemed like a
perfect fit for Go.
Today, there are newer attempts to learn from as well,
including Dart, Midori, Rust, and Swift.</p>
<p class=pp>It’s been a few years since we ventured out and explored the design space.
It is probably time to look around again,
especially in light of the insight about mutability and
the additional examples set by newer languages.
I don’t think generics will happen this year,
but I’d like to be able to say I understand the solution space better.</p>
</div>
<div id="disqus_thread"></div>
<script>
var disqus_config = function () {
this.page.url = "https://research.swtch.com/go2017";
this.page.identifier = "blog/go2017";
};
(function() {
var d = document, s = d.createElement('script');
s.src = '//swtch.disqus.com/embed.js';
s.setAttribute('data-timestamp', +new Date());
(d.head || d.body).appendChild(s);
})();
</script>
<noscript>Please enable JavaScript to view the <a href="https://disqus.com/?ref_noscript">comments powered by Disqus.</a></noscript>
</div>
</div>
<script type="text/javascript">
var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));
</script>
<script type="text/javascript">
var pageTracker = _gat._getTracker("UA-3319603-2");
pageTracker._initData();
pageTracker._trackPageview();
</script>
</body>
</html>


@@ -0,0 +1,120 @@
<?xml version="1.0"?>
<catalog>
<book id="bk101">
<author>Gambardella, Matthew</author>
<title>XML Developer's Guide</title>
<genre>Computer</genre>
<price>44.95</price>
<publish_date>2000-10-01</publish_date>
<description>An in-depth look at creating applications
with XML.</description>
</book>
<book id="bk102">
<author>Ralls, Kim</author>
<title>Midnight Rain</title>
<genre>Fantasy</genre>
<price>5.95</price>
<publish_date>2000-12-16</publish_date>
<description>A former architect battles corporate zombies,
an evil sorceress, and her own childhood to become queen
of the world.</description>
</book>
<book id="bk103">
<author>Corets, Eva</author>
<title>Maeve Ascendant</title>
<genre>Fantasy</genre>
<price>5.95</price>
<publish_date>2000-11-17</publish_date>
<description>After the collapse of a nanotechnology
society in England, the young survivors lay the
foundation for a new society.</description>
</book>
<book id="bk104">
<author>Corets, Eva</author>
<title>Oberon's Legacy</title>
<genre>Fantasy</genre>
<price>5.95</price>
<publish_date>2001-03-10</publish_date>
<description>In post-apocalypse England, the mysterious
agent known only as Oberon helps to create a new life
for the inhabitants of London. Sequel to Maeve
Ascendant.</description>
</book>
<book id="bk105">
<author>Corets, Eva</author>
<title>The Sundered Grail</title>
<genre>Fantasy</genre>
<price>5.95</price>
<publish_date>2001-09-10</publish_date>
<description>The two daughters of Maeve, half-sisters,
battle one another for control of England. Sequel to
Oberon's Legacy.</description>
</book>
<book id="bk106">
<author>Randall, Cynthia</author>
<title>Lover Birds</title>
<genre>Romance</genre>
<price>4.95</price>
<publish_date>2000-09-02</publish_date>
<description>When Carla meets Paul at an ornithology
conference, tempers fly as feathers get ruffled.</description>
</book>
<book id="bk107">
<author>Thurman, Paula</author>
<title>Splish Splash</title>
<genre>Romance</genre>
<price>4.95</price>
<publish_date>2000-11-02</publish_date>
<description>A deep sea diver finds true love twenty
thousand leagues beneath the sea.</description>
</book>
<book id="bk108">
<author>Knorr, Stefan</author>
<title>Creepy Crawlies</title>
<genre>Horror</genre>
<price>4.95</price>
<publish_date>2000-12-06</publish_date>
<description>An anthology of horror stories about roaches,
centipedes, scorpions and other insects.</description>
</book>
<book id="bk109">
<author>Kress, Peter</author>
<title>Paradox Lost</title>
<genre>Science Fiction</genre>
<price>6.95</price>
<publish_date>2000-11-02</publish_date>
<description>After an inadvertant trip through a Heisenberg
Uncertainty Device, James Salway discovers the problems
of being quantum.</description>
</book>
<book id="bk110">
<author>O'Brien, Tim</author>
<title>Microsoft .NET: The Programming Bible</title>
<genre>Computer</genre>
<price>36.95</price>
<publish_date>2000-12-09</publish_date>
<description>Microsoft's .NET initiative is explored in
detail in this deep programmer's reference.</description>
</book>
<book id="bk111">
<author>O'Brien, Tim</author>
<title>MSXML3: A Comprehensive Guide</title>
<genre>Computer</genre>
<price>36.95</price>
<publish_date>2000-12-01</publish_date>
<description>The Microsoft MSXML3 parser is covered in
detail, with attention to XML DOM interfaces, XSLT processing,
SAX and more.</description>
</book>
<book id="bk112">
<author>Galos, Mike</author>
<title>Visual Studio 7: A Comprehensive Guide</title>
<genre>Computer</genre>
<price>49.95</price>
<publish_date>2001-04-16</publish_date>
<description>Microsoft Visual Studio 7 is explored in depth,
looking at how Visual Basic, Visual C++, C#, and ASP+ are
integrated into a comprehensive development
environment.</description>
</book>
</catalog>

File diff suppressed because it is too large


@@ -0,0 +1,42 @@
<?xml version="1.0"?>
<?xml-stylesheet href="catalog.xsl" type="text/xsl"?>
<!DOCTYPE catalog SYSTEM "catalog.dtd">
<catalog>
<product description="Cardigan Sweater" product_image="cardigan.jpg">
<catalog_item gender="Men's">
<item_number>QWZ5671</item_number>
<price>39.95</price>
<size description="Medium">
<color_swatch image="red_cardigan.jpg">Red</color_swatch>
<color_swatch image="burgundy_cardigan.jpg">Burgundy</color_swatch>
</size>
<size description="Large">
<color_swatch image="red_cardigan.jpg">Red</color_swatch>
<color_swatch image="burgundy_cardigan.jpg">Burgundy</color_swatch>
</size>
</catalog_item>
<catalog_item gender="Women's">
<item_number>RRX9856</item_number>
<price>42.50</price>
<size description="Small">
<color_swatch image="red_cardigan.jpg">Red</color_swatch>
<color_swatch image="navy_cardigan.jpg">Navy</color_swatch>
<color_swatch image="burgundy_cardigan.jpg">Burgundy</color_swatch>
</size>
<size description="Medium">
<color_swatch image="red_cardigan.jpg">Red</color_swatch>
<color_swatch image="navy_cardigan.jpg">Navy</color_swatch>
<color_swatch image="burgundy_cardigan.jpg">Burgundy</color_swatch>
<color_swatch image="black_cardigan.jpg">Black</color_swatch>
</size>
<size description="Large">
<color_swatch image="navy_cardigan.jpg">Navy</color_swatch>
<color_swatch image="black_cardigan.jpg">Black</color_swatch>
</size>
<size description="Extra Large">
<color_swatch image="burgundy_cardigan.jpg">Burgundy</color_swatch>
<color_swatch image="black_cardigan.jpg">Black</color_swatch>
</size>
</catalog_item>
</product>
</catalog>


@@ -0,0 +1,140 @@
// doT.js
// 2011-2014, Laura Doktorova, https://github.com/olado/doT
// Licensed under the MIT license.
(function() {
"use strict";
var doT = {
version: "1.0.3",
templateSettings: {
evaluate: /\{\{([\s\S]+?(\}?)+)\}\}/g,
interpolate: /\{\{=([\s\S]+?)\}\}/g,
encode: /\{\{!([\s\S]+?)\}\}/g,
use: /\{\{#([\s\S]+?)\}\}/g,
useParams: /(^|[^\w$])def(?:\.|\[[\'\"])([\w$\.]+)(?:[\'\"]\])?\s*\:\s*([\w$\.]+|\"[^\"]+\"|\'[^\']+\'|\{[^\}]+\})/g,
define: /\{\{##\s*([\w\.$]+)\s*(\:|=)([\s\S]+?)#\}\}/g,
defineParams:/^\s*([\w$]+):([\s\S]+)/,
conditional: /\{\{\?(\?)?\s*([\s\S]*?)\s*\}\}/g,
iterate: /\{\{~\s*(?:\}\}|([\s\S]+?)\s*\:\s*([\w$]+)\s*(?:\:\s*([\w$]+))?\s*\}\})/g,
varname: "it",
strip: true,
append: true,
selfcontained: false,
doNotSkipEncoded: false
},
template: undefined, //fn, compile template
compile: undefined //fn, for express
}, _globals;
doT.encodeHTMLSource = function(doNotSkipEncoded) {
var encodeHTMLRules = { "&": "&#38;", "<": "&#60;", ">": "&#62;", '"': "&#34;", "'": "&#39;", "/": "&#47;" },
matchHTML = doNotSkipEncoded ? /[&<>\/]/g : /&(?!#?\w+;)|<|>|\//g;
return function(code) {
return code ? code.toString().replace(matchHTML, function(m) {return encodeHTMLRules[m] || m;}) : "";
};
};
_globals = (function(){ return this || (0,eval)("this"); }());
if (typeof module !== "undefined" && module.exports) {
module.exports = doT;
} else if (typeof define === "function" && define.amd) {
define(function(){return doT;});
} else {
_globals.doT = doT;
}
var startend = {
append: { start: "'+(", end: ")+'", startencode: "'+encodeHTML(" },
split: { start: "';out+=(", end: ");out+='", startencode: "';out+=encodeHTML(" }
}, skip = /$^/;
function resolveDefs(c, block, def) {
return ((typeof block === "string") ? block : block.toString())
.replace(c.define || skip, function(m, code, assign, value) {
if (code.indexOf("def.") === 0) {
code = code.substring(4);
}
if (!(code in def)) {
if (assign === ":") {
if (c.defineParams) value.replace(c.defineParams, function(m, param, v) {
def[code] = {arg: param, text: v};
});
if (!(code in def)) def[code]= value;
} else {
new Function("def", "def['"+code+"']=" + value)(def);
}
}
return "";
})
.replace(c.use || skip, function(m, code) {
if (c.useParams) code = code.replace(c.useParams, function(m, s, d, param) {
if (def[d] && def[d].arg && param) {
var rw = (d+":"+param).replace(/'|\\/g, "_");
def.__exp = def.__exp || {};
def.__exp[rw] = def[d].text.replace(new RegExp("(^|[^\\w$])" + def[d].arg + "([^\\w$])", "g"), "$1" + param + "$2");
return s + "def.__exp['"+rw+"']";
}
});
var v = new Function("def", "return " + code)(def);
return v ? resolveDefs(c, v, def) : v;
});
}
function unescape(code) {
return code.replace(/\\('|\\)/g, "$1").replace(/[\r\t\n]/g, " ");
}
doT.template = function(tmpl, c, def) {
c = c || doT.templateSettings;
var cse = c.append ? startend.append : startend.split, needhtmlencode, sid = 0, indv,
str = (c.use || c.define) ? resolveDefs(c, tmpl, def || {}) : tmpl;
str = ("var out='" + (c.strip ? str.replace(/(^|\r|\n)\t* +| +\t*(\r|\n|$)/g," ")
.replace(/\r|\n|\t|\/\*[\s\S]*?\*\//g,""): str)
.replace(/'|\\/g, "\\$&")
.replace(c.interpolate || skip, function(m, code) {
return cse.start + unescape(code) + cse.end;
})
.replace(c.encode || skip, function(m, code) {
needhtmlencode = true;
return cse.startencode + unescape(code) + cse.end;
})
.replace(c.conditional || skip, function(m, elsecase, code) {
return elsecase ?
(code ? "';}else if(" + unescape(code) + "){out+='" : "';}else{out+='") :
(code ? "';if(" + unescape(code) + "){out+='" : "';}out+='");
})
.replace(c.iterate || skip, function(m, iterate, vname, iname) {
if (!iterate) return "';} } out+='";
sid+=1; indv=iname || "i"+sid; iterate=unescape(iterate);
return "';var arr"+sid+"="+iterate+";if(arr"+sid+"){var "+vname+","+indv+"=-1,l"+sid+"=arr"+sid+".length-1;while("+indv+"<l"+sid+"){"
+vname+"=arr"+sid+"["+indv+"+=1];out+='";
})
.replace(c.evaluate || skip, function(m, code) {
return "';" + unescape(code) + "out+='";
})
+ "';return out;")
.replace(/\n/g, "\\n").replace(/\t/g, '\\t').replace(/\r/g, "\\r")
.replace(/(\s|;|\}|^|\{)out\+='';/g, '$1').replace(/\+''/g, "");
//.replace(/(\s|;|\}|^|\{)out\+=''\+/g,'$1out+=');
if (needhtmlencode) {
if (!c.selfcontained && _globals && !_globals._encodeHTML) _globals._encodeHTML = doT.encodeHTMLSource(c.doNotSkipEncoded);
str = "var encodeHTML = typeof _encodeHTML !== 'undefined' ? _encodeHTML : ("
+ doT.encodeHTMLSource.toString() + "(" + (c.doNotSkipEncoded || '') + "));"
+ str;
}
try {
return new Function(c.varname, str);
} catch (e) {
if (typeof console !== "undefined") console.log("Could not create a template function: " + str);
throw e;
}
};
doT.compile = function(tmpl, def) {
return doT.template(tmpl, null, def);
};
}());


@@ -0,0 +1,68 @@
<?xml version="1.0" encoding="utf-8"?>
<!-- Generator: Adobe Illustrator 15.0.0, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg version="1.1" id="レイヤー_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px"
y="0px" width="401.98px" height="559.472px" viewBox="0 0 401.98 559.472" enable-background="new 0 0 401.98 559.472"
xml:space="preserve">
<path fill-rule="evenodd" clip-rule="evenodd" fill="#F6D2A2" stroke="#000000" stroke-width="3" stroke-linecap="round" d="
M10.634,300.493c0.764,15.751,16.499,8.463,23.626,3.539c6.765-4.675,8.743-0.789,9.337-10.015
c0.389-6.064,1.088-12.128,0.744-18.216c-10.23-0.927-21.357,1.509-29.744,7.602C10.277,286.542,2.177,296.561,10.634,300.493"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#C6B198" stroke="#000000" stroke-width="3" stroke-linecap="round" d="
M10.634,300.493c2.29-0.852,4.717-1.457,6.271-3.528"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#6AD7E5" stroke="#000000" stroke-width="3" stroke-linecap="round" d="
M46.997,112.853C-13.3,95.897,31.536,19.189,79.956,50.74L46.997,112.853z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#6AD7E5" stroke="#000000" stroke-width="3" stroke-linecap="round" d="
M314.895,44.984c47.727-33.523,90.856,42.111,35.388,61.141L314.895,44.984z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#F6D2A2" stroke="#000000" stroke-width="3" stroke-linecap="round" d="
M325.161,494.343c12.123,7.501,34.282,30.182,16.096,41.18c-17.474,15.999-27.254-17.561-42.591-22.211
C305.271,504.342,313.643,496.163,325.161,494.343z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="none" stroke="#000000" stroke-width="3" stroke-linecap="round" d="
M341.257,535.522c-2.696-5.361-3.601-11.618-8.102-15.939"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#F6D2A2" stroke="#000000" stroke-width="3" stroke-linecap="round" d="
M108.579,519.975c-14.229,2.202-22.238,15.039-34.1,21.558c-11.178,6.665-15.454-2.134-16.461-3.92
c-1.752-0.799-1.605,0.744-4.309-1.979c-10.362-16.354,10.797-28.308,21.815-36.432C90.87,496.1,100.487,509.404,108.579,519.975z"
/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="none" stroke="#000000" stroke-width="3" stroke-linecap="round" d="
M58.019,537.612c0.542-6.233,5.484-10.407,7.838-15.677"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M49.513,91.667c-7.955-4.208-13.791-9.923-8.925-19.124
c4.505-8.518,12.874-7.593,20.83-3.385L49.513,91.667z"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M337.716,83.667c7.955-4.208,13.791-9.923,8.925-19.124
c-4.505-8.518-12.874-7.593-20.83-3.385L337.716,83.667z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#F6D2A2" stroke="#000000" stroke-width="3" stroke-linecap="round" d="
M392.475,298.493c-0.764,15.751-16.499,8.463-23.626,3.539c-6.765-4.675-8.743-0.789-9.337-10.015
c-0.389-6.064-1.088-12.128-0.744-18.216c10.23-0.927,21.357,1.509,29.744,7.602C392.831,284.542,400.932,294.561,392.475,298.493"
/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#C6B198" stroke="#000000" stroke-width="3" stroke-linecap="round" d="
M392.475,298.493c-2.29-0.852-4.717-1.457-6.271-3.528"/>
<g>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#6AD7E5" stroke="#000000" stroke-width="3" stroke-linecap="round" d="
M195.512,13.124c60.365,0,116.953,8.633,146.452,66.629c26.478,65.006,17.062,135.104,21.1,203.806
c3.468,58.992,11.157,127.145-16.21,181.812c-28.79,57.514-100.73,71.982-160,69.863c-46.555-1.666-102.794-16.854-129.069-59.389
c-30.826-49.9-16.232-124.098-13.993-179.622c2.652-65.771-17.815-131.742,3.792-196.101
C69.999,33.359,130.451,18.271,195.512,13.124"/>
</g>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" stroke="#000000" stroke-width="2.9081" stroke-linecap="round" d="
M206.169,94.16c10.838,63.003,113.822,46.345,99.03-17.197C291.935,19.983,202.567,35.755,206.169,94.16"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" stroke="#000000" stroke-width="2.8214" stroke-linecap="round" d="
M83.103,104.35c14.047,54.85,101.864,40.807,98.554-14.213C177.691,24.242,69.673,36.957,83.103,104.35"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" stroke="#000000" stroke-width="3" stroke-linecap="round" d="
M218.594,169.762c0.046,8.191,1.861,17.387,0.312,26.101c-2.091,3.952-6.193,4.37-9.729,5.967c-4.89-0.767-9.002-3.978-10.963-8.552
c-1.255-9.946,0.468-19.576,0.785-29.526L218.594,169.762z"/>
<g>
<ellipse fill-rule="evenodd" clip-rule="evenodd" cx="107.324" cy="95.404" rx="14.829" ry="16.062"/>
<ellipse fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" cx="114.069" cy="99.029" rx="3.496" ry="4.082"/>
</g>
<g>
<ellipse fill-rule="evenodd" clip-rule="evenodd" cx="231.571" cy="91.404" rx="14.582" ry="16.062"/>
<ellipse fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" cx="238.204" cy="95.029" rx="3.438" ry="4.082"/>
</g>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" stroke="#000000" stroke-width="3" stroke-linecap="round" d="
M176.217,168.87c-6.47,15.68,3.608,47.035,21.163,23.908c-1.255-9.946,0.468-19.576,0.785-29.526L176.217,168.87z"/>
<g>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#F6D2A2" stroke="#231F20" stroke-width="3" stroke-linecap="round" d="
M178.431,138.673c-12.059,1.028-21.916,15.366-15.646,26.709c8.303,15.024,26.836-1.329,38.379,0.203
c13.285,0.272,24.17,14.047,34.84,2.49c11.867-12.854-5.109-25.373-18.377-30.97L178.431,138.673z"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M176.913,138.045c-0.893-20.891,38.938-23.503,43.642-6.016
C225.247,149.475,178.874,153.527,176.913,138.045C175.348,125.682,176.913,138.045,176.913,138.045z"/>
</g>
</svg>


@@ -0,0 +1,52 @@
[{
"created_at": "Thu Jun 22 21:00:00 +0000 2017",
"id": 877994604561387500,
"id_str": "877994604561387520",
"text": "Creating a Grocery List Manager Using Angular, Part 1: Add &amp; Display Items https://t.co/xFox78juL1 #Angular",
"truncated": false,
"entities": {
"hashtags": [{
"text": "Angular",
"indices": [103, 111]
}],
"symbols": [],
"user_mentions": [],
"urls": [{
"url": "https://t.co/xFox78juL1",
"expanded_url": "http://buff.ly/2sr60pf",
"display_url": "buff.ly/2sr60pf",
"indices": [79, 102]
}]
},
"source": "<a href=\"http://bufferapp.com\" rel=\"nofollow\">Buffer</a>",
"user": {
"id": 772682964,
"id_str": "772682964",
"name": "SitePoint JavaScript",
"screen_name": "SitePointJS",
"location": "Melbourne, Australia",
"description": "Keep up with JavaScript tutorials, tips, tricks and articles at SitePoint.",
"url": "http://t.co/cCH13gqeUK",
"entities": {
"url": {
"urls": [{
"url": "http://t.co/cCH13gqeUK",
"expanded_url": "http://sitepoint.com/javascript",
"display_url": "sitepoint.com/javascript",
"indices": [0, 22]
}]
},
"description": {
"urls": []
}
},
"protected": false,
"followers_count": 2145,
"friends_count": 18,
"listed_count": 328,
"created_at": "Wed Aug 22 02:06:33 +0000 2012",
"favourites_count": 57,
"utc_offset": 43200,
"time_zone": "Wellington"
}
}]


@@ -0,0 +1,33 @@
package benchmarks
import (
"testing"
"github.com/tdewolff/minify/svg"
)
var svgSamples = []string{
"sample_arctic.svg",
"sample_gopher.svg",
"sample_usa.svg",
}
func init() {
for _, sample := range svgSamples {
load(sample)
}
}
func BenchmarkSVG(b *testing.B) {
for _, sample := range svgSamples {
b.Run(sample, func(b *testing.B) {
b.SetBytes(int64(r[sample].Len()))
for i := 0; i < b.N; i++ {
r[sample].Reset()
w[sample].Reset()
svg.Minify(m, w[sample], r[sample], nil)
}
})
}
}


@@ -0,0 +1,33 @@
package benchmarks
import (
"testing"
"github.com/tdewolff/minify/xml"
)
var xmlSamples = []string{
"sample_books.xml",
"sample_catalog.xml",
"sample_omg.xml",
}
func init() {
for _, sample := range xmlSamples {
load(sample)
}
}
func BenchmarkXML(b *testing.B) {
for _, sample := range xmlSamples {
b.Run(sample, func(b *testing.B) {
b.SetBytes(int64(r[sample].Len()))
for i := 0; i < b.N; i++ {
r[sample].Reset()
w[sample].Reset()
xml.Minify(m, w[sample], r[sample], nil)
}
})
}
}

149
vendor/github.com/tdewolff/minify/cmd/minify/README.md generated vendored Normal file

@@ -0,0 +1,149 @@
# Minify [![Join the chat at https://gitter.im/tdewolff/minify](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/tdewolff/minify?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
**[Download binaries](https://github.com/tdewolff/minify/releases) for Windows, Linux and macOS**
Minify is a CLI implementation of the minify [library package](https://github.com/tdewolff/minify).
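For reference, the CLI is a thin layer over the library's `M` type; below is a minimal sketch of driving the library directly (mirroring the calls in this command's `main.go`; reading CSS from stdin is an illustrative choice):
```go
package main

import (
	"os"

	"github.com/tdewolff/minify"
	"github.com/tdewolff/minify/css"
)

func main() {
	m := minify.New()
	m.Add("text/css", &css.Minifier{Decimals: -1})
	// Read CSS from stdin and write the minified result to stdout.
	if err := m.Minify("text/css", os.Stdout, os.Stdin); err != nil {
		panic(err)
	}
}
```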
## Installation
Make sure you have [Go](http://golang.org/) and [Git](http://git-scm.com/) installed.
Run the following command:
go get github.com/tdewolff/minify/cmd/minify
and the `minify` command will be in your `$GOPATH/bin`.
## Usage
Usage: minify [options] [input]
Options:
-a, --all
Minify all files, including hidden files and files in hidden directories
-l, --list
List all accepted filetypes
--match string
Filename pattern matching using regular expressions, see https://github.com/google/re2/wiki/Syntax
--mime string
Mimetype (text/css, application/javascript, ...), optional for input filenames, has precedence over -type
-o, --output string
Output file or directory (must have trailing slash), leave blank to use stdout
-r, --recursive
Recursively minify directories
--type string
Filetype (css, html, js, ...), optional for input filenames
-u, --update
Update binary
--url string
URL of file to enable URL minification
-v, --verbose
Verbose
-w, --watch
Watch files and minify upon changes
--css-decimals
Number of decimals to preserve in numbers, -1 is all
--html-keep-conditional-comments
Preserve all IE conditional comments
--html-keep-default-attrvals
Preserve default attribute values
--html-keep-document-tags
Preserve html, head and body tags
--html-keep-end-tags
Preserve all end tags
--html-keep-whitespace
Preserve whitespace characters but still collapse multiple into one
--svg-decimals
Number of decimals to preserve in numbers, -1 is all
--xml-keep-whitespace
Preserve whitespace characters but still collapse multiple into one
Input:
Files or directories, leave blank to use stdin
### Types
css text/css
htm text/html
html text/html
js text/javascript
json application/json
svg image/svg+xml
xml text/xml
## Examples
Minify **index.html** to **index-min.html**:
```sh
$ minify -o index-min.html index.html
```
Minify **index.html** to standard output (leave `-o` blank):
```sh
$ minify index.html
```
Normally the mimetype is inferred from the file extension; to set the mimetype explicitly:
```sh
$ minify --type=html -o index-min.tpl index.tpl
```
You need to set the type or the mimetype option when using standard input:
```sh
$ minify --mime=text/javascript < script.js > script-min.js
$ cat script.js | minify --type=js > script-min.js
```
### Directories
You can also give directories as input, and these directories can be minified recursively.
Minify files in the current working directory to **out/** (no subdirectories):
```sh
$ minify -o out/ .
```
Minify files recursively in **src/**:
```sh
$ minify -r -o out/ src
```
Minify only javascript files in **src/**:
```sh
$ minify -r -o out/ --match=\.js src
```
### Concatenate
When multiple inputs are given and the output is either standard output or a single file, the inputs are concatenated together.
Concatenate **one.css** and **two.css** into **style.css**:
```sh
$ minify -o style.css one.css two.css
```
Concatenate all files in **styles/** into **style.css**:
```sh
$ minify -o style.css styles
```
You can also use `cat` to concatenate files on standard input and, for example, pipe the result through gzip:
```sh
$ cat one.css two.css three.css | minify --type=css | gzip -9 -c > style.css.gz
```
### Watching
To watch for file changes and automatically re-minify, use the `-w` or `--watch` option.
Minify **style.css** to itself and watch changes:
```sh
$ minify -w -o style.css style.css
```
Minify and concatenate **one.css** and **two.css** to **style.css** and watch changes:
```sh
$ minify -w -o style.css one.css two.css
```
Minify files in **src/** and subdirectories to **out/** and watch changes:
```sh
$ minify -w -r -o out/ src
```

648
vendor/github.com/tdewolff/minify/cmd/minify/main.go generated vendored Normal file

@@ -0,0 +1,648 @@
package main
import (
"bufio"
"fmt"
"io"
"io/ioutil"
"log"
"net/url"
"os"
"os/signal"
"path"
"path/filepath"
"regexp"
"runtime"
"sort"
"strings"
"sync/atomic"
"time"
humanize "github.com/dustin/go-humanize"
"github.com/matryer/try"
flag "github.com/spf13/pflag"
min "github.com/tdewolff/minify"
"github.com/tdewolff/minify/css"
"github.com/tdewolff/minify/html"
"github.com/tdewolff/minify/js"
"github.com/tdewolff/minify/json"
"github.com/tdewolff/minify/svg"
"github.com/tdewolff/minify/xml"
)
var Version = "master"
var Commit = ""
var Date = ""
var filetypeMime = map[string]string{
"css": "text/css",
"htm": "text/html",
"html": "text/html",
"js": "text/javascript",
"json": "application/json",
"svg": "image/svg+xml",
"xml": "text/xml",
}
var (
hidden bool
list bool
m *min.M
pattern *regexp.Regexp
recursive bool
verbose bool
version bool
watch bool
)
type task struct {
srcs []string
srcDir string
dst string
}
var (
Error *log.Logger
Info *log.Logger
)
func main() {
output := ""
mimetype := ""
filetype := ""
match := ""
siteurl := ""
cssMinifier := &css.Minifier{}
htmlMinifier := &html.Minifier{}
jsMinifier := &js.Minifier{}
jsonMinifier := &json.Minifier{}
svgMinifier := &svg.Minifier{}
xmlMinifier := &xml.Minifier{}
flag.Usage = func() {
fmt.Fprintf(os.Stderr, "Usage: %s [options] [input]\n\nOptions:\n", os.Args[0])
flag.PrintDefaults()
fmt.Fprintf(os.Stderr, "\nInput:\n Files or directories, leave blank to use stdin\n")
}
flag.StringVarP(&output, "output", "o", "", "Output file or directory (must have trailing slash), leave blank to use stdout")
flag.StringVar(&mimetype, "mime", "", "Mimetype (text/css, application/javascript, ...), optional for input filenames, has precedence over -type")
flag.StringVar(&filetype, "type", "", "Filetype (css, html, js, ...), optional for input filenames")
flag.StringVar(&match, "match", "", "Filename pattern matching using regular expressions, see https://github.com/google/re2/wiki/Syntax")
flag.BoolVarP(&recursive, "recursive", "r", false, "Recursively minify directories")
flag.BoolVarP(&hidden, "all", "a", false, "Minify all files, including hidden files and files in hidden directories")
flag.BoolVarP(&list, "list", "l", false, "List all accepted filetypes")
flag.BoolVarP(&verbose, "verbose", "v", false, "Verbose")
flag.BoolVarP(&watch, "watch", "w", false, "Watch files and minify upon changes")
flag.BoolVarP(&version, "version", "", false, "Version")
flag.StringVar(&siteurl, "url", "", "URL of file to enable URL minification")
flag.IntVar(&cssMinifier.Decimals, "css-decimals", -1, "Number of decimals to preserve in numbers, -1 is all")
flag.BoolVar(&htmlMinifier.KeepConditionalComments, "html-keep-conditional-comments", false, "Preserve all IE conditional comments")
flag.BoolVar(&htmlMinifier.KeepDefaultAttrVals, "html-keep-default-attrvals", false, "Preserve default attribute values")
flag.BoolVar(&htmlMinifier.KeepDocumentTags, "html-keep-document-tags", false, "Preserve html, head and body tags")
flag.BoolVar(&htmlMinifier.KeepEndTags, "html-keep-end-tags", false, "Preserve all end tags")
flag.BoolVar(&htmlMinifier.KeepWhitespace, "html-keep-whitespace", false, "Preserve whitespace characters but still collapse multiple into one")
flag.IntVar(&svgMinifier.Decimals, "svg-decimals", -1, "Number of decimals to preserve in numbers, -1 is all")
flag.BoolVar(&xmlMinifier.KeepWhitespace, "xml-keep-whitespace", false, "Preserve whitespace characters but still collapse multiple into one")
flag.Parse()
rawInputs := flag.Args()
Error = log.New(os.Stderr, "ERROR: ", 0)
if verbose {
Info = log.New(os.Stderr, "INFO: ", 0)
} else {
Info = log.New(ioutil.Discard, "INFO: ", 0)
}
if version {
if Version == "devel" {
fmt.Printf("minify version devel+%.7s %s\n", Commit, Date)
} else {
fmt.Printf("minify version %s\n", Version)
}
return
}
if list {
var keys []string
for k := range filetypeMime {
keys = append(keys, k)
}
sort.Strings(keys)
for _, k := range keys {
fmt.Println(k + "\t" + filetypeMime[k])
}
return
}
useStdin := len(rawInputs) == 0
mimetype = getMimetype(mimetype, filetype, useStdin)
var err error
if match != "" {
pattern, err = regexp.Compile(match)
if err != nil {
Error.Fatalln(err)
}
}
if watch && (useStdin || output == "") {
Error.Fatalln("watch doesn't work with stdin or stdout")
}
////////////////
dirDst := false
if output != "" {
output = sanitizePath(output)
if output[len(output)-1] == '/' {
dirDst = true
if err := os.MkdirAll(output, 0777); err != nil {
Error.Fatalln(err)
}
}
}
tasks, ok := expandInputs(rawInputs, dirDst)
if !ok {
os.Exit(1)
}
if ok = expandOutputs(output, &tasks); !ok {
os.Exit(1)
}
if len(tasks) == 0 {
tasks = append(tasks, task{[]string{""}, "", output}) // stdin
}
m = min.New()
m.Add("text/css", cssMinifier)
m.Add("text/html", htmlMinifier)
m.Add("text/javascript", jsMinifier)
m.Add("image/svg+xml", svgMinifier)
m.AddRegexp(regexp.MustCompile("[/+]json$"), jsonMinifier)
m.AddRegexp(regexp.MustCompile("[/+]xml$"), xmlMinifier)
if m.URL, err = url.Parse(siteurl); err != nil {
Error.Fatalln(err)
}
start := time.Now()
var fails int32
if verbose || len(tasks) == 1 {
for _, t := range tasks {
if ok := minify(mimetype, t); !ok {
fails++
}
}
} else {
numWorkers := 4
if n := runtime.NumCPU(); n > numWorkers {
numWorkers = n
}
sem := make(chan struct{}, numWorkers)
for _, t := range tasks {
sem <- struct{}{}
go func(t task) {
defer func() {
<-sem
}()
if ok := minify(mimetype, t); !ok {
atomic.AddInt32(&fails, 1)
}
}(t)
}
// wait for all jobs to be done
for i := 0; i < cap(sem); i++ {
sem <- struct{}{}
}
}
if watch {
var watcher *RecursiveWatcher
watcher, err = NewRecursiveWatcher(recursive)
if err != nil {
Error.Fatalln(err)
}
defer watcher.Close()
var watcherTasks = make(map[string]task, len(rawInputs))
for _, task := range tasks {
for _, src := range task.srcs {
watcherTasks[src] = task
watcher.AddPath(src)
}
}
c := make(chan os.Signal, 1)
signal.Notify(c, os.Interrupt)
skip := make(map[string]int)
changes := watcher.Run()
for changes != nil {
select {
case <-c:
watcher.Close()
case file, ok := <-changes:
if !ok {
changes = nil
break
}
file = sanitizePath(file)
if skip[file] > 0 {
skip[file]--
continue
}
var t task
if t, ok = watcherTasks[file]; ok {
if !verbose {
fmt.Fprintln(os.Stderr, file, "changed")
}
for _, src := range t.srcs {
if src == t.dst {
skip[file] = 2 // minify creates both a CREATE and WRITE on the file
break
}
}
if ok := minify(mimetype, t); !ok {
fails++
}
}
}
}
}
if verbose {
Info.Println(time.Since(start), "total")
}
if fails > 0 {
os.Exit(1)
}
}
func getMimetype(mimetype, filetype string, useStdin bool) string {
if mimetype == "" && filetype != "" {
var ok bool
if mimetype, ok = filetypeMime[filetype]; !ok {
Error.Fatalln("cannot find mimetype for filetype", filetype)
}
}
if mimetype == "" && useStdin {
Error.Fatalln("must specify mimetype or filetype for stdin")
}
if verbose {
if mimetype == "" {
Info.Println("infer mimetype from file extensions")
} else {
Info.Println("use mimetype", mimetype)
}
}
return mimetype
}
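// sanitizePath normalizes a path to slash form and cleans it, keeping (or adding) a trailing slash for directories.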
func sanitizePath(p string) string {
p = filepath.ToSlash(p)
isDir := p[len(p)-1] == '/'
p = path.Clean(p)
if isDir {
p += "/"
} else if info, err := os.Stat(p); err == nil && info.Mode().IsDir() {
p += "/"
}
return p
}
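// validFile reports whether info is a regular, non-hidden (unless -a is set) file that matches the --match pattern and has a recognized extension.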
func validFile(info os.FileInfo) bool {
if info.Mode().IsRegular() && len(info.Name()) > 0 && (hidden || info.Name()[0] != '.') {
if pattern != nil && !pattern.MatchString(info.Name()) {
return false
}
ext := path.Ext(info.Name())
if len(ext) > 0 {
ext = ext[1:]
}
if _, ok := filetypeMime[ext]; !ok {
return false
}
return true
}
return false
}
func validDir(info os.FileInfo) bool {
return info.Mode().IsDir() && len(info.Name()) > 0 && (hidden || info.Name()[0] != '.')
}
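// expandInputs resolves the raw input arguments into tasks; multiple inputs without an output directory are merged into a single concatenation task.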
func expandInputs(inputs []string, dirDst bool) ([]task, bool) {
ok := true
tasks := []task{}
for _, input := range inputs {
input = sanitizePath(input)
info, err := os.Stat(input)
if err != nil {
Error.Println(err)
ok = false
continue
}
if info.Mode().IsRegular() {
tasks = append(tasks, task{[]string{filepath.ToSlash(input)}, "", ""})
} else if info.Mode().IsDir() {
expandDir(input, &tasks, &ok)
} else {
Error.Println("not a file or directory", input)
ok = false
}
}
if len(tasks) > 1 && !dirDst {
// concatenate
tasks[0].srcDir = ""
for _, task := range tasks[1:] {
tasks[0].srcs = append(tasks[0].srcs, task.srcs[0])
}
tasks = tasks[:1]
}
if verbose && ok {
if len(inputs) == 0 {
Info.Println("minify from stdin")
} else if len(tasks) == 1 {
if len(tasks[0].srcs) > 1 {
Info.Println("minify and concatenate", len(tasks[0].srcs), "input files")
} else {
Info.Println("minify input file", tasks[0].srcs[0])
}
} else {
Info.Println("minify", len(tasks), "input files")
}
}
return tasks, ok
}
func expandDir(input string, tasks *[]task, ok *bool) {
if !recursive {
if verbose {
Info.Println("expanding directory", input)
}
infos, err := ioutil.ReadDir(input)
if err != nil {
Error.Println(err)
*ok = false
}
for _, info := range infos {
if validFile(info) {
*tasks = append(*tasks, task{[]string{path.Join(input, info.Name())}, input, ""})
}
}
} else {
if verbose {
Info.Println("expanding directory", input, "recursively")
}
err := filepath.Walk(input, func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
if validFile(info) {
*tasks = append(*tasks, task{[]string{filepath.ToSlash(path)}, input, ""})
} else if info.Mode().IsDir() && !validDir(info) && info.Name() != "." && info.Name() != ".." { // check for IsDir, so we don't skip the rest of the directory when we have an invalid file
return filepath.SkipDir
}
return nil
})
if err != nil {
Error.Println(err)
*ok = false
}
}
}
func expandOutputs(output string, tasks *[]task) bool {
if verbose {
if output == "" {
Info.Println("minify to stdout")
} else if output[len(output)-1] != '/' {
Info.Println("minify to output file", output)
} else if output == "./" {
Info.Println("minify to current working directory")
} else {
Info.Println("minify to output directory", output)
}
}
if output == "" {
return true
}
ok := true
for i, t := range *tasks {
var err error
(*tasks)[i].dst, err = getOutputFilename(output, t)
if err != nil {
Error.Println(err)
ok = false
}
}
return ok
}
func getOutputFilename(output string, t task) (string, error) {
if len(output) > 0 && output[len(output)-1] == '/' {
rel, err := filepath.Rel(t.srcDir, t.srcs[0])
if err != nil {
return "", err
}
return path.Clean(filepath.ToSlash(path.Join(output, rel))), nil
}
return output, nil
}
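// openInputFile opens the named file for reading, retrying up to five times; an empty name selects stdin.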
func openInputFile(input string) (*os.File, bool) {
var r *os.File
if input == "" {
r = os.Stdin
} else {
err := try.Do(func(attempt int) (bool, error) {
var err error
r, err = os.Open(input)
return attempt < 5, err
})
if err != nil {
Error.Println(err)
return nil, false
}
}
return r, true
}
func openOutputFile(output string) (*os.File, bool) {
var w *os.File
if output == "" {
w = os.Stdout
} else {
if err := os.MkdirAll(path.Dir(output), 0777); err != nil {
Error.Println(err)
return nil, false
}
err := try.Do(func(attempt int) (bool, error) {
var err error
w, err = os.OpenFile(output, os.O_WRONLY|os.O_TRUNC|os.O_CREATE, 0666)
return attempt < 5, err
})
if err != nil {
Error.Println(err)
return nil, false
}
}
return w, true
}
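// minify runs a single task: it infers the mimetype from the sources when unset, renames a source that equals its destination, then concatenates the sources and writes the minified result.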
func minify(mimetype string, t task) bool {
if mimetype == "" {
for _, src := range t.srcs {
if len(path.Ext(src)) > 0 {
srcMimetype, ok := filetypeMime[path.Ext(src)[1:]]
if !ok {
Error.Println("cannot infer mimetype from extension in", src)
return false
}
if mimetype == "" {
mimetype = srcMimetype
} else if srcMimetype != mimetype {
Error.Println("inferred mimetype", srcMimetype, "of", src, "for concatenation unequal to previous mimetypes", mimetype)
return false
}
}
}
}
srcName := strings.Join(t.srcs, " + ")
if len(t.srcs) > 1 {
srcName = "(" + srcName + ")"
}
if srcName == "" {
srcName = "stdin"
}
dstName := t.dst
if dstName == "" {
dstName = "stdout"
} else {
// rename original when overwriting
for i := range t.srcs {
if t.srcs[i] == t.dst {
t.srcs[i] += ".bak"
err := try.Do(func(attempt int) (bool, error) {
err := os.Rename(t.dst, t.srcs[i])
return attempt < 5, err
})
if err != nil {
Error.Println(err)
return false
}
break
}
}
}
frs := make([]io.Reader, len(t.srcs))
for i, src := range t.srcs {
fr, ok := openInputFile(src)
if !ok {
for _, fr := range frs {
fr.(io.ReadCloser).Close()
}
return false
}
if i > 0 && mimetype == filetypeMime["js"] {
// prepend newline when concatenating JS files
frs[i] = NewPrependReader(fr, []byte("\n"))
} else {
frs[i] = fr
}
}
r := &countingReader{io.MultiReader(frs...), 0}
fw, ok := openOutputFile(t.dst)
if !ok {
for _, fr := range frs {
fr.(io.ReadCloser).Close()
}
return false
}
var w *countingWriter
if fw == os.Stdout {
w = &countingWriter{fw, 0}
} else {
w = &countingWriter{bufio.NewWriter(fw), 0}
}
success := true
startTime := time.Now()
err := m.Minify(mimetype, w, r)
if err != nil {
Error.Println("cannot minify "+srcName+":", err)
success = false
}
if verbose {
dur := time.Since(startTime)
speed := "Inf MB"
if dur > 0 {
speed = humanize.Bytes(uint64(float64(r.N) / dur.Seconds()))
}
ratio := 1.0
if r.N > 0 {
ratio = float64(w.N) / float64(r.N)
}
stats := fmt.Sprintf("(%9v, %6v, %5.1f%%, %6v/s)", dur, humanize.Bytes(uint64(w.N)), ratio*100, speed)
if srcName != dstName {
Info.Println(stats, "-", srcName, "to", dstName)
} else {
Info.Println(stats, "-", srcName)
}
}
for _, fr := range frs {
fr.(io.ReadCloser).Close()
}
if bw, ok := w.Writer.(*bufio.Writer); ok {
bw.Flush()
}
fw.Close()
// remove original that was renamed, when overwriting files
for i := range t.srcs {
if t.srcs[i] == t.dst+".bak" {
if err == nil {
if err = os.Remove(t.srcs[i]); err != nil {
Error.Println(err)
return false
}
} else {
if err = os.Remove(t.dst); err != nil {
Error.Println(err)
return false
} else if err = os.Rename(t.srcs[i], t.dst); err != nil {
Error.Println(err)
return false
}
}
t.srcs[i] = t.dst
break
}
}
return success
}

46
vendor/github.com/tdewolff/minify/cmd/minify/util.go generated vendored Normal file

@@ -0,0 +1,46 @@
package main
import "io"
type countingReader struct {
io.Reader
N int
}
func (r *countingReader) Read(p []byte) (int, error) {
n, err := r.Reader.Read(p)
r.N += n
return n, err
}
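// countingWriter wraps an io.Writer and counts the bytes written through it.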
type countingWriter struct {
io.Writer
N int
}
func (w *countingWriter) Write(p []byte) (int, error) {
n, err := w.Writer.Write(p)
w.N += n
return n, err
}
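// prependReader yields the prepend bytes once before reading from the wrapped reader; the CLI uses it to insert a newline between concatenated JS files.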
type prependReader struct {
io.ReadCloser
prepend []byte
}
func NewPrependReader(r io.ReadCloser, prepend []byte) *prependReader {
return &prependReader{r, prepend}
}
func (r *prependReader) Read(p []byte) (int, error) {
if r.prepend != nil {
n := copy(p, r.prepend)
if n != len(r.prepend) {
return n, io.ErrShortBuffer
}
r.prepend = nil
return n, nil
}
return r.ReadCloser.Read(p)
}

106
vendor/github.com/tdewolff/minify/cmd/minify/watch.go generated vendored Normal file

@@ -0,0 +1,106 @@
package main
import (
"os"
"path/filepath"
"github.com/fsnotify/fsnotify"
)
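// RecursiveWatcher wraps fsnotify to watch files and directories for changes, optionally walking into subdirectories.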
type RecursiveWatcher struct {
watcher *fsnotify.Watcher
paths map[string]bool
recursive bool
}
func NewRecursiveWatcher(recursive bool) (*RecursiveWatcher, error) {
watcher, err := fsnotify.NewWatcher()
if err != nil {
return nil, err
}
return &RecursiveWatcher{watcher, make(map[string]bool), recursive}, nil
}
func (rw *RecursiveWatcher) Close() error {
return rw.watcher.Close()
}
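// AddPath watches a file's parent directory, a single directory, or (when recursive) the whole tree rooted at root, skipping paths that are already watched.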
func (rw *RecursiveWatcher) AddPath(root string) error {
info, err := os.Stat(root)
if err != nil {
return err
}
if info.Mode().IsRegular() {
root = filepath.Dir(root)
if rw.paths[root] {
return nil
}
if err := rw.watcher.Add(root); err != nil {
return err
}
rw.paths[root] = true
return nil
} else if !rw.recursive {
if rw.paths[root] {
return nil
}
if err := rw.watcher.Add(root); err != nil {
return err
}
rw.paths[root] = true
return nil
} else {
return filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
if info.Mode().IsDir() {
if !validDir(info) || rw.paths[path] {
return filepath.SkipDir
}
if err := rw.watcher.Add(path); err != nil {
return err
}
rw.paths[path] = true
}
return nil
})
}
}
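// Run starts the event loop in a goroutine and returns a channel carrying the names of changed files; directories created while watching are added to the watch set.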
func (rw *RecursiveWatcher) Run() chan string {
files := make(chan string, 10)
go func() {
for rw.watcher.Events != nil && rw.watcher.Errors != nil {
select {
case event, ok := <-rw.watcher.Events:
if !ok {
rw.watcher.Events = nil
break
}
if info, err := os.Stat(event.Name); err == nil {
if validDir(info) {
if event.Op&fsnotify.Create == fsnotify.Create {
if err := rw.AddPath(event.Name); err != nil {
Error.Println(err)
}
}
} else if validFile(info) {
if event.Op&fsnotify.Create == fsnotify.Create || event.Op&fsnotify.Write == fsnotify.Write {
files <- event.Name
}
}
}
case err, ok := <-rw.watcher.Errors:
if !ok {
rw.watcher.Errors = nil
break
}
Error.Println(err)
}
}
close(files)
}()
return files
}

339
vendor/github.com/tdewolff/minify/common.go generated vendored Normal file

@@ -0,0 +1,339 @@
package minify // import "github.com/tdewolff/minify"
import (
"bytes"
"encoding/base64"
"net/url"
"github.com/tdewolff/parse"
"github.com/tdewolff/parse/strconv"
)
// Epsilon is the closest number to zero that is not considered to be zero.
var Epsilon = 0.00001
// ContentType minifies a given mediatype by removing all whitespace.
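// For example, per the package tests, whitespace outside quoted strings is
// removed and the result is lowercased:
//
//	ContentType([]byte("text/html; charset=UTF-8")) => "text/html;charset=utf-8"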
func ContentType(b []byte) []byte {
j := 0
start := 0
inString := false
for i, c := range b {
if !inString && parse.IsWhitespace(c) {
if start != 0 {
j += copy(b[j:], b[start:i])
} else {
j += i
}
start = i + 1
} else if c == '"' {
inString = !inString
}
}
if start != 0 {
j += copy(b[j:], b[start:])
return parse.ToLower(b[:j])
}
return parse.ToLower(b)
}
// DataURI minifies a data URI and calls a minifier by the specified mediatype. Specifications: https://www.ietf.org/rfc/rfc2397.txt.
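// For example, per the package tests, the default mediatype and charset are
// dropped and short payloads stay URL-encoded rather than base64-encoded:
//
//	DataURI(m, []byte("data:text/plain;charset=us-ascii,text")) => "data:,text"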
func DataURI(m *M, dataURI []byte) []byte {
if mediatype, data, err := parse.DataURI(dataURI); err == nil {
dataURI, _ = m.Bytes(string(mediatype), data)
base64Len := len(";base64") + base64.StdEncoding.EncodedLen(len(dataURI))
asciiLen := len(dataURI)
for _, c := range dataURI {
if 'A' <= c && c <= 'Z' || 'a' <= c && c <= 'z' || '0' <= c && c <= '9' || c == '-' || c == '_' || c == '.' || c == '~' || c == ' ' {
asciiLen++
} else {
asciiLen += 2
}
if asciiLen > base64Len {
break
}
}
if asciiLen > base64Len {
encoded := make([]byte, base64Len-len(";base64"))
base64.StdEncoding.Encode(encoded, dataURI)
dataURI = encoded
mediatype = append(mediatype, []byte(";base64")...)
} else {
dataURI = []byte(url.QueryEscape(string(dataURI)))
dataURI = bytes.Replace(dataURI, []byte("\""), []byte("\\\""), -1)
}
if len("text/plain") <= len(mediatype) && parse.EqualFold(mediatype[:len("text/plain")], []byte("text/plain")) {
mediatype = mediatype[len("text/plain"):]
}
for i := 0; i+len(";charset=us-ascii") <= len(mediatype); i++ {
// must start with semicolon and be followed by end of mediatype or semicolon
if mediatype[i] == ';' && parse.EqualFold(mediatype[i+1:i+len(";charset=us-ascii")], []byte("charset=us-ascii")) && (i+len(";charset=us-ascii") >= len(mediatype) || mediatype[i+len(";charset=us-ascii")] == ';') {
mediatype = append(mediatype[:i], mediatype[i+len(";charset=us-ascii"):]...)
break
}
}
dataURI = append(append(append([]byte("data:"), mediatype...), ','), dataURI...)
}
return dataURI
}
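// MaxInt and MinInt are the extreme int values; Number uses them to guard
// against exponent overflow.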
const MaxInt = int(^uint(0) >> 1)
const MinInt = -MaxInt - 1
// Number minifies a given byte slice containing a number (see parse.Number) and removes superfluous characters.
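// For example, per the package tests, with prec == -1 (keep all decimals):
//
//	Number([]byte("1000"), -1)   => "1e3"
//	Number([]byte("0.0001"), -1) => "1e-4"
//
// Note that num is modified in place.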
func Number(num []byte, prec int) []byte {
// omit first + and register mantissa start and end, whether it's negative and the exponent
neg := false
start := 0
dot := -1
end := len(num)
origExp := 0
if 0 < end && (num[0] == '+' || num[0] == '-') {
if num[0] == '-' {
neg = true
}
start++
}
for i, c := range num[start:] {
if c == '.' {
dot = start + i
} else if c == 'e' || c == 'E' {
end = start + i
i += start + 1
if i < len(num) && num[i] == '+' {
i++
}
if tmpOrigExp, n := strconv.ParseInt(num[i:]); n > 0 && tmpOrigExp >= int64(MinInt) && tmpOrigExp <= int64(MaxInt) {
// range checks for when int is 32 bit
origExp = int(tmpOrigExp)
} else {
return num
}
break
}
}
if dot == -1 {
dot = end
}
// trim leading zeros but leave at least one digit
for start < end-1 && num[start] == '0' {
start++
}
// trim trailing zeros
i := end - 1
for ; i > dot; i-- {
if num[i] != '0' {
end = i + 1
break
}
}
if i == dot {
end = dot
if start == end {
num[start] = '0'
return num[start : start+1]
}
} else if start == end-1 && num[start] == '0' {
return num[start:end]
}
// n is the number of significant digits
// normExp would be the exponent if it were normalised (0.1 <= f < 1)
n := 0
normExp := 0
if dot == start {
for i = dot + 1; i < end; i++ {
if num[i] != '0' {
n = end - i
normExp = dot - i + 1
break
}
}
} else if dot == end {
normExp = end - start
for i = end - 1; i >= start; i-- {
if num[i] != '0' {
n = i + 1 - start
end = i + 1
break
}
}
} else {
n = end - start - 1
normExp = dot - start
}
if origExp < 0 && (normExp < MinInt-origExp || normExp-n < MinInt-origExp) || origExp > 0 && (normExp > MaxInt-origExp || normExp-n > MaxInt-origExp) {
return num
}
normExp += origExp
// intExp would be the exponent if it were an integer
intExp := normExp - n
lenIntExp := 1
if intExp <= -10 || intExp >= 10 {
lenIntExp = strconv.LenInt(int64(intExp))
}
// there are three cases to consider when printing the number
// case 1: without decimals and with an exponent (large numbers)
// case 2: with decimals and without an exponent (around zero)
// case 3: without decimals and with a negative exponent (small numbers)
if normExp >= n {
// case 1
if dot < end {
if dot == start {
start = end - n
} else {
// TODO: copy the other part if shorter?
copy(num[dot:], num[dot+1:end])
end--
}
}
if normExp >= n+3 {
num[end] = 'e'
end++
for i := end + lenIntExp - 1; i >= end; i-- {
num[i] = byte(intExp%10) + '0'
intExp /= 10
}
end += lenIntExp
} else if normExp == n+2 {
num[end] = '0'
num[end+1] = '0'
end += 2
} else if normExp == n+1 {
num[end] = '0'
end++
}
} else if normExp >= -lenIntExp-1 {
// case 2
zeroes := -normExp
newDot := 0
if zeroes > 0 {
// dot placed at the front and add zeroes
newDot = end - n - zeroes - 1
if newDot != dot {
d := start - newDot
if d > 0 {
if dot < end {
// copy original digits behind the dot backwards
copy(num[dot+1+d:], num[dot+1:end])
if dot > start {
// copy original digits before the dot backwards
copy(num[start+d+1:], num[start:dot])
}
} else if dot > start {
// copy original digits before the dot backwards
copy(num[start+d:], num[start:dot])
}
newDot = start
end += d
} else {
start += -d
}
num[newDot] = '.'
for i := 0; i < zeroes; i++ {
num[newDot+1+i] = '0'
}
}
} else {
// placed in the middle
if dot == start {
// TODO: try if placing at the end reduces copying
// when there are zeroes after the dot
dot = end - n - 1
start = dot
} else if dot >= end {
// TODO: try if placing at the start reduces copying
// when input has no dot in it
dot = end
end++
}
newDot = start + normExp
if newDot > dot {
// copy digits forwards
copy(num[dot:], num[dot+1:newDot+1])
} else if newDot < dot {
// copy digits backwards
copy(num[newDot+1:], num[newDot:dot])
}
num[newDot] = '.'
}
// apply precision
dot = newDot
if prec > -1 && dot+1+prec < end {
end = dot + 1 + prec
inc := num[end] >= '5'
if inc || num[end-1] == '0' {
for i := end - 1; i > start; i-- {
if i == dot {
end--
} else if inc {
if num[i] == '9' {
if i > dot {
end--
} else {
num[i] = '0'
}
} else {
num[i]++
inc = false
break
}
} else if i > dot && num[i] == '0' {
end--
}
}
}
if dot == start && end == start+1 {
if inc {
num[start] = '1'
} else {
num[start] = '0'
}
} else {
if dot+1 == end {
end--
}
if inc {
if num[start] == '9' {
num[start] = '0'
copy(num[start+1:], num[start:end])
end++
num[start] = '1'
} else {
num[start]++
}
}
}
}
} else {
// case 3
if dot < end {
if dot == start {
copy(num[start:], num[end-n:end])
end = start + n
} else {
copy(num[dot:], num[dot+1:end])
end--
}
}
num[end] = 'e'
num[end+1] = '-'
end += 2
intExp = -intExp
for i := end + lenIntExp - 1; i >= end; i-- {
num[i] = byte(intExp%10) + '0'
intExp /= 10
}
end += lenIntExp
}
if neg {
start--
num[start] = '-'
}
return num[start:end]
}

237
vendor/github.com/tdewolff/minify/common_test.go generated vendored Normal file

@@ -0,0 +1,237 @@
package minify // import "github.com/tdewolff/minify"
import (
"fmt"
"io"
"io/ioutil"
"math"
"math/rand"
"strconv"
"testing"
"github.com/tdewolff/test"
)
func TestContentType(t *testing.T) {
contentTypeTests := []struct {
contentType string
expected string
}{
{"text/html", "text/html"},
{"text/html; charset=UTF-8", "text/html;charset=utf-8"},
{"text/html; charset=UTF-8 ; param = \" ; \"", "text/html;charset=utf-8;param=\" ; \""},
{"text/html, text/css", "text/html,text/css"},
}
for _, tt := range contentTypeTests {
t.Run(tt.contentType, func(t *testing.T) {
contentType := ContentType([]byte(tt.contentType))
test.Minify(t, tt.contentType, nil, string(contentType), tt.expected)
})
}
}
func TestDataURI(t *testing.T) {
dataURITests := []struct {
dataURI string
expected string
}{
{"data:,text", "data:,text"},
{"data:text/plain;charset=us-ascii,text", "data:,text"},
{"data:TEXT/PLAIN;CHARSET=US-ASCII,text", "data:,text"},
{"data:text/plain;charset=us-asciiz,text", "data:;charset=us-asciiz,text"},
{"data:;base64,dGV4dA==", "data:,text"},
{"data:text/svg+xml;base64,PT09PT09", "data:text/svg+xml;base64,PT09PT09"},
{"data:text/xml;version=2.0,content", "data:text/xml;version=2.0,content"},
{"data:text/xml; version = 2.0,content", "data:text/xml;version=2.0,content"},
{"data:,=====", "data:,%3D%3D%3D%3D%3D"},
{"data:,======", "data:;base64,PT09PT09"},
{"data:text/x,<?x?>", "data:text/x,%3C%3Fx%3F%3E"},
}
m := New()
m.AddFunc("text/x", func(_ *M, w io.Writer, r io.Reader, _ map[string]string) error {
b, _ := ioutil.ReadAll(r)
test.String(t, string(b), "<?x?>")
w.Write(b)
return nil
})
for _, tt := range dataURITests {
t.Run(tt.dataURI, func(t *testing.T) {
dataURI := DataURI(m, []byte(tt.dataURI))
test.Minify(t, tt.dataURI, nil, string(dataURI), tt.expected)
})
}
}
func TestNumber(t *testing.T) {
numberTests := []struct {
number string
expected string
}{
{"0", "0"},
{".0", "0"},
{"1.0", "1"},
{"0.1", ".1"},
{"+1", "1"},
{"-1", "-1"},
{"-0.1", "-.1"},
{"10", "10"},
{"100", "100"},
{"1000", "1e3"},
{"0.001", ".001"},
{"0.0001", "1e-4"},
{"100e1", "1e3"},
{"1.1e+1", "11"},
{"1.1e6", "11e5"},
{"0.252", ".252"},
{"1.252", "1.252"},
{"-1.252", "-1.252"},
{"0.075", ".075"},
{"789012345678901234567890123456789e9234567890123456789", "789012345678901234567890123456789e9234567890123456789"},
{".000100009", "100009e-9"},
{".0001000009", ".0001000009"},
{".0001000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000009", ".0001000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000009"},
{"E\x1f", "E\x1f"}, // fuzz
{"1e9223372036854775807", "1e9223372036854775807"},
{"11e9223372036854775807", "11e9223372036854775807"},
{".01e-9223372036854775808", ".01e-9223372036854775808"},
{".011e-9223372036854775808", ".011e-9223372036854775808"},
{".12345e8", "12345e3"},
{".12345e7", "1234500"},
{".12345e6", "123450"},
{".12345e5", "12345"},
{".012345e6", "12345"},
{".12345e4", "1234.5"},
{"-.12345e4", "-1234.5"},
{".12345e0", ".12345"},
{".12345e-1", ".012345"},
{".12345e-2", ".0012345"},
{".12345e-3", "12345e-8"},
{".12345e-4", "12345e-9"},
{".12345e-5", "12345e-10"},
{".123456e-3", "123456e-9"},
{".123456e-2", ".00123456"},
{".1234567e-4", "1234567e-11"},
{".1234567e-3", ".0001234567"},
{"12345678e-1", "1234567.8"},
{"72.e-3", ".072"},
{"7640e-2", "76.4"},
{"10.e-3", ".01"},
{".0319e3", "31.9"},
{"39.7e-2", ".397"},
{"39.7e-3", ".0397"},
{".01e1", ".1"},
{".001e1", ".01"},
{"39.7e-5", "397e-6"},
}
for _, tt := range numberTests {
t.Run(tt.number, func(t *testing.T) {
number := Number([]byte(tt.number), -1)
test.Minify(t, tt.number, nil, string(number), tt.expected)
})
}
}
func TestNumberTruncate(t *testing.T) {
numberTests := []struct {
number string
truncate int
expected string
}{
{"0.1", 1, ".1"},
{"0.0001", 1, "1e-4"},
{"0.111", 1, ".1"},
{"0.111", 0, "0"},
{"0.075", 1, ".1"},
{"0.025", 1, "0"},
{"9.99", 1, "10"},
{"8.88", 1, "8.9"},
{"8.88", 0, "9"},
{"8.00", 0, "8"},
{".88", 0, "1"},
{"1.234", 1, "1.2"},
{"33.33", 0, "33"},
{"29.666", 0, "30"},
{"1.51", 1, "1.5"},
}
for _, tt := range numberTests {
t.Run(tt.number, func(t *testing.T) {
number := Number([]byte(tt.number), tt.truncate)
test.Minify(t, tt.number, nil, string(number), tt.expected, "truncate to", tt.truncate)
})
}
}
func TestNumberRandom(t *testing.T) {
N := int(1e4)
if testing.Short() {
N = 0
}
for i := 0; i < N; i++ {
b := RandNumBytes()
f, _ := strconv.ParseFloat(string(b), 64)
b2 := make([]byte, len(b))
copy(b2, b)
b2 = Number(b2, -1)
f2, _ := strconv.ParseFloat(string(b2), 64)
if math.Abs(f-f2) > 1e-6 {
fmt.Println("Bad:", f, "!=", f2, "in", string(b), "to", string(b2))
}
}
}
////////////////
var n = 100
var numbers [][]byte
func TestMain(t *testing.T) {
numbers = make([][]byte, 0, n)
for j := 0; j < n; j++ {
numbers = append(numbers, RandNumBytes())
}
}
func RandNumBytes() []byte {
var b []byte
n := rand.Int() % 10
for i := 0; i < n; i++ {
b = append(b, byte(rand.Int()%10)+'0')
}
if rand.Int()%2 == 0 {
b = append(b, '.')
n = rand.Int() % 10
for i := 0; i < n; i++ {
b = append(b, byte(rand.Int()%10)+'0')
}
}
if rand.Int()%2 == 0 {
b = append(b, 'e')
if rand.Int()%2 == 0 {
b = append(b, '-')
}
n = 1 + rand.Int()%4
for i := 0; i < n; i++ {
b = append(b, byte(rand.Int()%10)+'0')
}
}
return b
}
func BenchmarkNumber(b *testing.B) {
for i := 0; i < b.N; i++ {
for j := 0; j < n; j++ {
Number(numbers[j], -1)
}
}
}
func BenchmarkNumber2(b *testing.B) {
num := []byte("1.2345e-6")
for i := 0; i < b.N; i++ {
Number(num, -1)
}
}

559
vendor/github.com/tdewolff/minify/css/css.go generated vendored Normal file

@@ -0,0 +1,559 @@
// Package css minifies CSS3 following the specifications at http://www.w3.org/TR/css-syntax-3/.
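// Numbers, colors, and common keyword values are shortened where possible;
// for example, font-weight:bold is rewritten to font-weight:700.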
package css // import "github.com/tdewolff/minify/css"
import (
"bytes"
"encoding/hex"
"io"
"strconv"
"github.com/tdewolff/minify"
"github.com/tdewolff/parse"
"github.com/tdewolff/parse/css"
)
var (
spaceBytes = []byte(" ")
colonBytes = []byte(":")
semicolonBytes = []byte(";")
commaBytes = []byte(",")
leftBracketBytes = []byte("{")
rightBracketBytes = []byte("}")
zeroBytes = []byte("0")
msfilterBytes = []byte("-ms-filter")
backgroundNoneBytes = []byte("0 0")
)
type cssMinifier struct {
m *minify.M
w io.Writer
p *css.Parser
o *Minifier
}
////////////////////////////////////////////////////////////////
// DefaultMinifier is the default minifier.
var DefaultMinifier = &Minifier{Decimals: -1}
// Minifier is a CSS minifier.
type Minifier struct {
Decimals int
}
// Minify minifies CSS data, it reads from r and writes to w.
func Minify(m *minify.M, w io.Writer, r io.Reader, params map[string]string) error {
return DefaultMinifier.Minify(m, w, r, params)
}
// Minify minifies CSS data, it reads from r and writes to w.
func (o *Minifier) Minify(m *minify.M, w io.Writer, r io.Reader, params map[string]string) error {
isInline := params != nil && params["inline"] == "1"
c := &cssMinifier{
m: m,
w: w,
p: css.NewParser(r, isInline),
o: o,
}
defer c.p.Restore()
if err := c.minifyGrammar(); err != nil && err != io.EOF {
return err
}
return nil
}
func (c *cssMinifier) minifyGrammar() error {
semicolonQueued := false
for {
gt, _, data := c.p.Next()
if gt == css.ErrorGrammar {
if perr, ok := c.p.Err().(*parse.Error); ok && perr.Message == "unexpected token in declaration" {
if semicolonQueued {
if _, err := c.w.Write(semicolonBytes); err != nil {
return err
}
}
// write out the offending declaration
if _, err := c.w.Write(data); err != nil {
return err
}
for _, val := range c.p.Values() {
if _, err := c.w.Write(val.Data); err != nil {
return err
}
}
semicolonQueued = true
continue
} else {
return c.p.Err()
}
} else if gt == css.EndAtRuleGrammar || gt == css.EndRulesetGrammar {
if _, err := c.w.Write(rightBracketBytes); err != nil {
return err
}
semicolonQueued = false
continue
}
if semicolonQueued {
if _, err := c.w.Write(semicolonBytes); err != nil {
return err
}
semicolonQueued = false
}
if gt == css.AtRuleGrammar {
if _, err := c.w.Write(data); err != nil {
return err
}
for _, val := range c.p.Values() {
if _, err := c.w.Write(val.Data); err != nil {
return err
}
}
semicolonQueued = true
} else if gt == css.BeginAtRuleGrammar {
if _, err := c.w.Write(data); err != nil {
return err
}
for _, val := range c.p.Values() {
if _, err := c.w.Write(val.Data); err != nil {
return err
}
}
if _, err := c.w.Write(leftBracketBytes); err != nil {
return err
}
} else if gt == css.QualifiedRuleGrammar {
if err := c.minifySelectors(data, c.p.Values()); err != nil {
return err
}
if _, err := c.w.Write(commaBytes); err != nil {
return err
}
} else if gt == css.BeginRulesetGrammar {
if err := c.minifySelectors(data, c.p.Values()); err != nil {
return err
}
if _, err := c.w.Write(leftBracketBytes); err != nil {
return err
}
} else if gt == css.DeclarationGrammar {
if _, err := c.w.Write(data); err != nil {
return err
}
if _, err := c.w.Write(colonBytes); err != nil {
return err
}
if err := c.minifyDeclaration(data, c.p.Values()); err != nil {
return err
}
semicolonQueued = true
} else if gt == css.CustomPropertyGrammar {
if _, err := c.w.Write(data); err != nil {
return err
}
if _, err := c.w.Write(colonBytes); err != nil {
return err
}
if _, err := c.w.Write(c.p.Values()[0].Data); err != nil {
return err
}
semicolonQueued = true
} else if gt == css.CommentGrammar {
if len(data) > 5 && data[1] == '*' && data[2] == '!' {
if _, err := c.w.Write(data[:3]); err != nil {
return err
}
comment := parse.TrimWhitespace(parse.ReplaceMultipleWhitespace(data[3 : len(data)-2]))
if _, err := c.w.Write(comment); err != nil {
return err
}
if _, err := c.w.Write(data[len(data)-2:]); err != nil {
return err
}
}
} else if _, err := c.w.Write(data); err != nil {
return err
}
}
}
func (c *cssMinifier) minifySelectors(property []byte, values []css.Token) error {
inAttr := false
isClass := false
for _, val := range c.p.Values() {
if !inAttr {
if val.TokenType == css.IdentToken {
if !isClass {
parse.ToLower(val.Data)
}
isClass = false
} else if val.TokenType == css.DelimToken && val.Data[0] == '.' {
isClass = true
} else if val.TokenType == css.LeftBracketToken {
inAttr = true
}
} else {
if val.TokenType == css.StringToken && len(val.Data) > 2 {
s := val.Data[1 : len(val.Data)-1]
if css.IsIdent([]byte(s)) {
if _, err := c.w.Write(s); err != nil {
return err
}
continue
}
} else if val.TokenType == css.RightBracketToken {
inAttr = false
}
}
if _, err := c.w.Write(val.Data); err != nil {
return err
}
}
return nil
}
func (c *cssMinifier) minifyDeclaration(property []byte, values []css.Token) error {
if len(values) == 0 {
return nil
}
prop := css.ToHash(property)
inProgid := false
for i, value := range values {
if inProgid {
if value.TokenType == css.FunctionToken {
inProgid = false
}
continue
} else if value.TokenType == css.IdentToken && css.ToHash(value.Data) == css.Progid {
inProgid = true
continue
}
value.TokenType, value.Data = c.shortenToken(prop, value.TokenType, value.Data)
if prop == css.Font || prop == css.Font_Family || prop == css.Font_Weight {
if value.TokenType == css.IdentToken && (prop == css.Font || prop == css.Font_Weight) {
val := css.ToHash(value.Data)
if val == css.Normal && prop == css.Font_Weight {
// normal could also be specified for font-variant, not just font-weight
value.TokenType = css.NumberToken
value.Data = []byte("400")
} else if val == css.Bold {
value.TokenType = css.NumberToken
value.Data = []byte("700")
}
} else if value.TokenType == css.StringToken && (prop == css.Font || prop == css.Font_Family) && len(value.Data) > 2 {
unquote := true
parse.ToLower(value.Data)
s := value.Data[1 : len(value.Data)-1]
if len(s) > 0 {
for _, split := range bytes.Split(s, spaceBytes) {
val := css.ToHash(split)
// if len is zero, it contains two consecutive spaces
if val == css.Inherit || val == css.Serif || val == css.Sans_Serif || val == css.Monospace || val == css.Fantasy || val == css.Cursive || val == css.Initial || val == css.Default ||
len(split) == 0 || !css.IsIdent(split) {
unquote = false
break
}
}
}
if unquote {
value.Data = s
}
}
} else if prop == css.Outline || prop == css.Border || prop == css.Border_Bottom || prop == css.Border_Left || prop == css.Border_Right || prop == css.Border_Top {
if css.ToHash(value.Data) == css.None {
value.TokenType = css.NumberToken
value.Data = zeroBytes
}
}
values[i].TokenType, values[i].Data = value.TokenType, value.Data
}
important := false
if len(values) > 2 && values[len(values)-2].TokenType == css.DelimToken && values[len(values)-2].Data[0] == '!' && css.ToHash(values[len(values)-1].Data) == css.Important {
values = values[:len(values)-2]
important = true
}
if len(values) == 1 {
if prop == css.Background && css.ToHash(values[0].Data) == css.None {
values[0].Data = backgroundNoneBytes
} else if bytes.Equal(property, msfilterBytes) {
alpha := []byte("progid:DXImageTransform.Microsoft.Alpha(Opacity=")
if values[0].TokenType == css.StringToken && bytes.HasPrefix(values[0].Data[1:len(values[0].Data)-1], alpha) {
values[0].Data = append(append([]byte{values[0].Data[0]}, []byte("alpha(opacity=")...), values[0].Data[1+len(alpha):]...)
}
}
} else {
if prop == css.Margin || prop == css.Padding || prop == css.Border_Width {
if (values[0].TokenType == css.NumberToken || values[0].TokenType == css.DimensionToken || values[0].TokenType == css.PercentageToken) && (len(values)+1)%2 == 0 {
valid := true
for i := 1; i < len(values); i += 2 {
if values[i].TokenType != css.WhitespaceToken || values[i+1].TokenType != css.NumberToken && values[i+1].TokenType != css.DimensionToken && values[i+1].TokenType != css.PercentageToken {
valid = false
break
}
}
if valid {
n := (len(values) + 1) / 2
if n == 2 {
if bytes.Equal(values[0].Data, values[2].Data) {
values = values[:1]
}
} else if n == 3 {
if bytes.Equal(values[0].Data, values[2].Data) && bytes.Equal(values[0].Data, values[4].Data) {
values = values[:1]
} else if bytes.Equal(values[0].Data, values[4].Data) {
values = values[:3]
}
} else if n == 4 {
if bytes.Equal(values[0].Data, values[2].Data) && bytes.Equal(values[0].Data, values[4].Data) && bytes.Equal(values[0].Data, values[6].Data) {
values = values[:1]
} else if bytes.Equal(values[0].Data, values[4].Data) && bytes.Equal(values[2].Data, values[6].Data) {
values = values[:3]
} else if bytes.Equal(values[2].Data, values[6].Data) {
values = values[:5]
}
}
}
}
} else if prop == css.Filter && len(values) == 11 {
if bytes.Equal(values[0].Data, []byte("progid")) &&
values[1].TokenType == css.ColonToken &&
bytes.Equal(values[2].Data, []byte("DXImageTransform")) &&
values[3].Data[0] == '.' &&
bytes.Equal(values[4].Data, []byte("Microsoft")) &&
values[5].Data[0] == '.' &&
bytes.Equal(values[6].Data, []byte("Alpha(")) &&
bytes.Equal(parse.ToLower(values[7].Data), []byte("opacity")) &&
values[8].Data[0] == '=' &&
values[10].Data[0] == ')' {
values = values[6:]
values[0].Data = []byte("alpha(")
}
}
}
for i := 0; i < len(values); i++ {
if values[i].TokenType == css.FunctionToken {
n, err := c.minifyFunction(values[i:])
if err != nil {
return err
}
i += n - 1
} else if _, err := c.w.Write(values[i].Data); err != nil {
return err
}
}
if important {
if _, err := c.w.Write([]byte("!important")); err != nil {
return err
}
}
return nil
}
func (c *cssMinifier) minifyFunction(values []css.Token) (int, error) {
n := 1
simple := true
for i, value := range values[1:] {
if value.TokenType == css.RightParenthesisToken {
n++
break
}
if i%2 == 0 && (value.TokenType != css.NumberToken && value.TokenType != css.PercentageToken) || (i%2 == 1 && value.TokenType != css.CommaToken) {
simple = false
}
n++
}
values = values[:n]
if simple && (n-1)%2 == 0 {
fun := css.ToHash(values[0].Data[:len(values[0].Data)-1])
nArgs := (n - 1) / 2
if (fun == css.Rgba || fun == css.Hsla) && nArgs == 4 {
d, _ := strconv.ParseFloat(string(values[7].Data), 32) // can never fail because if simple == true then this is a NumberToken or PercentageToken
if d-1.0 > -minify.Epsilon {
if fun == css.Rgba {
values[0].Data = []byte("rgb(")
fun = css.Rgb
} else {
values[0].Data = []byte("hsl(")
fun = css.Hsl
}
values = values[:len(values)-2]
values[len(values)-1].Data = []byte(")")
nArgs = 3
} else if d < minify.Epsilon {
values[0].Data = []byte("transparent")
values = values[:1]
fun = 0
nArgs = 0
}
}
if fun == css.Rgb && nArgs == 3 {
var err [3]error
rgb := [3]byte{}
for j := 0; j < 3; j++ {
val := values[j*2+1]
if val.TokenType == css.NumberToken {
var d int64
d, err[j] = strconv.ParseInt(string(val.Data), 10, 32)
if d < 0 {
d = 0
} else if d > 255 {
d = 255
}
rgb[j] = byte(d)
} else if val.TokenType == css.PercentageToken {
var d float64
d, err[j] = strconv.ParseFloat(string(val.Data[:len(val.Data)-1]), 32)
if d < 0.0 {
d = 0.0
} else if d > 100.0 {
d = 100.0
}
rgb[j] = byte((d / 100.0 * 255.0) + 0.5)
}
}
if err[0] == nil && err[1] == nil && err[2] == nil {
val := make([]byte, 7)
val[0] = '#'
hex.Encode(val[1:], rgb[:])
parse.ToLower(val)
if s, ok := ShortenColorHex[string(val)]; ok {
if _, err := c.w.Write(s); err != nil {
return 0, err
}
} else {
if len(val) == 7 && val[1] == val[2] && val[3] == val[4] && val[5] == val[6] {
val[2] = val[3]
val[3] = val[5]
val = val[:4]
}
if _, err := c.w.Write(val); err != nil {
return 0, err
}
}
return n, nil
}
} else if fun == css.Hsl && nArgs == 3 {
if values[1].TokenType == css.NumberToken && values[3].TokenType == css.PercentageToken && values[5].TokenType == css.PercentageToken {
h, err1 := strconv.ParseFloat(string(values[1].Data), 32)
s, err2 := strconv.ParseFloat(string(values[3].Data[:len(values[3].Data)-1]), 32)
l, err3 := strconv.ParseFloat(string(values[5].Data[:len(values[5].Data)-1]), 32)
if err1 == nil && err2 == nil && err3 == nil {
r, g, b := css.HSL2RGB(h/360.0, s/100.0, l/100.0)
rgb := []byte{byte((r * 255.0) + 0.5), byte((g * 255.0) + 0.5), byte((b * 255.0) + 0.5)}
val := make([]byte, 7)
val[0] = '#'
hex.Encode(val[1:], rgb[:])
parse.ToLower(val)
if s, ok := ShortenColorHex[string(val)]; ok {
if _, err := c.w.Write(s); err != nil {
return 0, err
}
} else {
if len(val) == 7 && val[1] == val[2] && val[3] == val[4] && val[5] == val[6] {
val[2] = val[3]
val[3] = val[5]
val = val[:4]
}
if _, err := c.w.Write(val); err != nil {
return 0, err
}
}
return n, nil
}
}
}
}
for _, value := range values {
if _, err := c.w.Write(value.Data); err != nil {
return 0, err
}
}
return n, nil
}
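// rgbToShortHex is an illustrative sketch, not part of the original source, of
// the color shortening performed by minifyFunction above: the three channels
// are hex-encoded and #rrggbb collapses to #rgb when every byte pair repeats,
// so rgb(255,64,64) becomes #ff4040 and rgb(255,255,255) becomes #fff (unless
// ShortenColorHex offers an even shorter keyword).
func rgbToShortHex(r, g, b byte) []byte {
	val := make([]byte, 7)
	val[0] = '#'
	hex.Encode(val[1:], []byte{r, g, b}) // hex.Encode emits lowercase digits
	if val[1] == val[2] && val[3] == val[4] && val[5] == val[6] {
		val[2] = val[3]
		val[3] = val[5]
		val = val[:4]
	}
	return val
}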
func (c *cssMinifier) shortenToken(prop css.Hash, tt css.TokenType, data []byte) (css.TokenType, []byte) {
if tt == css.NumberToken || tt == css.PercentageToken || tt == css.DimensionToken {
if tt == css.NumberToken && (prop == css.Z_Index || prop == css.Counter_Increment || prop == css.Counter_Reset || prop == css.Orphans || prop == css.Widows) {
return tt, data // integers
}
n := len(data)
if tt == css.PercentageToken {
n--
} else if tt == css.DimensionToken {
n = parse.Number(data)
}
dim := data[n:]
parse.ToLower(dim)
data = minify.Number(data[:n], c.o.Decimals)
if tt == css.PercentageToken && (len(data) != 1 || data[0] != '0' || prop == css.Color) {
data = append(data, '%')
} else if tt == css.DimensionToken && (len(data) != 1 || data[0] != '0' || requiredDimension[string(dim)]) {
data = append(data, dim...)
}
} else if tt == css.IdentToken {
//parse.ToLower(data) // TODO: not all identifiers are case-insensitive; all <custom-ident> properties are case-sensitive
if hex, ok := ShortenColorName[css.ToHash(data)]; ok {
tt = css.HashToken
data = hex
}
} else if tt == css.HashToken {
parse.ToLower(data)
if ident, ok := ShortenColorHex[string(data)]; ok {
tt = css.IdentToken
data = ident
} else if len(data) == 7 && data[1] == data[2] && data[3] == data[4] && data[5] == data[6] {
tt = css.HashToken
data[2] = data[3]
data[3] = data[5]
data = data[:4]
}
} else if tt == css.StringToken {
// remove any escaped newline sequences: \\\r\n, \\\r or \\\n
for i := 1; i < len(data)-2; i++ {
if data[i] == '\\' && (data[i+1] == '\n' || data[i+1] == '\r') {
// encountered the first escaped newline; now move the remaining bytes to the front
j := i + 2
if data[i+1] == '\r' && len(data) > i+2 && data[i+2] == '\n' {
j++
}
for ; j < len(data); j++ {
if data[j] == '\\' && len(data) > j+1 && (data[j+1] == '\n' || data[j+1] == '\r') {
if data[j+1] == '\r' && len(data) > j+2 && data[j+2] == '\n' {
j++
}
j++
} else {
data[i] = data[j]
i++
}
}
data = data[:i]
break
}
}
} else if tt == css.URLToken {
parse.ToLower(data[:3])
if len(data) > 10 {
uri := data[4 : len(data)-1]
delim := byte('"')
if uri[0] == '\'' || uri[0] == '"' {
delim = uri[0]
uri = uri[1 : len(uri)-1]
}
uri = minify.DataURI(c.m, uri)
if css.IsURLUnquoted(uri) {
data = append(append([]byte("url("), uri...), ')')
} else {
data = append(append(append([]byte("url("), delim), uri...), delim, ')')
}
}
}
return tt, data
}
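// keepZeroSuffix is an illustrative sketch, not part of the original source,
// of the zero-value rule in shortenToken above: "0%" keeps its percent sign
// only for color properties, and a zero dimension keeps its unit only when it
// is listed in requiredDimension, so "margin:0em" becomes "margin:0" while
// "transition:0s" must stay "0s".
func keepZeroSuffix(prop css.Hash, isPercentage bool, dim []byte) bool {
	if isPercentage {
		return prop == css.Color
	}
	return requiredDimension[string(dim)]
}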

234
vendor/github.com/tdewolff/minify/css/css_test.go generated vendored Normal file

@@ -0,0 +1,234 @@
package css // import "github.com/tdewolff/minify/css"
import (
"bytes"
"fmt"
"os"
"testing"
"github.com/tdewolff/minify"
"github.com/tdewolff/test"
)
func TestCSS(t *testing.T) {
cssTests := []struct {
css string
expected string
}{
{"/*comment*/", ""},
{"/*! bang comment */", "/*!bang comment*/"},
{"i{}/*! bang comment */", "i{}/*!bang comment*/"},
{"i { key: value; key2: value; }", "i{key:value;key2:value}"},
{".cla .ss > #id { x:y; }", ".cla .ss>#id{x:y}"},
{".cla[id ^= L] { x:y; }", ".cla[id^=L]{x:y}"},
{"area:focus { outline : 0;}", "area:focus{outline:0}"},
{"@import 'file';", "@import 'file'"},
{"@font-face { x:y; }", "@font-face{x:y}"},
{"input[type=\"radio\"]{x:y}", "input[type=radio]{x:y}"},
{"DIV{margin:1em}", "div{margin:1em}"},
{".CLASS{margin:1em}", ".CLASS{margin:1em}"},
{"@MEDIA all{}", "@media all{}"},
{"@media only screen and (max-width : 800px){}", "@media only screen and (max-width:800px){}"},
{"@media (-webkit-min-device-pixel-ratio:1.5),(min-resolution:1.5dppx){}", "@media(-webkit-min-device-pixel-ratio:1.5),(min-resolution:1.5dppx){}"},
{"[class^=icon-] i[class^=icon-],i[class*=\" icon-\"]{x:y}", "[class^=icon-] i[class^=icon-],i[class*=\" icon-\"]{x:y}"},
{"html{line-height:1;}html{line-height:1;}", "html{line-height:1}html{line-height:1}"},
{"a { b: 1", "a{b:1}"},
{":root { --custom-variable:0px; }", ":root{--custom-variable:0px}"},
// case sensitivity
{"@counter-style Ident{}", "@counter-style Ident{}"},
// coverage
{"a, b + c { x:y; }", "a,b+c{x:y}"},
// bad declaration
{".clearfix { *zoom: 1px; }", ".clearfix{*zoom:1px}"},
{".clearfix { *zoom: 1px }", ".clearfix{*zoom:1px}"},
{".clearfix { color:green; *zoom: 1px; color:red; }", ".clearfix{color:green;*zoom:1px;color:red}"},
// go-fuzz
{"input[type=\"\x00\"] { a: b\n}.a{}", "input[type=\"\x00\"]{a:b}.a{}"},
{"a{a:)'''", "a{a:)'''}"},
}
m := minify.New()
for _, tt := range cssTests {
t.Run(tt.css, func(t *testing.T) {
r := bytes.NewBufferString(tt.css)
w := &bytes.Buffer{}
err := Minify(m, w, r, nil)
test.Minify(t, tt.css, err, w.String(), tt.expected)
})
}
}
func TestCSSInline(t *testing.T) {
cssTests := []struct {
css string
expected string
}{
{"/*comment*/", ""},
{"/*! bang comment */", ""},
{";", ""},
{"empty:", "empty:"},
{"key: value;", "key:value"},
{"margin: 0 1; padding: 0 1;", "margin:0 1;padding:0 1"},
{"color: #FF0000;", "color:red"},
{"color: #000000;", "color:#000"},
{"color: black;", "color:#000"},
{"color: rgb(255,255,255);", "color:#fff"},
{"color: rgb(100%,100%,100%);", "color:#fff"},
{"color: rgba(255,0,0,1);", "color:red"},
{"color: rgba(255,0,0,2);", "color:red"},
{"color: rgba(255,0,0,0.5);", "color:rgba(255,0,0,.5)"},
{"color: rgba(255,0,0,-1);", "color:transparent"},
{"color: rgba(0%,15%,25%,0.2);", "color:rgba(0%,15%,25%,.2)"},
{"color: rgba(0,0,0,0.5);", "color:rgba(0,0,0,.5)"},
{"color: hsla(5,0%,10%,0.75);", "color:hsla(5,0%,10%,.75)"},
{"color: hsl(0,100%,50%);", "color:red"},
{"color: hsla(1,2%,3%,1);", "color:#080807"},
{"color: hsla(1,2%,3%,0);", "color:transparent"},
{"color: hsl(48,100%,50%);", "color:#fc0"},
{"font-weight: bold; font-weight: normal;", "font-weight:700;font-weight:400"},
{"font: bold \"Times new Roman\",\"Sans-Serif\";", "font:700 times new roman,\"sans-serif\""},
{"outline: none;", "outline:0"},
{"outline: none !important;", "outline:0!important"},
{"border-left: none;", "border-left:0"},
{"margin: 1 1 1 1;", "margin:1"},
{"margin: 1 2 1 2;", "margin:1 2"},
{"margin: 1 2 3 2;", "margin:1 2 3"},
{"margin: 1 2 3 4;", "margin:1 2 3 4"},
{"margin: 1 1 1 a;", "margin:1 1 1 a"},
{"margin: 1 1 1 1 !important;", "margin:1!important"},
{"padding:.2em .4em .2em", "padding:.2em .4em"},
{"margin: 0em;", "margin:0"},
{"font-family:'Arial', 'Times New Roman';", "font-family:arial,times new roman"},
{"background:url('http://domain.com/image.png');", "background:url(http://domain.com/image.png)"},
{"filter: progid : DXImageTransform.Microsoft.BasicImage(rotation=1);", "filter:progid:DXImageTransform.Microsoft.BasicImage(rotation=1)"},
{"filter: progid:DXImageTransform.Microsoft.Alpha(Opacity=0);", "filter:alpha(opacity=0)"},
{"content: \"a\\\nb\";", "content:\"ab\""},
{"content: \"a\\\r\nb\\\r\nc\";", "content:\"abc\""},
{"content: \"\";", "content:\"\""},
{"font:27px/13px arial,sans-serif", "font:27px/13px arial,sans-serif"},
{"text-decoration: none !important", "text-decoration:none!important"},
{"color:#fff", "color:#fff"},
{"border:2px rgb(255,255,255);", "border:2px #fff"},
{"margin:-1px", "margin:-1px"},
{"margin:+1px", "margin:1px"},
{"margin:0.5em", "margin:.5em"},
{"margin:-0.5em", "margin:-.5em"},
{"margin:05em", "margin:5em"},
{"margin:.50em", "margin:.5em"},
{"margin:5.0em", "margin:5em"},
{"margin:5000em", "margin:5e3em"},
{"color:#c0c0c0", "color:silver"},
{"-ms-filter: \"progid:DXImageTransform.Microsoft.Alpha(Opacity=80)\";", "-ms-filter:\"alpha(opacity=80)\""},
{"filter: progid:DXImageTransform.Microsoft.Alpha(Opacity = 80);", "filter:alpha(opacity=80)"},
{"MARGIN:1EM", "margin:1em"},
//{"color:CYAN", "color:cyan"}, // TODO
{"width:attr(Name em)", "width:attr(Name em)"},
{"content:CounterName", "content:CounterName"},
{"background:URL(x.PNG);", "background:url(x.PNG)"},
{"background:url(/*nocomment*/)", "background:url(/*nocomment*/)"},
{"background:url(data:,text)", "background:url(data:,text)"},
{"background:url('data:text/xml; version = 2.0,content')", "background:url(data:text/xml;version=2.0,content)"},
{"background:url('data:\\'\",text')", "background:url('data:\\'\",text')"},
{"margin:0 0 18px 0;", "margin:0 0 18px"},
{"background:none", "background:0 0"},
{"background:none 1 1", "background:none 1 1"},
{"z-index:1000", "z-index:1000"},
{"any:0deg 0s 0ms 0dpi 0dpcm 0dppx 0hz 0khz", "any:0 0s 0ms 0dpi 0dpcm 0dppx 0hz 0khz"},
{"--custom-variable:0px;", "--custom-variable:0px"},
{"--foo: if(x > 5) this.width = 10", "--foo: if(x > 5) this.width = 10"},
{"--foo: ;", "--foo: "},
// case sensitivity
{"animation:Ident", "animation:Ident"},
{"animation-name:Ident", "animation-name:Ident"},
// coverage
{"margin: 1 1;", "margin:1"},
{"margin: 1 2;", "margin:1 2"},
{"margin: 1 1 1;", "margin:1"},
{"margin: 1 2 1;", "margin:1 2"},
{"margin: 1 2 3;", "margin:1 2 3"},
{"margin: 0%;", "margin:0"},
{"color: rgb(255,64,64);", "color:#ff4040"},
{"color: rgb(256,-34,2342435);", "color:#f0f"},
{"color: rgb(120%,-45%,234234234%);", "color:#f0f"},
{"color: rgb(0, 1, ident);", "color:rgb(0,1,ident)"},
{"color: rgb(ident);", "color:rgb(ident)"},
{"margin: rgb(ident);", "margin:rgb(ident)"},
{"filter: progid:b().c.Alpha(rgba(x));", "filter:progid:b().c.Alpha(rgba(x))"},
// go-fuzz
{"FONT-FAMILY: ru\"", "font-family:ru\""},
}
m := minify.New()
params := map[string]string{"inline": "1"}
for _, tt := range cssTests {
t.Run(tt.css, func(t *testing.T) {
r := bytes.NewBufferString(tt.css)
w := &bytes.Buffer{}
err := Minify(m, w, r, params)
test.Minify(t, tt.css, err, w.String(), tt.expected)
})
}
}
func TestReaderErrors(t *testing.T) {
r := test.NewErrorReader(0)
w := &bytes.Buffer{}
m := minify.New()
err := Minify(m, w, r, nil)
test.T(t, err, test.ErrPlain, "return error at first read")
}
func TestWriterErrors(t *testing.T) {
errorTests := []struct {
css string
n []int
}{
{`@import 'file'`, []int{0, 2}},
{`@media all{}`, []int{0, 2, 3, 4}},
{`a[id^="L"]{margin:2in!important;color:red}`, []int{0, 4, 6, 7, 8, 9, 10, 11}},
{`a{color:rgb(255,0,0)}`, []int{4}},
{`a{color:rgb(255,255,255)}`, []int{4}},
{`a{color:hsl(0,100%,50%)}`, []int{4}},
{`a{color:hsl(360,100%,100%)}`, []int{4}},
{`a{color:f(arg)}`, []int{4}},
{`<!--`, []int{0}},
{`/*!comment*/`, []int{0, 1, 2}},
{`a{--var:val}`, []int{2, 3, 4}},
{`a{*color:0}`, []int{2, 3}},
{`a{color:0;baddecl 5}`, []int{5}},
}
m := minify.New()
for _, tt := range errorTests {
for _, n := range tt.n {
t.Run(fmt.Sprint(tt.css, " ", tt.n), func(t *testing.T) {
r := bytes.NewBufferString(tt.css)
w := test.NewErrorWriter(n)
err := Minify(m, w, r, nil)
test.T(t, err, test.ErrPlain)
})
}
}
}
////////////////////////////////////////////////////////////////
func ExampleMinify() {
m := minify.New()
m.AddFunc("text/css", Minify)
if err := m.Minify("text/css", os.Stdout, os.Stdin); err != nil {
panic(err)
}
}
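// ExampleMinify_decimals is an illustrative sketch, not part of the original
// test file. It assumes the options struct is the css Minifier with an
// exported Decimals field, as suggested by the c.o.Decimals read in css.go,
// which bounds the precision of shortened numbers.
func ExampleMinify_decimals() {
	m := minify.New()
	m.Add("text/css", &Minifier{Decimals: 2})
	if err := m.Minify("text/css", os.Stdout, os.Stdin); err != nil {
		panic(err)
	}
}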

153
vendor/github.com/tdewolff/minify/css/table.go generated vendored Normal file

@@ -0,0 +1,153 @@
package css
import "github.com/tdewolff/parse/css"
var requiredDimension = map[string]bool{
"s": true,
"ms": true,
"dpi": true,
"dpcm": true,
"dppx": true,
"hz": true,
"khz": true,
}
// Uses http://www.w3.org/TR/2010/PR-css3-color-20101028/ for colors
// ShortenColorHex maps a color hexcode to its shorter name
var ShortenColorHex = map[string][]byte{
"#000080": []byte("navy"),
"#008000": []byte("green"),
"#008080": []byte("teal"),
"#4b0082": []byte("indigo"),
"#800000": []byte("maroon"),
"#800080": []byte("purple"),
"#808000": []byte("olive"),
"#808080": []byte("gray"),
"#a0522d": []byte("sienna"),
"#a52a2a": []byte("brown"),
"#c0c0c0": []byte("silver"),
"#cd853f": []byte("peru"),
"#d2b48c": []byte("tan"),
"#da70d6": []byte("orchid"),
"#dda0dd": []byte("plum"),
"#ee82ee": []byte("violet"),
"#f0e68c": []byte("khaki"),
"#f0ffff": []byte("azure"),
"#f5deb3": []byte("wheat"),
"#f5f5dc": []byte("beige"),
"#fa8072": []byte("salmon"),
"#faf0e6": []byte("linen"),
"#ff6347": []byte("tomato"),
"#ff7f50": []byte("coral"),
"#ffa500": []byte("orange"),
"#ffc0cb": []byte("pink"),
"#ffd700": []byte("gold"),
"#ffe4c4": []byte("bisque"),
"#fffafa": []byte("snow"),
"#fffff0": []byte("ivory"),
"#ff0000": []byte("red"),
"#f00": []byte("red"),
}
// ShortenColorName maps a color name to its shorter hexcode
var ShortenColorName = map[css.Hash][]byte{
css.Black: []byte("#000"),
css.Darkblue: []byte("#00008b"),
css.Mediumblue: []byte("#0000cd"),
css.Darkgreen: []byte("#006400"),
css.Darkcyan: []byte("#008b8b"),
css.Deepskyblue: []byte("#00bfff"),
css.Darkturquoise: []byte("#00ced1"),
css.Mediumspringgreen: []byte("#00fa9a"),
css.Springgreen: []byte("#00ff7f"),
css.Midnightblue: []byte("#191970"),
css.Dodgerblue: []byte("#1e90ff"),
css.Lightseagreen: []byte("#20b2aa"),
css.Forestgreen: []byte("#228b22"),
css.Seagreen: []byte("#2e8b57"),
css.Darkslategray: []byte("#2f4f4f"),
css.Limegreen: []byte("#32cd32"),
css.Mediumseagreen: []byte("#3cb371"),
css.Turquoise: []byte("#40e0d0"),
css.Royalblue: []byte("#4169e1"),
css.Steelblue: []byte("#4682b4"),
css.Darkslateblue: []byte("#483d8b"),
css.Mediumturquoise: []byte("#48d1cc"),
css.Darkolivegreen: []byte("#556b2f"),
css.Cadetblue: []byte("#5f9ea0"),
css.Cornflowerblue: []byte("#6495ed"),
css.Mediumaquamarine: []byte("#66cdaa"),
css.Slateblue: []byte("#6a5acd"),
css.Olivedrab: []byte("#6b8e23"),
css.Slategray: []byte("#708090"),
css.Lightslateblue: []byte("#789"),
css.Mediumslateblue: []byte("#7b68ee"),
css.Lawngreen: []byte("#7cfc00"),
css.Chartreuse: []byte("#7fff00"),
css.Aquamarine: []byte("#7fffd4"),
css.Lightskyblue: []byte("#87cefa"),
css.Blueviolet: []byte("#8a2be2"),
css.Darkmagenta: []byte("#8b008b"),
css.Saddlebrown: []byte("#8b4513"),
css.Darkseagreen: []byte("#8fbc8f"),
css.Lightgreen: []byte("#90ee90"),
css.Mediumpurple: []byte("#9370db"),
css.Darkviolet: []byte("#9400d3"),
css.Palegreen: []byte("#98fb98"),
css.Darkorchid: []byte("#9932cc"),
css.Yellowgreen: []byte("#9acd32"),
css.Darkgray: []byte("#a9a9a9"),
css.Lightblue: []byte("#add8e6"),
css.Greenyellow: []byte("#adff2f"),
css.Paleturquoise: []byte("#afeeee"),
css.Lightsteelblue: []byte("#b0c4de"),
css.Powderblue: []byte("#b0e0e6"),
css.Firebrick: []byte("#b22222"),
css.Darkgoldenrod: []byte("#b8860b"),
css.Mediumorchid: []byte("#ba55d3"),
css.Rosybrown: []byte("#bc8f8f"),
css.Darkkhaki: []byte("#bdb76b"),
css.Mediumvioletred: []byte("#c71585"),
css.Indianred: []byte("#cd5c5c"),
css.Chocolate: []byte("#d2691e"),
css.Lightgray: []byte("#d3d3d3"),
css.Goldenrod: []byte("#daa520"),
css.Palevioletred: []byte("#db7093"),
css.Gainsboro: []byte("#dcdcdc"),
css.Burlywood: []byte("#deb887"),
css.Lightcyan: []byte("#e0ffff"),
css.Lavender: []byte("#e6e6fa"),
css.Darksalmon: []byte("#e9967a"),
css.Palegoldenrod: []byte("#eee8aa"),
css.Lightcoral: []byte("#f08080"),
css.Aliceblue: []byte("#f0f8ff"),
css.Honeydew: []byte("#f0fff0"),
css.Sandybrown: []byte("#f4a460"),
css.Whitesmoke: []byte("#f5f5f5"),
css.Mintcream: []byte("#f5fffa"),
css.Ghostwhite: []byte("#f8f8ff"),
css.Antiquewhite: []byte("#faebd7"),
css.Lightgoldenrodyellow: []byte("#fafad2"),
css.Fuchsia: []byte("#f0f"),
css.Magenta: []byte("#f0f"),
css.Deeppink: []byte("#ff1493"),
css.Orangered: []byte("#ff4500"),
css.Darkorange: []byte("#ff8c00"),
css.Lightsalmon: []byte("#ffa07a"),
css.Lightpink: []byte("#ffb6c1"),
css.Peachpuff: []byte("#ffdab9"),
css.Navajowhite: []byte("#ffdead"),
css.Moccasin: []byte("#ffe4b5"),
css.Mistyrose: []byte("#ffe4e1"),
css.Blanchedalmond: []byte("#ffebcd"),
css.Papayawhip: []byte("#ffefd5"),
css.Lavenderblush: []byte("#fff0f5"),
css.Seashell: []byte("#fff5ee"),
css.Cornsilk: []byte("#fff8dc"),
css.Lemonchiffon: []byte("#fffacd"),
css.Floralwhite: []byte("#fffaf0"),
css.Yellow: []byte("#ff0"),
css.Lightyellow: []byte("#ffffe0"),
css.White: []byte("#fff"),
}
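// shortenIdentSketch illustrates, without being part of the original file, how
// shortenToken in css.go consults these tables: a color name hashes to its
// shorter hex form ("black" -> "#000"), and unknown identifiers pass through.
func shortenIdentSketch(ident []byte) []byte {
	if hex, ok := ShortenColorName[css.ToHash(ident)]; ok {
		return hex
	}
	return ident
}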

131
vendor/github.com/tdewolff/minify/html/buffer.go generated vendored Normal file

@@ -0,0 +1,131 @@
package html // import "github.com/tdewolff/minify/html"
import (
"github.com/tdewolff/parse"
"github.com/tdewolff/parse/html"
)
// Token is a single token with the attribute value (if any) and a hash of its tag or attribute name.
type Token struct {
html.TokenType
Hash html.Hash
Data []byte
Text []byte
AttrVal []byte
Traits traits
}
// TokenBuffer is a buffer that allows for token look-ahead.
type TokenBuffer struct {
l *html.Lexer
buf []Token
pos int
attrBuffer []*Token
}
// NewTokenBuffer returns a new TokenBuffer.
func NewTokenBuffer(l *html.Lexer) *TokenBuffer {
return &TokenBuffer{
l: l,
buf: make([]Token, 0, 8),
}
}
func (z *TokenBuffer) read(t *Token) {
t.TokenType, t.Data = z.l.Next()
t.Text = z.l.Text()
if t.TokenType == html.AttributeToken {
t.AttrVal = z.l.AttrVal()
if len(t.AttrVal) > 1 && (t.AttrVal[0] == '"' || t.AttrVal[0] == '\'') {
t.AttrVal = parse.TrimWhitespace(t.AttrVal[1 : len(t.AttrVal)-1]) // quotes will be re-added in the attribute loop if necessary
}
t.Hash = html.ToHash(t.Text)
t.Traits = attrMap[t.Hash]
} else if t.TokenType == html.StartTagToken || t.TokenType == html.EndTagToken {
t.AttrVal = nil
t.Hash = html.ToHash(t.Text)
t.Traits = tagMap[t.Hash]
} else {
t.AttrVal = nil
t.Hash = 0
t.Traits = 0
}
}
// Peek returns the ith upcoming token, buffering (and possibly reallocating) as needed.
// Peeking past an error returns that same error token again.
func (z *TokenBuffer) Peek(pos int) *Token {
pos += z.pos
if pos >= len(z.buf) {
if len(z.buf) > 0 && z.buf[len(z.buf)-1].TokenType == html.ErrorToken {
return &z.buf[len(z.buf)-1]
}
c := cap(z.buf)
d := len(z.buf) - z.pos
p := pos - z.pos + 1 // required peek length
var buf []Token
if 2*p > c {
buf = make([]Token, 0, 2*c+p)
} else {
buf = z.buf
}
copy(buf[:d], z.buf[z.pos:])
buf = buf[:p]
pos -= z.pos
for i := d; i < p; i++ {
z.read(&buf[i])
if buf[i].TokenType == html.ErrorToken {
buf = buf[:i+1]
pos = i
break
}
}
z.pos, z.buf = 0, buf
}
return &z.buf[pos]
}
// Shift returns the first element and advances position.
func (z *TokenBuffer) Shift() *Token {
if z.pos >= len(z.buf) {
t := &z.buf[:1][0]
z.read(t)
return t
}
t := &z.buf[z.pos]
z.pos++
return t
}
// Attributes extracts the given attribute hashes from a tag.
// It returns pointers to the requested tokens in the same order, or nil for attributes that are absent.
func (z *TokenBuffer) Attributes(hashes ...html.Hash) []*Token {
n := 0
for {
if t := z.Peek(n); t.TokenType != html.AttributeToken {
break
}
n++
}
if len(hashes) > cap(z.attrBuffer) {
z.attrBuffer = make([]*Token, len(hashes))
} else {
z.attrBuffer = z.attrBuffer[:len(hashes)]
for i := range z.attrBuffer {
z.attrBuffer[i] = nil
}
}
for i := z.pos; i < z.pos+n; i++ {
attr := &z.buf[i]
for j, hash := range hashes {
if hash == attr.Hash {
z.attrBuffer[j] = attr
}
}
}
return z.attrBuffer
}
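// metaContent is an illustrative sketch, not part of the original source, of
// how the HTML minifier uses Attributes: request the hashes of interest and
// receive pointers in the same order, nil where the attribute is absent.
func metaContent(tb *TokenBuffer) []byte {
	attrs := tb.Attributes(html.Content, html.Http_Equiv)
	if content := attrs[0]; content != nil {
		return content.AttrVal // value with surrounding quotes already trimmed
	}
	return nil
}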

37
vendor/github.com/tdewolff/minify/html/buffer_test.go generated vendored Normal file

@@ -0,0 +1,37 @@
package html // import "github.com/tdewolff/minify/html"
import (
"bytes"
"testing"
"github.com/tdewolff/parse/html"
"github.com/tdewolff/test"
)
func TestBuffer(t *testing.T) {
// 0 12 3 45 6 7 8 9 0
s := `<p><a href="//url">text</a>text<!--comment--></p>`
z := NewTokenBuffer(html.NewLexer(bytes.NewBufferString(s)))
tok := z.Shift()
test.That(t, tok.Hash == html.P, "first token is <p>")
test.That(t, z.pos == 0, "shift first token and restore position")
test.That(t, len(z.buf) == 0, "shift first token and restore length")
test.That(t, z.Peek(2).Hash == html.Href, "third token is href")
test.That(t, z.pos == 0, "don't change position after peeking")
test.That(t, len(z.buf) == 3, "three tokens after peeking")
test.That(t, z.Peek(8).Hash == html.P, "ninth token is <p>")
test.That(t, z.pos == 0, "don't change position after peeking")
test.That(t, len(z.buf) == 9, "nine tokens after peeking")
test.That(t, z.Peek(9).TokenType == html.ErrorToken, "tenth token is an error")
test.That(t, z.Peek(9) == z.Peek(10), "tenth and eleventh tokens are EOF")
test.That(t, len(z.buf) == 10, "ten tokens after peeking")
_ = z.Shift()
tok = z.Shift()
test.That(t, tok.Hash == html.A, "third token is <a>")
test.That(t, z.pos == 2, "position advances after shifting")
}

463
vendor/github.com/tdewolff/minify/html/html.go generated vendored Normal file

@@ -0,0 +1,463 @@
// Package html minifies HTML5 following the specifications at http://www.w3.org/TR/html5/syntax.html.
package html // import "github.com/tdewolff/minify/html"
import (
"bytes"
"io"
"github.com/tdewolff/minify"
"github.com/tdewolff/parse"
"github.com/tdewolff/parse/buffer"
"github.com/tdewolff/parse/html"
)
var (
gtBytes = []byte(">")
isBytes = []byte("=")
spaceBytes = []byte(" ")
doctypeBytes = []byte("<!doctype html>")
jsMimeBytes = []byte("text/javascript")
cssMimeBytes = []byte("text/css")
htmlMimeBytes = []byte("text/html")
svgMimeBytes = []byte("image/svg+xml")
mathMimeBytes = []byte("application/mathml+xml")
dataSchemeBytes = []byte("data:")
jsSchemeBytes = []byte("javascript:")
httpBytes = []byte("http")
)
////////////////////////////////////////////////////////////////
// DefaultMinifier is the default minifier.
var DefaultMinifier = &Minifier{}
// Minifier is an HTML minifier.
type Minifier struct {
KeepConditionalComments bool
KeepDefaultAttrVals bool
KeepDocumentTags bool
KeepEndTags bool
KeepWhitespace bool
}
// Minify minifies HTML data; it reads from r and writes to w.
func Minify(m *minify.M, w io.Writer, r io.Reader, params map[string]string) error {
return DefaultMinifier.Minify(m, w, r, params)
}
// Minify minifies HTML data; it reads from r and writes to w.
func (o *Minifier) Minify(m *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
var rawTagHash html.Hash
var rawTagMediatype []byte
omitSpace := true // if true the next leading space is omitted
inPre := false
defaultScriptType := jsMimeBytes
defaultScriptParams := map[string]string(nil)
defaultStyleType := cssMimeBytes
defaultStyleParams := map[string]string(nil)
defaultInlineStyleParams := map[string]string{"inline": "1"}
attrMinifyBuffer := buffer.NewWriter(make([]byte, 0, 64))
attrByteBuffer := make([]byte, 0, 64)
l := html.NewLexer(r)
defer l.Restore()
tb := NewTokenBuffer(l)
for {
t := *tb.Shift()
SWITCH:
switch t.TokenType {
case html.ErrorToken:
if l.Err() == io.EOF {
return nil
}
return l.Err()
case html.DoctypeToken:
if _, err := w.Write(doctypeBytes); err != nil {
return err
}
case html.CommentToken:
if o.KeepConditionalComments && len(t.Text) > 6 && (bytes.HasPrefix(t.Text, []byte("[if ")) || bytes.Equal(t.Text, []byte("[endif]"))) {
// [if ...] is always 7 or more characters, [endif] is only encountered for downlevel-revealed
// see https://msdn.microsoft.com/en-us/library/ms537512(v=vs.85).aspx#syntax
if bytes.HasPrefix(t.Data, []byte("<!--[if ")) { // downlevel-hidden
begin := bytes.IndexByte(t.Data, '>') + 1
end := len(t.Data) - len("<![endif]-->")
if _, err := w.Write(t.Data[:begin]); err != nil {
return err
}
if err := o.Minify(m, w, buffer.NewReader(t.Data[begin:end]), nil); err != nil {
return err
}
if _, err := w.Write(t.Data[end:]); err != nil {
return err
}
} else if _, err := w.Write(t.Data); err != nil { // downlevel-revealed
return err
}
}
case html.SvgToken:
if err := m.MinifyMimetype(svgMimeBytes, w, buffer.NewReader(t.Data), nil); err != nil {
if err != minify.ErrNotExist {
return err
} else if _, err := w.Write(t.Data); err != nil {
return err
}
}
case html.MathToken:
if err := m.MinifyMimetype(mathMimeBytes, w, buffer.NewReader(t.Data), nil); err != nil {
if err != minify.ErrNotExist {
return err
} else if _, err := w.Write(t.Data); err != nil {
return err
}
}
case html.TextToken:
// CSS and JS minifiers for inline code
if rawTagHash != 0 {
if rawTagHash == html.Style || rawTagHash == html.Script || rawTagHash == html.Iframe {
var mimetype []byte
var params map[string]string
if rawTagHash == html.Iframe {
mimetype = htmlMimeBytes
} else if len(rawTagMediatype) > 0 {
mimetype, params = parse.Mediatype(rawTagMediatype)
} else if rawTagHash == html.Script {
mimetype = defaultScriptType
params = defaultScriptParams
} else if rawTagHash == html.Style {
mimetype = defaultStyleType
params = defaultStyleParams
}
if err := m.MinifyMimetype(mimetype, w, buffer.NewReader(t.Data), params); err != nil {
if err != minify.ErrNotExist {
return err
} else if _, err := w.Write(t.Data); err != nil {
return err
}
}
} else if _, err := w.Write(t.Data); err != nil {
return err
}
} else if inPre {
if _, err := w.Write(t.Data); err != nil {
return err
}
} else {
t.Data = parse.ReplaceMultipleWhitespace(t.Data)
// whitespace removal; trim left
if omitSpace && (t.Data[0] == ' ' || t.Data[0] == '\n') {
t.Data = t.Data[1:]
}
// whitespace removal; trim right
omitSpace = false
if len(t.Data) == 0 {
omitSpace = true
} else if t.Data[len(t.Data)-1] == ' ' || t.Data[len(t.Data)-1] == '\n' {
omitSpace = true
i := 0
for {
next := tb.Peek(i)
// trim if EOF, text token with leading whitespace or block token
if next.TokenType == html.ErrorToken {
t.Data = t.Data[:len(t.Data)-1]
omitSpace = false
break
} else if next.TokenType == html.TextToken {
// this happens only when a comment, doctype, or phrasing end tag (only when !o.KeepWhitespace) was in between
// remove if the text token starts with a whitespace
if len(next.Data) > 0 && parse.IsWhitespace(next.Data[0]) {
t.Data = t.Data[:len(t.Data)-1]
omitSpace = false
}
break
} else if next.TokenType == html.StartTagToken || next.TokenType == html.EndTagToken {
if o.KeepWhitespace {
break
}
// remove when followed up by a block tag
if next.Traits&nonPhrasingTag != 0 {
t.Data = t.Data[:len(t.Data)-1]
omitSpace = false
break
} else if next.TokenType == html.StartTagToken {
break
}
}
i++
}
}
if _, err := w.Write(t.Data); err != nil {
return err
}
}
case html.StartTagToken, html.EndTagToken:
rawTagHash = 0
hasAttributes := false
if t.TokenType == html.StartTagToken {
if next := tb.Peek(0); next.TokenType == html.AttributeToken {
hasAttributes = true
}
if t.Traits&rawTag != 0 {
// ignore empty script and style tags
if !hasAttributes && (t.Hash == html.Script || t.Hash == html.Style) {
if next := tb.Peek(1); next.TokenType == html.EndTagToken {
tb.Shift()
tb.Shift()
break
}
}
rawTagHash = t.Hash
rawTagMediatype = nil
}
} else if t.Hash == html.Template {
omitSpace = true // EndTagToken
}
if t.Hash == html.Pre {
inPre = t.TokenType == html.StartTagToken
}
// remove superfluous tags, except for html, head and body tags when KeepDocumentTags is set
if !hasAttributes && (!o.KeepDocumentTags && (t.Hash == html.Html || t.Hash == html.Head || t.Hash == html.Body) || t.Hash == html.Colgroup) {
break
} else if t.TokenType == html.EndTagToken {
if !o.KeepEndTags {
if t.Hash == html.Thead || t.Hash == html.Tbody || t.Hash == html.Tfoot || t.Hash == html.Tr || t.Hash == html.Th || t.Hash == html.Td ||
t.Hash == html.Optgroup || t.Hash == html.Option || t.Hash == html.Dd || t.Hash == html.Dt ||
t.Hash == html.Li || t.Hash == html.Rb || t.Hash == html.Rt || t.Hash == html.Rtc || t.Hash == html.Rp {
break
} else if t.Hash == html.P {
i := 0
for {
next := tb.Peek(i)
i++
// continue if text token is empty or whitespace
if next.TokenType == html.TextToken && parse.IsAllWhitespace(next.Data) {
continue
}
if next.TokenType == html.ErrorToken || next.TokenType == html.EndTagToken && next.Traits&keepPTag == 0 || next.TokenType == html.StartTagToken && next.Traits&omitPTag != 0 {
break SWITCH // omit p end tag
}
break
}
}
}
if o.KeepWhitespace || t.Traits&objectTag != 0 {
omitSpace = false
} else if t.Traits&nonPhrasingTag != 0 {
omitSpace = true // omit spaces after block elements
}
if len(t.Data) > 3+len(t.Text) {
t.Data[2+len(t.Text)] = '>'
t.Data = t.Data[:3+len(t.Text)]
}
if _, err := w.Write(t.Data); err != nil {
return err
}
break
}
if o.KeepWhitespace || t.Traits&objectTag != 0 {
omitSpace = false
} else if t.Traits&nonPhrasingTag != 0 {
omitSpace = true // omit spaces after block elements
}
if _, err := w.Write(t.Data); err != nil {
return err
}
if hasAttributes {
if t.Hash == html.Meta {
attrs := tb.Attributes(html.Content, html.Http_Equiv, html.Charset, html.Name)
if content := attrs[0]; content != nil {
if httpEquiv := attrs[1]; httpEquiv != nil {
content.AttrVal = minify.ContentType(content.AttrVal)
if charset := attrs[2]; charset == nil && parse.EqualFold(httpEquiv.AttrVal, []byte("content-type")) && bytes.Equal(content.AttrVal, []byte("text/html;charset=utf-8")) {
httpEquiv.Text = nil
content.Text = []byte("charset")
content.Hash = html.Charset
content.AttrVal = []byte("utf-8")
} else if parse.EqualFold(httpEquiv.AttrVal, []byte("content-style-type")) {
defaultStyleType, defaultStyleParams = parse.Mediatype(content.AttrVal)
if defaultStyleParams != nil {
defaultInlineStyleParams = defaultStyleParams
defaultInlineStyleParams["inline"] = "1"
} else {
defaultInlineStyleParams = map[string]string{"inline": "1"}
}
} else if parse.EqualFold(httpEquiv.AttrVal, []byte("content-script-type")) {
defaultScriptType, defaultScriptParams = parse.Mediatype(content.AttrVal)
}
}
if name := attrs[3]; name != nil {
if parse.EqualFold(name.AttrVal, []byte("keywords")) {
content.AttrVal = bytes.Replace(content.AttrVal, []byte(", "), []byte(","), -1)
} else if parse.EqualFold(name.AttrVal, []byte("viewport")) {
content.AttrVal = bytes.Replace(content.AttrVal, []byte(" "), []byte(""), -1)
for i := 0; i < len(content.AttrVal); i++ {
if content.AttrVal[i] == '=' && i+2 < len(content.AttrVal) {
i++
if n := parse.Number(content.AttrVal[i:]); n > 0 {
minNum := minify.Number(content.AttrVal[i:i+n], -1)
if len(minNum) < n {
copy(content.AttrVal[i:i+len(minNum)], minNum)
copy(content.AttrVal[i+len(minNum):], content.AttrVal[i+n:])
content.AttrVal = content.AttrVal[:len(content.AttrVal)+len(minNum)-n]
}
i += len(minNum)
}
i-- // mitigate for-loop increase
}
}
}
}
}
} else if t.Hash == html.Script {
attrs := tb.Attributes(html.Src, html.Charset)
if attrs[0] != nil && attrs[1] != nil {
attrs[1].Text = nil
}
}
// write attributes
htmlEqualIdName := false
for {
attr := *tb.Shift()
if attr.TokenType != html.AttributeToken {
break
} else if attr.Text == nil {
continue // removed attribute
}
if t.Hash == html.A && (attr.Hash == html.Id || attr.Hash == html.Name) {
if attr.Hash == html.Id {
if name := tb.Attributes(html.Name)[0]; name != nil && bytes.Equal(attr.AttrVal, name.AttrVal) {
htmlEqualIdName = true
}
} else if htmlEqualIdName {
continue
} else if id := tb.Attributes(html.Id)[0]; id != nil && bytes.Equal(id.AttrVal, attr.AttrVal) {
continue
}
}
val := attr.AttrVal
if len(val) == 0 && (attr.Hash == html.Class ||
attr.Hash == html.Dir ||
attr.Hash == html.Id ||
attr.Hash == html.Lang ||
attr.Hash == html.Name ||
attr.Hash == html.Title ||
attr.Hash == html.Action && t.Hash == html.Form ||
attr.Hash == html.Value && t.Hash == html.Input) {
continue // omit empty attribute values
}
if attr.Traits&caselessAttr != 0 {
val = parse.ToLower(val)
if attr.Hash == html.Enctype || attr.Hash == html.Codetype || attr.Hash == html.Accept || attr.Hash == html.Type && (t.Hash == html.A || t.Hash == html.Link || t.Hash == html.Object || t.Hash == html.Param || t.Hash == html.Script || t.Hash == html.Style || t.Hash == html.Source) {
val = minify.ContentType(val)
}
}
if rawTagHash != 0 && attr.Hash == html.Type {
rawTagMediatype = parse.Copy(val)
}
// default attribute values can be omitted
if !o.KeepDefaultAttrVals && (attr.Hash == html.Type && (t.Hash == html.Script && bytes.Equal(val, []byte("text/javascript")) ||
t.Hash == html.Style && bytes.Equal(val, []byte("text/css")) ||
t.Hash == html.Link && bytes.Equal(val, []byte("text/css")) ||
t.Hash == html.Input && bytes.Equal(val, []byte("text")) ||
t.Hash == html.Button && bytes.Equal(val, []byte("submit"))) ||
attr.Hash == html.Language && t.Hash == html.Script ||
attr.Hash == html.Method && bytes.Equal(val, []byte("get")) ||
attr.Hash == html.Enctype && bytes.Equal(val, []byte("application/x-www-form-urlencoded")) ||
attr.Hash == html.Colspan && bytes.Equal(val, []byte("1")) ||
attr.Hash == html.Rowspan && bytes.Equal(val, []byte("1")) ||
attr.Hash == html.Shape && bytes.Equal(val, []byte("rect")) ||
attr.Hash == html.Span && bytes.Equal(val, []byte("1")) ||
attr.Hash == html.Clear && bytes.Equal(val, []byte("none")) ||
attr.Hash == html.Frameborder && bytes.Equal(val, []byte("1")) ||
attr.Hash == html.Scrolling && bytes.Equal(val, []byte("auto")) ||
attr.Hash == html.Valuetype && bytes.Equal(val, []byte("data")) ||
attr.Hash == html.Media && t.Hash == html.Style && bytes.Equal(val, []byte("all"))) {
continue
}
// CSS and JS minifiers for attribute inline code
if attr.Hash == html.Style {
attrMinifyBuffer.Reset()
if err := m.MinifyMimetype(defaultStyleType, attrMinifyBuffer, buffer.NewReader(val), defaultInlineStyleParams); err == nil {
val = attrMinifyBuffer.Bytes()
} else if err != minify.ErrNotExist {
return err
}
if len(val) == 0 {
continue
}
} else if len(attr.Text) > 2 && attr.Text[0] == 'o' && attr.Text[1] == 'n' {
if len(val) >= 11 && parse.EqualFold(val[:11], jsSchemeBytes) {
val = val[11:]
}
attrMinifyBuffer.Reset()
if err := m.MinifyMimetype(defaultScriptType, attrMinifyBuffer, buffer.NewReader(val), defaultScriptParams); err == nil {
val = attrMinifyBuffer.Bytes()
} else if err != minify.ErrNotExist {
return err
}
if len(val) == 0 {
continue
}
} else if len(val) > 5 && attr.Traits&urlAttr != 0 { // anchors are already handled
if parse.EqualFold(val[:4], httpBytes) {
if val[4] == ':' {
if m.URL != nil && m.URL.Scheme == "http" {
val = val[5:]
} else {
parse.ToLower(val[:4])
}
} else if (val[4] == 's' || val[4] == 'S') && val[5] == ':' {
if m.URL != nil && m.URL.Scheme == "https" {
val = val[6:]
} else {
parse.ToLower(val[:5])
}
}
} else if parse.EqualFold(val[:5], dataSchemeBytes) {
val = minify.DataURI(m, val)
}
}
if _, err := w.Write(spaceBytes); err != nil {
return err
}
if _, err := w.Write(attr.Text); err != nil {
return err
}
if len(val) > 0 && attr.Traits&booleanAttr == 0 {
if _, err := w.Write(isBytes); err != nil {
return err
}
// no quotes if possible, else prefer single or double depending on which occurs more often in value
val = html.EscapeAttrVal(&attrByteBuffer, attr.AttrVal, val)
if _, err := w.Write(val); err != nil {
return err
}
}
}
}
if _, err := w.Write(gtBytes); err != nil {
return err
}
}
}
}
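// shortenURLSketch is an illustrative reduction, not part of the original
// source, of the URL-attribute handling above: when the document URL scheme
// set on the minifier matches, absolute URLs become scheme-relative, so with
// scheme "http" the value "http://example.com/" is written as "//example.com/".
func shortenURLSketch(scheme string, val []byte) []byte {
	if len(val) > 5 && parse.EqualFold(val[:4], httpBytes) {
		if val[4] == ':' && scheme == "http" {
			return val[5:]
		} else if (val[4] == 's' || val[4] == 'S') && val[5] == ':' && scheme == "https" {
			return val[6:]
		}
	}
	return val
}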

408
vendor/github.com/tdewolff/minify/html/html_test.go generated vendored Normal file

@@ -0,0 +1,408 @@
package html // import "github.com/tdewolff/minify/html"
import (
"bytes"
"fmt"
"io"
"io/ioutil"
"net/url"
"os"
"regexp"
"testing"
"github.com/tdewolff/minify"
"github.com/tdewolff/minify/css"
"github.com/tdewolff/minify/js"
"github.com/tdewolff/minify/json"
"github.com/tdewolff/minify/svg"
"github.com/tdewolff/minify/xml"
"github.com/tdewolff/test"
)
func TestHTML(t *testing.T) {
htmlTests := []struct {
html string
expected string
}{
{`html`, `html`},
{`<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML+RDFa 1.0//EN" "http://www.w3.org/MarkUp/DTD/xhtml-rdfa-1.dtd">`, `<!doctype html>`},
{`<!-- comment -->`, ``},
{`<style><!--\ncss\n--></style>`, `<style><!--\ncss\n--></style>`},
{`<style>&</style>`, `<style>&</style>`},
{`<html><head></head><body>x</body></html>`, `x`},
{`<meta http-equiv="content-type" content="text/html; charset=utf-8">`, `<meta charset=utf-8>`},
{`<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />`, `<meta charset=utf-8>`},
{`<meta name="keywords" content="a, b">`, `<meta name=keywords content=a,b>`},
{`<meta name="viewport" content="width = 996" />`, `<meta name=viewport content="width=996">`},
{`<span attr="test"></span>`, `<span attr=test></span>`},
{`<span attr='test&apos;test'></span>`, `<span attr="test'test"></span>`},
{`<span attr="test&quot;test"></span>`, `<span attr='test"test'></span>`},
{`<span attr='test""&apos;&amp;test'></span>`, `<span attr='test""&#39;&amp;test'></span>`},
{`<span attr="test/test"></span>`, `<span attr=test/test></span>`},
{`<span>&amp;</span>`, `<span>&amp;</span>`},
{`<span clear=none method=GET></span>`, `<span></span>`},
{`<span onload="javascript:x;"></span>`, `<span onload=x;></span>`},
{`<span selected="selected"></span>`, `<span selected></span>`},
{`<noscript><html><img id="x"></noscript>`, `<noscript><img id=x></noscript>`},
{`<body id="main"></body>`, `<body id=main>`},
{`<link href="data:text/plain, data">`, `<link href=data:,+data>`},
{`<svg width="100" height="100"><circle cx="50" cy="50" r="40" stroke="green" stroke-width="4" fill="yellow" /></svg>`, `<svg width="100" height="100"><circle cx="50" cy="50" r="40" stroke="green" stroke-width="4" fill="yellow" /></svg>`},
{`</span >`, `</span>`},
{`<meta name=viewport content="width=0.1, initial-scale=1.0 , maximum-scale=1000">`, `<meta name=viewport content="width=.1,initial-scale=1,maximum-scale=1e3">`},
{`<br/>`, `<br>`},
// increase coverage
{`<script style="css">js</script>`, `<script style=css>js</script>`},
{`<script type="application/javascript">js</script>`, `<script type=application/javascript>js</script>`},
{`<meta http-equiv="content-type" content="text/plain, text/html">`, `<meta http-equiv=content-type content=text/plain,text/html>`},
{`<meta http-equiv="content-style-type" content="text/less">`, `<meta http-equiv=content-style-type content=text/less>`},
{`<meta http-equiv="content-style-type" content="text/less; charset=utf-8">`, `<meta http-equiv=content-style-type content="text/less;charset=utf-8">`},
{`<meta http-equiv="content-script-type" content="application/js">`, `<meta http-equiv=content-script-type content=application/js>`},
{`<span attr=""></span>`, `<span attr></span>`},
{`<code>x</code>`, `<code>x</code>`},
{`<p></p><p></p>`, `<p><p>`},
{`<ul><li></li> <li></li></ul>`, `<ul><li><li></ul>`},
{`<p></p><a></a>`, `<p></p><a></a>`},
{`<p></p>x<a></a>`, `<p></p>x<a></a>`},
{`<span style=>`, `<span>`},
{`<button onclick=>`, `<button>`},
// whitespace
{`cats and dogs `, `cats and dogs`},
{` <div> <i> test </i> <b> test </b> </div> `, `<div><i>test</i> <b>test</b></div>`},
{`<strong>x </strong>y`, `<strong>x </strong>y`},
{`<strong>x </strong> y`, `<strong>x</strong> y`},
{"<strong>x </strong>\ny", "<strong>x</strong>\ny"},
{`<p>x </p>y`, `<p>x</p>y`},
{`x <p>y</p>`, `x<p>y`},
{` <!doctype html> <!--comment--> <html> <body><p></p></body></html> `, `<!doctype html><p>`}, // spaces before html and at the start of html are dropped
{`<p>x<br> y`, `<p>x<br>y`},
{`<p>x </b> <b> y`, `<p>x</b> <b>y`},
{`a <code></code> b`, `a <code></code>b`},
{`a <code>code</code> b`, `a <code>code</code> b`},
{`a <code> code </code> b`, `a <code>code</code> b`},
{`a <script>script</script> b`, `a <script>script</script>b`},
{"text\n<!--comment-->\ntext", "text\ntext"},
{"abc\n</body>\ndef", "abc\ndef"},
{"<x>\n<!--y-->\n</x>", "<x></x>"},
{"a <template> b </template> c", "a <template>b</template>c"},
// from HTML Minifier
{`<DIV TITLE="blah">boo</DIV>`, `<div title=blah>boo</div>`},
{"<p title\n\n\t =\n \"bar\">foo</p>", `<p title=bar>foo`},
{`<p class=" foo ">foo bar baz</p>`, `<p class=foo>foo bar baz`},
{`<input maxlength=" 5 ">`, `<input maxlength=5>`},
{`<input type="text">`, `<input>`},
{`<form method="get">`, `<form>`},
{`<script language="Javascript">alert(1)</script>`, `<script>alert(1)</script>`},
{`<script></script>`, ``},
{`<p onclick=" JavaScript: x">x</p>`, `<p onclick=" x">x`},
{`<span Selected="selected"></span>`, `<span selected></span>`},
{`<table><thead><tr><th>foo</th><th>bar</th></tr></thead><tfoot><tr><th>baz</th><th>qux</th></tr></tfoot><tbody><tr><td>boo</td><td>moo</td></tr></tbody></table>`,
`<table><thead><tr><th>foo<th>bar<tfoot><tr><th>baz<th>qux<tbody><tr><td>boo<td>moo</table>`},
{`<select><option>foo</option><option>bar</option></select>`, `<select><option>foo<option>bar</select>`},
{`<meta name="keywords" content="A, B">`, `<meta name=keywords content=A,B>`},
{`<iframe><html> <p> x </p> </html></iframe>`, `<iframe><p>x</iframe>`},
{`<math> &int;_a_^b^{f(x)<over>1+x} dx </math>`, `<math> &int;_a_^b^{f(x)<over>1+x} dx </math>`},
{`<script language="x" charset="x" src="y"></script>`, `<script src=y></script>`},
{`<style media="all">x</style>`, `<style>x</style>`},
{`<a id="abc" name="abc">y</a>`, `<a id=abc>y</a>`},
{`<a id="" value="">y</a>`, `<a value>y</a>`},
// from Kangax html-minfier
{`<span style="font-family:&quot;Helvetica Neue&quot;,&quot;Helvetica&quot;,Helvetica,Arial,sans-serif">text</span>`, `<span style='font-family:"Helvetica Neue","Helvetica",Helvetica,Arial,sans-serif'>text</span>`},
// go-fuzz
{`<meta e t n content=ful><a b`, `<meta e t n content=ful><a b>`},
{`<img alt=a'b="">`, `<img alt='a&#39;b=""'>`},
{`</b`, `</b`},
// bugs
{`<p>text</p><br>text`, `<p>text</p><br>text`}, // #122
{`text <img> text`, `text <img> text`}, // #89
{`text <progress></progress> text`, `text <progress></progress> text`}, // #89
{`<pre> <x> a b </x> </pre>`, `<pre> <x> a b </x> </pre>`}, // #82
{`<svg id="1"></svg>`, `<svg id="1"></svg>`}, // #67
}
m := minify.New()
m.AddFunc("text/html", Minify)
m.AddFunc("text/css", func(_ *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
_, err := io.Copy(w, r)
return err
})
m.AddFunc("text/javascript", func(_ *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
_, err := io.Copy(w, r)
return err
})
for _, tt := range htmlTests {
t.Run(tt.html, func(t *testing.T) {
r := bytes.NewBufferString(tt.html)
w := &bytes.Buffer{}
err := Minify(m, w, r, nil)
test.Minify(t, tt.html, err, w.String(), tt.expected)
})
}
}
func TestHTMLKeepEndTags(t *testing.T) {
htmlTests := []struct {
html string
expected string
}{
{`<p></p><p></p>`, `<p></p><p></p>`},
{`<ul><li></li><li></li></ul>`, `<ul><li></li><li></li></ul>`},
}
m := minify.New()
htmlMinifier := &Minifier{KeepEndTags: true}
for _, tt := range htmlTests {
t.Run(tt.html, func(t *testing.T) {
r := bytes.NewBufferString(tt.html)
w := &bytes.Buffer{}
err := htmlMinifier.Minify(m, w, r, nil)
test.Minify(t, tt.html, err, w.String(), tt.expected)
})
}
}
func TestHTMLKeepConditionalComments(t *testing.T) {
htmlTests := []struct {
html string
expected string
}{
{`<!--[if IE 6]> <b> </b> <![endif]-->`, `<!--[if IE 6]><b></b><![endif]-->`},
{`<![if IE 6]> <b> </b> <![endif]>`, `<![if IE 6]><b></b><![endif]>`},
}
m := minify.New()
htmlMinifier := &Minifier{KeepConditionalComments: true}
for _, tt := range htmlTests {
t.Run(tt.html, func(t *testing.T) {
r := bytes.NewBufferString(tt.html)
w := &bytes.Buffer{}
err := htmlMinifier.Minify(m, w, r, nil)
test.Minify(t, tt.html, err, w.String(), tt.expected)
})
}
}
func TestHTMLKeepWhitespace(t *testing.T) {
htmlTests := []struct {
html string
expected string
}{
{`cats and dogs `, `cats and dogs`},
{` <div> <i> test </i> <b> test </b> </div> `, `<div> <i> test </i> <b> test </b> </div>`},
{`<strong>x </strong>y`, `<strong>x </strong>y`},
{`<strong>x </strong> y`, `<strong>x </strong> y`},
{"<strong>x </strong>\ny", "<strong>x </strong>\ny"},
{`<p>x </p>y`, `<p>x </p>y`},
{`x <p>y</p>`, `x <p>y`},
{` <!doctype html> <!--comment--> <html> <body><p></p></body></html> `, `<!doctype html><p>`}, // spaces before html and at the start of html are dropped
{`<p>x<br> y`, `<p>x<br> y`},
{`<p>x </b> <b> y`, `<p>x </b> <b> y`},
{`a <code>code</code> b`, `a <code>code</code> b`},
{`a <code></code> b`, `a <code></code> b`},
{`a <script>script</script> b`, `a <script>script</script> b`},
{"text\n<!--comment-->\ntext", "text\ntext"},
{"text\n<!--comment-->text<!--comment--> text", "text\ntext text"},
{"abc\n</body>\ndef", "abc\ndef"},
{"<x>\n<!--y-->\n</x>", "<x>\n</x>"},
{"<style>lala{color:red}</style>", "<style>lala{color:red}</style>"},
}
m := minify.New()
htmlMinifier := &Minifier{KeepWhitespace: true}
for _, tt := range htmlTests {
t.Run(tt.html, func(t *testing.T) {
r := bytes.NewBufferString(tt.html)
w := &bytes.Buffer{}
err := htmlMinifier.Minify(m, w, r, nil)
test.Minify(t, tt.html, err, w.String(), tt.expected)
})
}
}
func TestHTMLURL(t *testing.T) {
htmlTests := []struct {
url string
html string
expected string
}{
{`http://example.com/`, `<a href=http://example.com/>link</a>`, `<a href=//example.com/>link</a>`},
{`https://example.com/`, `<a href=http://example.com/>link</a>`, `<a href=http://example.com/>link</a>`},
{`http://example.com/`, `<a href=https://example.com/>link</a>`, `<a href=https://example.com/>link</a>`},
{`https://example.com/`, `<a href=https://example.com/>link</a>`, `<a href=//example.com/>link</a>`},
{`http://example.com/`, `<a href=" http://example.com ">x</a>`, `<a href=//example.com>x</a>`},
{`http://example.com/`, `<link rel="stylesheet" type="text/css" href="http://example.com">`, `<link rel=stylesheet href=//example.com>`},
{`http://example.com/`, `<!doctype html> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en"> <head profile="http://dublincore.org/documents/dcq-html/"> <!-- Barlesque 2.75.0 --> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />`,
`<!doctype html><html xmlns=//www.w3.org/1999/xhtml xml:lang=en><head profile=//dublincore.org/documents/dcq-html/><meta charset=utf-8>`},
{`http://example.com/`, `<html xmlns="http://www.w3.org/1999/xhtml"></html>`, `<html xmlns=//www.w3.org/1999/xhtml>`},
{`https://example.com/`, `<html xmlns="http://www.w3.org/1999/xhtml"></html>`, `<html xmlns=http://www.w3.org/1999/xhtml>`},
{`http://example.com/`, `<html xmlns="https://www.w3.org/1999/xhtml"></html>`, `<html xmlns=https://www.w3.org/1999/xhtml>`},
{`https://example.com/`, `<html xmlns="https://www.w3.org/1999/xhtml"></html>`, `<html xmlns=//www.w3.org/1999/xhtml>`},
}
m := minify.New()
m.AddFunc("text/html", Minify)
for _, tt := range htmlTests {
t.Run(tt.url, func(t *testing.T) {
r := bytes.NewBufferString(tt.html)
w := &bytes.Buffer{}
m.URL, _ = url.Parse(tt.url)
err := Minify(m, w, r, nil)
test.Minify(t, tt.html, err, w.String(), tt.expected)
})
}
}
func TestSpecialTagClosing(t *testing.T) {
m := minify.New()
m.AddFunc("text/html", Minify)
m.AddFunc("text/css", func(_ *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
b, err := ioutil.ReadAll(r)
test.Error(t, err, nil)
test.String(t, string(b), "</script>")
_, err = w.Write(b)
return err
})
html := `<style></script></style>`
r := bytes.NewBufferString(html)
w := &bytes.Buffer{}
err := Minify(m, w, r, nil)
test.Minify(t, html, err, w.String(), html)
}
func TestReaderErrors(t *testing.T) {
r := test.NewErrorReader(0)
w := &bytes.Buffer{}
m := minify.New()
err := Minify(m, w, r, nil)
test.T(t, err, test.ErrPlain, "return error at first read")
}
func TestWriterErrors(t *testing.T) {
errorTests := []struct {
html string
n []int
}{
{`<!doctype>`, []int{0}},
{`text`, []int{0}},
{`<foo attr=val>`, []int{0, 1, 2, 3, 4, 5}},
{`</foo>`, []int{0}},
{`<style>x</style>`, []int{2}},
{`<textarea>x</textarea>`, []int{2}},
{`<code>x</code>`, []int{2}},
{`<pre>x</pre>`, []int{2}},
{`<svg>x</svg>`, []int{0}},
{`<math>x</math>`, []int{0}},
{`<!--[if IE 6]> text <![endif]-->`, []int{0, 1, 2}},
{`<![if IE 6]> text <![endif]>`, []int{0}},
}
m := minify.New()
m.Add("text/html", &Minifier{
KeepConditionalComments: true,
})
for _, tt := range errorTests {
for _, n := range tt.n {
t.Run(fmt.Sprint(tt.html, " ", tt.n), func(t *testing.T) {
r := bytes.NewBufferString(tt.html)
w := test.NewErrorWriter(n)
err := m.Minify("text/html", w, r)
test.T(t, err, test.ErrPlain)
})
}
}
}
func TestMinifyErrors(t *testing.T) {
errorTests := []struct {
html string
err error
}{
{`<style>abc</style>`, test.ErrPlain},
{`<path style="abc"/>`, test.ErrPlain},
{`<path onclick="abc"/>`, test.ErrPlain},
{`<svg></svg>`, test.ErrPlain},
{`<math></math>`, test.ErrPlain},
}
m := minify.New()
m.AddFunc("text/css", func(_ *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
return test.ErrPlain
})
m.AddFunc("text/javascript", func(_ *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
return test.ErrPlain
})
m.AddFunc("image/svg+xml", func(_ *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
return test.ErrPlain
})
m.AddFunc("application/mathml+xml", func(_ *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
return test.ErrPlain
})
for _, tt := range errorTests {
t.Run(tt.html, func(t *testing.T) {
r := bytes.NewBufferString(tt.html)
w := &bytes.Buffer{}
err := Minify(m, w, r, nil)
test.T(t, err, tt.err)
})
}
}
////////////////////////////////////////////////////////////////
func ExampleMinify() {
m := minify.New()
m.AddFunc("text/html", Minify)
m.AddFunc("text/css", css.Minify)
m.AddFunc("text/javascript", js.Minify)
m.AddFunc("image/svg+xml", svg.Minify)
m.AddFuncRegexp(regexp.MustCompile("[/+]json$"), json.Minify)
m.AddFuncRegexp(regexp.MustCompile("[/+]xml$"), xml.Minify)
// set URL to minify link locations too
m.URL, _ = url.Parse("https://www.example.com/")
if err := m.Minify("text/html", os.Stdout, os.Stdin); err != nil {
panic(err)
}
}
func ExampleMinify_options() {
m := minify.New()
m.Add("text/html", &Minifier{
KeepDefaultAttrVals: true,
KeepWhitespace: true,
})
if err := m.Minify("text/html", os.Stdout, os.Stdin); err != nil {
panic(err)
}
}
func ExampleMinify_reader() {
b := bytes.NewReader([]byte("<html><body><h1>Example</h1></body></html>"))
m := minify.New()
m.Add("text/html", &Minifier{})
r := m.Reader("text/html", b)
if _, err := io.Copy(os.Stdout, r); err != nil {
panic(err)
}
// Output: <h1>Example</h1>
}
func ExampleMinify_writer() {
m := minify.New()
m.Add("text/html", &Minifier{})
w := m.Writer("text/html", os.Stdout)
w.Write([]byte("<html><body><h1>Example</h1></body></html>"))
w.Close()
// Output: <h1>Example</h1>
}

187
vendor/github.com/tdewolff/minify/html/table.go generated vendored Normal file

@@ -0,0 +1,187 @@
package html // import "github.com/tdewolff/minify/html"
import "github.com/tdewolff/parse/html"
type traits uint8
const (
rawTag traits = 1 << iota
nonPhrasingTag
objectTag
booleanAttr
caselessAttr
urlAttr
omitPTag // omit p end tag if it is followed by this start tag
keepPTag // keep p end tag if it is followed by this end tag
)
var tagMap = map[html.Hash]traits{
html.A: keepPTag,
html.Address: nonPhrasingTag | omitPTag,
html.Article: nonPhrasingTag | omitPTag,
html.Aside: nonPhrasingTag | omitPTag,
html.Audio: objectTag | keepPTag,
html.Blockquote: nonPhrasingTag | omitPTag,
html.Body: nonPhrasingTag,
html.Br: nonPhrasingTag,
html.Button: objectTag,
html.Canvas: objectTag,
html.Caption: nonPhrasingTag,
html.Col: nonPhrasingTag,
html.Colgroup: nonPhrasingTag,
html.Dd: nonPhrasingTag,
html.Del: keepPTag,
html.Details: omitPTag,
html.Div: nonPhrasingTag | omitPTag,
html.Dl: nonPhrasingTag | omitPTag,
html.Dt: nonPhrasingTag,
html.Embed: nonPhrasingTag,
html.Fieldset: nonPhrasingTag | omitPTag,
html.Figcaption: nonPhrasingTag | omitPTag,
html.Figure: nonPhrasingTag | omitPTag,
html.Footer: nonPhrasingTag | omitPTag,
html.Form: nonPhrasingTag | omitPTag,
html.H1: nonPhrasingTag | omitPTag,
html.H2: nonPhrasingTag | omitPTag,
html.H3: nonPhrasingTag | omitPTag,
html.H4: nonPhrasingTag | omitPTag,
html.H5: nonPhrasingTag | omitPTag,
html.H6: nonPhrasingTag | omitPTag,
html.Head: nonPhrasingTag,
html.Header: nonPhrasingTag | omitPTag,
html.Hgroup: nonPhrasingTag,
html.Hr: nonPhrasingTag | omitPTag,
html.Html: nonPhrasingTag,
html.Iframe: rawTag | objectTag,
html.Img: objectTag,
html.Input: objectTag,
html.Ins: keepPTag,
html.Keygen: objectTag,
html.Li: nonPhrasingTag,
html.Main: nonPhrasingTag | omitPTag,
html.Map: keepPTag,
html.Math: rawTag,
html.Menu: omitPTag,
html.Meta: nonPhrasingTag,
html.Meter: objectTag,
html.Nav: nonPhrasingTag | omitPTag,
html.Noscript: nonPhrasingTag | keepPTag,
html.Object: objectTag,
html.Ol: nonPhrasingTag | omitPTag,
html.Output: nonPhrasingTag,
html.P: nonPhrasingTag | omitPTag,
html.Picture: objectTag,
html.Pre: nonPhrasingTag | omitPTag,
html.Progress: objectTag,
html.Q: objectTag,
html.Script: rawTag,
html.Section: nonPhrasingTag | omitPTag,
html.Select: objectTag,
html.Style: rawTag | nonPhrasingTag,
html.Svg: rawTag | objectTag,
html.Table: nonPhrasingTag | omitPTag,
html.Tbody: nonPhrasingTag,
html.Td: nonPhrasingTag,
html.Textarea: rawTag | objectTag,
html.Tfoot: nonPhrasingTag,
html.Th: nonPhrasingTag,
html.Thead: nonPhrasingTag,
html.Title: nonPhrasingTag,
html.Tr: nonPhrasingTag,
html.Ul: nonPhrasingTag | omitPTag,
html.Video: objectTag | keepPTag,
}
var attrMap = map[html.Hash]traits{
html.Accept: caselessAttr,
html.Accept_Charset: caselessAttr,
html.Action: urlAttr,
html.Align: caselessAttr,
html.Alink: caselessAttr,
html.Allowfullscreen: booleanAttr,
html.Async: booleanAttr,
html.Autofocus: booleanAttr,
html.Autoplay: booleanAttr,
html.Axis: caselessAttr,
html.Background: urlAttr,
html.Bgcolor: caselessAttr,
html.Charset: caselessAttr,
html.Checked: booleanAttr,
html.Cite: urlAttr,
html.Classid: urlAttr,
html.Clear: caselessAttr,
html.Codebase: urlAttr,
html.Codetype: caselessAttr,
html.Color: caselessAttr,
html.Compact: booleanAttr,
html.Controls: booleanAttr,
html.Data: urlAttr,
html.Declare: booleanAttr,
html.Default: booleanAttr,
html.DefaultChecked: booleanAttr,
html.DefaultMuted: booleanAttr,
html.DefaultSelected: booleanAttr,
html.Defer: booleanAttr,
html.Dir: caselessAttr,
html.Disabled: booleanAttr,
html.Draggable: booleanAttr,
html.Enabled: booleanAttr,
html.Enctype: caselessAttr,
html.Face: caselessAttr,
html.Formaction: urlAttr,
html.Formnovalidate: booleanAttr,
html.Frame: caselessAttr,
html.Hidden: booleanAttr,
html.Href: urlAttr,
html.Hreflang: caselessAttr,
html.Http_Equiv: caselessAttr,
html.Icon: urlAttr,
html.Inert: booleanAttr,
html.Ismap: booleanAttr,
html.Itemscope: booleanAttr,
html.Lang: caselessAttr,
html.Language: caselessAttr,
html.Link: caselessAttr,
html.Longdesc: urlAttr,
html.Manifest: urlAttr,
html.Media: caselessAttr,
html.Method: caselessAttr,
html.Multiple: booleanAttr,
html.Muted: booleanAttr,
html.Nohref: booleanAttr,
html.Noresize: booleanAttr,
html.Noshade: booleanAttr,
html.Novalidate: booleanAttr,
html.Nowrap: booleanAttr,
html.Open: booleanAttr,
html.Pauseonexit: booleanAttr,
html.Poster: urlAttr,
html.Profile: urlAttr,
html.Readonly: booleanAttr,
html.Rel: caselessAttr,
html.Required: booleanAttr,
html.Rev: caselessAttr,
html.Reversed: booleanAttr,
html.Rules: caselessAttr,
html.Scope: caselessAttr,
html.Scoped: booleanAttr,
html.Scrolling: caselessAttr,
html.Seamless: booleanAttr,
html.Selected: booleanAttr,
html.Shape: caselessAttr,
html.Sortable: booleanAttr,
html.Src: urlAttr,
html.Target: caselessAttr,
html.Text: caselessAttr,
html.Translate: booleanAttr,
html.Truespeed: booleanAttr,
html.Type: caselessAttr,
html.Typemustmatch: booleanAttr,
html.Undeterminate: booleanAttr,
html.Usemap: urlAttr,
html.Valign: caselessAttr,
html.Valuetype: caselessAttr,
html.Vlink: caselessAttr,
html.Visible: booleanAttr,
html.Xmlns: urlAttr,
}
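// hasTrait is an illustrative helper, not part of the original file, naming
// the bitmask test used throughout html.go, e.g. t.Traits&nonPhrasingTag != 0
// to decide whether surrounding whitespace may be dropped.
func hasTrait(t, mask traits) bool {
	return t&mask != 0
}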

88
vendor/github.com/tdewolff/minify/js/js.go generated vendored Normal file

@@ -0,0 +1,88 @@
// Package js minifies ECMAScript 5.1 following the specifications at http://www.ecma-international.org/ecma-262/5.1/.
package js // import "github.com/tdewolff/minify/js"
import (
"io"
"github.com/tdewolff/minify"
"github.com/tdewolff/parse"
"github.com/tdewolff/parse/js"
)
var (
spaceBytes = []byte(" ")
newlineBytes = []byte("\n")
)
////////////////////////////////////////////////////////////////
// DefaultMinifier is the default minifier.
var DefaultMinifier = &Minifier{}
// Minifier is a JS minifier.
type Minifier struct{}
// Minify minifies JS data; it reads from r and writes to w.
func Minify(m *minify.M, w io.Writer, r io.Reader, params map[string]string) error {
return DefaultMinifier.Minify(m, w, r, params)
}
// Minify minifies JS data; it reads from r and writes to w.
func (o *Minifier) Minify(_ *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
prev := js.LineTerminatorToken
prevLast := byte(' ')
lineTerminatorQueued := false
whitespaceQueued := false
l := js.NewLexer(r)
defer l.Restore()
for {
tt, data := l.Next()
if tt == js.ErrorToken {
if l.Err() != io.EOF {
return l.Err()
}
return nil
} else if tt == js.LineTerminatorToken {
lineTerminatorQueued = true
} else if tt == js.WhitespaceToken {
whitespaceQueued = true
} else if tt == js.CommentToken {
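// keep only bang comments (/*! ... */), with their inner whitespace collapsed; all other comments are dropped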
if len(data) > 5 && data[1] == '*' && data[2] == '!' {
if _, err := w.Write(data[:3]); err != nil {
return err
}
comment := parse.TrimWhitespace(parse.ReplaceMultipleWhitespace(data[3 : len(data)-2]))
if _, err := w.Write(comment); err != nil {
return err
}
if _, err := w.Write(data[len(data)-2:]); err != nil {
return err
}
}
} else {
first := data[0]
if (prev == js.IdentifierToken || prev == js.NumericToken || prev == js.PunctuatorToken || prev == js.StringToken || prev == js.RegexpToken) &&
(tt == js.IdentifierToken || tt == js.NumericToken || tt == js.StringToken || tt == js.PunctuatorToken || tt == js.RegexpToken) {
if lineTerminatorQueued && (prev != js.PunctuatorToken || prevLast == '}' || prevLast == ']' || prevLast == ')' || prevLast == '+' || prevLast == '-' || prevLast == '"' || prevLast == '\'') &&
(tt != js.PunctuatorToken || first == '{' || first == '[' || first == '(' || first == '+' || first == '-' || first == '!' || first == '~') {
if _, err := w.Write(newlineBytes); err != nil {
return err
}
} else if whitespaceQueued && (prev != js.StringToken && prev != js.PunctuatorToken && tt != js.PunctuatorToken || (prevLast == '+' || prevLast == '-') && first == prevLast) {
if _, err := w.Write(spaceBytes); err != nil {
return err
}
}
}
if _, err := w.Write(data); err != nil {
return err
}
prev = tt
prevLast = data[len(data)-1]
lineTerminatorQueued = false
whitespaceQueued = false
}
}
}
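A minimal usage sketch for this JS minifier, assuming the vendored import paths above resolve; it shows how runs of whitespace collapse while the token rules keep the output valid:

package main

import (
	"os"
	"strings"

	"github.com/tdewolff/minify"
	"github.com/tdewolff/minify/js"
)

func main() {
	m := minify.New()
	// the line comment is dropped and both statements collapse onto one line,
	// printing: var a=1;var b=2;
	if err := js.Minify(m, os.Stdout, strings.NewReader("var a = 1;\n// note\nvar b = 2;"), nil); err != nil {
		panic(err)
	}
}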

vendor/github.com/tdewolff/minify/js/js_test.go generated vendored Normal file

@ -0,0 +1,96 @@
package js // import "github.com/tdewolff/minify/js"
import (
"bytes"
"fmt"
"os"
"testing"
"github.com/tdewolff/minify"
"github.com/tdewolff/test"
)
func TestJS(t *testing.T) {
jsTests := []struct {
js string
expected string
}{
{"/*comment*/", ""},
{"// comment\na", "a"},
{"/*! bang comment */", "/*!bang comment*/"},
{"function x(){}", "function x(){}"},
{"function x(a, b){}", "function x(a,b){}"},
{"a b", "a b"},
{"a\n\nb", "a\nb"},
{"a// comment\nb", "a\nb"},
{"''\na", "''\na"},
{"''\n''", "''\n''"},
{"]\n0", "]\n0"},
{"a\n{", "a\n{"},
{";\na", ";a"},
{",\na", ",a"},
{"}\na", "}\na"},
{"+\na", "+\na"},
{"+\n(", "+\n("},
{"+\n\"\"", "+\n\"\""},
{"a + ++b", "a+ ++b"}, // JSMin caution
{"var a=/\\s?auto?\\s?/i\nvar", "var a=/\\s?auto?\\s?/i\nvar"}, // #14
{"var a=0\n!function(){}", "var a=0\n!function(){}"}, // #107
{"function(){}\n\"string\"", "function(){}\n\"string\""}, // #109
{"false\n\"string\"", "false\n\"string\""}, // #109
{"`\n", "`"}, // go fuzz
{"a\n~b", "a\n~b"}, // #132
}
m := minify.New()
for _, tt := range jsTests {
t.Run(tt.js, func(t *testing.T) {
r := bytes.NewBufferString(tt.js)
w := &bytes.Buffer{}
err := Minify(m, w, r, nil)
test.Minify(t, tt.js, err, w.String(), tt.expected)
})
}
}
func TestReaderErrors(t *testing.T) {
r := test.NewErrorReader(0)
w := &bytes.Buffer{}
m := minify.New()
err := Minify(m, w, r, nil)
test.T(t, err, test.ErrPlain, "return error at first read")
}
func TestWriterErrors(t *testing.T) {
errorTests := []struct {
js string
n []int
}{
{"a\n{5 5", []int{0, 1, 4}},
{`/*!comment*/`, []int{0, 1, 2}},
{"false\n\"string\"", []int{1}}, // #109
}
m := minify.New()
for _, tt := range errorTests {
for _, n := range tt.n {
t.Run(fmt.Sprint(tt.js, " ", tt.n), func(t *testing.T) {
r := bytes.NewBufferString(tt.js)
w := test.NewErrorWriter(n)
err := Minify(m, w, r, nil)
test.T(t, err, test.ErrPlain)
})
}
}
}
////////////////////////////////////////////////////////////////
func ExampleMinify() {
m := minify.New()
m.AddFunc("text/javascript", Minify)
if err := m.Minify("text/javascript", os.Stdout, os.Stdin); err != nil {
panic(err)
}
}

vendor/github.com/tdewolff/minify/json/json.go generated vendored Normal file

@ -0,0 +1,63 @@
// Package json minifies JSON following the specifications at http://json.org/.
package json // import "github.com/tdewolff/minify/json"
import (
"io"
"github.com/tdewolff/minify"
"github.com/tdewolff/parse/json"
)
var (
commaBytes = []byte(",")
colonBytes = []byte(":")
)
////////////////////////////////////////////////////////////////
// DefaultMinifier is the default minifier.
var DefaultMinifier = &Minifier{}
// Minifier is a JSON minifier.
type Minifier struct{}
// Minify minifies JSON data; it reads from r and writes to w.
func Minify(m *minify.M, w io.Writer, r io.Reader, params map[string]string) error {
return DefaultMinifier.Minify(m, w, r, params)
}
// Minify minifies JSON data; it reads from r and writes to w.
func (o *Minifier) Minify(_ *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
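// skipComma is true at the start and right after { or [, where no separator may be written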
skipComma := true
p := json.NewParser(r)
defer p.Restore()
for {
state := p.State()
gt, text := p.Next()
if gt == json.ErrorGrammar {
if p.Err() != io.EOF {
return p.Err()
}
return nil
}
if !skipComma && gt != json.EndObjectGrammar && gt != json.EndArrayGrammar {
if state == json.ObjectKeyState || state == json.ArrayState {
if _, err := w.Write(commaBytes); err != nil {
return err
}
} else if state == json.ObjectValueState {
if _, err := w.Write(colonBytes); err != nil {
return err
}
}
}
skipComma = gt == json.StartObjectGrammar || gt == json.StartArrayGrammar
if _, err := w.Write(text); err != nil {
return err
}
}
}
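A small sketch of using the JSON minifier on its own, again assuming the vendored import paths; note that commas and colons are re-emitted from the parser state rather than copied from the input:

package main

import (
	"os"
	"strings"

	"github.com/tdewolff/minify"
	mjson "github.com/tdewolff/minify/json"
)

func main() {
	m := minify.New()
	// insignificant whitespace is dropped, printing: {"a":[1,2]}
	if err := mjson.Minify(m, os.Stdout, strings.NewReader(`{ "a": [ 1, 2 ] }`), nil); err != nil {
		panic(err)
	}
}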

vendor/github.com/tdewolff/minify/json/json_test.go generated vendored Normal file

@ -0,0 +1,74 @@
package json // import "github.com/tdewolff/minify/json"
import (
"bytes"
"fmt"
"os"
"regexp"
"testing"
"github.com/tdewolff/minify"
"github.com/tdewolff/test"
)
func TestJSON(t *testing.T) {
jsonTests := []struct {
json string
expected string
}{
{"{ \"a\": [1, 2] }", "{\"a\":[1,2]}"},
{"[{ \"a\": [{\"x\": null}, true] }]", "[{\"a\":[{\"x\":null},true]}]"},
{"{ \"a\": 1, \"b\": 2 }", "{\"a\":1,\"b\":2}"},
}
m := minify.New()
for _, tt := range jsonTests {
t.Run(tt.json, func(t *testing.T) {
r := bytes.NewBufferString(tt.json)
w := &bytes.Buffer{}
err := Minify(m, w, r, nil)
test.Minify(t, tt.json, err, w.String(), tt.expected)
})
}
}
func TestReaderErrors(t *testing.T) {
r := test.NewErrorReader(0)
w := &bytes.Buffer{}
m := minify.New()
err := Minify(m, w, r, nil)
test.T(t, err, test.ErrPlain, "return error at first read")
}
func TestWriterErrors(t *testing.T) {
errorTests := []struct {
json string
n []int
}{
//01 234 56 78
{`{"key":[100,200]}`, []int{0, 1, 2, 3, 4, 5, 7, 8}},
}
m := minify.New()
for _, tt := range errorTests {
for _, n := range tt.n {
t.Run(fmt.Sprint(tt.json, " ", tt.n), func(t *testing.T) {
r := bytes.NewBufferString(tt.json)
w := test.NewErrorWriter(n)
err := Minify(m, w, r, nil)
test.T(t, err, test.ErrPlain)
})
}
}
}
////////////////////////////////////////////////////////////////
func ExampleMinify() {
m := minify.New()
m.AddFuncRegexp(regexp.MustCompile("[/+]json$"), Minify)
if err := m.Minify("application/json", os.Stdout, os.Stdin); err != nil {
panic(err)
}
}

vendor/github.com/tdewolff/minify/minify.go generated vendored Normal file

@ -0,0 +1,279 @@
// Package minify relates MIME types to minifiers. Several minifiers are provided in the subpackages.
package minify // import "github.com/tdewolff/minify"
import (
"errors"
"io"
"mime"
"net/http"
"net/url"
"os/exec"
"path"
"regexp"
"sync"
"github.com/tdewolff/parse"
"github.com/tdewolff/parse/buffer"
)
// ErrNotExist is returned when no minifier exists for a given mimetype.
var ErrNotExist = errors.New("minifier does not exist for mimetype")
////////////////////////////////////////////////////////////////
// MinifierFunc is a function that implements Minifier.
type MinifierFunc func(*M, io.Writer, io.Reader, map[string]string) error
// Minify calls f(m, w, r, params)
func (f MinifierFunc) Minify(m *M, w io.Writer, r io.Reader, params map[string]string) error {
return f(m, w, r, params)
}
// Minifier is the interface for minifiers.
// The *M parameter is used for minifying embedded resources, such as JS within HTML.
type Minifier interface {
Minify(*M, io.Writer, io.Reader, map[string]string) error
}
////////////////////////////////////////////////////////////////
type patternMinifier struct {
pattern *regexp.Regexp
Minifier
}
type cmdMinifier struct {
cmd *exec.Cmd
}
func (c *cmdMinifier) Minify(_ *M, w io.Writer, r io.Reader, _ map[string]string) error {
cmd := &exec.Cmd{}
*cmd = *c.cmd // concurrency safety
cmd.Stdout = w
cmd.Stdin = r
return cmd.Run()
}
////////////////////////////////////////////////////////////////
// M holds a map of mimetype => function to allow recursive minifier calls of the minifier functions.
type M struct {
literal map[string]Minifier
pattern []patternMinifier
URL *url.URL
}
// New returns a new M.
func New() *M {
return &M{
map[string]Minifier{},
[]patternMinifier{},
nil,
}
}
// Add adds a minifier to the mimetype => function map (unsafe for concurrent use).
func (m *M) Add(mimetype string, minifier Minifier) {
m.literal[mimetype] = minifier
}
// AddFunc adds a minify function to the mimetype => function map (unsafe for concurrent use).
func (m *M) AddFunc(mimetype string, minifier MinifierFunc) {
m.literal[mimetype] = minifier
}
// AddRegexp adds a minifier to the mimetype => function map (unsafe for concurrent use).
func (m *M) AddRegexp(pattern *regexp.Regexp, minifier Minifier) {
m.pattern = append(m.pattern, patternMinifier{pattern, minifier})
}
// AddFuncRegexp adds a minify function to the mimetype => function map (unsafe for concurrent use).
func (m *M) AddFuncRegexp(pattern *regexp.Regexp, minifier MinifierFunc) {
m.pattern = append(m.pattern, patternMinifier{pattern, minifier})
}
// AddCmd adds a minify function to the mimetype => function map (unsafe for concurrent use) that executes a command to process the minification.
// It allows the use of external tools like ClosureCompiler, UglifyCSS, etc. for a specific mimetype.
func (m *M) AddCmd(mimetype string, cmd *exec.Cmd) {
m.literal[mimetype] = &cmdMinifier{cmd}
}
// AddCmdRegexp adds a minify function to the mimetype => function map (unsafe for concurrent use) that executes a command to process the minification.
// It allows the use of external tools like ClosureCompiler, UglifyCSS, etc. for a specific mimetype regular expression.
func (m *M) AddCmdRegexp(pattern *regexp.Regexp, cmd *exec.Cmd) {
m.pattern = append(m.pattern, patternMinifier{pattern, &cmdMinifier{cmd}})
}
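// For example (a sketch; the binary name is hypothetical and must be on PATH):
//
//	m := minify.New()
//	m.AddCmd("text/javascript", exec.Command("uglifyjs"))
//	err := m.Minify("text/javascript", os.Stdout, os.Stdin)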
// Match returns the pattern and minifier that match the mediatype.
// It returns nil when no matching minifier exists.
// It has the same matching algorithm as Minify.
func (m *M) Match(mediatype string) (string, map[string]string, MinifierFunc) {
mimetype, params := parse.Mediatype([]byte(mediatype))
if minifier, ok := m.literal[string(mimetype)]; ok { // string conversion is optimized away
return string(mimetype), params, minifier.Minify
}
for _, minifier := range m.pattern {
if minifier.pattern.Match(mimetype) {
return minifier.pattern.String(), params, minifier.Minify
}
}
return string(mimetype), params, nil
}
// Minify minifies the content of a Reader and writes it to a Writer (safe for concurrent use).
// An error is returned when no such mimetype exists (ErrNotExist) or when an error occurred in the minifier function.
// Mediatype may take the form of 'text/plain', 'text/*', '*/*' or 'text/plain; charset=UTF-8; version=2.0'.
func (m *M) Minify(mediatype string, w io.Writer, r io.Reader) error {
mimetype, params := parse.Mediatype([]byte(mediatype))
return m.MinifyMimetype(mimetype, w, r, params)
}
// MinifyMimetype minifies the content of a Reader and writes it to a Writer (safe for concurrent use).
// It is a lower level version of Minify and requires the mediatype to be split up into mimetype and parameters.
// It is mostly used internally by minifiers because it is faster (no need to convert a byte-slice to string and vice versa).
func (m *M) MinifyMimetype(mimetype []byte, w io.Writer, r io.Reader, params map[string]string) error {
err := ErrNotExist
if minifier, ok := m.literal[string(mimetype)]; ok { // string conversion is optimized away
err = minifier.Minify(m, w, r, params)
} else {
for _, minifier := range m.pattern {
if minifier.pattern.Match(mimetype) {
err = minifier.Minify(m, w, r, params)
break
}
}
}
return err
}
// Bytes minifies a byte slice (safe for concurrent use). When an error occurs it returns the original slice and the error.
// It returns an error when no such mimetype exists (ErrNotExist) or any error occurred in the minifier function.
func (m *M) Bytes(mediatype string, v []byte) ([]byte, error) {
out := buffer.NewWriter(make([]byte, 0, len(v)))
if err := m.Minify(mediatype, out, buffer.NewReader(v)); err != nil {
return v, err
}
return out.Bytes(), nil
}
// String minifies a string (safe for concurrent use). When an error occurs it returns the original string and the error.
// It returns an error when no such mimetype exists (ErrNotExist) or any error occurred in the minifier function.
func (m *M) String(mediatype string, v string) (string, error) {
out := buffer.NewWriter(make([]byte, 0, len(v)))
if err := m.Minify(mediatype, out, buffer.NewReader([]byte(v))); err != nil {
return v, err
}
return string(out.Bytes()), nil
}
// Reader wraps a Reader interface and minifies the stream.
// Errors from the minifier are returned by the reader.
func (m *M) Reader(mediatype string, r io.Reader) io.Reader {
pr, pw := io.Pipe()
go func() {
if err := m.Minify(mediatype, pw, r); err != nil {
pw.CloseWithError(err)
} else {
pw.Close()
}
}()
return pr
}
// minifyWriter makes sure that errors from the minifier are passed down through Close (can be blocking).
type minifyWriter struct {
pw *io.PipeWriter
wg sync.WaitGroup
err error
}
// Write intercepts any writes to the writer.
func (w *minifyWriter) Write(b []byte) (int, error) {
return w.pw.Write(b)
}
// Close must be called when writing has finished. It returns the error from the minifier.
func (w *minifyWriter) Close() error {
w.pw.Close()
w.wg.Wait()
return w.err
}
// Writer wraps a Writer interface and minifies the stream.
// Errors from the minifier are returned by Close on the writer.
// The writer must be closed explicitly.
func (m *M) Writer(mediatype string, w io.Writer) *minifyWriter {
pr, pw := io.Pipe()
mw := &minifyWriter{pw, sync.WaitGroup{}, nil}
mw.wg.Add(1)
go func() {
defer mw.wg.Done()
if err := m.Minify(mediatype, w, pr); err != nil {
io.Copy(w, pr)
mw.err = err
}
pr.Close()
}()
return mw
}
// minifyResponseWriter wraps an http.ResponseWriter and makes sure that errors from the minifier are passed down through Close (can be blocking).
// All writes to the response writer are intercepted and minified on the fly.
// http.ResponseWriter loses all functionality such as Pusher, Hijacker, Flusher, ...
type minifyResponseWriter struct {
http.ResponseWriter
writer *minifyWriter
m *M
mediatype string
}
// WriteHeader intercepts any header writes and removes the Content-Length header.
func (w *minifyResponseWriter) WriteHeader(status int) {
w.ResponseWriter.Header().Del("Content-Length")
w.ResponseWriter.WriteHeader(status)
}
// Write intercepts any writes to the response writer.
// The first write will extract the Content-Type as the mediatype. Otherwise it falls back to the RequestURI extension.
func (w *minifyResponseWriter) Write(b []byte) (int, error) {
if w.writer == nil {
// first write
if mediatype := w.ResponseWriter.Header().Get("Content-Type"); mediatype != "" {
w.mediatype = mediatype
}
w.writer = w.m.Writer(w.mediatype, w.ResponseWriter)
}
return w.writer.Write(b)
}
// Close must be called when writing has finished. It returns the error from the minifier.
func (w *minifyResponseWriter) Close() error {
if w.writer != nil {
return w.writer.Close()
}
return nil
}
// ResponseWriter minifies any writes to the http.ResponseWriter.
// http.ResponseWriter loses all functionality such as Pusher, Hijacker, Flusher, ...
// Minification might be slower than just sending the original file! Caching is advised.
func (m *M) ResponseWriter(w http.ResponseWriter, r *http.Request) *minifyResponseWriter {
mediatype := mime.TypeByExtension(path.Ext(r.RequestURI))
return &minifyResponseWriter{w, nil, m, mediatype}
}
// Middleware provides a middleware function that minifies content on the fly by intercepting writes to http.ResponseWriter.
// http.ResponseWriter loses all functionality such as Pusher, Hijacker, Flusher, ...
// Minification might be slower than just sending the original file! Caching is advised.
func (m *M) Middleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
mw := m.ResponseWriter(w, r)
defer mw.Close()
next.ServeHTTP(mw, r)
})
}
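To make the middleware concrete, here is a minimal sketch that serves minified HTML; the html subpackage import is assumed to match this vendor tree:

package main

import (
	"net/http"

	"github.com/tdewolff/minify"
	"github.com/tdewolff/minify/html"
)

func main() {
	m := minify.New()
	m.AddFunc("text/html", html.Minify)
	// every response is piped through the minifier; Content-Length is
	// removed by WriteHeader because the size changes
	fs := http.FileServer(http.Dir("www/"))
	if err := http.ListenAndServe(":8080", m.Middleware(fs)); err != nil {
		panic(err)
	}
}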

vendor/github.com/tdewolff/minify/minify_test.go generated vendored Normal file

@ -0,0 +1,358 @@
package minify // import "github.com/tdewolff/minify"
import (
"bufio"
"bytes"
"errors"
"fmt"
"io"
"io/ioutil"
"net/http"
"os"
"os/exec"
"regexp"
"strings"
"testing"
"github.com/tdewolff/test"
)
var errDummy = errors.New("dummy error")
// from os/exec/exec_test.go
func helperCommand(t *testing.T, s ...string) *exec.Cmd {
cs := []string{"-test.run=TestHelperProcess", "--"}
cs = append(cs, s...)
cmd := exec.Command(os.Args[0], cs...)
cmd.Env = []string{"GO_WANT_HELPER_PROCESS=1"}
return cmd
}
////////////////////////////////////////////////////////////////
var m *M
func init() {
m = New()
m.AddFunc("dummy/copy", func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
io.Copy(w, r)
return nil
})
m.AddFunc("dummy/nil", func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
return nil
})
m.AddFunc("dummy/err", func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
return errDummy
})
m.AddFunc("dummy/charset", func(m *M, w io.Writer, r io.Reader, params map[string]string) error {
w.Write([]byte(params["charset"]))
return nil
})
m.AddFunc("dummy/params", func(m *M, w io.Writer, r io.Reader, params map[string]string) error {
return m.Minify(params["type"]+"/"+params["sub"], w, r)
})
m.AddFunc("type/sub", func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
w.Write([]byte("type/sub"))
return nil
})
m.AddFuncRegexp(regexp.MustCompile("^type/.+$"), func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
w.Write([]byte("type/*"))
return nil
})
m.AddFuncRegexp(regexp.MustCompile("^.+/.+$"), func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
w.Write([]byte("*/*"))
return nil
})
}
func TestMinify(t *testing.T) {
test.T(t, m.Minify("?", nil, nil), ErrNotExist, "minifier doesn't exist")
test.T(t, m.Minify("dummy/nil", nil, nil), nil)
test.T(t, m.Minify("dummy/err", nil, nil), errDummy)
b := []byte("test")
out, err := m.Bytes("dummy/nil", b)
test.T(t, err, nil)
test.Bytes(t, out, []byte{}, "dummy/nil returns empty byte slice")
out, err = m.Bytes("?", b)
test.T(t, err, ErrNotExist, "minifier doesn't exist")
test.Bytes(t, out, b, "return input when minifier doesn't exist")
s := "test"
out2, err := m.String("dummy/nil", s)
test.T(t, err, nil)
test.String(t, out2, "", "dummy/nil returns empty string")
out2, err = m.String("?", s)
test.T(t, err, ErrNotExist, "minifier doesn't exist")
test.String(t, out2, s, "return input when minifier doesn't exist")
}
type DummyMinifier struct{}
func (d *DummyMinifier) Minify(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
return errDummy
}
func TestAdd(t *testing.T) {
mAdd := New()
r := bytes.NewBufferString("test")
w := &bytes.Buffer{}
mAdd.Add("dummy/err", &DummyMinifier{})
test.T(t, mAdd.Minify("dummy/err", nil, nil), errDummy)
mAdd.AddRegexp(regexp.MustCompile("err1$"), &DummyMinifier{})
test.T(t, mAdd.Minify("dummy/err1", nil, nil), errDummy)
mAdd.AddFunc("dummy/err", func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
return errDummy
})
test.T(t, mAdd.Minify("dummy/err", nil, nil), errDummy)
mAdd.AddFuncRegexp(regexp.MustCompile("err2$"), func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
return errDummy
})
test.T(t, mAdd.Minify("dummy/err2", nil, nil), errDummy)
mAdd.AddCmd("dummy/copy", helperCommand(t, "dummy/copy"))
mAdd.AddCmd("dummy/err", helperCommand(t, "dummy/err"))
mAdd.AddCmdRegexp(regexp.MustCompile("err6$"), helperCommand(t, "werr6"))
test.T(t, mAdd.Minify("dummy/copy", w, r), nil)
test.String(t, w.String(), "test", "dummy/copy command returns input")
test.String(t, mAdd.Minify("dummy/err", w, r).Error(), "exit status 1", "command returns status 1 for dummy/err")
test.String(t, mAdd.Minify("werr6", w, r).Error(), "exit status 2", "command returns status 2 when minifier doesn't exist")
test.String(t, mAdd.Minify("stderr6", w, r).Error(), "exit status 2", "command returns status 2 when minifier doesn't exist")
}
func TestMatch(t *testing.T) {
pattern, params, _ := m.Match("dummy/copy; a=b")
test.String(t, pattern, "dummy/copy")
test.String(t, params["a"], "b")
pattern, _, _ = m.Match("type/foobar")
test.String(t, pattern, "^type/.+$")
_, _, minifier := m.Match("dummy/")
test.That(t, minifier == nil)
}
func TestWildcard(t *testing.T) {
mimetypeTests := []struct {
mimetype string
expected string
}{
{"type/sub", "type/sub"},
{"type/*", "type/*"},
{"*/*", "*/*"},
{"type/sub2", "type/*"},
{"type2/sub", "*/*"},
{"dummy/charset;charset=UTF-8", "UTF-8"},
{"dummy/charset; charset = UTF-8 ", "UTF-8"},
{"dummy/params;type=type;sub=two2", "type/*"},
}
for _, tt := range mimetypeTests {
r := bytes.NewBufferString("")
w := &bytes.Buffer{}
err := m.Minify(tt.mimetype, w, r)
test.Error(t, err)
test.Minify(t, tt.mimetype, nil, w.String(), tt.expected)
}
}
func TestReader(t *testing.T) {
m := New()
m.AddFunc("dummy/dummy", func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
_, err := io.Copy(w, r)
return err
})
m.AddFunc("dummy/err", func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
return errDummy
})
w := &bytes.Buffer{}
r := bytes.NewBufferString("test")
mr := m.Reader("dummy/dummy", r)
_, err := io.Copy(w, mr)
test.Error(t, err)
test.String(t, w.String(), "test", "equal input after dummy minify reader")
mr = m.Reader("dummy/err", r)
_, err = io.Copy(w, mr)
test.T(t, err, errDummy)
}
func TestWriter(t *testing.T) {
m := New()
m.AddFunc("dummy/dummy", func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
_, err := io.Copy(w, r)
return err
})
m.AddFunc("dummy/err", func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
return errDummy
})
m.AddFunc("dummy/late-err", func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
_, _ = ioutil.ReadAll(r)
return errDummy
})
w := &bytes.Buffer{}
mw := m.Writer("dummy/dummy", w)
_, _ = mw.Write([]byte("test"))
test.Error(t, mw.Close())
test.String(t, w.String(), "test", "equal input after dummy minify writer")
w = &bytes.Buffer{}
mw = m.Writer("dummy/err", w)
_, _ = mw.Write([]byte("test"))
test.T(t, mw.Close(), errDummy)
test.String(t, w.String(), "test", "equal input after dummy minify writer")
w = &bytes.Buffer{}
mw = m.Writer("dummy/late-err", w)
_, _ = mw.Write([]byte("test"))
test.T(t, mw.Close(), errDummy)
test.String(t, w.String(), "")
}
type responseWriter struct {
writer io.Writer
header http.Header
}
func (w *responseWriter) Header() http.Header {
return w.header
}
func (w *responseWriter) WriteHeader(_ int) {}
func (w *responseWriter) Write(b []byte) (int, error) {
return w.writer.Write(b)
}
func TestResponseWriter(t *testing.T) {
m := New()
m.AddFunc("text/html", func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
_, err := io.Copy(w, r)
return err
})
b := &bytes.Buffer{}
w := &responseWriter{b, http.Header{}}
r := &http.Request{RequestURI: "/index.html"}
mw := m.ResponseWriter(w, r)
test.Error(t, mw.Close())
_, _ = mw.Write([]byte("test"))
test.Error(t, mw.Close())
test.String(t, b.String(), "test", "equal input after dummy minify response writer")
b = &bytes.Buffer{}
w = &responseWriter{b, http.Header{}}
r = &http.Request{RequestURI: "/index"}
mw = m.ResponseWriter(w, r)
mw.Header().Add("Content-Type", "text/html")
_, _ = mw.Write([]byte("test"))
test.Error(t, mw.Close())
test.String(t, b.String(), "test", "equal input after dummy minify response writer")
}
func TestMiddleware(t *testing.T) {
m := New()
m.AddFunc("text/html", func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
_, err := io.Copy(w, r)
return err
})
b := &bytes.Buffer{}
w := &responseWriter{b, http.Header{}}
r := &http.Request{RequestURI: "/index.html"}
m.Middleware(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_, _ = w.Write([]byte("test"))
})).ServeHTTP(w, r)
test.String(t, b.String(), "test", "equal input after dummy minify middleware")
}
func TestHelperProcess(*testing.T) {
if os.Getenv("GO_WANT_HELPER_PROCESS") != "1" {
return
}
args := os.Args
for len(args) > 0 {
if args[0] == "--" {
args = args[1:]
break
}
args = args[1:]
}
if len(args) == 0 {
fmt.Fprintf(os.Stderr, "No command\n")
os.Exit(2)
}
switch args[0] {
case "dummy/copy":
io.Copy(os.Stdout, os.Stdin)
case "dummy/err":
os.Exit(1)
default:
os.Exit(2)
}
os.Exit(0)
}
////////////////////////////////////////////////////////////////
func ExampleM_Minify_custom() {
m := New()
m.AddFunc("text/plain", func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
// remove all newlines and spaces
rb := bufio.NewReader(r)
for {
line, err := rb.ReadString('\n')
if err != nil && err != io.EOF {
return err
}
if _, errws := io.WriteString(w, strings.Replace(line, " ", "", -1)); errws != nil {
return errws
}
if err == io.EOF {
break
}
}
return nil
})
in := "Because my coffee was too cold, I heated it in the microwave."
out, err := m.String("text/plain", in)
if err != nil {
panic(err)
}
fmt.Println(out)
// Output: Becausemycoffeewastoocold,Iheateditinthemicrowave.
}
func ExampleM_Reader() {
b := bytes.NewReader([]byte("input"))
m := New()
// add minifiers
r := m.Reader("mime/type", b)
if _, err := io.Copy(os.Stdout, r); err != nil {
if _, err := io.Copy(os.Stdout, b); err != nil {
panic(err)
}
}
}
func ExampleM_Writer() {
m := New()
// add minifiers
w := m.Writer("mime/type", os.Stdout)
if _, err := w.Write([]byte("input")); err != nil {
panic(err)
}
if err := w.Close(); err != nil {
panic(err)
}
}

vendor/github.com/tdewolff/minify/svg/buffer.go generated vendored Normal file

@ -0,0 +1,130 @@
package svg // import "github.com/tdewolff/minify/svg"
import (
"github.com/tdewolff/parse"
"github.com/tdewolff/parse/svg"
"github.com/tdewolff/parse/xml"
)
// Token is a single token unit with an attribute value (if given) and a hash of the data.
type Token struct {
xml.TokenType
Hash svg.Hash
Data []byte
Text []byte
AttrVal []byte
}
// TokenBuffer is a buffer that allows for token look-ahead.
type TokenBuffer struct {
l *xml.Lexer
buf []Token
pos int
attrBuffer []*Token
}
// NewTokenBuffer returns a new TokenBuffer.
func NewTokenBuffer(l *xml.Lexer) *TokenBuffer {
return &TokenBuffer{
l: l,
buf: make([]Token, 0, 8),
}
}
func (z *TokenBuffer) read(t *Token) {
t.TokenType, t.Data = z.l.Next()
t.Text = z.l.Text()
if t.TokenType == xml.AttributeToken {
t.AttrVal = z.l.AttrVal()
if len(t.AttrVal) > 1 && (t.AttrVal[0] == '"' || t.AttrVal[0] == '\'') {
t.AttrVal = parse.ReplaceMultipleWhitespace(parse.TrimWhitespace(t.AttrVal[1 : len(t.AttrVal)-1])) // quotes will be re-added in the attribute loop if necessary
}
t.Hash = svg.ToHash(t.Text)
} else if t.TokenType == xml.StartTagToken || t.TokenType == xml.EndTagToken {
t.AttrVal = nil
t.Hash = svg.ToHash(t.Text)
} else {
t.AttrVal = nil
t.Hash = 0
}
}
// Peek returns the ith element and possibly does an allocation.
// Peeking past an error will panic.
func (z *TokenBuffer) Peek(pos int) *Token {
pos += z.pos
if pos >= len(z.buf) {
if len(z.buf) > 0 && z.buf[len(z.buf)-1].TokenType == xml.ErrorToken {
return &z.buf[len(z.buf)-1]
}
c := cap(z.buf)
d := len(z.buf) - z.pos
p := pos - z.pos + 1 // required peek length
var buf []Token
if 2*p > c {
buf = make([]Token, 0, 2*c+p)
} else {
buf = z.buf
}
copy(buf[:d], z.buf[z.pos:])
buf = buf[:p]
pos -= z.pos
for i := d; i < p; i++ {
z.read(&buf[i])
if buf[i].TokenType == xml.ErrorToken {
buf = buf[:i+1]
pos = i
break
}
}
z.pos, z.buf = 0, buf
}
return &z.buf[pos]
}
// Shift returns the first element and advances position.
func (z *TokenBuffer) Shift() *Token {
if z.pos >= len(z.buf) {
t := &z.buf[:1][0]
z.read(t)
return t
}
t := &z.buf[z.pos]
z.pos++
return t
}
// Attributes extracts the given attribute hashes from a tag.
// It returns pointers to the requested token data in the same order, or nil for absent attributes.
func (z *TokenBuffer) Attributes(hashes ...svg.Hash) ([]*Token, *Token) {
n := 0
for {
if t := z.Peek(n); t.TokenType != xml.AttributeToken {
break
}
n++
}
if len(hashes) > cap(z.attrBuffer) {
z.attrBuffer = make([]*Token, len(hashes))
} else {
z.attrBuffer = z.attrBuffer[:len(hashes)]
for i := range z.attrBuffer {
z.attrBuffer[i] = nil
}
}
var replacee *Token
for i := z.pos; i < z.pos+n; i++ {
attr := &z.buf[i]
for j, hash := range hashes {
if hash == attr.Hash {
z.attrBuffer[j] = attr
replacee = attr
}
}
}
return z.attrBuffer, replacee
}

vendor/github.com/tdewolff/minify/svg/buffer_test.go generated vendored Normal file

@ -0,0 +1,68 @@
package svg // import "github.com/tdewolff/minify/svg"
import (
"bytes"
"strconv"
"testing"
"github.com/tdewolff/parse/svg"
"github.com/tdewolff/parse/xml"
"github.com/tdewolff/test"
)
func TestBuffer(t *testing.T) {
// 0 12 3 4 5 6 7 8 9 01
s := `<svg><path d="M0 0L1 1z"/>text<tag/>text</svg>`
z := NewTokenBuffer(xml.NewLexer(bytes.NewBufferString(s)))
tok := z.Shift()
test.That(t, tok.Hash == svg.Svg, "first token is <svg>")
test.That(t, z.pos == 0, "shift first token and restore position")
test.That(t, len(z.buf) == 0, "shift first token and restore length")
test.That(t, z.Peek(2).Hash == svg.D, "third token is d")
test.That(t, z.pos == 0, "don't change position after peeking")
test.That(t, len(z.buf) == 3, "three tokens after peeking")
test.That(t, z.Peek(8).Hash == svg.Svg, "ninth token is <svg>")
test.That(t, z.pos == 0, "don't change position after peeking")
test.That(t, len(z.buf) == 9, "nine tokens after peeking")
test.That(t, z.Peek(9).TokenType == xml.ErrorToken, "tenth token is an error")
test.That(t, z.Peek(9) == z.Peek(10), "tenth and eleventh token are EOF")
test.That(t, len(z.buf) == 10, "ten tokens after peeking")
_ = z.Shift()
tok = z.Shift()
test.That(t, tok.Hash == svg.Path, "third token is <path>")
test.That(t, z.pos == 2, "position advanced after shifting twice")
}
func TestAttributes(t *testing.T) {
r := bytes.NewBufferString(`<rect x="0" y="1" width="2" height="3" rx="4" ry="5"/>`)
l := xml.NewLexer(r)
tb := NewTokenBuffer(l)
tb.Shift()
for k := 0; k < 2; k++ { // run twice to ensure similar results
attrs, _ := tb.Attributes(svg.X, svg.Y, svg.Width, svg.Height, svg.Rx, svg.Ry)
for i := 0; i < 6; i++ {
test.That(t, attrs[i] != nil, "attr must not be nil")
val := string(attrs[i].AttrVal)
j, _ := strconv.ParseInt(val, 10, 32)
test.That(t, int(j) == i, "attr data is bad at position", i)
}
}
}
////////////////////////////////////////////////////////////////
func BenchmarkAttributes(b *testing.B) {
r := bytes.NewBufferString(`<rect x="0" y="1" width="2" height="3" rx="4" ry="5"/>`)
l := xml.NewLexer(r)
tb := NewTokenBuffer(l)
tb.Shift()
tb.Peek(6)
for i := 0; i < b.N; i++ {
tb.Attributes(svg.X, svg.Y, svg.Width, svg.Height, svg.Rx, svg.Ry)
}
}

vendor/github.com/tdewolff/minify/svg/pathdata.go generated vendored Normal file

@ -0,0 +1,282 @@
package svg
import (
strconvStdlib "strconv"
"github.com/tdewolff/minify"
"github.com/tdewolff/parse"
"github.com/tdewolff/parse/strconv"
)
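// PathData holds reusable state and buffers for shortening SVG path data.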
type PathData struct {
o *Minifier
x, y float64
coords [][]byte
coordFloats []float64
state PathDataState
curBuffer []byte
altBuffer []byte
coordBuffer []byte
}
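// PathDataState tracks the last written command and number form, so that
// separators between subsequent numbers can be omitted where possible.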
type PathDataState struct {
cmd byte
prevDigit bool
prevDigitIsInt bool
}
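// NewPathData returns a new PathData that uses the minifier's precision.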
func NewPathData(o *Minifier) *PathData {
return &PathData{
o: o,
}
}
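// ShortenPathData takes the value of a path data attribute and returns a
// shortened version; the input buffer is reused, so b is modified in place.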
func (p *PathData) ShortenPathData(b []byte) []byte {
var x0, y0 float64
var cmd byte
p.x, p.y = 0.0, 0.0
p.coords = p.coords[:0]
p.coordFloats = p.coordFloats[:0]
p.state = PathDataState{}
j := 0
for i := 0; i < len(b); i++ {
c := b[i]
if c == ' ' || c == ',' || c == '\n' || c == '\r' || c == '\t' {
continue
} else if c >= 'A' && (cmd == 0 || cmd != c || c == 'M' || c == 'm') { // any command
if cmd != 0 {
j += p.copyInstruction(b[j:], cmd)
if cmd == 'M' || cmd == 'm' {
x0 = p.x
y0 = p.y
} else if cmd == 'Z' || cmd == 'z' {
p.x = x0
p.y = y0
}
}
cmd = c
p.coords = p.coords[:0]
p.coordFloats = p.coordFloats[:0]
} else if n := parse.Number(b[i:]); n > 0 {
f, _ := strconv.ParseFloat(b[i : i+n])
p.coords = append(p.coords, b[i:i+n])
p.coordFloats = append(p.coordFloats, f)
i += n - 1
}
}
if cmd != 0 {
j += p.copyInstruction(b[j:], cmd)
}
return b[:j]
}
func (p *PathData) copyInstruction(b []byte, cmd byte) int {
n := len(p.coords)
if n == 0 {
if cmd == 'Z' || cmd == 'z' {
b[0] = 'z'
return 1
}
return 0
}
isRelCmd := cmd >= 'a'
// get new cursor coordinates
di := 0
if (cmd == 'M' || cmd == 'm' || cmd == 'L' || cmd == 'l' || cmd == 'T' || cmd == 't') && n%2 == 0 {
di = 2
// reprint M always, as the first pair is a move but subsequent pairs are L
if cmd == 'M' || cmd == 'm' {
p.state.cmd = byte(0)
}
} else if cmd == 'H' || cmd == 'h' || cmd == 'V' || cmd == 'v' {
di = 1
} else if (cmd == 'S' || cmd == 's' || cmd == 'Q' || cmd == 'q') && n%4 == 0 {
di = 4
} else if (cmd == 'C' || cmd == 'c') && n%6 == 0 {
di = 6
} else if (cmd == 'A' || cmd == 'a') && n%7 == 0 {
di = 7
} else {
return 0
}
j := 0
origCmd := cmd
ax, ay := 0.0, 0.0
for i := 0; i < n; i += di {
// subsequent coordinate pairs for M are really L
if i > 0 && (origCmd == 'M' || origCmd == 'm') {
origCmd = 'L' + (origCmd - 'M')
}
cmd = origCmd
coords := p.coords[i : i+di]
coordFloats := p.coordFloats[i : i+di]
if cmd == 'H' || cmd == 'h' {
ax = coordFloats[di-1]
if isRelCmd {
ay = 0
} else {
ay = p.y
}
} else if cmd == 'V' || cmd == 'v' {
if isRelCmd {
ax = 0
} else {
ax = p.x
}
ay = coordFloats[di-1]
} else {
ax = coordFloats[di-2]
ay = coordFloats[di-1]
}
// switch from L to H or V whenever possible
if cmd == 'L' || cmd == 'l' {
if isRelCmd {
if coordFloats[0] == 0 {
cmd = 'v'
coords = coords[1:]
coordFloats = coordFloats[1:]
} else if coordFloats[1] == 0 {
cmd = 'h'
coords = coords[:1]
coordFloats = coordFloats[:1]
}
} else {
if coordFloats[0] == p.x {
cmd = 'V'
coords = coords[1:]
coordFloats = coordFloats[1:]
} else if coordFloats[1] == p.y {
cmd = 'H'
coords = coords[:1]
coordFloats = coordFloats[:1]
}
}
}
// make a current and alternated path with absolute/relative altered
var curState, altState PathDataState
curState = p.shortenCurPosInstruction(cmd, coords)
if isRelCmd {
altState = p.shortenAltPosInstruction(cmd-'a'+'A', coordFloats, p.x, p.y)
} else {
altState = p.shortenAltPosInstruction(cmd-'A'+'a', coordFloats, -p.x, -p.y)
}
// choose shortest, relative or absolute path?
if len(p.altBuffer) < len(p.curBuffer) {
j += copy(b[j:], p.altBuffer)
p.state = altState
} else {
j += copy(b[j:], p.curBuffer)
p.state = curState
}
if isRelCmd {
p.x += ax
p.y += ay
} else {
p.x = ax
p.y = ay
}
}
return j
}
func (p *PathData) shortenCurPosInstruction(cmd byte, coords [][]byte) PathDataState {
state := p.state
p.curBuffer = p.curBuffer[:0]
if cmd != state.cmd && !(state.cmd == 'M' && cmd == 'L' || state.cmd == 'm' && cmd == 'l') {
p.curBuffer = append(p.curBuffer, cmd)
state.cmd = cmd
state.prevDigit = false
state.prevDigitIsInt = false
}
for i, coord := range coords {
isFlag := false
if (cmd == 'A' || cmd == 'a') && (i%7 == 3 || i%7 == 4) {
isFlag = true
}
coord = minify.Number(coord, p.o.Decimals)
state.copyNumber(&p.curBuffer, coord, isFlag)
}
return state
}
func (p *PathData) shortenAltPosInstruction(cmd byte, coordFloats []float64, x, y float64) PathDataState {
state := p.state
p.altBuffer = p.altBuffer[:0]
if cmd != state.cmd && !(state.cmd == 'M' && cmd == 'L' || state.cmd == 'm' && cmd == 'l') {
p.altBuffer = append(p.altBuffer, cmd)
state.cmd = cmd
state.prevDigit = false
state.prevDigitIsInt = false
}
for i, f := range coordFloats {
isFlag := false
if cmd == 'L' || cmd == 'l' || cmd == 'C' || cmd == 'c' || cmd == 'S' || cmd == 's' || cmd == 'Q' || cmd == 'q' || cmd == 'T' || cmd == 't' || cmd == 'M' || cmd == 'm' {
if i%2 == 0 {
f += x
} else {
f += y
}
} else if cmd == 'H' || cmd == 'h' {
f += x
} else if cmd == 'V' || cmd == 'v' {
f += y
} else if cmd == 'A' || cmd == 'a' {
if i%7 == 5 {
f += x
} else if i%7 == 6 {
f += y
} else if i%7 == 3 || i%7 == 4 {
isFlag = true
}
}
p.coordBuffer = strconvStdlib.AppendFloat(p.coordBuffer[:0], f, 'g', -1, 64)
coord := minify.Number(p.coordBuffer, p.o.Decimals)
state.copyNumber(&p.altBuffer, coord, isFlag)
}
return state
}
func (state *PathDataState) copyNumber(buffer *[]byte, coord []byte, isFlag bool) {
if state.prevDigit && (coord[0] >= '0' && coord[0] <= '9' || coord[0] == '.' && state.prevDigitIsInt) {
if coord[0] == '0' && !state.prevDigitIsInt {
if isFlag {
*buffer = append(*buffer, ' ', '0')
state.prevDigitIsInt = true
} else {
*buffer = append(*buffer, '.', '0') // aggressively add a dot so subsequent numbers can drop the leading space
// prevDigit stays true and prevDigitIsInt stays false
}
return
}
*buffer = append(*buffer, ' ')
}
state.prevDigit = true
state.prevDigitIsInt = true
if len(coord) > 2 && coord[len(coord)-2] == '0' && coord[len(coord)-1] == '0' {
coord[len(coord)-2] = 'e'
coord[len(coord)-1] = '2'
state.prevDigitIsInt = false
} else {
for _, c := range coord {
if c == '.' || c == 'e' || c == 'E' {
state.prevDigitIsInt = false
break
}
}
}
*buffer = append(*buffer, coord...)
}
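A short sketch of the path shortener in isolation; ShortenPathData rewrites its input buffer in place, and the expected output below matches the test table that follows:

package main

import (
	"fmt"

	"github.com/tdewolff/minify/svg"
)

func main() {
	p := svg.NewPathData(&svg.Minifier{Decimals: -1})
	d := []byte("M 100 100 L 300 100 L 200 100 z")
	// prints M1e2 1e2H3e2 2e2z: the L collapses to H and 100 becomes 1e2
	fmt.Println(string(p.ShortenPathData(d)))
}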

vendor/github.com/tdewolff/minify/svg/pathdata_test.go generated vendored Normal file

@ -0,0 +1,60 @@
package svg // import "github.com/tdewolff/minify/svg"
import (
"testing"
"github.com/tdewolff/test"
)
func TestPathData(t *testing.T) {
var pathDataTests = []struct {
pathData string
expected string
}{
{"M10 10 20 10", "M10 10H20"},
{"M10 10 10 20", "M10 10V20"},
{"M50 50 100 100", "M50 50l50 50"},
{"m50 50 40 40m50 50", "m50 50 40 40m50 50"},
{"M10 10zM15 15", "M10 10zm5 5"},
{"M50 50H55V55", "M50 50h5v5"},
{"M10 10L11 10 11 11", "M10 10h1v1"},
{"M10 10l1 0 0 1", "M10 10h1v1"},
{"M10 10L11 11 0 0", "M10 10l1 1L0 0"},
{"M246.614 51.028L246.614-5.665 189.922-5.665", "M246.614 51.028V-5.665H189.922"},
{"M100,200 C100,100 250,100 250,200 S400,300 400,200", "M1e2 2e2c0-1e2 150-1e2 150 0s150 1e2 150 0"},
{"M200,300 Q400,50 600,300 T1000,300", "M2e2 3e2q2e2-250 4e2.0t4e2.0"},
{"M300,200 h-150 a150,150 0 1,0 150,-150 z", "M3e2 2e2H150A150 150 0 1 0 3e2 50z"},
{"x5 5L10 10", "L10 10"},
{"M.0.1", "M0 .1"},
{"M200.0.1", "M2e2.1"},
{"M0 0a3.28 3.28.0.0.0 3.279 3.28", "M0 0a3.28 3.28.0 0 0 3.279 3.28"}, // #114
{"A1.1.0.0.0.0.2.3", "A1.1.0.0 0 0 .2."}, // bad input (sweep and large-arc are not booleans) gives bad output
// fuzz
{"", ""},
{"ML", ""},
{".8.00c0", ""},
{".1.04h0e6.0e6.0e0.0", "h0 0 0 0"},
{"M.1.0.0.2Z", "M.1.0.0.2z"},
{"A.0.0.0.0.3.2e3.7.0.0.0.0.0.1.3.0.0.0.0.2.3.2.0.0.0.0.20.2e-10.0.0.0.0.0.0.0.0", "A0 0 0 0 .3 2e2.7.0.0.0 0 0 .1.3 30 0 0 0 .2.3.2 3 20 0 0 .2 2e-1100 11 0 0 0 "}, // bad input (sweep and large-arc are not booleans) gives bad output
}
p := NewPathData(&Minifier{Decimals: -1})
for _, tt := range pathDataTests {
t.Run(tt.pathData, func(t *testing.T) {
path := p.ShortenPathData([]byte(tt.pathData))
test.Minify(t, tt.pathData, nil, string(path), tt.expected)
})
}
}
////////////////////////////////////////////////////////////////
func BenchmarkShortenPathData(b *testing.B) {
p := NewPathData(&Minifier{})
r := []byte("M8.64,223.948c0,0,143.468,3.431,185.777-181.808c2.673-11.702-1.23-20.154,1.316-33.146h16.287c0,0-3.14,17.248,1.095,30.848c21.392,68.692-4.179,242.343-204.227,196.59L8.64,223.948z")
for i := 0; i < b.N; i++ {
p.ShortenPathData(r)
}
}

vendor/github.com/tdewolff/minify/svg/svg.go generated vendored Normal file

@ -0,0 +1,434 @@
// Package svg minifies SVG 1.1 following the specifications at http://www.w3.org/TR/SVG11/.
package svg // import "github.com/tdewolff/minify/svg"
import (
"bytes"
"io"
"github.com/tdewolff/minify"
minifyCSS "github.com/tdewolff/minify/css"
"github.com/tdewolff/parse"
"github.com/tdewolff/parse/buffer"
"github.com/tdewolff/parse/css"
"github.com/tdewolff/parse/svg"
"github.com/tdewolff/parse/xml"
)
var (
voidBytes = []byte("/>")
isBytes = []byte("=")
spaceBytes = []byte(" ")
cdataEndBytes = []byte("]]>")
pathBytes = []byte("<path")
dBytes = []byte("d")
zeroBytes = []byte("0")
cssMimeBytes = []byte("text/css")
urlBytes = []byte("url(")
)
////////////////////////////////////////////////////////////////
// DefaultMinifier is the default minifier.
var DefaultMinifier = &Minifier{Decimals: -1}
// Minifier is an SVG minifier.
type Minifier struct {
Decimals int
}
// Minify minifies SVG data; it reads from r and writes to w.
func Minify(m *minify.M, w io.Writer, r io.Reader, params map[string]string) error {
return DefaultMinifier.Minify(m, w, r, params)
}
// Minify minifies SVG data; it reads from r and writes to w.
func (o *Minifier) Minify(m *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
var tag svg.Hash
defaultStyleType := cssMimeBytes
defaultStyleParams := map[string]string(nil)
defaultInlineStyleParams := map[string]string{"inline": "1"}
p := NewPathData(o)
minifyBuffer := buffer.NewWriter(make([]byte, 0, 64))
attrByteBuffer := make([]byte, 0, 64)
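// gStack records, for each open g element, whether its start tag was written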
gStack := make([]bool, 0)
l := xml.NewLexer(r)
defer l.Restore()
tb := NewTokenBuffer(l)
for {
t := *tb.Shift()
SWITCH:
switch t.TokenType {
case xml.ErrorToken:
if l.Err() == io.EOF {
return nil
}
return l.Err()
case xml.DOCTYPEToken:
if len(t.Text) > 0 && t.Text[len(t.Text)-1] == ']' {
if _, err := w.Write(t.Data); err != nil {
return err
}
}
case xml.TextToken:
t.Data = parse.ReplaceMultipleWhitespace(parse.TrimWhitespace(t.Data))
if tag == svg.Style && len(t.Data) > 0 {
if err := m.MinifyMimetype(defaultStyleType, w, buffer.NewReader(t.Data), defaultStyleParams); err != nil {
if err != minify.ErrNotExist {
return err
} else if _, err := w.Write(t.Data); err != nil {
return err
}
}
} else if _, err := w.Write(t.Data); err != nil {
return err
}
case xml.CDATAToken:
if tag == svg.Style {
minifyBuffer.Reset()
if err := m.MinifyMimetype(defaultStyleType, minifyBuffer, buffer.NewReader(t.Text), defaultStyleParams); err == nil {
t.Data = append(t.Data[:9], minifyBuffer.Bytes()...)
t.Text = t.Data[9:]
t.Data = append(t.Data, cdataEndBytes...)
} else if err != minify.ErrNotExist {
return err
}
}
var useText bool
if t.Text, useText = xml.EscapeCDATAVal(&attrByteBuffer, t.Text); useText {
t.Text = parse.ReplaceMultipleWhitespace(parse.TrimWhitespace(t.Text))
if _, err := w.Write(t.Text); err != nil {
return err
}
} else if _, err := w.Write(t.Data); err != nil {
return err
}
case xml.StartTagPIToken:
for {
if t := *tb.Shift(); t.TokenType == xml.StartTagClosePIToken || t.TokenType == xml.ErrorToken {
break
}
}
case xml.StartTagToken:
tag = t.Hash
if containerTagMap[tag] { // skip empty containers
i := 0
for {
next := tb.Peek(i)
i++
if next.TokenType == xml.EndTagToken && next.Hash == tag || next.TokenType == xml.StartTagCloseVoidToken || next.TokenType == xml.ErrorToken {
for j := 0; j < i; j++ {
tb.Shift()
}
break SWITCH
} else if next.TokenType != xml.AttributeToken && next.TokenType != xml.StartTagCloseToken {
break
}
}
if tag == svg.G {
if tb.Peek(0).TokenType == xml.StartTagCloseToken {
gStack = append(gStack, false)
tb.Shift()
break
}
gStack = append(gStack, true)
}
} else if tag == svg.Metadata {
skipTag(tb, tag)
break
} else if tag == svg.Line {
o.shortenLine(tb, &t, p)
} else if tag == svg.Rect && !o.shortenRect(tb, &t, p) {
skipTag(tb, tag)
break
} else if tag == svg.Polygon || tag == svg.Polyline {
o.shortenPoly(tb, &t, p)
}
if _, err := w.Write(t.Data); err != nil {
return err
}
case xml.AttributeToken:
if len(t.AttrVal) == 0 || t.Text == nil { // data is nil when attribute has been removed
continue
}
attr := t.Hash
val := t.AttrVal
if n, m := parse.Dimension(val); n+m == len(val) && attr != svg.Version { // TODO: inefficient, temporary measure
val, _ = o.shortenDimension(val)
}
if attr == svg.Xml_Space && bytes.Equal(val, []byte("preserve")) ||
tag == svg.Svg && (attr == svg.Version && bytes.Equal(val, []byte("1.1")) ||
attr == svg.X && bytes.Equal(val, []byte("0")) ||
attr == svg.Y && bytes.Equal(val, []byte("0")) ||
attr == svg.Width && bytes.Equal(val, []byte("100%")) ||
attr == svg.Height && bytes.Equal(val, []byte("100%")) ||
attr == svg.PreserveAspectRatio && bytes.Equal(val, []byte("xMidYMid meet")) ||
attr == svg.BaseProfile && bytes.Equal(val, []byte("none")) ||
attr == svg.ContentScriptType && bytes.Equal(val, []byte("application/ecmascript")) ||
attr == svg.ContentStyleType && bytes.Equal(val, []byte("text/css"))) ||
tag == svg.Style && attr == svg.Type && bytes.Equal(val, []byte("text/css")) {
continue
}
if _, err := w.Write(spaceBytes); err != nil {
return err
}
if _, err := w.Write(t.Text); err != nil {
return err
}
if _, err := w.Write(isBytes); err != nil {
return err
}
if tag == svg.Svg && attr == svg.ContentStyleType {
val = minify.ContentType(val)
defaultStyleType = val
} else if attr == svg.Style {
minifyBuffer.Reset()
if err := m.MinifyMimetype(defaultStyleType, minifyBuffer, buffer.NewReader(val), defaultInlineStyleParams); err == nil {
val = minifyBuffer.Bytes()
} else if err != minify.ErrNotExist {
return err
}
} else if attr == svg.D {
val = p.ShortenPathData(val)
} else if attr == svg.ViewBox {
j := 0
newVal := val[:0]
for i := 0; i < 4; i++ {
if i != 0 {
if j >= len(val) || val[j] != ' ' && val[j] != ',' {
newVal = append(newVal, val[j:]...)
break
}
newVal = append(newVal, ' ')
j++
}
if dim, n := o.shortenDimension(val[j:]); n > 0 {
newVal = append(newVal, dim...)
j += n
} else {
newVal = append(newVal, val[j:]...)
break
}
}
val = newVal
} else if colorAttrMap[attr] && len(val) > 0 && (len(val) < 5 || !parse.EqualFold(val[:4], urlBytes)) {
parse.ToLower(val)
if val[0] == '#' {
if name, ok := minifyCSS.ShortenColorHex[string(val)]; ok {
val = name
} else if len(val) == 7 && val[1] == val[2] && val[3] == val[4] && val[5] == val[6] {
val[2] = val[3]
val[3] = val[5]
val = val[:4]
}
} else if hex, ok := minifyCSS.ShortenColorName[css.ToHash(val)]; ok {
val = hex
// } else if len(val) > 5 && bytes.Equal(val[:4], []byte("rgb(")) && val[len(val)-1] == ')' {
// TODO: handle rgb(x, y, z) and hsl(x, y, z)
}
}
// prefer single or double quotes depending on what occurs more often in value
val = xml.EscapeAttrVal(&attrByteBuffer, val)
if _, err := w.Write(val); err != nil {
return err
}
case xml.StartTagCloseToken:
next := tb.Peek(0)
skipExtra := false
if next.TokenType == xml.TextToken && parse.IsAllWhitespace(next.Data) {
next = tb.Peek(1)
skipExtra = true
}
if next.TokenType == xml.EndTagToken {
// collapse empty tags to single void tag
tb.Shift()
if skipExtra {
tb.Shift()
}
if _, err := w.Write(voidBytes); err != nil {
return err
}
} else {
if _, err := w.Write(t.Data); err != nil {
return err
}
}
case xml.StartTagCloseVoidToken:
tag = 0
if _, err := w.Write(t.Data); err != nil {
return err
}
case xml.EndTagToken:
tag = 0
if t.Hash == svg.G && len(gStack) > 0 {
if !gStack[len(gStack)-1] {
gStack = gStack[:len(gStack)-1]
break
}
gStack = gStack[:len(gStack)-1]
}
if len(t.Data) > 3+len(t.Text) {
t.Data[2+len(t.Text)] = '>'
t.Data = t.Data[:3+len(t.Text)]
}
if _, err := w.Write(t.Data); err != nil {
return err
}
}
}
}
func (o *Minifier) shortenDimension(b []byte) ([]byte, int) {
if n, m := parse.Dimension(b); n > 0 {
unit := b[n : n+m]
b = minify.Number(b[:n], o.Decimals)
if len(b) != 1 || b[0] != '0' {
if m == 2 && unit[0] == 'p' && unit[1] == 'x' {
unit = nil
} else if m > 1 { // only percentage is length 1
parse.ToLower(unit)
}
b = append(b, unit...)
}
return b, n + m
}
return b, 0
}
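// shortenLine replaces a line element by an equivalent path element.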
func (o *Minifier) shortenLine(tb *TokenBuffer, t *Token, p *PathData) {
x1, y1, x2, y2 := zeroBytes, zeroBytes, zeroBytes, zeroBytes
if attrs, replacee := tb.Attributes(svg.X1, svg.Y1, svg.X2, svg.Y2); replacee != nil {
if attrs[0] != nil {
x1 = minify.Number(attrs[0].AttrVal, o.Decimals)
attrs[0].Text = nil
}
if attrs[1] != nil {
y1 = minify.Number(attrs[1].AttrVal, o.Decimals)
attrs[1].Text = nil
}
if attrs[2] != nil {
x2 = minify.Number(attrs[2].AttrVal, o.Decimals)
attrs[2].Text = nil
}
if attrs[3] != nil {
y2 = minify.Number(attrs[3].AttrVal, o.Decimals)
attrs[3].Text = nil
}
d := make([]byte, 0, 5+len(x1)+len(y1)+len(x2)+len(y2))
d = append(d, 'M')
d = append(d, x1...)
d = append(d, ' ')
d = append(d, y1...)
d = append(d, 'L')
d = append(d, x2...)
d = append(d, ' ')
d = append(d, y2...)
d = append(d, 'z')
d = p.ShortenPathData(d)
t.Data = pathBytes
replacee.Text = dBytes
replacee.AttrVal = d
}
}
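// shortenRect replaces a rect element by an equivalent path element; it
// returns false when the rect has zero width or height, so the whole tag
// can be skipped.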
func (o *Minifier) shortenRect(tb *TokenBuffer, t *Token, p *PathData) bool {
if attrs, replacee := tb.Attributes(svg.X, svg.Y, svg.Width, svg.Height, svg.Rx, svg.Ry); replacee != nil && attrs[4] == nil && attrs[5] == nil {
x, y, w, h := zeroBytes, zeroBytes, zeroBytes, zeroBytes
if attrs[0] != nil {
x = minify.Number(attrs[0].AttrVal, o.Decimals)
attrs[0].Text = nil
}
if attrs[1] != nil {
y = minify.Number(attrs[1].AttrVal, o.Decimals)
attrs[1].Text = nil
}
if attrs[2] != nil {
w = minify.Number(attrs[2].AttrVal, o.Decimals)
attrs[2].Text = nil
}
if attrs[3] != nil {
h = minify.Number(attrs[3].AttrVal, o.Decimals)
attrs[3].Text = nil
}
if len(w) == 0 || w[0] == '0' || len(h) == 0 || h[0] == '0' {
return false
}
d := make([]byte, 0, 6+2*len(x)+len(y)+len(w)+len(h))
d = append(d, 'M')
d = append(d, x...)
d = append(d, ' ')
d = append(d, y...)
d = append(d, 'h')
d = append(d, w...)
d = append(d, 'v')
d = append(d, h...)
d = append(d, 'H')
d = append(d, x...)
d = append(d, 'z')
d = p.ShortenPathData(d)
t.Data = pathBytes
replacee.Text = dBytes
replacee.AttrVal = d
}
return true
}
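// shortenPoly replaces polygon and polyline elements by an equivalent path
// element; the first coordinate pair becomes a moveto, and polygons get a
// closing z.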
func (o *Minifier) shortenPoly(tb *TokenBuffer, t *Token, p *PathData) {
if attrs, replacee := tb.Attributes(svg.Points); replacee != nil && attrs[0] != nil {
points := attrs[0].AttrVal
i := 0
for i < len(points) && !(points[i] == ' ' || points[i] == ',' || points[i] == '\n' || points[i] == '\r' || points[i] == '\t') {
i++
}
for i < len(points) && (points[i] == ' ' || points[i] == ',' || points[i] == '\n' || points[i] == '\r' || points[i] == '\t') {
i++
}
for i < len(points) && !(points[i] == ' ' || points[i] == ',' || points[i] == '\n' || points[i] == '\r' || points[i] == '\t') {
i++
}
endMoveTo := i
for i < len(points) && (points[i] == ' ' || points[i] == ',' || points[i] == '\n' || points[i] == '\r' || points[i] == '\t') {
i++
}
startLineTo := i
if i == len(points) {
return
}
d := make([]byte, 0, len(points)+3)
d = append(d, 'M')
d = append(d, points[:endMoveTo]...)
d = append(d, 'L')
d = append(d, points[startLineTo:]...)
if t.Hash == svg.Polygon {
d = append(d, 'z')
}
d = p.ShortenPathData(d)
t.Data = pathBytes
replacee.Text = dBytes
replacee.AttrVal = d
}
}
////////////////////////////////////////////////////////////////
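// skipTag skips all tokens until the given tag is closed.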
func skipTag(tb *TokenBuffer, tag svg.Hash) {
for {
if t := *tb.Shift(); (t.TokenType == xml.EndTagToken || t.TokenType == xml.StartTagCloseVoidToken) && t.Hash == tag || t.TokenType == xml.ErrorToken {
break
}
}
}

vendor/github.com/tdewolff/minify/svg/svg_test.go generated vendored Normal file

@ -0,0 +1,199 @@
package svg // import "github.com/tdewolff/minify/svg"
import (
"bytes"
"fmt"
"io"
"os"
"testing"
"github.com/tdewolff/minify"
"github.com/tdewolff/minify/css"
"github.com/tdewolff/test"
)
func TestSVG(t *testing.T) {
svgTests := []struct {
svg string
expected string
}{
{`<!-- comment -->`, ``},
{`<!DOCTYPE svg SYSTEM "foo.dtd">`, ``},
{`<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "foo.dtd" [ <!ENTITY x "bar"> ]>`, `<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "foo.dtd" [ <!ENTITY x "bar"> ]>`},
{`<!DOCTYPE svg SYSTEM "foo.dtd">`, ``},
{`<?xml version="1.0" ?>`, ``},
{`<style> <![CDATA[ x ]]> </style>`, `<style>x</style>`},
{`<style> <![CDATA[ <<<< ]]> </style>`, `<style>&lt;&lt;&lt;&lt;</style>`},
{`<style> <![CDATA[ <<<<< ]]> </style>`, `<style><![CDATA[ <<<<< ]]></style>`},
{`<style/><![CDATA[ <<<<< ]]>`, `<style/><![CDATA[ <<<<< ]]>`},
{`<svg version="1.0"></svg>`, `<svg version="1.0"/>`},
{`<svg version="1.1" x="0" y="0px" width="100%" height="100%"><path/></svg>`, `<svg><path/></svg>`},
{`<path x="a"> </path>`, `<path x="a"/>`},
{`<path x=" a "/>`, `<path x="a"/>`},
{"<path x=\" a \n b \"/>", `<path x="a b"/>`},
{`<path x="5.0px" y="0%"/>`, `<path x="5" y="0"/>`},
{`<svg viewBox="5.0px 5px 240IN px"><path/></svg>`, `<svg viewBox="5 5 240in px"><path/></svg>`},
{`<svg viewBox="5.0!5px"><path/></svg>`, `<svg viewBox="5!5px"><path/></svg>`},
{`<path d="M 100 100 L 300 100 L 200 100 z"/>`, `<path d="M1e2 1e2H3e2 2e2z"/>`},
{`<path d="M100 -100M200 300z"/>`, `<path d="M1e2-1e2M2e2 3e2z"/>`},
{`<path d="M0.5 0.6 M -100 0.5z"/>`, `<path d="M.5.6M-1e2.5z"/>`},
{`<path d="M01.0 0.6 z"/>`, `<path d="M1 .6z"/>`},
{`<path d="M20 20l-10-10z"/>`, `<path d="M20 20 10 10z"/>`},
{`<?xml version="1.0" encoding="utf-8"?>`, ``},
{`<svg viewbox="0 0 16 16"><path/></svg>`, `<svg viewbox="0 0 16 16"><path/></svg>`},
{`<g></g>`, ``},
{`<g><path/></g>`, `<path/>`},
{`<g id="a"><g><path/></g></g>`, `<g id="a"><path/></g>`},
{`<path fill="#ffffff"/>`, `<path fill="#fff"/>`},
{`<path fill="#fff"/>`, `<path fill="#fff"/>`},
{`<path fill="white"/>`, `<path fill="#fff"/>`},
{`<path fill="#ff0000"/>`, `<path fill="red"/>`},
{`<line x1="5" y1="10" x2="20" y2="40"/>`, `<path d="M5 10 20 40z"/>`},
{`<rect x="5" y="10" width="20" height="40"/>`, `<path d="M5 10h20v40H5z"/>`},
{`<rect x="-5.669" y="147.402" fill="#843733" width="252.279" height="14.177"/>`, `<path fill="#843733" d="M-5.669 147.402h252.279v14.177H-5.669z"/>`},
{`<rect x="5" y="10" rx="2" ry="3"/>`, `<rect x="5" y="10" rx="2" ry="3"/>`},
{`<rect x="5" y="10" height="40"/>`, ``},
{`<rect x="5" y="10" width="30" height="0"/>`, ``},
{`<polygon points="1,2 3,4"/>`, `<path d="M1 2 3 4z"/>`},
{`<polyline points="1,2 3,4"/>`, `<path d="M1 2 3 4"/>`},
{`<svg contentStyleType="text/json ; charset=iso-8859-1"><style>{a : true}</style></svg>`, `<svg contentStyleType="text/json;charset=iso-8859-1"><style>{a : true}</style></svg>`},
{`<metadata><dc:title /></metadata>`, ``},
// from SVGO
{`<!DOCTYPE bla><?xml?><!-- comment --><metadata/>`, ``},
{`<polygon fill="none" stroke="#000" points="-0.1,"/>`, `<polygon fill="none" stroke="#000" points="-0.1,"/>`}, // #45
{`<path stroke="url(#UPPERCASE)"/>`, `<path stroke="url(#UPPERCASE)"/>`}, // #117
// go fuzz
{`<0 d=09e9.6e-9e0`, `<0 d=""`},
{`<line`, `<line`},
}
m := minify.New()
for _, tt := range svgTests {
t.Run(tt.svg, func(t *testing.T) {
r := bytes.NewBufferString(tt.svg)
w := &bytes.Buffer{}
err := Minify(m, w, r, nil)
test.Minify(t, tt.svg, err, w.String(), tt.expected)
})
}
}
func TestSVGStyle(t *testing.T) {
svgTests := []struct {
svg string
expected string
}{
{`<style> a > b {} </style>`, `<style>a>b{}</style>`},
{`<style> <![CDATA[ @media x < y {} ]]> </style>`, `<style>@media x &lt; y{}</style>`},
{`<style> <![CDATA[ * { content: '<<<<<'; } ]]> </style>`, `<style><![CDATA[*{content:'<<<<<'}]]></style>`},
{`<style/><![CDATA[ * { content: '<<<<<'; ]]>`, `<style/><![CDATA[ * { content: '<<<<<'; ]]>`},
{`<path style="fill: black; stroke: #ff0000;"/>`, `<path style="fill:#000;stroke:red"/>`},
}
m := minify.New()
m.AddFunc("text/css", css.Minify)
for _, tt := range svgTests {
t.Run(tt.svg, func(t *testing.T) {
r := bytes.NewBufferString(tt.svg)
w := &bytes.Buffer{}
err := Minify(m, w, r, nil)
test.Minify(t, tt.svg, err, w.String(), tt.expected)
})
}
}
func TestSVGDecimals(t *testing.T) {
var svgTests = []struct {
svg string
expected string
}{
{`<svg x="1.234" y="0.001" width="1.001"><path/></svg>`, `<svg x="1.2" width="1"><path/></svg>`},
}
m := minify.New()
o := &Minifier{Decimals: 1}
for _, tt := range svgTests {
t.Run(tt.svg, func(t *testing.T) {
r := bytes.NewBufferString(tt.svg)
w := &bytes.Buffer{}
err := o.Minify(m, w, r, nil)
test.Minify(t, tt.svg, err, w.String(), tt.expected)
})
}
}
func TestReaderErrors(t *testing.T) {
r := test.NewErrorReader(0)
w := &bytes.Buffer{}
m := minify.New()
err := Minify(m, w, r, nil)
test.T(t, err, test.ErrPlain, "return error at first read")
}
func TestWriterErrors(t *testing.T) {
errorTests := []struct {
svg string
n []int
}{
{`<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "foo.dtd" [ <!ENTITY x "bar"> ]>`, []int{0}},
{`abc`, []int{0}},
{`<style>abc</style>`, []int{2}},
{`<![CDATA[ <<<< ]]>`, []int{0}},
{`<![CDATA[ <<<<< ]]>`, []int{0}},
{`<path d="x"/>`, []int{0, 1, 2, 3, 4, 5}},
{`<path></path>`, []int{1}},
{`<svg>x</svg>`, []int{1, 3}},
{`<svg>x</svg >`, []int{3}},
}
m := minify.New()
for _, tt := range errorTests {
for _, n := range tt.n {
t.Run(fmt.Sprint(tt.svg, " ", tt.n), func(t *testing.T) {
r := bytes.NewBufferString(tt.svg)
w := test.NewErrorWriter(n)
err := Minify(m, w, r, nil)
test.T(t, err, test.ErrPlain)
})
}
}
}
func TestMinifyErrors(t *testing.T) {
errorTests := []struct {
svg string
err error
}{
{`<style>abc</style>`, test.ErrPlain},
{`<style><![CDATA[abc]]></style>`, test.ErrPlain},
{`<path style="abc"/>`, test.ErrPlain},
}
m := minify.New()
m.AddFunc("text/css", func(_ *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
return test.ErrPlain
})
for _, tt := range errorTests {
t.Run(tt.svg, func(t *testing.T) {
r := bytes.NewBufferString(tt.svg)
w := &bytes.Buffer{}
err := Minify(m, w, r, nil)
test.T(t, err, tt.err)
})
}
}
////////////////////////////////////////////////////////////////
func ExampleMinify() {
m := minify.New()
m.AddFunc("image/svg+xml", Minify)
m.AddFunc("text/css", css.Minify)
if err := m.Minify("image/svg+xml", os.Stdout, os.Stdin); err != nil {
panic(err)
}
}

96
vendor/github.com/tdewolff/minify/svg/table.go generated vendored Normal file
View file

@ -0,0 +1,96 @@
package svg // import "github.com/tdewolff/minify/svg"
import "github.com/tdewolff/parse/svg"
var containerTagMap = map[svg.Hash]bool{
svg.A: true,
svg.Defs: true,
svg.G: true,
svg.Marker: true,
svg.Mask: true,
svg.Missing_Glyph: true,
svg.Pattern: true,
svg.Switch: true,
svg.Symbol: true,
}
var colorAttrMap = map[svg.Hash]bool{
svg.Color: true,
svg.Fill: true,
svg.Stroke: true,
svg.Stop_Color: true,
svg.Flood_Color: true,
svg.Lighting_Color: true,
}
// var styleAttrMap = map[svg.Hash]bool{
// svg.Font: true,
// svg.Font_Family: true,
// svg.Font_Size: true,
// svg.Font_Size_Adjust: true,
// svg.Font_Stretch: true,
// svg.Font_Style: true,
// svg.Font_Variant: true,
// svg.Font_Weight: true,
// svg.Direction: true,
// svg.Letter_Spacing: true,
// svg.Text_Decoration: true,
// svg.Unicode_Bidi: true,
// svg.White_Space: true,
// svg.Word_Spacing: true,
// svg.Clip: true,
// svg.Color: true,
// svg.Cursor: true,
// svg.Display: true,
// svg.Overflow: true,
// svg.Visibility: true,
// svg.Clip_Path: true,
// svg.Clip_Rule: true,
// svg.Mask: true,
// svg.Opacity: true,
// svg.Enable_Background: true,
// svg.Filter: true,
// svg.Flood_Color: true,
// svg.Flood_Opacity: true,
// svg.Lighting_Color: true,
// svg.Solid_Color: true,
// svg.Solid_Opacity: true,
// svg.Stop_Color: true,
// svg.Stop_Opacity: true,
// svg.Pointer_Events: true,
// svg.Buffered_Rendering: true,
// svg.Color_Interpolation: true,
// svg.Color_Interpolation_Filters: true,
// svg.Color_Profile: true,
// svg.Color_Rendering: true,
// svg.Fill: true,
// svg.Fill_Opacity: true,
// svg.Fill_Rule: true,
// svg.Image_Rendering: true,
// svg.Marker: true,
// svg.Marker_End: true,
// svg.Marker_Mid: true,
// svg.Marker_Start: true,
// svg.Shape_Rendering: true,
// svg.Stroke: true,
// svg.Stroke_Dasharray: true,
// svg.Stroke_Dashoffset: true,
// svg.Stroke_Linecap: true,
// svg.Stroke_Linejoin: true,
// svg.Stroke_Miterlimit: true,
// svg.Stroke_Opacity: true,
// svg.Stroke_Width: true,
// svg.Paint_Order: true,
// svg.Vector_Effect: true,
// svg.Viewport_Fill: true,
// svg.Viewport_Fill_Opacity: true,
// svg.Text_Rendering: true,
// svg.Alignment_Baseline: true,
// svg.Baseline_Shift: true,
// svg.Dominant_Baseline: true,
// svg.Glyph_Orientation_Horizontal: true,
// svg.Glyph_Orientation_Vertical: true,
// svg.Kerning: true,
// svg.Text_Anchor: true,
// svg.Writing_Mode: true,
// }

84
vendor/github.com/tdewolff/minify/xml/buffer.go generated vendored Normal file
View file

@ -0,0 +1,84 @@
package xml // import "github.com/tdewolff/minify/xml"
import "github.com/tdewolff/parse/xml"
// Token is a single token unit with an attribute value (if given) and hash of the data.
type Token struct {
xml.TokenType
Data []byte
Text []byte
AttrVal []byte
}
// TokenBuffer is a buffer that allows for token look-ahead.
type TokenBuffer struct {
l *xml.Lexer
buf []Token
pos int
}
// NewTokenBuffer returns a new TokenBuffer.
func NewTokenBuffer(l *xml.Lexer) *TokenBuffer {
return &TokenBuffer{
l: l,
buf: make([]Token, 0, 8),
}
}
func (z *TokenBuffer) read(t *Token) {
t.TokenType, t.Data = z.l.Next()
t.Text = z.l.Text()
if t.TokenType == xml.AttributeToken {
t.AttrVal = z.l.AttrVal()
} else {
t.AttrVal = nil
}
}
// Peek returns the ith element and possibly does an allocation.
// Peeking past an error will panic.
func (z *TokenBuffer) Peek(pos int) *Token {
pos += z.pos
if pos >= len(z.buf) {
if len(z.buf) > 0 && z.buf[len(z.buf)-1].TokenType == xml.ErrorToken {
return &z.buf[len(z.buf)-1]
}
c := cap(z.buf)
d := len(z.buf) - z.pos
p := pos - z.pos + 1 // required peek length
var buf []Token
if 2*p > c {
buf = make([]Token, 0, 2*c+p)
} else {
buf = z.buf
}
copy(buf[:d], z.buf[z.pos:])
buf = buf[:p]
pos -= z.pos
for i := d; i < p; i++ {
z.read(&buf[i])
if buf[i].TokenType == xml.ErrorToken {
buf = buf[:i+1]
pos = i
break
}
}
z.pos, z.buf = 0, buf
}
return &z.buf[pos]
}
// Shift returns the first element and advances position.
func (z *TokenBuffer) Shift() *Token {
if z.pos >= len(z.buf) {
t := &z.buf[:1][0] // write the next token directly into the buffer's backing array
z.read(t)
return t
}
t := &z.buf[z.pos]
z.pos++
return t
}

37
vendor/github.com/tdewolff/minify/xml/buffer_test.go generated vendored Normal file
View file

@ -0,0 +1,37 @@
package xml // import "github.com/tdewolff/minify/xml"
import (
"bytes"
"testing"
"github.com/tdewolff/parse/xml"
"github.com/tdewolff/test"
)
func TestBuffer(t *testing.T) {
// 0 12 3 45 6 7 8 9 0
s := `<p><a href="//url">text</a>text<!--comment--></p>`
z := NewTokenBuffer(xml.NewLexer(bytes.NewBufferString(s)))
tok := z.Shift()
test.That(t, string(tok.Text) == "p", "first token is <p>")
test.That(t, z.pos == 0, "shift first token and restore position")
test.That(t, len(z.buf) == 0, "shift first token and restore length")
test.That(t, string(z.Peek(2).Text) == "href", "third token is href")
test.That(t, z.pos == 0, "don't change position after peeking")
test.That(t, len(z.buf) == 3, "two tokens after peeking")
test.That(t, string(z.Peek(8).Text) == "p", "ninth token is <p>")
test.That(t, z.pos == 0, "don't change position after peeking")
test.That(t, len(z.buf) == 9, "nine tokens after peeking")
test.That(t, z.Peek(9).TokenType == xml.ErrorToken, "tenth token is an error")
test.That(t, z.Peek(9) == z.Peek(10), "tenth and eleventh token are EOF")
test.That(t, len(z.buf) == 10, "ten tokens after peeking")
_ = z.Shift()
tok = z.Shift()
test.That(t, string(tok.Text) == "a", "third token is <a>")
test.That(t, z.pos == 2, "position must advance by two after two shifts")
}

193
vendor/github.com/tdewolff/minify/xml/xml.go generated vendored Normal file
View file

@ -0,0 +1,193 @@
// Package xml minifies XML 1.0 following the specifications at http://www.w3.org/TR/xml/.
package xml // import "github.com/tdewolff/minify/xml"
import (
"io"
"github.com/tdewolff/minify"
"github.com/tdewolff/parse"
"github.com/tdewolff/parse/xml"
)
var (
isBytes = []byte("=")
spaceBytes = []byte(" ")
voidBytes = []byte("/>")
)
////////////////////////////////////////////////////////////////
// DefaultMinifier is the default minifier.
var DefaultMinifier = &Minifier{}
// Minifier is an XML minifier.
type Minifier struct {
KeepWhitespace bool
}
// Minify minifies XML data, it reads from r and writes to w.
func Minify(m *minify.M, w io.Writer, r io.Reader, params map[string]string) error {
return DefaultMinifier.Minify(m, w, r, params)
}
// Minify minifies XML data, it reads from r and writes to w.
func (o *Minifier) Minify(m *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
omitSpace := true // on true the next text token must not start with a space
attrByteBuffer := make([]byte, 0, 64)
l := xml.NewLexer(r)
defer l.Restore()
tb := NewTokenBuffer(l)
for {
t := *tb.Shift()
if t.TokenType == xml.CDATAToken {
if text, useText := xml.EscapeCDATAVal(&attrByteBuffer, t.Text); useText {
t.TokenType = xml.TextToken
t.Data = text
}
}
switch t.TokenType {
case xml.ErrorToken:
if l.Err() == io.EOF {
return nil
}
return l.Err()
case xml.DOCTYPEToken:
if _, err := w.Write(t.Data); err != nil {
return err
}
case xml.CDATAToken:
if _, err := w.Write(t.Data); err != nil {
return err
}
if len(t.Text) > 0 && parse.IsWhitespace(t.Text[len(t.Text)-1]) {
omitSpace = true
}
case xml.TextToken:
t.Data = parse.ReplaceMultipleWhitespace(t.Data)
// whitespace removal; trim left
if omitSpace && (t.Data[0] == ' ' || t.Data[0] == '\n') {
t.Data = t.Data[1:]
}
// whitespace removal; trim right
omitSpace = false
if len(t.Data) == 0 {
omitSpace = true
} else if t.Data[len(t.Data)-1] == ' ' || t.Data[len(t.Data)-1] == '\n' {
omitSpace = true
i := 0
for {
next := tb.Peek(i)
// trim if EOF, text token with whitespace begin or block token
if next.TokenType == xml.ErrorToken {
t.Data = t.Data[:len(t.Data)-1]
omitSpace = false
break
} else if next.TokenType == xml.TextToken {
// this only happens when a comment, doctype, CDATA or PI start tag was in between
// remove if the text token starts with a whitespace
if len(next.Data) > 0 && parse.IsWhitespace(next.Data[0]) {
t.Data = t.Data[:len(t.Data)-1]
omitSpace = false
}
break
} else if next.TokenType == xml.CDATAToken {
if len(next.Text) > 0 && parse.IsWhitespace(next.Text[0]) {
t.Data = t.Data[:len(t.Data)-1]
omitSpace = false
}
break
} else if next.TokenType == xml.StartTagToken || next.TokenType == xml.EndTagToken {
if !o.KeepWhitespace {
t.Data = t.Data[:len(t.Data)-1]
omitSpace = false
}
break
}
i++
}
}
if _, err := w.Write(t.Data); err != nil {
return err
}
case xml.StartTagToken:
if o.KeepWhitespace {
omitSpace = false
}
if _, err := w.Write(t.Data); err != nil {
return err
}
case xml.StartTagPIToken:
if _, err := w.Write(t.Data); err != nil {
return err
}
case xml.AttributeToken:
if _, err := w.Write(spaceBytes); err != nil {
return err
}
if _, err := w.Write(t.Text); err != nil {
return err
}
if _, err := w.Write(isBytes); err != nil {
return err
}
if len(t.AttrVal) < 2 {
if _, err := w.Write(t.AttrVal); err != nil {
return err
}
} else {
// prefer single or double quotes depending on what occurs more often in value
val := xml.EscapeAttrVal(&attrByteBuffer, t.AttrVal[1:len(t.AttrVal)-1])
if _, err := w.Write(val); err != nil {
return err
}
}
case xml.StartTagCloseToken:
next := tb.Peek(0)
skipExtra := false
if next.TokenType == xml.TextToken && parse.IsAllWhitespace(next.Data) {
next = tb.Peek(1)
skipExtra = true
}
if next.TokenType == xml.EndTagToken {
// collapse empty tags to single void tag
tb.Shift()
if skipExtra {
tb.Shift()
}
if _, err := w.Write(voidBytes); err != nil {
return err
}
} else {
if _, err := w.Write(t.Text); err != nil {
return err
}
}
case xml.StartTagCloseVoidToken:
if _, err := w.Write(t.Text); err != nil {
return err
}
case xml.StartTagClosePIToken:
if _, err := w.Write(t.Text); err != nil {
return err
}
case xml.EndTagToken:
if o.KeepWhitespace {
omitSpace = false
}
if len(t.Data) > 3+len(t.Text) {
t.Data[2+len(t.Text)] = '>'
t.Data = t.Data[:3+len(t.Text)]
}
if _, err := w.Write(t.Data); err != nil {
return err
}
}
}
}

129
vendor/github.com/tdewolff/minify/xml/xml_test.go generated vendored Normal file
View file

@ -0,0 +1,129 @@
package xml // import "github.com/tdewolff/minify/xml"
import (
"bytes"
"fmt"
"os"
"regexp"
"testing"
"github.com/tdewolff/minify"
"github.com/tdewolff/test"
)
func TestXML(t *testing.T) {
xmlTests := []struct {
xml string
expected string
}{
{"<!-- comment -->", ""},
{"<A>x</A>", "<A>x</A>"},
{"<a><b>x</b></a>", "<a><b>x</b></a>"},
{"<a><b>x\ny</b></a>", "<a><b>x\ny</b></a>"},
{"<a> <![CDATA[ a ]]> </a>", "<a>a</a>"},
{"<a >a</a >", "<a>a</a>"},
{"<?xml version=\"1.0\" ?>", "<?xml version=\"1.0\"?>"},
{"<x></x>", "<x/>"},
{"<x> </x>", "<x/>"},
{"<x a=\"b\"></x>", "<x a=\"b\"/>"},
{"<x a=\"\"></x>", "<x a=\"\"/>"},
{"<x a=a></x>", "<x a=a/>"},
{"<x a=\" a \n\r\t b \"/>", "<x a=\" a b \"/>"},
{"<x a=\"&apos;b&quot;\"></x>", "<x a=\"'b&#34;\"/>"},
{"<x a=\"&quot;&quot;'\"></x>", "<x a='\"\"&#39;'/>"},
{"<!DOCTYPE foo SYSTEM \"Foo.dtd\">", "<!DOCTYPE foo SYSTEM \"Foo.dtd\">"},
{"text <!--comment--> text", "text text"},
{"text\n<!--comment-->\ntext", "text\ntext"},
{"<!doctype html>", "<!doctype html=>"}, // bad formatted, doctype must be uppercase and html must have attribute value
{"<x>\n<!--y-->\n</x>", "<x></x>"},
{"<style>lala{color:red}</style>", "<style>lala{color:red}</style>"},
{`cats and dogs `, `cats and dogs`},
{`</0`, `</0`}, // go fuzz
}
m := minify.New()
for _, tt := range xmlTests {
t.Run(tt.xml, func(t *testing.T) {
r := bytes.NewBufferString(tt.xml)
w := &bytes.Buffer{}
err := Minify(m, w, r, nil)
test.Minify(t, tt.xml, err, w.String(), tt.expected)
})
}
}
func TestXMLKeepWhitespace(t *testing.T) {
xmlTests := []struct {
xml string
expected string
}{
{`cats and dogs `, `cats and dogs`},
{` <div> <i> test </i> <b> test </b> </div> `, `<div> <i> test </i> <b> test </b> </div>`},
{"text\n<!--comment-->\ntext", "text\ntext"},
{"text\n<!--comment-->text<!--comment--> text", "text\ntext text"},
{"<x>\n<!--y-->\n</x>", "<x>\n</x>"},
{"<style>lala{color:red}</style>", "<style>lala{color:red}</style>"},
{"<x> <?xml?> </x>", "<x><?xml?> </x>"},
{"<x> <![CDATA[ x ]]> </x>", "<x> x </x>"},
{"<x> <![CDATA[ <<<<< ]]> </x>", "<x><![CDATA[ <<<<< ]]></x>"},
}
m := minify.New()
xmlMinifier := &Minifier{KeepWhitespace: true}
for _, tt := range xmlTests {
t.Run(tt.xml, func(t *testing.T) {
r := bytes.NewBufferString(tt.xml)
w := &bytes.Buffer{}
err := xmlMinifier.Minify(m, w, r, nil)
test.Minify(t, tt.xml, err, w.String(), tt.expected)
})
}
}
func TestReaderErrors(t *testing.T) {
r := test.NewErrorReader(0)
w := &bytes.Buffer{}
m := minify.New()
err := Minify(m, w, r, nil)
test.T(t, err, test.ErrPlain, "return error at first read")
}
func TestWriterErrors(t *testing.T) {
errorTests := []struct {
xml string
n []int
}{
{`<!DOCTYPE foo>`, []int{0}},
{`<?xml?>`, []int{0, 1}},
{`<a x=y z="val">`, []int{0, 1, 2, 3, 4, 8, 9}},
{`<foo/>`, []int{1}},
{`</foo>`, []int{0}},
{`<foo></foo>`, []int{1}},
{`<![CDATA[data<<<<<]]>`, []int{0}},
{`text`, []int{0}},
}
m := minify.New()
for _, tt := range errorTests {
for _, n := range tt.n {
t.Run(fmt.Sprint(tt.xml, " ", tt.n), func(t *testing.T) {
r := bytes.NewBufferString(tt.xml)
w := test.NewErrorWriter(n)
err := Minify(m, w, r, nil)
test.T(t, err, test.ErrPlain)
})
}
}
}
////////////////////////////////////////////////////////////////
func ExampleMinify() {
m := minify.New()
m.AddFuncRegexp(regexp.MustCompile("[/+]xml$"), Minify)
if err := m.Minify("text/xml", os.Stdout, os.Stdin); err != nil {
panic(err)
}
}

5
vendor/github.com/tdewolff/parse/.travis.yml generated vendored Normal file
View file

@ -0,0 +1,5 @@
language: go
before_install:
- go get github.com/mattn/goveralls
script:
- goveralls -v -service travis-ci -repotoken $COVERALLS_TOKEN || go test -v ./...

22
vendor/github.com/tdewolff/parse/LICENSE.md generated vendored Normal file
View file

@ -0,0 +1,22 @@
Copyright (c) 2015 Taco de Wolff
Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.

66
vendor/github.com/tdewolff/parse/README.md generated vendored Normal file
View file

@ -0,0 +1,66 @@
# Parse [![Build Status](https://travis-ci.org/tdewolff/parse.svg?branch=master)](https://travis-ci.org/tdewolff/parse) [![GoDoc](http://godoc.org/github.com/tdewolff/parse?status.svg)](http://godoc.org/github.com/tdewolff/parse) [![Coverage Status](https://coveralls.io/repos/github/tdewolff/parse/badge.svg?branch=master)](https://coveralls.io/github/tdewolff/parse?branch=master)
This package contains several lexers and parsers written in [Go][1]. All subpackages are built to be streaming, high performance and to be in accordance with the official (latest) specifications.
The lexers are implemented using `buffer.Lexer` in https://github.com/tdewolff/parse/buffer and the parsers work on top of the lexers. Some subpackages have hashes defined (using [Hasher](https://github.com/tdewolff/hasher)) that speed up common byte-slice comparisons.
## Buffer
### Reader
Reader is a wrapper around a `[]byte` that implements the `io.Reader` interface. It is a much thinner layer than `bytes.Buffer` provides and is therefore faster.
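A minimal sketch of its use (the `Reader` API is exactly what `buffer/reader.go` below exposes):
``` go
package main
import (
	"io"
	"os"
	"github.com/tdewolff/parse/buffer"
)
func main() {
	r := buffer.NewReader([]byte("Lorem ipsum\n"))
	io.Copy(os.Stdout, r) // prints: Lorem ipsum
	r.Reset()             // rewind to the beginning of the same byte slice
	io.Copy(os.Stdout, r) // prints: Lorem ipsum again
}
```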
### Writer
Writer is a buffer that implements the `io.Writer` interface. It is a much thinner layer than `bytes.Buffer` provides and is therefore faster. It will expand the buffer when needed.
The reset functionality allows for better memory reuse. After calling `Reset`, it will overwrite the current buffer and thus reduce allocations.
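For example, reusing one `Writer` across passes avoids repeated allocations:
``` go
package main
import (
	"fmt"
	"github.com/tdewolff/parse/buffer"
)
func main() {
	w := buffer.NewWriter(make([]byte, 0, 16))
	w.Write([]byte("first pass"))
	fmt.Println(string(w.Bytes())) // first pass
	w.Reset() // the next write overwrites the same allocation
	w.Write([]byte("second pass"))
	fmt.Println(string(w.Bytes())) // second pass
}
```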
### Lexer
Lexer is a read buffer specifically designed for building lexers. It keeps track of two positions: a start and end position. The start position is the beginning of the current token being parsed, the end position is being moved forward until a valid token is found. Calling `Shift` will collapse the positions to the end and return the parsed `[]byte`.
Moving the end position can go through `Move(int)`, which also accepts negative integers. One can also save a position with `Pos() int` before trying to parse a token and, if parsing fails, rewind with `Rewind(int)`, passing the saved position.
`Peek(int) byte` will peek forward (relative to the end position) and return the byte at that location. `PeekRune(int) (rune, int)` returns the UTF-8 rune and its length at the given **byte** position. Upon an error `Peek` will return `0`; the **user must peek at every character** and not skip any, otherwise it may skip a `0` and panic on out-of-bounds indexing.
`Lexeme() []byte` will return the currently selected bytes, `Skip()` will collapse the selection. `Shift() []byte` is a combination of `Lexeme() []byte` and `Skip()`.
When the passed `io.Reader` returns an error, `Err() error` will return that error even if the end of the buffer has not been reached.
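Putting those calls together, a minimal tokenizing sketch (splitting on a `=`; the NULL byte marks the end of the buffer):
``` go
package main
import (
	"bytes"
	"fmt"
	"github.com/tdewolff/parse/buffer"
)
func main() {
	z := buffer.NewLexer(bytes.NewBufferString("key=value"))
	for z.Peek(0) != '=' && z.Peek(0) != 0 {
		z.Move(1)
	}
	key := z.Shift() // "key"; the selection collapses onto the '='
	z.Move(1)
	z.Skip() // drop the '=' itself
	for z.Peek(0) != 0 { // 0 signals the end of the buffer (or an error)
		z.Move(1)
	}
	fmt.Println(string(key), string(z.Shift())) // key value
}
```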
### StreamLexer
StreamLexer behaves like Lexer but uses a buffer pool to read in chunks from `io.Reader`, retaining old buffers in memory that are still in use and reusing old buffers otherwise. Calling `Free(n int)` frees up `n` bytes from the internal buffer(s). It holds an array of buffers so that all data still in use stays in memory. Calling `ShiftLen() int` returns the number of bytes that have been shifted since the previous call to `ShiftLen`, which can be used to specify how many bytes need to be freed up from the buffer. If you don't need to keep returned byte slices around, call `Free(ShiftLen())` after every `Shift` call.
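A short sketch of that shift-then-free pattern; the chunk size and stdin source are arbitrary choices for illustration:
``` go
package main
import (
	"os"
	"github.com/tdewolff/parse/buffer"
)
func main() {
	z := buffer.NewStreamLexer(os.Stdin)
	for z.Peek(0) != 0 { // 0 signals EOF or a read error
		z.Move(1)
		if z.Pos() >= 64 { // emit in fixed-size chunks
			os.Stdout.Write(z.Shift())
			z.Free(z.ShiftLen()) // nothing references the shifted bytes anymore
		}
	}
	os.Stdout.Write(z.Shift())
	z.Free(z.ShiftLen())
}
```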
## Strconv
This package contains string conversion functions much like the standard library's `strconv` package, but specifically tailored for the performance needs within the `minify` package.
For example, the floating-point to string conversion function is approximately twice as fast as the standard library, but it is not as precise.
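A tiny sketch of the byte-slice-oriented style; the `ParseFloat` signature used here (value plus number of bytes consumed) is an assumption — check the package's godoc:
``` go
package main
import (
	"fmt"
	"github.com/tdewolff/parse/strconv"
)
func main() {
	// Assumed signature: ParseFloat(b []byte) (float64, int), where the int
	// is the number of bytes consumed from the front of the slice.
	f, n := strconv.ParseFloat([]byte("5.2px"))
	fmt.Println(f, n) // 5.2 3
}
```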
## CSS
This package is a CSS3 lexer and parser. Both follow the specification at [CSS Syntax Module Level 3](http://www.w3.org/TR/css-syntax-3/). The lexer takes an io.Reader and converts it into tokens until the EOF. The parser returns a parse tree of the full io.Reader input stream, but the low-level `Next` function can be used for stream parsing to return grammar units until the EOF.
[See README here](https://github.com/tdewolff/parse/tree/master/css).
## HTML
This package is an HTML5 lexer. It follows the specification at [The HTML syntax](http://www.w3.org/TR/html5/syntax.html). The lexer takes an io.Reader and converts it into tokens until the EOF.
[See README here](https://github.com/tdewolff/parse/tree/master/html).
## JS
This package is a JS lexer (ECMA-262, edition 6.0). It follows the specification at [ECMAScript Language Specification](http://www.ecma-international.org/ecma-262/6.0/). The lexer takes an io.Reader and converts it into tokens until the EOF.
[See README here](https://github.com/tdewolff/parse/tree/master/js).
## JSON
This package is a JSON parser (ECMA-404). It follows the specification at [JSON](http://json.org/). The parser takes an io.Reader and converts it into tokens until the EOF.
[See README here](https://github.com/tdewolff/parse/tree/master/json).
## SVG
This package contains common hashes for SVG 1.1 tags and attributes.
## XML
This package is an XML 1.0 lexer. It follows the specification at [Extensible Markup Language (XML) 1.0 (Fifth Edition)](http://www.w3.org/TR/xml/). The lexer takes an io.Reader and converts it into tokens until the EOF.
[See README here](https://github.com/tdewolff/parse/tree/master/xml).
## License
Released under the [MIT license](LICENSE.md).
[1]: http://golang.org/ "Go Language"

15
vendor/github.com/tdewolff/parse/buffer/buffer.go generated vendored Normal file
View file

@ -0,0 +1,15 @@
/*
Package buffer contains buffer and wrapper types for byte slices. It is useful for writing lexers or other high-performance byte slice handling.
The `Reader` and `Writer` types implement the `io.Reader` and `io.Writer` respectively and provide a thinner and faster interface than `bytes.Buffer`.
The `Lexer` type is useful for building lexers because it keeps track of the start and end position of a byte selection, and shifts the bytes whenever a valid token is found.
The `StreamLexer` does the same, but keeps a buffer pool so that it reads a limited amount at a time, allowing it to parse from streaming sources.
*/
package buffer // import "github.com/tdewolff/parse/buffer"
// defaultBufSize specifies the default initial length of internal buffers.
var defaultBufSize = 4096
// MinBuf specifies the default initial length of internal buffers.
// Solely here to support old versions of parse.
var MinBuf = defaultBufSize

153
vendor/github.com/tdewolff/parse/buffer/lexer.go generated vendored Normal file
View file

@ -0,0 +1,153 @@
package buffer // import "github.com/tdewolff/parse/buffer"
import (
"io"
"io/ioutil"
)
var nullBuffer = []byte{0}
// Lexer is a buffered reader that allows peeking forward and shifting, taking an io.Reader.
// It keeps data in-memory until Free, taking a byte length, is called to move beyond the data.
type Lexer struct {
buf []byte
pos int // index in buf
start int // index in buf
err error
restore func()
}
// NewLexer returns a new Lexer for a given io.Reader, and uses ioutil.ReadAll to read it into a byte slice.
// If the io.Reader implements Bytes, that is used instead.
// It will append a NULL at the end of the buffer.
func NewLexer(r io.Reader) *Lexer {
var b []byte
if r != nil {
if buffer, ok := r.(interface {
Bytes() []byte
}); ok {
b = buffer.Bytes()
} else {
var err error
b, err = ioutil.ReadAll(r)
if err != nil {
return &Lexer{
buf: []byte{0},
err: err,
}
}
}
}
return NewLexerBytes(b)
}
// NewLexerBytes returns a new Lexer for a given byte slice, and appends NULL at the end.
// To avoid reallocation, make sure the capacity has room for one more byte.
func NewLexerBytes(b []byte) *Lexer {
z := &Lexer{
buf: b,
}
n := len(b)
if n == 0 {
z.buf = nullBuffer
} else if b[n-1] != 0 {
// Append NULL to buffer, but try to avoid reallocation
if cap(b) > n {
// Overwrite next byte but restore when done
b = b[:n+1]
c := b[n]
b[n] = 0
z.buf = b
z.restore = func() {
b[n] = c
}
} else {
z.buf = append(b, 0)
}
}
return z
}
// Restore restores the byte past the end of the buffer that was replaced by NULL.
func (z *Lexer) Restore() {
if z.restore != nil {
z.restore()
z.restore = nil
}
}
// Err returns the error returned from io.Reader or io.EOF when the end has been reached.
func (z *Lexer) Err() error {
if z.err != nil {
return z.err
} else if z.pos >= len(z.buf)-1 {
return io.EOF
}
return nil
}
// Peek returns the ith byte relative to the end position.
// Peek returns 0 when an error has occurred; Err returns the error.
func (z *Lexer) Peek(pos int) byte {
pos += z.pos
return z.buf[pos]
}
// PeekRune returns the rune and rune length of the ith byte relative to the end position.
func (z *Lexer) PeekRune(pos int) (rune, int) {
// from unicode/utf8
c := z.Peek(pos)
if c < 0xC0 || z.Peek(pos+1) == 0 {
return rune(c), 1
} else if c < 0xE0 || z.Peek(pos+2) == 0 {
return rune(c&0x1F)<<6 | rune(z.Peek(pos+1)&0x3F), 2
} else if c < 0xF0 || z.Peek(pos+3) == 0 {
return rune(c&0x0F)<<12 | rune(z.Peek(pos+1)&0x3F)<<6 | rune(z.Peek(pos+2)&0x3F), 3
}
return rune(c&0x07)<<18 | rune(z.Peek(pos+1)&0x3F)<<12 | rune(z.Peek(pos+2)&0x3F)<<6 | rune(z.Peek(pos+3)&0x3F), 4
}
// Move advances the position.
func (z *Lexer) Move(n int) {
z.pos += n
}
// Pos returns a mark to which the position can be rewound.
func (z *Lexer) Pos() int {
return z.pos - z.start
}
// Rewind rewinds the position to the given position.
func (z *Lexer) Rewind(pos int) {
z.pos = z.start + pos
}
// Lexeme returns the bytes of the current selection.
func (z *Lexer) Lexeme() []byte {
return z.buf[z.start:z.pos]
}
// Skip collapses the position to the end of the selection.
func (z *Lexer) Skip() {
z.start = z.pos
}
// Shift returns the bytes of the current selection and collapses the position to the end of the selection.
func (z *Lexer) Shift() []byte {
b := z.buf[z.start:z.pos]
z.start = z.pos
return b
}
// Offset returns the character position in the buffer.
func (z *Lexer) Offset() int {
return z.pos
}
// Bytes returns the underlying buffer.
func (z *Lexer) Bytes() []byte {
return z.buf
}

91
vendor/github.com/tdewolff/parse/buffer/lexer_test.go generated vendored Normal file
View file

@ -0,0 +1,91 @@
package buffer // import "github.com/tdewolff/parse/buffer"
import (
"bytes"
"io"
"testing"
"github.com/tdewolff/test"
)
func TestLexer(t *testing.T) {
s := `Lorem ipsum dolor sit amet, consectetur adipiscing elit.`
z := NewLexer(bytes.NewBufferString(s))
test.T(t, z.err, nil, "buffer has no error")
test.T(t, z.Err(), nil, "buffer is at EOF but must not return EOF until we reach that")
test.That(t, z.Pos() == 0, "buffer must start at position 0")
test.That(t, z.Peek(0) == 'L', "first character must be 'L'")
test.That(t, z.Peek(1) == 'o', "second character must be 'o'")
z.Move(1)
test.That(t, z.Peek(0) == 'o', "must be 'o' at position 1")
test.That(t, z.Peek(1) == 'r', "must be 'r' at position 1")
z.Rewind(6)
test.That(t, z.Peek(0) == 'i', "must be 'i' at position 6")
test.That(t, z.Peek(1) == 'p', "must be 'p' at position 7")
test.Bytes(t, z.Lexeme(), []byte("Lorem "), "buffered string must now read 'Lorem ' when at position 6")
test.Bytes(t, z.Shift(), []byte("Lorem "), "shift must return the buffered string")
test.That(t, z.Pos() == 0, "after shifting position must be 0")
test.That(t, z.Peek(0) == 'i', "must be 'i' at position 0 after shifting")
test.That(t, z.Peek(1) == 'p', "must be 'p' at position 1 after shifting")
test.T(t, z.Err(), nil, "error must be nil at this point")
z.Move(len(s) - len("Lorem ") - 1)
test.T(t, z.Err(), nil, "error must be nil just before the end of the buffer")
z.Skip()
test.That(t, z.Pos() == 0, "after skipping position must be 0")
z.Move(1)
test.T(t, z.Err(), io.EOF, "error must be EOF when past the buffer")
z.Move(-1)
test.T(t, z.Err(), nil, "error must be nil just before the end of the buffer, even when it has been past the buffer")
}
func TestLexerRunes(t *testing.T) {
z := NewLexer(bytes.NewBufferString("aæ†\U00100000"))
r, n := z.PeekRune(0)
test.That(t, n == 1, "first character must be length 1")
test.That(t, r == 'a', "first character must be rune 'a'")
r, n = z.PeekRune(1)
test.That(t, n == 2, "second character must be length 2")
test.That(t, r == 'æ', "second character must be rune 'æ'")
r, n = z.PeekRune(3)
test.That(t, n == 3, "fourth character must be length 3")
test.That(t, r == '†', "fourth character must be rune '†'")
r, n = z.PeekRune(6)
test.That(t, n == 4, "seventh character must be length 4")
test.That(t, r == '\U00100000', "seventh character must be rune '\U00100000'")
}
func TestLexerBadRune(t *testing.T) {
z := NewLexer(bytes.NewBufferString("\xF0")) // expect four byte rune
r, n := z.PeekRune(0)
test.T(t, n, 1, "length")
test.T(t, r, rune(0xF0), "rune")
}
func TestLexerZeroLen(t *testing.T) {
z := NewLexer(test.NewPlainReader(bytes.NewBufferString("")))
test.That(t, z.Peek(0) == 0, "first character must yield error")
}
func TestLexerEmptyReader(t *testing.T) {
z := NewLexer(test.NewEmptyReader())
test.That(t, z.Peek(0) == 0, "first character must yield error")
test.T(t, z.Err(), io.EOF, "error must be EOF")
test.That(t, z.Peek(0) == 0, "second peek must also yield error")
}
func TestLexerErrorReader(t *testing.T) {
z := NewLexer(test.NewErrorReader(0))
test.That(t, z.Peek(0) == 0, "first character must yield error")
test.T(t, z.Err(), test.ErrPlain, "error must be ErrPlain")
test.That(t, z.Peek(0) == 0, "second peek must also yield error")
}
func TestLexerBytes(t *testing.T) {
b := []byte{'t', 'e', 's', 't'}
z := NewLexerBytes(b)
test.That(t, z.Peek(4) == 0, "fifth character must yield NULL")
}

44
vendor/github.com/tdewolff/parse/buffer/reader.go generated vendored Normal file
View file

@ -0,0 +1,44 @@
package buffer // import "github.com/tdewolff/parse/buffer"
import "io"
// Reader implements an io.Reader over a byte slice.
type Reader struct {
buf []byte
pos int
}
// NewReader returns a new Reader for a given byte slice.
func NewReader(buf []byte) *Reader {
return &Reader{
buf: buf,
}
}
// Read reads bytes into the given byte slice and returns the number of bytes read and an error if one occurred.
func (r *Reader) Read(b []byte) (n int, err error) {
if len(b) == 0 {
return 0, nil
}
if r.pos >= len(r.buf) {
return 0, io.EOF
}
n = copy(b, r.buf[r.pos:])
r.pos += n
return
}
// Bytes returns the underlying byte slice.
func (r *Reader) Bytes() []byte {
return r.buf
}
// Reset resets the position of the read pointer to the beginning of the underlying byte slice.
func (r *Reader) Reset() {
r.pos = 0
}
// Len returns the length of the buffer.
func (r *Reader) Len() int {
return len(r.buf)
}

49
vendor/github.com/tdewolff/parse/buffer/reader_test.go generated vendored Normal file
View file

@ -0,0 +1,49 @@
package buffer // import "github.com/tdewolff/parse/buffer"
import (
"bytes"
"fmt"
"io"
"testing"
"github.com/tdewolff/test"
)
func TestReader(t *testing.T) {
s := []byte("abcde")
r := NewReader(s)
test.Bytes(t, r.Bytes(), s, "reader must return bytes stored")
buf := make([]byte, 3)
n, err := r.Read(buf)
test.T(t, err, nil, "error")
test.That(t, n == 3, "first read must read 3 characters")
test.Bytes(t, buf, []byte("abc"), "first read must match 'abc'")
n, err = r.Read(buf)
test.T(t, err, nil, "error")
test.That(t, n == 2, "second read must read 2 characters")
test.Bytes(t, buf[:n], []byte("de"), "second read must match 'de'")
n, err = r.Read(buf)
test.T(t, err, io.EOF, "error")
test.That(t, n == 0, "third read must read 0 characters")
n, err = r.Read(nil)
test.T(t, err, nil, "error")
test.That(t, n == 0, "read to nil buffer must return 0 characters read")
r.Reset()
n, err = r.Read(buf)
test.T(t, err, nil, "error")
test.That(t, n == 3, "read after reset must read 3 characters")
test.Bytes(t, buf, []byte("abc"), "read after reset must match 'abc'")
}
func ExampleNewReader() {
r := NewReader([]byte("Lorem ipsum"))
w := &bytes.Buffer{}
io.Copy(w, r)
fmt.Println(w.String())
// Output: Lorem ipsum
}

223
vendor/github.com/tdewolff/parse/buffer/streamlexer.go generated vendored Normal file
View file

@ -0,0 +1,223 @@
package buffer // import "github.com/tdewolff/parse/buffer"
import (
"io"
)
type block struct {
buf []byte
next int // index in pool plus one
active bool
}
type bufferPool struct {
pool []block
head int // index in pool plus one
tail int // index in pool plus one
pos int // byte pos in tail
}
func (z *bufferPool) swap(oldBuf []byte, size int) []byte {
// find new buffer that can be reused
swap := -1
for i := 0; i < len(z.pool); i++ {
if !z.pool[i].active && size <= cap(z.pool[i].buf) {
swap = i
break
}
}
if swap == -1 { // no free buffer found for reuse
if z.tail == 0 && z.pos >= len(oldBuf) && size <= cap(oldBuf) { // but we can reuse the current buffer!
z.pos -= len(oldBuf)
return oldBuf[:0]
}
// allocate new
z.pool = append(z.pool, block{make([]byte, 0, size), 0, true})
swap = len(z.pool) - 1
}
newBuf := z.pool[swap].buf
// put current buffer into pool
z.pool[swap] = block{oldBuf, 0, true}
if z.head != 0 {
z.pool[z.head-1].next = swap + 1
}
z.head = swap + 1
if z.tail == 0 {
z.tail = swap + 1
}
return newBuf[:0]
}
func (z *bufferPool) free(n int) {
z.pos += n
// move the tail over to next buffers
for z.tail != 0 && z.pos >= len(z.pool[z.tail-1].buf) {
z.pos -= len(z.pool[z.tail-1].buf)
newTail := z.pool[z.tail-1].next
z.pool[z.tail-1].active = false // after this, any thread may pick up the inactive buffer, so it can't be used anymore
z.tail = newTail
}
if z.tail == 0 {
z.head = 0
}
}
// StreamLexer is a buffered reader that allows peeking forward and shifting, taking an io.Reader.
// It keeps data in-memory until Free, taking a byte length, is called to move beyond the data.
type StreamLexer struct {
r io.Reader
err error
pool bufferPool
buf []byte
start int // index in buf
pos int // index in buf
prevStart int
free int
}
// NewStreamLexer returns a new StreamLexer for a given io.Reader with a 4kB estimated buffer size.
// If the io.Reader implements Bytes, that buffer is used instead.
func NewStreamLexer(r io.Reader) *StreamLexer {
return NewStreamLexerSize(r, defaultBufSize)
}
// NewStreamLexerSize returns a new StreamLexer for a given io.Reader and estimated required buffer size.
// If the io.Reader implements Bytes, that buffer is used instead.
func NewStreamLexerSize(r io.Reader, size int) *StreamLexer {
// if reader has the bytes in memory already, use that instead
if buffer, ok := r.(interface {
Bytes() []byte
}); ok {
return &StreamLexer{
err: io.EOF,
buf: buffer.Bytes(),
}
}
return &StreamLexer{
r: r,
buf: make([]byte, 0, size),
}
}
func (z *StreamLexer) read(pos int) byte {
if z.err != nil {
return 0
}
// free unused bytes
z.pool.free(z.free)
z.free = 0
// get new buffer
c := cap(z.buf)
p := pos - z.start + 1
if 2*p > c { // if the token is larger than half the buffer, increase buffer size
c = 2*c + p
}
d := len(z.buf) - z.start
buf := z.pool.swap(z.buf[:z.start], c)
copy(buf[:d], z.buf[z.start:]) // copy the left-overs (unfinished token) from the old buffer
// read in new data for the rest of the buffer
var n int
for pos-z.start >= d && z.err == nil {
n, z.err = z.r.Read(buf[d:cap(buf)])
d += n
}
pos -= z.start
z.pos -= z.start
z.start, z.buf = 0, buf[:d]
if pos >= d {
return 0
}
return z.buf[pos]
}
// Err returns the error returned from the io.Reader; an io.EOF is withheld as long as valid buffered bytes remain.
func (z *StreamLexer) Err() error {
if z.err == io.EOF && z.pos < len(z.buf) {
return nil
}
return z.err
}
// Free frees up bytes of length n from previously shifted tokens.
// Each call to Shift should at one point be followed by a call to Free with a length returned by ShiftLen.
func (z *StreamLexer) Free(n int) {
z.free += n
}
// Peek returns the ith byte relative to the end position and possibly does an allocation.
// Peek returns zero when an error has occurred; Err returns the error.
// TODO: inline function
func (z *StreamLexer) Peek(pos int) byte {
pos += z.pos
if uint(pos) < uint(len(z.buf)) { // uint for BCE
return z.buf[pos]
}
return z.read(pos)
}
// PeekRune returns the rune and rune length of the ith byte relative to the end position.
func (z *StreamLexer) PeekRune(pos int) (rune, int) {
// from unicode/utf8
c := z.Peek(pos)
if c < 0xC0 {
return rune(c), 1
} else if c < 0xE0 {
return rune(c&0x1F)<<6 | rune(z.Peek(pos+1)&0x3F), 2
} else if c < 0xF0 {
return rune(c&0x0F)<<12 | rune(z.Peek(pos+1)&0x3F)<<6 | rune(z.Peek(pos+2)&0x3F), 3
}
return rune(c&0x07)<<18 | rune(z.Peek(pos+1)&0x3F)<<12 | rune(z.Peek(pos+2)&0x3F)<<6 | rune(z.Peek(pos+3)&0x3F), 4
}
// Move advances the position.
func (z *StreamLexer) Move(n int) {
z.pos += n
}
// Pos returns a mark to which the position can be rewound.
func (z *StreamLexer) Pos() int {
return z.pos - z.start
}
// Rewind rewinds the position to the given position.
func (z *StreamLexer) Rewind(pos int) {
z.pos = z.start + pos
}
// Lexeme returns the bytes of the current selection.
func (z *StreamLexer) Lexeme() []byte {
return z.buf[z.start:z.pos]
}
// Skip collapses the position to the end of the selection.
func (z *StreamLexer) Skip() {
z.start = z.pos
}
// Shift returns the bytes of the current selection and collapses the position to the end of the selection.
// Use ShiftLen to obtain the number of bytes moved since the last call to Shift; that number can be used in calls to Free.
func (z *StreamLexer) Shift() []byte {
if z.pos > len(z.buf) { // make sure we peeked at least as much as we shift
z.read(z.pos - 1)
}
b := z.buf[z.start:z.pos]
z.start = z.pos
return b
}
// ShiftLen returns the number of bytes moved since the last call to ShiftLen. This can be used in calls to Free because it takes into account multiple Shifts or Skips.
func (z *StreamLexer) ShiftLen() int {
n := z.start - z.prevStart
z.prevStart = z.start
return n
}

148
vendor/github.com/tdewolff/parse/buffer/streamlexer_test.go generated vendored Normal file
View file

@ -0,0 +1,148 @@
package buffer // import "github.com/tdewolff/parse/buffer"
import (
"bytes"
"io"
"testing"
"github.com/tdewolff/test"
)
func TestBufferPool(t *testing.T) {
z := &bufferPool{}
lorem := []byte("Lorem ipsum")
dolor := []byte("dolor sit amet")
consectetur := []byte("consectetur adipiscing elit")
// set lorem as first buffer and get new dolor buffer
b := z.swap(lorem, len(dolor))
test.That(t, len(b) == 0)
test.That(t, cap(b) == len(dolor))
b = append(b, dolor...)
// free first buffer so it will be reused
z.free(len(lorem))
b = z.swap(b, len(lorem))
b = b[:len(lorem)]
test.Bytes(t, b, lorem)
b = z.swap(b, len(consectetur))
b = append(b, consectetur...)
// free in advance to reuse the same buffer
z.free(len(dolor) + len(lorem) + len(consectetur))
test.That(t, z.head == 0)
b = z.swap(b, len(consectetur))
b = b[:len(consectetur)]
test.Bytes(t, b, consectetur)
// free in advance but request larger buffer
z.free(len(consectetur))
b = z.swap(b, len(consectetur)+1)
b = append(b, consectetur...)
b = append(b, '.')
test.That(t, cap(b) == len(consectetur)+1)
}
func TestStreamLexer(t *testing.T) {
s := `Lorem ipsum dolor sit amet, consectetur adipiscing elit.`
z := NewStreamLexer(bytes.NewBufferString(s))
test.T(t, z.err, io.EOF, "buffer must be fully in memory")
test.T(t, z.Err(), nil, "buffer is at EOF but must not return EOF until we reach that")
test.That(t, z.Pos() == 0, "buffer must start at position 0")
test.That(t, z.Peek(0) == 'L', "first character must be 'L'")
test.That(t, z.Peek(1) == 'o', "second character must be 'o'")
z.Move(1)
test.That(t, z.Peek(0) == 'o', "must be 'o' at position 1")
test.That(t, z.Peek(1) == 'r', "must be 'r' at position 1")
z.Rewind(6)
test.That(t, z.Peek(0) == 'i', "must be 'i' at position 6")
test.That(t, z.Peek(1) == 'p', "must be 'p' at position 7")
test.Bytes(t, z.Lexeme(), []byte("Lorem "), "buffered string must now read 'Lorem ' when at position 6")
test.Bytes(t, z.Shift(), []byte("Lorem "), "shift must return the buffered string")
test.That(t, z.ShiftLen() == len("Lorem "), "shifted length must equal last shift")
test.That(t, z.Pos() == 0, "after shifting position must be 0")
test.That(t, z.Peek(0) == 'i', "must be 'i' at position 0 after shifting")
test.That(t, z.Peek(1) == 'p', "must be 'p' at position 1 after shifting")
test.T(t, z.Err(), nil, "error must be nil at this point")
z.Move(len(s) - len("Lorem ") - 1)
test.T(t, z.Err(), nil, "error must be nil just before the end of the buffer")
z.Skip()
test.That(t, z.Pos() == 0, "after skipping position must be 0")
z.Move(1)
test.T(t, z.Err(), io.EOF, "error must be EOF when past the buffer")
z.Move(-1)
test.T(t, z.Err(), nil, "error must be nil just before the end of the buffer, even when it has been past the buffer")
z.Free(0) // has already been tested
}
func TestStreamLexerShift(t *testing.T) {
s := `Lorem ipsum dolor sit amet, consectetur adipiscing elit.`
z := NewStreamLexerSize(test.NewPlainReader(bytes.NewBufferString(s)), 5)
z.Move(len("Lorem "))
test.Bytes(t, z.Shift(), []byte("Lorem "), "shift must return the buffered string")
test.That(t, z.ShiftLen() == len("Lorem "), "shifted length must equal last shift")
}
func TestStreamLexerSmall(t *testing.T) {
s := `abcdefghijklm`
z := NewStreamLexerSize(test.NewPlainReader(bytes.NewBufferString(s)), 4)
test.That(t, z.Peek(8) == 'i', "first character must be 'i' at position 8")
z = NewStreamLexerSize(test.NewPlainReader(bytes.NewBufferString(s)), 4)
test.That(t, z.Peek(12) == 'm', "first character must be 'm' at position 12")
z = NewStreamLexerSize(test.NewPlainReader(bytes.NewBufferString(s)), 0)
test.That(t, z.Peek(4) == 'e', "first character must be 'e' at position 4")
z = NewStreamLexerSize(test.NewPlainReader(bytes.NewBufferString(s)), 13)
test.That(t, z.Peek(13) == 0, "must yield error at position 13")
}
func TestStreamLexerSingle(t *testing.T) {
z := NewStreamLexer(test.NewInfiniteReader())
test.That(t, z.Peek(0) == '.')
test.That(t, z.Peek(1) == '.')
test.That(t, z.Peek(3) == '.', "required two successful reads")
}
func TestStreamLexerRunes(t *testing.T) {
z := NewStreamLexer(bytes.NewBufferString("aæ†\U00100000"))
r, n := z.PeekRune(0)
test.That(t, n == 1, "first character must be length 1")
test.That(t, r == 'a', "first character must be rune 'a'")
r, n = z.PeekRune(1)
test.That(t, n == 2, "second character must be length 2")
test.That(t, r == 'æ', "second character must be rune 'æ'")
r, n = z.PeekRune(3)
test.That(t, n == 3, "fourth character must be length 3")
test.That(t, r == '†', "fourth character must be rune '†'")
r, n = z.PeekRune(6)
test.That(t, n == 4, "seventh character must be length 4")
test.That(t, r == '\U00100000', "seventh character must be rune '\U00100000'")
}
func TestStreamLexerBadRune(t *testing.T) {
z := NewStreamLexer(bytes.NewBufferString("\xF0")) // expect four byte rune
r, n := z.PeekRune(0)
test.T(t, n, 4, "length")
test.T(t, r, rune(0), "rune")
}
func TestStreamLexerZeroLen(t *testing.T) {
z := NewStreamLexer(test.NewPlainReader(bytes.NewBufferString("")))
test.That(t, z.Peek(0) == 0, "first character must yield error")
}
func TestStreamLexerEmptyReader(t *testing.T) {
z := NewStreamLexer(test.NewEmptyReader())
test.That(t, z.Peek(0) == 0, "first character must yield error")
test.T(t, z.Err(), io.EOF, "error must be EOF")
test.That(t, z.Peek(0) == 0, "second peek must also yield error")
}

41
vendor/github.com/tdewolff/parse/buffer/writer.go generated vendored Normal file
View file

@ -0,0 +1,41 @@
package buffer // import "github.com/tdewolff/parse/buffer"
// Writer implements an io.Writer over a byte slice.
type Writer struct {
buf []byte
}
// NewWriter returns a new Writer for a given byte slice.
func NewWriter(buf []byte) *Writer {
return &Writer{
buf: buf,
}
}
// Write writes bytes from the given byte slice and returns the number of bytes written and an error if one occurred; when err != nil, n == 0.
func (w *Writer) Write(b []byte) (int, error) {
n := len(b)
end := len(w.buf)
if end+n > cap(w.buf) {
buf := make([]byte, end, 2*cap(w.buf)+n)
copy(buf, w.buf)
w.buf = buf
}
w.buf = w.buf[:end+n]
return copy(w.buf[end:], b), nil
}
// Len returns the length of the underlying byte slice.
func (w *Writer) Len() int {
return len(w.buf)
}
// Bytes returns the underlying byte slice.
func (w *Writer) Bytes() []byte {
return w.buf
}
// Reset empties and reuses the current buffer. Subsequent writes will overwrite the buffer, so any reference to the underlying slice is invalidated after this call.
func (w *Writer) Reset() {
w.buf = w.buf[:0]
}

46
vendor/github.com/tdewolff/parse/buffer/writer_test.go generated vendored Normal file
View file

@ -0,0 +1,46 @@
package buffer // import "github.com/tdewolff/parse/buffer"
import (
"fmt"
"testing"
"github.com/tdewolff/test"
)
func TestWriter(t *testing.T) {
w := NewWriter(make([]byte, 0, 3))
test.That(t, w.Len() == 0, "buffer must initially have zero length")
n, _ := w.Write([]byte("abc"))
test.That(t, n == 3, "first write must write 3 characters")
test.Bytes(t, w.Bytes(), []byte("abc"), "first write must match 'abc'")
test.That(t, w.Len() == 3, "buffer must have length 3 after first write")
n, _ = w.Write([]byte("def"))
test.That(t, n == 3, "second write must write 3 characters")
test.Bytes(t, w.Bytes(), []byte("abcdef"), "second write must match 'abcdef'")
w.Reset()
test.Bytes(t, w.Bytes(), []byte(""), "reset must match ''")
n, _ = w.Write([]byte("ghijkl"))
test.That(t, n == 6, "third write must write 6 characters")
test.Bytes(t, w.Bytes(), []byte("ghijkl"), "third write must match 'ghijkl'")
}
func ExampleNewWriter() {
w := NewWriter(make([]byte, 0, 11)) // initial buffer capacity is 11
w.Write([]byte("Lorem ipsum"))
fmt.Println(string(w.Bytes()))
// Output: Lorem ipsum
}
func ExampleWriter_Reset() {
w := NewWriter(make([]byte, 0, 11)) // initial buffer capacity is 11
w.Write([]byte("garbage that will be overwritten")) // does reallocation
w.Reset()
w.Write([]byte("Lorem ipsum"))
fmt.Println(string(w.Bytes()))
// Output: Lorem ipsum
}

231
vendor/github.com/tdewolff/parse/common.go generated vendored Normal file
View file

@ -0,0 +1,231 @@
// Package parse contains a collection of parsers for various formats in its subpackages.
package parse // import "github.com/tdewolff/parse"
import (
"bytes"
"encoding/base64"
"errors"
"net/url"
)
// ErrBadDataURI is returned by DataURI when the byte slice does not start with 'data:' or is too short.
var ErrBadDataURI = errors.New("not a data URI")
// Number returns the number of bytes that parse as a number of the regex format (+|-)?([0-9]+(\.[0-9]+)?|\.[0-9]+)((e|E)(+|-)?[0-9]+)?.
func Number(b []byte) int {
if len(b) == 0 {
return 0
}
i := 0
if b[i] == '+' || b[i] == '-' {
i++
if i >= len(b) {
return 0
}
}
firstDigit := (b[i] >= '0' && b[i] <= '9')
if firstDigit {
i++
for i < len(b) && b[i] >= '0' && b[i] <= '9' {
i++
}
}
if i < len(b) && b[i] == '.' {
i++
if i < len(b) && b[i] >= '0' && b[i] <= '9' {
i++
for i < len(b) && b[i] >= '0' && b[i] <= '9' {
i++
}
} else if firstDigit {
// . could belong to the next token
i--
return i
} else {
return 0
}
} else if !firstDigit {
return 0
}
iOld := i
if i < len(b) && (b[i] == 'e' || b[i] == 'E') {
i++
if i < len(b) && (b[i] == '+' || b[i] == '-') {
i++
}
if i >= len(b) || b[i] < '0' || b[i] > '9' {
// e could belong to next token
return iOld
}
for i < len(b) && b[i] >= '0' && b[i] <= '9' {
i++
}
}
return i
}
// Dimension parses a byte-slice and returns the length of the number and its unit.
func Dimension(b []byte) (int, int) {
num := Number(b)
if num == 0 || num == len(b) {
return num, 0
} else if b[num] == '%' {
return num, 1
} else if b[num] >= 'a' && b[num] <= 'z' || b[num] >= 'A' && b[num] <= 'Z' {
i := num + 1
for i < len(b) && (b[i] >= 'a' && b[i] <= 'z' || b[i] >= 'A' && b[i] <= 'Z') {
i++
}
return num, i - num
}
return num, 0
}
// Mediatype parses a given mediatype and splits the mimetype from the parameters.
// It works similarly to mime.ParseMediaType but is faster.
func Mediatype(b []byte) ([]byte, map[string]string) {
i := 0
for i < len(b) && b[i] == ' ' {
i++
}
b = b[i:]
n := len(b)
mimetype := b
var params map[string]string
for i := 3; i < n; i++ { // mimetype is at least three characters long
if b[i] == ';' || b[i] == ' ' {
mimetype = b[:i]
if b[i] == ' ' {
i++
for i < n && b[i] == ' ' {
i++
}
if i < n && b[i] != ';' {
break
}
}
params = map[string]string{}
s := string(b)
PARAM:
i++
for i < n && s[i] == ' ' {
i++
}
start := i
for i < n && s[i] != '=' && s[i] != ';' && s[i] != ' ' {
i++
}
key := s[start:i]
for i < n && s[i] == ' ' {
i++
}
if i < n && s[i] == '=' {
i++
for i < n && s[i] == ' ' {
i++
}
start = i
for i < n && s[i] != ';' && s[i] != ' ' {
i++
}
} else {
start = i
}
params[key] = s[start:i]
for i < n && s[i] == ' ' {
i++
}
if i < n && s[i] == ';' {
goto PARAM
}
break
}
}
return mimetype, params
}
// DataURI parses the given data URI and returns the mediatype, the data and an error.
func DataURI(dataURI []byte) ([]byte, []byte, error) {
if len(dataURI) > 5 && bytes.Equal(dataURI[:5], []byte("data:")) {
dataURI = dataURI[5:]
inBase64 := false
var mediatype []byte
i := 0
for j := 0; j < len(dataURI); j++ {
c := dataURI[j]
if c == '=' || c == ';' || c == ',' {
if c != '=' && bytes.Equal(TrimWhitespace(dataURI[i:j]), []byte("base64")) {
if len(mediatype) > 0 {
mediatype = mediatype[:len(mediatype)-1]
}
inBase64 = true
i = j
} else if c != ',' {
mediatype = append(append(mediatype, TrimWhitespace(dataURI[i:j])...), c)
i = j + 1
} else {
mediatype = append(mediatype, TrimWhitespace(dataURI[i:j])...)
}
if c == ',' {
if len(mediatype) == 0 || mediatype[0] == ';' {
mediatype = []byte("text/plain")
}
data := dataURI[j+1:]
if inBase64 {
decoded := make([]byte, base64.StdEncoding.DecodedLen(len(data)))
n, err := base64.StdEncoding.Decode(decoded, data)
if err != nil {
return nil, nil, err
}
data = decoded[:n]
} else if unescaped, err := url.QueryUnescape(string(data)); err == nil {
data = []byte(unescaped)
}
return mediatype, data, nil
}
}
}
}
return nil, nil, ErrBadDataURI
}
// QuoteEntity parses the given byte slice and returns the quote that got matched (' or ") and its entity length.
func QuoteEntity(b []byte) (quote byte, n int) {
if len(b) < 5 || b[0] != '&' {
return 0, 0
}
if b[1] == '#' {
if b[2] == 'x' {
i := 3
for i < len(b) && b[i] == '0' {
i++
}
if i+2 < len(b) && b[i] == '2' && b[i+2] == ';' {
if b[i+1] == '2' {
return '"', i + 3 // &#x22;
} else if b[i+1] == '7' {
return '\'', i + 3 // &#x27;
}
}
} else {
i := 2
for i < len(b) && b[i] == '0' {
i++
}
if i+2 < len(b) && b[i] == '3' && b[i+2] == ';' {
if b[i+1] == '4' {
return '"', i + 3 // &#34;
} else if b[i+1] == '9' {
return '\'', i + 3 // &#39;
}
}
}
} else if len(b) >= 6 && b[5] == ';' {
if EqualFold(b[1:5], []byte{'q', 'u', 'o', 't'}) {
return '"', 6 // &quot;
} else if EqualFold(b[1:5], []byte{'a', 'p', 'o', 's'}) {
return '\'', 6 // &apos;
}
}
return 0, 0
}

172
vendor/github.com/tdewolff/parse/common_test.go generated vendored Normal file
View file

@ -0,0 +1,172 @@
package parse // import "github.com/tdewolff/parse"
import (
"encoding/base64"
"mime"
"testing"
"github.com/tdewolff/test"
)
func TestParseNumber(t *testing.T) {
var numberTests = []struct {
number string
expected int
}{
{"5", 1},
{"0.51", 4},
{"0.5e-99", 7},
{"0.5e-", 3},
{"+50.0", 5},
{".0", 2},
{"0.", 1},
{"", 0},
{"+", 0},
{".", 0},
{"a", 0},
}
for _, tt := range numberTests {
t.Run(tt.number, func(t *testing.T) {
n := Number([]byte(tt.number))
test.T(t, n, tt.expected)
})
}
}
func TestParseDimension(t *testing.T) {
var dimensionTests = []struct {
dimension string
expectedNum int
expectedUnit int
}{
{"5px", 1, 2},
{"5px ", 1, 2},
{"5%", 1, 1},
{"5em", 1, 2},
{"px", 0, 0},
{"1", 1, 0},
{"1~", 1, 0},
}
for _, tt := range dimensionTests {
t.Run(tt.dimension, func(t *testing.T) {
num, unit := Dimension([]byte(tt.dimension))
test.T(t, num, tt.expectedNum, "number")
test.T(t, unit, tt.expectedUnit, "unit")
})
}
}
func TestMediatype(t *testing.T) {
var mediatypeTests = []struct {
mediatype string
expectedMimetype string
expectedParams map[string]string
}{
{"text/plain", "text/plain", nil},
{"text/plain;charset=US-ASCII", "text/plain", map[string]string{"charset": "US-ASCII"}},
{" text/plain ; charset = US-ASCII ", "text/plain", map[string]string{"charset": "US-ASCII"}},
{" text/plain a", "text/plain", nil},
{"text/plain;base64", "text/plain", map[string]string{"base64": ""}},
{"text/plain;inline=;base64", "text/plain", map[string]string{"inline": "", "base64": ""}},
}
for _, tt := range mediatypeTests {
t.Run(tt.mediatype, func(t *testing.T) {
mimetype, _ := Mediatype([]byte(tt.mediatype))
test.String(t, string(mimetype), tt.expectedMimetype, "mimetype")
//test.T(t, params, tt.expectedParams, "parameters") // TODO
})
}
}
func TestParseDataURI(t *testing.T) {
var dataURITests = []struct {
dataURI string
expectedMimetype string
expectedData string
expectedErr error
}{
{"www.domain.com", "", "", ErrBadDataURI},
{"data:,", "text/plain", "", nil},
{"data:text/xml,", "text/xml", "", nil},
{"data:,text", "text/plain", "text", nil},
{"data:;base64,dGV4dA==", "text/plain", "text", nil},
{"data:image/svg+xml,", "image/svg+xml", "", nil},
{"data:;base64,()", "", "", base64.CorruptInputError(0)},
}
for _, tt := range dataURITests {
t.Run(tt.dataURI, func(t *testing.T) {
mimetype, data, err := DataURI([]byte(tt.dataURI))
test.T(t, err, tt.expectedErr)
test.String(t, string(mimetype), tt.expectedMimetype, "mimetype")
test.String(t, string(data), tt.expectedData, "data")
})
}
}
func TestParseQuoteEntity(t *testing.T) {
var quoteEntityTests = []struct {
quoteEntity string
expectedQuote byte
expectedN int
}{
{"&#34;", '"', 5},
{"&#039;", '\'', 6},
{"&#x0022;", '"', 8},
{"&#x27;", '\'', 6},
{"&quot;", '"', 6},
{"&apos;", '\'', 6},
{"&gt;", 0x00, 0},
{"&amp;", 0x00, 0},
}
for _, tt := range quoteEntityTests {
t.Run(tt.quoteEntity, func(t *testing.T) {
quote, n := QuoteEntity([]byte(tt.quoteEntity))
test.T(t, quote, tt.expectedQuote, "quote")
test.T(t, n, tt.expectedN, "quote length")
})
}
}
////////////////////////////////////////////////////////////////
func BenchmarkParseMediatypeStd(b *testing.B) {
mediatype := "text/plain"
for i := 0; i < b.N; i++ {
mime.ParseMediaType(mediatype)
}
}
func BenchmarkParseMediatypeParamStd(b *testing.B) {
mediatype := "text/plain;inline=1"
for i := 0; i < b.N; i++ {
mime.ParseMediaType(mediatype)
}
}
func BenchmarkParseMediatypeParamsStd(b *testing.B) {
mediatype := "text/plain;charset=US-ASCII;language=US-EN;compression=gzip;base64"
for i := 0; i < b.N; i++ {
mime.ParseMediaType(mediatype)
}
}
func BenchmarkParseMediatypeParse(b *testing.B) {
mediatype := []byte("text/plain")
for i := 0; i < b.N; i++ {
Mediatype(mediatype)
}
}
func BenchmarkParseMediatypeParamParse(b *testing.B) {
mediatype := []byte("text/plain;inline=1")
for i := 0; i < b.N; i++ {
Mediatype(mediatype)
}
}
func BenchmarkParseMediatypeParamsParse(b *testing.B) {
mediatype := []byte("text/plain;charset=US-ASCII;language=US-EN;compression=gzip;base64")
for i := 0; i < b.N; i++ {
Mediatype(mediatype)
}
}

171
vendor/github.com/tdewolff/parse/css/README.md generated vendored Normal file
View file

@ -0,0 +1,171 @@
# CSS [![GoDoc](http://godoc.org/github.com/tdewolff/parse/css?status.svg)](http://godoc.org/github.com/tdewolff/parse/css) [![GoCover](http://gocover.io/_badge/github.com/tdewolff/parse/css)](http://gocover.io/github.com/tdewolff/parse/css)
This package is a CSS3 lexer and parser written in [Go][1]. Both follow the specification at [CSS Syntax Module Level 3](http://www.w3.org/TR/css-syntax-3/). The lexer takes an io.Reader and converts it into tokens until the EOF. The parser returns a parse tree of the full io.Reader input stream, but the low-level `Next` function can be used for stream parsing to return grammar units until the EOF.
## Installation
Run the following command
go get github.com/tdewolff/parse/css
or add the following import and run the project with `go get`
import "github.com/tdewolff/parse/css"
## Lexer
### Usage
The following initializes a new Lexer with io.Reader `r`:
``` go
l := css.NewLexer(r)
```
To tokenize until EOF or an error, use:
``` go
for {
tt, text := l.Next()
switch tt {
case css.ErrorToken:
// error or EOF set in l.Err()
return
// ...
}
}
```
All tokens (see [CSS Syntax Module Level 3](http://www.w3.org/TR/css3-syntax/)):
``` go
ErrorToken // non-official token, returned when errors occur
IdentToken
FunctionToken // rgb( rgba( ...
AtKeywordToken // @abc
HashToken // #abc
StringToken
BadStringToken
UrlToken // url(
BadUrlToken
DelimToken // any unmatched character
NumberToken // 5
PercentageToken // 5%
DimensionToken // 5em
UnicodeRangeToken
IncludeMatchToken // ~=
DashMatchToken // |=
PrefixMatchToken // ^=
SuffixMatchToken // $=
SubstringMatchToken // *=
ColumnToken // ||
WhitespaceToken
CDOToken // <!--
CDCToken // -->
ColonToken
SemicolonToken
CommaToken
BracketToken // ( ) [ ] { }, all bracket tokens use this, Data() can distinguish between the brackets
CommentToken // non-official token
```
### Examples
``` go
package main
import (
"fmt"
"io"
"os"
"github.com/tdewolff/parse/css"
)
// Tokenize CSS3 from stdin.
func main() {
l := css.NewLexer(os.Stdin)
for {
tt, text := l.Next()
switch tt {
case css.ErrorToken:
if l.Err() != io.EOF {
fmt.Println("Error on line", l.Line(), ":", l.Err())
}
return
case css.IdentToken:
fmt.Println("Identifier", string(text))
case css.NumberToken:
fmt.Println("Number", string(text))
// ...
}
}
}
```
## Parser
### Usage
The following creates a new Parser.
``` go
// true because this is the content of an inline style attribute
p := css.NewParser(bytes.NewBufferString("color: red;"), true)
```
To iterate over the stylesheet, use:
``` go
for {
gt, _, data := p.Next()
if gt == css.ErrorGrammar {
break
}
// ...
}
```
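When `ErrorGrammar` is returned, `p.Err()` reports whether the stylesheet simply ended (`io.EOF`) or a parse error occurred, for example:
``` go
if err := p.Err(); err != io.EOF {
    // handle the parse error
}
```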
All grammar units returned by `Next`:
``` go
ErrorGrammar
CommentGrammar
AtRuleGrammar
BeginAtRuleGrammar
EndAtRuleGrammar
QualifiedRuleGrammar
BeginRulesetGrammar
EndRulesetGrammar
DeclarationGrammar
TokenGrammar
CustomPropertyGrammar
```
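The tokens belonging to the last returned grammar unit can be retrieved with `Values`; a short sketch:
``` go
for _, val := range p.Values() {
    fmt.Print(string(val.Data))
}
```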
### Examples
``` go
package main
import (
"bytes"
"fmt"
"github.com/tdewolff/parse/css"
)
func main() {
// true because this is the content of an inline style attribute
p := css.NewParser(bytes.NewBufferString("color: red;"), true)
out := ""
for {
gt, _, data := p.Next()
if gt == css.ErrorGrammar {
break
} else if gt == css.AtRuleGrammar || gt == css.BeginAtRuleGrammar || gt == css.BeginRulesetGrammar || gt == css.DeclarationGrammar {
out += string(data)
if gt == css.DeclarationGrammar {
out += ":"
}
for _, val := range p.Values() {
out += string(val.Data)
}
if gt == css.BeginAtRuleGrammar || gt == css.BeginRulesetGrammar {
out += "{"
} else if gt == css.AtRuleGrammar || gt == css.DeclarationGrammar {
out += ";"
}
} else {
out += string(data)
}
}
fmt.Println(out)
}
```
## License
Released under the [MIT license](https://github.com/tdewolff/parse/blob/master/LICENSE.md).
[1]: http://golang.org/ "Go Language"

676
vendor/github.com/tdewolff/parse/css/hash.go generated vendored Normal file
View file

@ -0,0 +1,676 @@
package css
// generated by hasher -type=Hash -file=hash.go; DO NOT EDIT, except for adding more constants to the list and rerun go generate
// uses github.com/tdewolff/hasher
//go:generate hasher -type=Hash -file=hash.go
// Hash defines perfect hashes for a predefined list of strings
type Hash uint32
// Unique hash definitions to be used instead of strings
const (
Accelerator Hash = 0x47f0b // accelerator
Aliceblue Hash = 0x52509 // aliceblue
Alpha Hash = 0x5af05 // alpha
Antiquewhite Hash = 0x45c0c // antiquewhite
Aquamarine Hash = 0x7020a // aquamarine
Azimuth Hash = 0x5b307 // azimuth
Background Hash = 0xa // background
Background_Attachment Hash = 0x3a15 // background-attachment
Background_Color Hash = 0x11c10 // background-color
Background_Image Hash = 0x99210 // background-image
Background_Position Hash = 0x13 // background-position
Background_Position_X Hash = 0x80815 // background-position-x
Background_Position_Y Hash = 0x15 // background-position-y
Background_Repeat Hash = 0x1511 // background-repeat
Behavior Hash = 0x3108 // behavior
Black Hash = 0x6005 // black
Blanchedalmond Hash = 0x650e // blanchedalmond
Blueviolet Hash = 0x52a0a // blueviolet
Bold Hash = 0x7a04 // bold
Border Hash = 0x8506 // border
Border_Bottom Hash = 0x850d // border-bottom
Border_Bottom_Color Hash = 0x8513 // border-bottom-color
Border_Bottom_Style Hash = 0xbe13 // border-bottom-style
Border_Bottom_Width Hash = 0xe113 // border-bottom-width
Border_Collapse Hash = 0x1020f // border-collapse
Border_Color Hash = 0x1350c // border-color
Border_Left Hash = 0x15c0b // border-left
Border_Left_Color Hash = 0x15c11 // border-left-color
Border_Left_Style Hash = 0x17911 // border-left-style
Border_Left_Width Hash = 0x18a11 // border-left-width
Border_Right Hash = 0x19b0c // border-right
Border_Right_Color Hash = 0x19b12 // border-right-color
Border_Right_Style Hash = 0x1ad12 // border-right-style
Border_Right_Width Hash = 0x1bf12 // border-right-width
Border_Spacing Hash = 0x1d10e // border-spacing
Border_Style Hash = 0x1f40c // border-style
Border_Top Hash = 0x2000a // border-top
Border_Top_Color Hash = 0x20010 // border-top-color
Border_Top_Style Hash = 0x21010 // border-top-style
Border_Top_Width Hash = 0x22010 // border-top-width
Border_Width Hash = 0x2300c // border-width
Bottom Hash = 0x8c06 // bottom
Burlywood Hash = 0x23c09 // burlywood
Cadetblue Hash = 0x25809 // cadetblue
Caption_Side Hash = 0x2610c // caption-side
Charset Hash = 0x44207 // charset
Chartreuse Hash = 0x2730a // chartreuse
Chocolate Hash = 0x27d09 // chocolate
Clear Hash = 0x2ab05 // clear
Clip Hash = 0x2b004 // clip
Color Hash = 0x9305 // color
Content Hash = 0x2e507 // content
Cornflowerblue Hash = 0x2ff0e // cornflowerblue
Cornsilk Hash = 0x30d08 // cornsilk
Counter_Increment Hash = 0x31511 // counter-increment
Counter_Reset Hash = 0x3540d // counter-reset
Cue Hash = 0x36103 // cue
Cue_After Hash = 0x36109 // cue-after
Cue_Before Hash = 0x36a0a // cue-before
Cursive Hash = 0x37b07 // cursive
Cursor Hash = 0x38e06 // cursor
Darkblue Hash = 0x7208 // darkblue
Darkcyan Hash = 0x7d08 // darkcyan
Darkgoldenrod Hash = 0x2440d // darkgoldenrod
Darkgray Hash = 0x25008 // darkgray
Darkgreen Hash = 0x79209 // darkgreen
Darkkhaki Hash = 0x88509 // darkkhaki
Darkmagenta Hash = 0x4f40b // darkmagenta
Darkolivegreen Hash = 0x7210e // darkolivegreen
Darkorange Hash = 0x7860a // darkorange
Darkorchid Hash = 0x87c0a // darkorchid
Darksalmon Hash = 0x8c00a // darksalmon
Darkseagreen Hash = 0x9240c // darkseagreen
Darkslateblue Hash = 0x3940d // darkslateblue
Darkslategray Hash = 0x3a10d // darkslategray
Darkturquoise Hash = 0x3ae0d // darkturquoise
Darkviolet Hash = 0x3bb0a // darkviolet
Deeppink Hash = 0x26b08 // deeppink
Deepskyblue Hash = 0x8930b // deepskyblue
Default Hash = 0x57b07 // default
Direction Hash = 0x9f109 // direction
Display Hash = 0x3c507 // display
Document Hash = 0x3d308 // document
Dodgerblue Hash = 0x3db0a // dodgerblue
Elevation Hash = 0x4a009 // elevation
Empty_Cells Hash = 0x4c20b // empty-cells
Fantasy Hash = 0x5ce07 // fantasy
Filter Hash = 0x59806 // filter
Firebrick Hash = 0x3e509 // firebrick
Float Hash = 0x3ee05 // float
Floralwhite Hash = 0x3f30b // floralwhite
Font Hash = 0xd804 // font
Font_Face Hash = 0xd809 // font-face
Font_Family Hash = 0x41d0b // font-family
Font_Size Hash = 0x42809 // font-size
Font_Size_Adjust Hash = 0x42810 // font-size-adjust
Font_Stretch Hash = 0x4380c // font-stretch
Font_Style Hash = 0x4490a // font-style
Font_Variant Hash = 0x4530c // font-variant
Font_Weight Hash = 0x46e0b // font-weight
Forestgreen Hash = 0x3700b // forestgreen
Fuchsia Hash = 0x47907 // fuchsia
Gainsboro Hash = 0x14c09 // gainsboro
Ghostwhite Hash = 0x1de0a // ghostwhite
Goldenrod Hash = 0x24809 // goldenrod
Greenyellow Hash = 0x7960b // greenyellow
Height Hash = 0x68506 // height
Honeydew Hash = 0x5b908 // honeydew
Hsl Hash = 0xf303 // hsl
Hsla Hash = 0xf304 // hsla
Ime_Mode Hash = 0x88d08 // ime-mode
Import Hash = 0x4e306 // import
Important Hash = 0x4e309 // important
Include_Source Hash = 0x7f20e // include-source
Indianred Hash = 0x4ec09 // indianred
Inherit Hash = 0x51907 // inherit
Initial Hash = 0x52007 // initial
Keyframes Hash = 0x40109 // keyframes
Lavender Hash = 0xf508 // lavender
Lavenderblush Hash = 0xf50d // lavenderblush
Lawngreen Hash = 0x4da09 // lawngreen
Layer_Background_Color Hash = 0x11616 // layer-background-color
Layer_Background_Image Hash = 0x98c16 // layer-background-image
Layout_Flow Hash = 0x5030b // layout-flow
Layout_Grid Hash = 0x53f0b // layout-grid
Layout_Grid_Char Hash = 0x53f10 // layout-grid-char
Layout_Grid_Char_Spacing Hash = 0x53f18 // layout-grid-char-spacing
Layout_Grid_Line Hash = 0x55710 // layout-grid-line
Layout_Grid_Mode Hash = 0x56d10 // layout-grid-mode
Layout_Grid_Type Hash = 0x58210 // layout-grid-type
Left Hash = 0x16304 // left
Lemonchiffon Hash = 0xcf0c // lemonchiffon
Letter_Spacing Hash = 0x5310e // letter-spacing
Lightblue Hash = 0x59e09 // lightblue
Lightcoral Hash = 0x5a70a // lightcoral
Lightcyan Hash = 0x5d509 // lightcyan
Lightgoldenrodyellow Hash = 0x5de14 // lightgoldenrodyellow
Lightgray Hash = 0x60509 // lightgray
Lightgreen Hash = 0x60e0a // lightgreen
Lightpink Hash = 0x61809 // lightpink
Lightsalmon Hash = 0x6210b // lightsalmon
Lightseagreen Hash = 0x62c0d // lightseagreen
Lightskyblue Hash = 0x6390c // lightskyblue
Lightslateblue Hash = 0x6450e // lightslateblue
Lightsteelblue Hash = 0x6530e // lightsteelblue
Lightyellow Hash = 0x6610b // lightyellow
Limegreen Hash = 0x67709 // limegreen
Line_Break Hash = 0x5630a // line-break
Line_Height Hash = 0x6800b // line-height
List_Style Hash = 0x68b0a // list-style
List_Style_Image Hash = 0x68b10 // list-style-image
List_Style_Position Hash = 0x69b13 // list-style-position
List_Style_Type Hash = 0x6ae0f // list-style-type
Magenta Hash = 0x4f807 // magenta
Margin Hash = 0x2c006 // margin
Margin_Bottom Hash = 0x2c00d // margin-bottom
Margin_Left Hash = 0x2cc0b // margin-left
Margin_Right Hash = 0x3320c // margin-right
Margin_Top Hash = 0x7cd0a // margin-top
Marker_Offset Hash = 0x6bd0d // marker-offset
Marks Hash = 0x6ca05 // marks
Max_Height Hash = 0x6e90a // max-height
Max_Width Hash = 0x6f309 // max-width
Media Hash = 0xa1405 // media
Mediumaquamarine Hash = 0x6fc10 // mediumaquamarine
Mediumblue Hash = 0x70c0a // mediumblue
Mediumorchid Hash = 0x7160c // mediumorchid
Mediumpurple Hash = 0x72f0c // mediumpurple
Mediumseagreen Hash = 0x73b0e // mediumseagreen
Mediumslateblue Hash = 0x7490f // mediumslateblue
Mediumspringgreen Hash = 0x75811 // mediumspringgreen
Mediumturquoise Hash = 0x7690f // mediumturquoise
Mediumvioletred Hash = 0x7780f // mediumvioletred
Midnightblue Hash = 0x7a60c // midnightblue
Min_Height Hash = 0x7b20a // min-height
Min_Width Hash = 0x7bc09 // min-width
Mintcream Hash = 0x7c509 // mintcream
Mistyrose Hash = 0x7e309 // mistyrose
Moccasin Hash = 0x7ec08 // moccasin
Monospace Hash = 0x8c709 // monospace
Namespace Hash = 0x49809 // namespace
Navajowhite Hash = 0x4a80b // navajowhite
None Hash = 0x4bf04 // none
Normal Hash = 0x4d506 // normal
Olivedrab Hash = 0x80009 // olivedrab
Orangered Hash = 0x78a09 // orangered
Orphans Hash = 0x48807 // orphans
Outline Hash = 0x81d07 // outline
Outline_Color Hash = 0x81d0d // outline-color
Outline_Style Hash = 0x82a0d // outline-style
Outline_Width Hash = 0x8370d // outline-width
Overflow Hash = 0x2db08 // overflow
Overflow_X Hash = 0x2db0a // overflow-x
Overflow_Y Hash = 0x8440a // overflow-y
Padding Hash = 0x2b307 // padding
Padding_Bottom Hash = 0x2b30e // padding-bottom
Padding_Left Hash = 0x5f90c // padding-left
Padding_Right Hash = 0x7d60d // padding-right
Padding_Top Hash = 0x8d90b // padding-top
Page Hash = 0x84e04 // page
Page_Break_After Hash = 0x8e310 // page-break-after
Page_Break_Before Hash = 0x84e11 // page-break-before
Page_Break_Inside Hash = 0x85f11 // page-break-inside
Palegoldenrod Hash = 0x8700d // palegoldenrod
Palegreen Hash = 0x89e09 // palegreen
Paleturquoise Hash = 0x8a70d // paleturquoise
Palevioletred Hash = 0x8b40d // palevioletred
Papayawhip Hash = 0x8d00a // papayawhip
Pause Hash = 0x8f305 // pause
Pause_After Hash = 0x8f30b // pause-after
Pause_Before Hash = 0x8fe0c // pause-before
Peachpuff Hash = 0x59009 // peachpuff
Pitch Hash = 0x90a05 // pitch
Pitch_Range Hash = 0x90a0b // pitch-range
Play_During Hash = 0x3c80b // play-during
Position Hash = 0xb08 // position
Powderblue Hash = 0x9150a // powderblue
Progid Hash = 0x91f06 // progid
Quotes Hash = 0x93006 // quotes
Rgb Hash = 0x3803 // rgb
Rgba Hash = 0x3804 // rgba
Richness Hash = 0x9708 // richness
Right Hash = 0x1a205 // right
Rosybrown Hash = 0x15309 // rosybrown
Royalblue Hash = 0xb509 // royalblue
Ruby_Align Hash = 0x12b0a // ruby-align
Ruby_Overhang Hash = 0x1400d // ruby-overhang
Ruby_Position Hash = 0x16c0d // ruby-position
Saddlebrown Hash = 0x48e0b // saddlebrown
Sandybrown Hash = 0x4cc0a // sandybrown
Sans_Serif Hash = 0x5c50a // sans-serif
Scrollbar_3d_Light_Color Hash = 0x9e18 // scrollbar-3d-light-color
Scrollbar_Arrow_Color Hash = 0x29615 // scrollbar-arrow-color
Scrollbar_Base_Color Hash = 0x40914 // scrollbar-base-color
Scrollbar_Dark_Shadow_Color Hash = 0x6ce1b // scrollbar-dark-shadow-color
Scrollbar_Face_Color Hash = 0x93514 // scrollbar-face-color
Scrollbar_Highlight_Color Hash = 0x9ce19 // scrollbar-highlight-color
Scrollbar_Shadow_Color Hash = 0x94916 // scrollbar-shadow-color
Scrollbar_Track_Color Hash = 0x95f15 // scrollbar-track-color
Seagreen Hash = 0x63108 // seagreen
Seashell Hash = 0x10f08 // seashell
Serif Hash = 0x5ca05 // serif
Size Hash = 0x42d04 // size
Slateblue Hash = 0x39809 // slateblue
Slategray Hash = 0x3a509 // slategray
Speak Hash = 0x97405 // speak
Speak_Header Hash = 0x9740c // speak-header
Speak_Numeral Hash = 0x9800d // speak-numeral
Speak_Punctuation Hash = 0x9a211 // speak-punctuation
Speech_Rate Hash = 0x9b30b // speech-rate
Springgreen Hash = 0x75e0b // springgreen
Steelblue Hash = 0x65809 // steelblue
Stress Hash = 0x29106 // stress
Supports Hash = 0x9c708 // supports
Table_Layout Hash = 0x4fd0c // table-layout
Text_Align Hash = 0x2840a // text-align
Text_Align_Last Hash = 0x2840f // text-align-last
Text_Autospace Hash = 0x1e60e // text-autospace
Text_Decoration Hash = 0x4b10f // text-decoration
Text_Indent Hash = 0x9bc0b // text-indent
Text_Justify Hash = 0x250c // text-justify
Text_Kashida_Space Hash = 0x4e12 // text-kashida-space
Text_Overflow Hash = 0x2d60d // text-overflow
Text_Shadow Hash = 0x2eb0b // text-shadow
Text_Transform Hash = 0x3250e // text-transform
Text_Underline_Position Hash = 0x33d17 // text-underline-position
Top Hash = 0x20703 // top
Turquoise Hash = 0x3b209 // turquoise
Unicode_Bidi Hash = 0x9e70c // unicode-bidi
Vertical_Align Hash = 0x3800e // vertical-align
Visibility Hash = 0x9fa0a // visibility
Voice_Family Hash = 0xa040c // voice-family
Volume Hash = 0xa1006 // volume
White Hash = 0x1e305 // white
White_Space Hash = 0x4630b // white-space
Whitesmoke Hash = 0x3f90a // whitesmoke
Widows Hash = 0x5c006 // widows
Width Hash = 0xef05 // width
Word_Break Hash = 0x2f50a // word-break
Word_Spacing Hash = 0x50d0c // word-spacing
Word_Wrap Hash = 0x5f109 // word-wrap
Writing_Mode Hash = 0x66b0c // writing-mode
Yellow Hash = 0x5ec06 // yellow
Yellowgreen Hash = 0x79b0b // yellowgreen
Z_Index Hash = 0xa1907 // z-index
)
// String returns the hash's name.
func (i Hash) String() string {
start := uint32(i >> 8)
n := uint32(i & 0xff)
if start+n > uint32(len(_Hash_text)) {
return ""
}
return _Hash_text[start : start+n]
}
// ToHash returns the hash whose name is s. It returns zero if there is no
// such hash. It is case sensitive.
func ToHash(s []byte) Hash {
if len(s) == 0 || len(s) > _Hash_maxLen {
return 0
}
h := uint32(_Hash_hash0)
for i := 0; i < len(s); i++ {
h ^= uint32(s[i])
h *= 16777619
}
if i := _Hash_table[h&uint32(len(_Hash_table)-1)]; int(i&0xff) == len(s) {
t := _Hash_text[i>>8 : i>>8+i&0xff]
for i := 0; i < len(s); i++ {
if t[i] != s[i] {
goto NEXT
}
}
return i
}
NEXT:
if i := _Hash_table[(h>>16)&uint32(len(_Hash_table)-1)]; int(i&0xff) == len(s) {
t := _Hash_text[i>>8 : i>>8+i&0xff]
for i := 0; i < len(s); i++ {
if t[i] != s[i] {
return 0
}
}
return i
}
return 0
}
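// Illustrative sketch (not part of the generated code): ToHash and String
// round-trip for known identifiers, for example:
//
//	ToHash([]byte("margin-left")) == Margin_Left
//	Margin_Left.String()          == "margin-left"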
const _Hash_hash0 = 0x700e0976
const _Hash_maxLen = 27
const _Hash_text = "background-position-ybackground-repeatext-justifybehaviorgba" +
"ckground-attachmentext-kashida-spaceblackblanchedalmondarkbl" +
"ueboldarkcyanborder-bottom-colorichnesscrollbar-3d-light-col" +
"oroyalblueborder-bottom-stylemonchiffont-faceborder-bottom-w" +
"idthslavenderblushborder-collapseashellayer-background-color" +
"uby-alignborder-coloruby-overhangainsborosybrownborder-left-" +
"coloruby-positionborder-left-styleborder-left-widthborder-ri" +
"ght-colorborder-right-styleborder-right-widthborder-spacingh" +
"ostwhitext-autospaceborder-styleborder-top-colorborder-top-s" +
"tyleborder-top-widthborder-widthburlywoodarkgoldenrodarkgray" +
"cadetbluecaption-sideeppinkchartreusechocolatext-align-lastr" +
"esscrollbar-arrow-colorclearclipadding-bottomargin-bottomarg" +
"in-leftext-overflow-xcontentext-shadoword-breakcornflowerblu" +
"ecornsilkcounter-incrementext-transformargin-rightext-underl" +
"ine-positioncounter-resetcue-aftercue-beforestgreencursivert" +
"ical-aligncursordarkslatebluedarkslategraydarkturquoisedarkv" +
"ioletdisplay-duringdocumentdodgerbluefirebrickfloatfloralwhi" +
"tesmokeyframescrollbar-base-colorfont-familyfont-size-adjust" +
"font-stretcharsetfont-stylefont-variantiquewhite-spacefont-w" +
"eightfuchsiacceleratorphansaddlebrownamespacelevationavajowh" +
"itext-decorationonempty-cellsandybrownormalawngreenimportant" +
"indianredarkmagentable-layout-floword-spacinginheritinitiali" +
"cebluevioletter-spacinglayout-grid-char-spacinglayout-grid-l" +
"ine-breaklayout-grid-modefaultlayout-grid-typeachpuffilterli" +
"ghtbluelightcoralphazimuthoneydewidowsans-serifantasylightcy" +
"anlightgoldenrodyelloword-wrapadding-leftlightgraylightgreen" +
"lightpinklightsalmonlightseagreenlightskybluelightslatebluel" +
"ightsteelbluelightyellowriting-modelimegreenline-heightlist-" +
"style-imagelist-style-positionlist-style-typemarker-offsetma" +
"rkscrollbar-dark-shadow-colormax-heightmax-widthmediumaquama" +
"rinemediumbluemediumorchidarkolivegreenmediumpurplemediumsea" +
"greenmediumslatebluemediumspringgreenmediumturquoisemediumvi" +
"oletredarkorangeredarkgreenyellowgreenmidnightbluemin-height" +
"min-widthmintcreamargin-topadding-rightmistyrosemoccasinclud" +
"e-sourceolivedrabackground-position-xoutline-coloroutline-st" +
"yleoutline-widthoverflow-ypage-break-beforepage-break-inside" +
"palegoldenrodarkorchidarkkhakime-modeepskybluepalegreenpalet" +
"urquoisepalevioletredarksalmonospacepapayawhipadding-topage-" +
"break-afterpause-afterpause-beforepitch-rangepowderblueprogi" +
"darkseagreenquotescrollbar-face-colorscrollbar-shadow-colors" +
"crollbar-track-colorspeak-headerspeak-numeralayer-background" +
"-imagespeak-punctuationspeech-ratext-indentsupportscrollbar-" +
"highlight-colorunicode-bidirectionvisibilityvoice-familyvolu" +
"mediaz-index"
var _Hash_table = [1 << 9]Hash{
0x0: 0x4cc0a, // sandybrown
0x1: 0x20703, // top
0x4: 0xb509, // royalblue
0x6: 0x4b10f, // text-decoration
0xb: 0x5030b, // layout-flow
0xc: 0x11c10, // background-color
0xd: 0x8c06, // bottom
0x10: 0x62c0d, // lightseagreen
0x11: 0x8930b, // deepskyblue
0x12: 0x39809, // slateblue
0x13: 0x4c20b, // empty-cells
0x14: 0x2b004, // clip
0x15: 0x70c0a, // mediumblue
0x16: 0x49809, // namespace
0x18: 0x2c00d, // margin-bottom
0x1a: 0x1350c, // border-color
0x1b: 0x5b908, // honeydew
0x1d: 0x2300c, // border-width
0x1e: 0x9740c, // speak-header
0x1f: 0x8b40d, // palevioletred
0x20: 0x1d10e, // border-spacing
0x22: 0x2b307, // padding
0x23: 0x3320c, // margin-right
0x27: 0x7bc09, // min-width
0x29: 0x60509, // lightgray
0x2a: 0x6610b, // lightyellow
0x2c: 0x8e310, // page-break-after
0x2d: 0x2e507, // content
0x30: 0x250c, // text-justify
0x32: 0x2840f, // text-align-last
0x34: 0x93514, // scrollbar-face-color
0x35: 0x40109, // keyframes
0x37: 0x4f807, // magenta
0x38: 0x3a509, // slategray
0x3a: 0x99210, // background-image
0x3c: 0x7f20e, // include-source
0x3d: 0x65809, // steelblue
0x3e: 0x81d0d, // outline-color
0x40: 0x1020f, // border-collapse
0x41: 0xf508, // lavender
0x42: 0x9c708, // supports
0x44: 0x6800b, // line-height
0x45: 0x9a211, // speak-punctuation
0x46: 0x9fa0a, // visibility
0x47: 0x2ab05, // clear
0x4b: 0x52a0a, // blueviolet
0x4e: 0x57b07, // default
0x50: 0x6bd0d, // marker-offset
0x52: 0x31511, // counter-increment
0x53: 0x6450e, // lightslateblue
0x54: 0x10f08, // seashell
0x56: 0x16c0d, // ruby-position
0x57: 0x82a0d, // outline-style
0x58: 0x63108, // seagreen
0x59: 0x9305, // color
0x5c: 0x2610c, // caption-side
0x5d: 0x68506, // height
0x5e: 0x7490f, // mediumslateblue
0x5f: 0x8fe0c, // pause-before
0x60: 0xcf0c, // lemonchiffon
0x63: 0x37b07, // cursive
0x66: 0x4a80b, // navajowhite
0x67: 0xa040c, // voice-family
0x68: 0x2440d, // darkgoldenrod
0x69: 0x3e509, // firebrick
0x6a: 0x4490a, // font-style
0x6b: 0x9f109, // direction
0x6d: 0x7860a, // darkorange
0x6f: 0x4530c, // font-variant
0x70: 0x2c006, // margin
0x71: 0x84e11, // page-break-before
0x73: 0x2d60d, // text-overflow
0x74: 0x4e12, // text-kashida-space
0x75: 0x30d08, // cornsilk
0x76: 0x46e0b, // font-weight
0x77: 0x42d04, // size
0x78: 0x53f0b, // layout-grid
0x79: 0x8d90b, // padding-top
0x7a: 0x44207, // charset
0x7d: 0x7e309, // mistyrose
0x7e: 0x5b307, // azimuth
0x7f: 0x8f30b, // pause-after
0x84: 0x38e06, // cursor
0x85: 0xf303, // hsl
0x86: 0x5310e, // letter-spacing
0x8b: 0x3d308, // document
0x8d: 0x36109, // cue-after
0x8f: 0x36a0a, // cue-before
0x91: 0x5ce07, // fantasy
0x94: 0x1400d, // ruby-overhang
0x95: 0x2b30e, // padding-bottom
0x9a: 0x59e09, // lightblue
0x9c: 0x8c00a, // darksalmon
0x9d: 0x42810, // font-size-adjust
0x9e: 0x61809, // lightpink
0xa0: 0x9240c, // darkseagreen
0xa2: 0x85f11, // page-break-inside
0xa4: 0x24809, // goldenrod
0xa6: 0xa1405, // media
0xa7: 0x53f18, // layout-grid-char-spacing
0xa9: 0x4e309, // important
0xaa: 0x7b20a, // min-height
0xb0: 0x15c11, // border-left-color
0xb1: 0x84e04, // page
0xb2: 0x98c16, // layer-background-image
0xb5: 0x55710, // layout-grid-line
0xb6: 0x1511, // background-repeat
0xb7: 0x8513, // border-bottom-color
0xb9: 0x25008, // darkgray
0xbb: 0x5f90c, // padding-left
0xbc: 0x1a205, // right
0xc0: 0x40914, // scrollbar-base-color
0xc1: 0x6530e, // lightsteelblue
0xc2: 0xef05, // width
0xc5: 0x3b209, // turquoise
0xc8: 0x3ee05, // float
0xca: 0x12b0a, // ruby-align
0xcb: 0xb08, // position
0xcc: 0x7cd0a, // margin-top
0xce: 0x2cc0b, // margin-left
0xcf: 0x2eb0b, // text-shadow
0xd0: 0x2f50a, // word-break
0xd4: 0x3f90a, // whitesmoke
0xd6: 0x33d17, // text-underline-position
0xd7: 0x1bf12, // border-right-width
0xd8: 0x80009, // olivedrab
0xd9: 0x89e09, // palegreen
0xdb: 0x4e306, // import
0xdc: 0x6ca05, // marks
0xdd: 0x3bb0a, // darkviolet
0xde: 0x13, // background-position
0xe0: 0x6fc10, // mediumaquamarine
0xe1: 0x7a04, // bold
0xe2: 0x7690f, // mediumturquoise
0xe4: 0x8700d, // palegoldenrod
0xe5: 0x4f40b, // darkmagenta
0xe6: 0x15309, // rosybrown
0xe7: 0x18a11, // border-left-width
0xe8: 0x88509, // darkkhaki
0xea: 0x650e, // blanchedalmond
0xeb: 0x52007, // initial
0xec: 0x6ce1b, // scrollbar-dark-shadow-color
0xee: 0x48e0b, // saddlebrown
0xef: 0x8a70d, // paleturquoise
0xf1: 0x19b12, // border-right-color
0xf3: 0x1e305, // white
0xf7: 0x9ce19, // scrollbar-highlight-color
0xf9: 0x56d10, // layout-grid-mode
0xfc: 0x1f40c, // border-style
0xfe: 0x69b13, // list-style-position
0x100: 0x11616, // layer-background-color
0x102: 0x58210, // layout-grid-type
0x103: 0x15c0b, // border-left
0x104: 0x2db08, // overflow
0x105: 0x7a60c, // midnightblue
0x10b: 0x2840a, // text-align
0x10e: 0x21010, // border-top-style
0x110: 0x5de14, // lightgoldenrodyellow
0x114: 0x8506, // border
0x119: 0xd804, // font
0x11c: 0x7020a, // aquamarine
0x11d: 0x60e0a, // lightgreen
0x11e: 0x5ec06, // yellow
0x120: 0x97405, // speak
0x121: 0x4630b, // white-space
0x123: 0x3940d, // darkslateblue
0x125: 0x1e60e, // text-autospace
0x128: 0xf50d, // lavenderblush
0x12c: 0x6210b, // lightsalmon
0x12d: 0x51907, // inherit
0x131: 0x87c0a, // darkorchid
0x132: 0x2000a, // border-top
0x133: 0x3c80b, // play-during
0x137: 0x22010, // border-top-width
0x139: 0x48807, // orphans
0x13a: 0x41d0b, // font-family
0x13d: 0x3db0a, // dodgerblue
0x13f: 0x8d00a, // papayawhip
0x140: 0x8f305, // pause
0x143: 0x2ff0e, // cornflowerblue
0x144: 0x3c507, // display
0x146: 0x52509, // aliceblue
0x14a: 0x7208, // darkblue
0x14b: 0x3108, // behavior
0x14c: 0x3540d, // counter-reset
0x14d: 0x7960b, // greenyellow
0x14e: 0x75811, // mediumspringgreen
0x14f: 0x9150a, // powderblue
0x150: 0x53f10, // layout-grid-char
0x158: 0x81d07, // outline
0x159: 0x23c09, // burlywood
0x15b: 0xe113, // border-bottom-width
0x15c: 0x4bf04, // none
0x15e: 0x36103, // cue
0x15f: 0x4fd0c, // table-layout
0x160: 0x90a0b, // pitch-range
0x161: 0xa1907, // z-index
0x162: 0x29106, // stress
0x163: 0x80815, // background-position-x
0x165: 0x4d506, // normal
0x167: 0x72f0c, // mediumpurple
0x169: 0x5a70a, // lightcoral
0x16c: 0x6e90a, // max-height
0x16d: 0x3804, // rgba
0x16e: 0x68b10, // list-style-image
0x170: 0x26b08, // deeppink
0x173: 0x91f06, // progid
0x175: 0x75e0b, // springgreen
0x176: 0x3700b, // forestgreen
0x179: 0x7ec08, // moccasin
0x17a: 0x7780f, // mediumvioletred
0x17e: 0x9bc0b, // text-indent
0x181: 0x6ae0f, // list-style-type
0x182: 0x14c09, // gainsboro
0x183: 0x3ae0d, // darkturquoise
0x184: 0x3a10d, // darkslategray
0x189: 0x2db0a, // overflow-x
0x18b: 0x93006, // quotes
0x18c: 0x3a15, // background-attachment
0x18f: 0x19b0c, // border-right
0x191: 0x6005, // black
0x192: 0x79b0b, // yellowgreen
0x194: 0x59009, // peachpuff
0x197: 0x3f30b, // floralwhite
0x19c: 0x7210e, // darkolivegreen
0x19d: 0x5f109, // word-wrap
0x19e: 0x17911, // border-left-style
0x1a0: 0x9b30b, // speech-rate
0x1a1: 0x8370d, // outline-width
0x1a2: 0x9e70c, // unicode-bidi
0x1a3: 0x68b0a, // list-style
0x1a4: 0x90a05, // pitch
0x1a5: 0x95f15, // scrollbar-track-color
0x1a6: 0x47907, // fuchsia
0x1a8: 0x3800e, // vertical-align
0x1ad: 0x5af05, // alpha
0x1ae: 0x6f309, // max-width
0x1af: 0x9708, // richness
0x1b0: 0x3803, // rgb
0x1b1: 0x7d60d, // padding-right
0x1b2: 0x29615, // scrollbar-arrow-color
0x1b3: 0x16304, // left
0x1b5: 0x4a009, // elevation
0x1b6: 0x5630a, // line-break
0x1ba: 0x27d09, // chocolate
0x1bb: 0x9800d, // speak-numeral
0x1bd: 0x47f0b, // accelerator
0x1be: 0x67709, // limegreen
0x1c1: 0x7d08, // darkcyan
0x1c3: 0x6390c, // lightskyblue
0x1c5: 0x5c50a, // sans-serif
0x1c6: 0x850d, // border-bottom
0x1c7: 0xa, // background
0x1c8: 0xa1006, // volume
0x1ca: 0x66b0c, // writing-mode
0x1cb: 0x9e18, // scrollbar-3d-light-color
0x1cc: 0x5c006, // widows
0x1cf: 0x42809, // font-size
0x1d0: 0x15, // background-position-y
0x1d1: 0x5d509, // lightcyan
0x1d4: 0x4ec09, // indianred
0x1d7: 0x1de0a, // ghostwhite
0x1db: 0x78a09, // orangered
0x1dc: 0x45c0c, // antiquewhite
0x1dd: 0x4da09, // lawngreen
0x1df: 0x73b0e, // mediumseagreen
0x1e0: 0x20010, // border-top-color
0x1e2: 0xf304, // hsla
0x1e4: 0x3250e, // text-transform
0x1e6: 0x7160c, // mediumorchid
0x1e9: 0x8c709, // monospace
0x1ec: 0x94916, // scrollbar-shadow-color
0x1ed: 0x79209, // darkgreen
0x1ef: 0x25809, // cadetblue
0x1f0: 0x59806, // filter
0x1f1: 0x1ad12, // border-right-style
0x1f6: 0x8440a, // overflow-y
0x1f7: 0xd809, // font-face
0x1f8: 0x50d0c, // word-spacing
0x1fa: 0xbe13, // border-bottom-style
0x1fb: 0x4380c, // font-stretch
0x1fc: 0x7c509, // mintcream
0x1fd: 0x88d08, // ime-mode
0x1fe: 0x2730a, // chartreuse
0x1ff: 0x5ca05, // serif
}

16
vendor/github.com/tdewolff/parse/css/hash_test.go generated vendored Normal file
View file

@ -0,0 +1,16 @@
package css // import "github.com/tdewolff/parse/css"
import (
"testing"
"github.com/tdewolff/test"
)
func TestHashTable(t *testing.T) {
test.T(t, ToHash([]byte("font")), Font, "'font' must resolve to hash.Font")
test.T(t, Font.String(), "font")
test.T(t, Margin_Left.String(), "margin-left")
test.T(t, ToHash([]byte("")), Hash(0), "empty string must resolve to zero")
test.T(t, Hash(0xffffff).String(), "")
test.T(t, ToHash([]byte("fonts")), Hash(0), "'fonts' must resolve to zero")
}
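// Illustrative addition (not part of the original vendored file): ToHash is
// case sensitive, so identifiers must already be lowercase to match.
func TestHashCaseSensitive(t *testing.T) {
    test.T(t, ToHash([]byte("FONT")), Hash(0), "'FONT' must resolve to zero because hashing is case sensitive")
}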

710
vendor/github.com/tdewolff/parse/css/lex.go generated vendored Normal file
View file

@ -0,0 +1,710 @@
// Package css is a CSS3 lexer and parser following the specifications at http://www.w3.org/TR/css-syntax-3/.
package css // import "github.com/tdewolff/parse/css"
// TODO: \uFFFD replacement character for NULL bytes in strings for example, or at least don't end the string early
import (
"bytes"
"io"
"strconv"
"github.com/tdewolff/parse"
"github.com/tdewolff/parse/buffer"
)
// TokenType determines the type of token, e.g. a number or a semicolon.
type TokenType uint32
// TokenType values.
const (
ErrorToken TokenType = iota // extra token when errors occur
IdentToken
FunctionToken // rgb( rgba( ...
AtKeywordToken // @abc
HashToken // #abc
StringToken
BadStringToken
URLToken
BadURLToken
DelimToken // any unmatched character
NumberToken // 5
PercentageToken // 5%
DimensionToken // 5em
UnicodeRangeToken // U+554A
IncludeMatchToken // ~=
DashMatchToken // |=
PrefixMatchToken // ^=
SuffixMatchToken // $=
SubstringMatchToken // *=
ColumnToken // ||
WhitespaceToken // space \t \r \n \f
CDOToken // <!--
CDCToken // -->
ColonToken // :
SemicolonToken // ;
CommaToken // ,
LeftBracketToken // [
RightBracketToken // ]
LeftParenthesisToken // (
RightParenthesisToken // )
LeftBraceToken // {
RightBraceToken // }
CommentToken // extra token for comments
EmptyToken
CustomPropertyNameToken
CustomPropertyValueToken
)
// String returns the string representation of a TokenType.
func (tt TokenType) String() string {
switch tt {
case ErrorToken:
return "Error"
case IdentToken:
return "Ident"
case FunctionToken:
return "Function"
case AtKeywordToken:
return "AtKeyword"
case HashToken:
return "Hash"
case StringToken:
return "String"
case BadStringToken:
return "BadString"
case URLToken:
return "URL"
case BadURLToken:
return "BadURL"
case DelimToken:
return "Delim"
case NumberToken:
return "Number"
case PercentageToken:
return "Percentage"
case DimensionToken:
return "Dimension"
case UnicodeRangeToken:
return "UnicodeRange"
case IncludeMatchToken:
return "IncludeMatch"
case DashMatchToken:
return "DashMatch"
case PrefixMatchToken:
return "PrefixMatch"
case SuffixMatchToken:
return "SuffixMatch"
case SubstringMatchToken:
return "SubstringMatch"
case ColumnToken:
return "Column"
case WhitespaceToken:
return "Whitespace"
case CDOToken:
return "CDO"
case CDCToken:
return "CDC"
case ColonToken:
return "Colon"
case SemicolonToken:
return "Semicolon"
case CommaToken:
return "Comma"
case LeftBracketToken:
return "LeftBracket"
case RightBracketToken:
return "RightBracket"
case LeftParenthesisToken:
return "LeftParenthesis"
case RightParenthesisToken:
return "RightParenthesis"
case LeftBraceToken:
return "LeftBrace"
case RightBraceToken:
return "RightBrace"
case CommentToken:
return "Comment"
case EmptyToken:
return "Empty"
case CustomPropertyNameToken:
return "CustomPropertyName"
case CustomPropertyValueToken:
return "CustomPropertyValue"
}
return "Invalid(" + strconv.Itoa(int(tt)) + ")"
}
////////////////////////////////////////////////////////////////
// Lexer is the state for the lexer.
type Lexer struct {
r *buffer.Lexer
}
// NewLexer returns a new Lexer for a given io.Reader.
func NewLexer(r io.Reader) *Lexer {
return &Lexer{
buffer.NewLexer(r),
}
}
// Err returns the error encountered during lexing; this is often io.EOF, but other errors can be returned as well.
func (l *Lexer) Err() error {
return l.r.Err()
}
// Restore restores the NULL byte at the end of the buffer.
func (l *Lexer) Restore() {
l.r.Restore()
}
// Next returns the next Token. It returns ErrorToken when an error was encountered. Using Err() one can retrieve the error message.
func (l *Lexer) Next() (TokenType, []byte) {
switch l.r.Peek(0) {
case ' ', '\t', '\n', '\r', '\f':
l.r.Move(1)
for l.consumeWhitespace() {
}
return WhitespaceToken, l.r.Shift()
case ':':
l.r.Move(1)
return ColonToken, l.r.Shift()
case ';':
l.r.Move(1)
return SemicolonToken, l.r.Shift()
case ',':
l.r.Move(1)
return CommaToken, l.r.Shift()
case '(', ')', '[', ']', '{', '}':
if t := l.consumeBracket(); t != ErrorToken {
return t, l.r.Shift()
}
case '#':
if l.consumeHashToken() {
return HashToken, l.r.Shift()
}
case '"', '\'':
if t := l.consumeString(); t != ErrorToken {
return t, l.r.Shift()
}
case '.', '+':
if t := l.consumeNumeric(); t != ErrorToken {
return t, l.r.Shift()
}
case '-':
if t := l.consumeNumeric(); t != ErrorToken {
return t, l.r.Shift()
} else if t := l.consumeIdentlike(); t != ErrorToken {
return t, l.r.Shift()
} else if l.consumeCDCToken() {
return CDCToken, l.r.Shift()
} else if l.consumeCustomVariableToken() {
return CustomPropertyNameToken, l.r.Shift()
}
case '@':
if l.consumeAtKeywordToken() {
return AtKeywordToken, l.r.Shift()
}
case '$', '*', '^', '~':
if t := l.consumeMatch(); t != ErrorToken {
return t, l.r.Shift()
}
case '/':
if l.consumeComment() {
return CommentToken, l.r.Shift()
}
case '<':
if l.consumeCDOToken() {
return CDOToken, l.r.Shift()
}
case '\\':
if t := l.consumeIdentlike(); t != ErrorToken {
return t, l.r.Shift()
}
case 'u', 'U':
if l.consumeUnicodeRangeToken() {
return UnicodeRangeToken, l.r.Shift()
} else if t := l.consumeIdentlike(); t != ErrorToken {
return t, l.r.Shift()
}
case '|':
if t := l.consumeMatch(); t != ErrorToken {
return t, l.r.Shift()
} else if l.consumeColumnToken() {
return ColumnToken, l.r.Shift()
}
case 0:
if l.Err() != nil {
return ErrorToken, nil
}
default:
if t := l.consumeNumeric(); t != ErrorToken {
return t, l.r.Shift()
} else if t := l.consumeIdentlike(); t != ErrorToken {
return t, l.r.Shift()
}
}
// cannot be a rune, because consumeIdentlike would have consumed it as an identifier
l.r.Move(1)
return DelimToken, l.r.Shift()
}
////////////////////////////////////////////////////////////////
/*
The following functions follow the railroad diagrams in http://www.w3.org/TR/css3-syntax/
*/
func (l *Lexer) consumeByte(c byte) bool {
if l.r.Peek(0) == c {
l.r.Move(1)
return true
}
return false
}
func (l *Lexer) consumeComment() bool {
if l.r.Peek(0) != '/' || l.r.Peek(1) != '*' {
return false
}
l.r.Move(2)
for {
c := l.r.Peek(0)
if c == 0 && l.Err() != nil {
break
} else if c == '*' && l.r.Peek(1) == '/' {
l.r.Move(2)
return true
}
l.r.Move(1)
}
return true
}
func (l *Lexer) consumeNewline() bool {
c := l.r.Peek(0)
if c == '\n' || c == '\f' {
l.r.Move(1)
return true
} else if c == '\r' {
if l.r.Peek(1) == '\n' {
l.r.Move(2)
} else {
l.r.Move(1)
}
return true
}
return false
}
func (l *Lexer) consumeWhitespace() bool {
c := l.r.Peek(0)
if c == ' ' || c == '\t' || c == '\n' || c == '\r' || c == '\f' {
l.r.Move(1)
return true
}
return false
}
func (l *Lexer) consumeDigit() bool {
c := l.r.Peek(0)
if c >= '0' && c <= '9' {
l.r.Move(1)
return true
}
return false
}
func (l *Lexer) consumeHexDigit() bool {
c := l.r.Peek(0)
if (c >= '0' && c <= '9') || (c >= 'a' && c <= 'f') || (c >= 'A' && c <= 'F') {
l.r.Move(1)
return true
}
return false
}
func (l *Lexer) consumeEscape() bool {
if l.r.Peek(0) != '\\' {
return false
}
mark := l.r.Pos()
l.r.Move(1)
if l.consumeNewline() {
l.r.Rewind(mark)
return false
} else if l.consumeHexDigit() {
for k := 1; k < 6; k++ {
if !l.consumeHexDigit() {
break
}
}
l.consumeWhitespace()
return true
} else {
c := l.r.Peek(0)
if c >= 0xC0 {
_, n := l.r.PeekRune(0)
l.r.Move(n)
return true
} else if c == 0 && l.r.Err() != nil {
return true
}
}
l.r.Move(1)
return true
}
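// Illustrative note (not part of the original file): an escape is a backslash
// followed by up to six hex digits and optional trailing whitespace, or by any
// single character that is not a newline, e.g. "\26 B" or "\-".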
func (l *Lexer) consumeIdentToken() bool {
mark := l.r.Pos()
if l.r.Peek(0) == '-' {
l.r.Move(1)
}
c := l.r.Peek(0)
if !((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || c == '_' || c >= 0x80) {
if c != '\\' || !l.consumeEscape() {
l.r.Rewind(mark)
return false
}
} else {
l.r.Move(1)
}
for {
c := l.r.Peek(0)
if !((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || (c >= '0' && c <= '9') || c == '_' || c == '-' || c >= 0x80) {
if c != '\\' || !l.consumeEscape() {
break
}
} else {
l.r.Move(1)
}
}
return true
}
// support custom variables, https://www.w3.org/TR/css-variables-1/
func (l *Lexer) consumeCustomVariableToken() bool {
// expect to be on a '-'
l.r.Move(1)
if l.r.Peek(0) != '-' {
l.r.Move(-1)
return false
}
if !l.consumeIdentToken() {
l.r.Move(-1)
return false
}
return true
}
func (l *Lexer) consumeAtKeywordToken() bool {
// expect to be on an '@'
l.r.Move(1)
if !l.consumeIdentToken() {
l.r.Move(-1)
return false
}
return true
}
func (l *Lexer) consumeHashToken() bool {
// expect to be on a '#'
mark := l.r.Pos()
l.r.Move(1)
c := l.r.Peek(0)
if !((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || (c >= '0' && c <= '9') || c == '_' || c == '-' || c >= 0x80) {
if c != '\\' || !l.consumeEscape() {
l.r.Rewind(mark)
return false
}
} else {
l.r.Move(1)
}
for {
c := l.r.Peek(0)
if !((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || (c >= '0' && c <= '9') || c == '_' || c == '-' || c >= 0x80) {
if c != '\\' || !l.consumeEscape() {
break
}
} else {
l.r.Move(1)
}
}
return true
}
func (l *Lexer) consumeNumberToken() bool {
mark := l.r.Pos()
c := l.r.Peek(0)
if c == '+' || c == '-' {
l.r.Move(1)
}
firstDigit := l.consumeDigit()
if firstDigit {
for l.consumeDigit() {
}
}
if l.r.Peek(0) == '.' {
l.r.Move(1)
if l.consumeDigit() {
for l.consumeDigit() {
}
} else if firstDigit {
// . could belong to the next token
l.r.Move(-1)
return true
} else {
l.r.Rewind(mark)
return false
}
} else if !firstDigit {
l.r.Rewind(mark)
return false
}
mark = l.r.Pos()
c = l.r.Peek(0)
if c == 'e' || c == 'E' {
l.r.Move(1)
c = l.r.Peek(0)
if c == '+' || c == '-' {
l.r.Move(1)
}
if !l.consumeDigit() {
// e could belong to the next token
l.r.Rewind(mark)
return true
}
for l.consumeDigit() {
}
}
return true
}
func (l *Lexer) consumeUnicodeRangeToken() bool {
c := l.r.Peek(0)
if (c != 'u' && c != 'U') || l.r.Peek(1) != '+' {
return false
}
mark := l.r.Pos()
l.r.Move(2)
if l.consumeHexDigit() {
// consume up to 6 hexDigits
k := 1
for ; k < 6; k++ {
if !l.consumeHexDigit() {
break
}
}
// either a minus or a question mark or the end is expected
if l.consumeByte('-') {
// consume another up to 6 hexDigits
if l.consumeHexDigit() {
for k := 1; k < 6; k++ {
if !l.consumeHexDigit() {
break
}
}
} else {
l.r.Rewind(mark)
return false
}
} else {
// could be filled up to 6 characters with question marks or else regular hexDigits
if l.consumeByte('?') {
k++
for ; k < 6; k++ {
if !l.consumeByte('?') {
l.r.Rewind(mark)
return false
}
}
}
}
} else {
// consume 6 question marks
for k := 0; k < 6; k++ {
if !l.consumeByte('?') {
l.r.Rewind(mark)
return false
}
}
}
return true
}
func (l *Lexer) consumeColumnToken() bool {
if l.r.Peek(0) == '|' && l.r.Peek(1) == '|' {
l.r.Move(2)
return true
}
return false
}
func (l *Lexer) consumeCDOToken() bool {
if l.r.Peek(0) == '<' && l.r.Peek(1) == '!' && l.r.Peek(2) == '-' && l.r.Peek(3) == '-' {
l.r.Move(4)
return true
}
return false
}
func (l *Lexer) consumeCDCToken() bool {
if l.r.Peek(0) == '-' && l.r.Peek(1) == '-' && l.r.Peek(2) == '>' {
l.r.Move(3)
return true
}
return false
}
////////////////////////////////////////////////////////////////
// consumeMatch consumes any MatchToken.
func (l *Lexer) consumeMatch() TokenType {
if l.r.Peek(1) == '=' {
switch l.r.Peek(0) {
case '~':
l.r.Move(2)
return IncludeMatchToken
case '|':
l.r.Move(2)
return DashMatchToken
case '^':
l.r.Move(2)
return PrefixMatchToken
case '$':
l.r.Move(2)
return SuffixMatchToken
case '*':
l.r.Move(2)
return SubstringMatchToken
}
}
return ErrorToken
}
// consumeBracket consumes any bracket token.
func (l *Lexer) consumeBracket() TokenType {
switch l.r.Peek(0) {
case '(':
l.r.Move(1)
return LeftParenthesisToken
case ')':
l.r.Move(1)
return RightParenthesisToken
case '[':
l.r.Move(1)
return LeftBracketToken
case ']':
l.r.Move(1)
return RightBracketToken
case '{':
l.r.Move(1)
return LeftBraceToken
case '}':
l.r.Move(1)
return RightBraceToken
}
return ErrorToken
}
// consumeNumeric consumes NumberToken, PercentageToken or DimensionToken.
func (l *Lexer) consumeNumeric() TokenType {
if l.consumeNumberToken() {
if l.consumeByte('%') {
return PercentageToken
} else if l.consumeIdentToken() {
return DimensionToken
}
return NumberToken
}
return ErrorToken
}
// consumeString consumes a string and may return BadStringToken when a newline is encountered.
func (l *Lexer) consumeString() TokenType {
// assume to be on " or '
delim := l.r.Peek(0)
l.r.Move(1)
for {
c := l.r.Peek(0)
if c == 0 && l.Err() != nil {
break
} else if c == '\n' || c == '\r' || c == '\f' {
l.r.Move(1)
return BadStringToken
} else if c == delim {
l.r.Move(1)
break
} else if c == '\\' {
if !l.consumeEscape() {
l.r.Move(1)
l.consumeNewline()
}
} else {
l.r.Move(1)
}
}
return StringToken
}
func (l *Lexer) consumeUnquotedURL() bool {
for {
c := l.r.Peek(0)
if c == 0 && l.Err() != nil || c == ')' {
break
} else if c == '"' || c == '\'' || c == '(' || c == '\\' || c == ' ' || c <= 0x1F || c == 0x7F {
if c != '\\' || !l.consumeEscape() {
return false
}
} else {
l.r.Move(1)
}
}
return true
}
// consumeRemnantsBadURL consumes bytes of a BadURLToken so that normal tokenization may continue.
func (l *Lexer) consumeRemnantsBadURL() {
for {
if l.consumeByte(')') || l.Err() != nil {
break
} else if !l.consumeEscape() {
l.r.Move(1)
}
}
}
// consumeIdentlike consumes IdentToken, FunctionToken or URLToken.
func (l *Lexer) consumeIdentlike() TokenType {
if l.consumeIdentToken() {
if l.r.Peek(0) != '(' {
return IdentToken
} else if !parse.EqualFold(bytes.Replace(l.r.Lexeme(), []byte{'\\'}, nil, -1), []byte{'u', 'r', 'l'}) {
l.r.Move(1)
return FunctionToken
}
l.r.Move(1)
// consume url
for l.consumeWhitespace() {
}
if c := l.r.Peek(0); c == '"' || c == '\'' {
if l.consumeString() == BadStringToken {
l.consumeRemnantsBadURL()
return BadURLToken
}
} else if !l.consumeUnquotedURL() && !l.consumeWhitespace() {
l.consumeRemnantsBadURL()
return BadURLToken
}
for l.consumeWhitespace() {
}
if !l.consumeByte(')') && l.Err() != io.EOF {
l.consumeRemnantsBadURL()
return BadURLToken
}
return URLToken
}
return ErrorToken
}

143
vendor/github.com/tdewolff/parse/css/lex_test.go generated vendored Normal file
View file

@ -0,0 +1,143 @@
package css // import "github.com/tdewolff/parse/css"
import (
"bytes"
"fmt"
"io"
"testing"
"github.com/tdewolff/test"
)
type TTs []TokenType
func TestTokens(t *testing.T) {
var tokenTests = []struct {
css string
expected []TokenType
}{
{" ", TTs{}},
{"5.2 .4", TTs{NumberToken, NumberToken}},
{"color: red;", TTs{IdentToken, ColonToken, IdentToken, SemicolonToken}},
{"background: url(\"http://x\");", TTs{IdentToken, ColonToken, URLToken, SemicolonToken}},
{"background: URL(x.png);", TTs{IdentToken, ColonToken, URLToken, SemicolonToken}},
{"color: rgb(4, 0%, 5em);", TTs{IdentToken, ColonToken, FunctionToken, NumberToken, CommaToken, PercentageToken, CommaToken, DimensionToken, RightParenthesisToken, SemicolonToken}},
{"body { \"string\" }", TTs{IdentToken, LeftBraceToken, StringToken, RightBraceToken}},
{"body { \"str\\\"ing\" }", TTs{IdentToken, LeftBraceToken, StringToken, RightBraceToken}},
{".class { }", TTs{DelimToken, IdentToken, LeftBraceToken, RightBraceToken}},
{"#class { }", TTs{HashToken, LeftBraceToken, RightBraceToken}},
{"#class\\#withhash { }", TTs{HashToken, LeftBraceToken, RightBraceToken}},
{"@media print { }", TTs{AtKeywordToken, IdentToken, LeftBraceToken, RightBraceToken}},
{"/*comment*/", TTs{CommentToken}},
{"/*com* /ment*/", TTs{CommentToken}},
{"~= |= ^= $= *=", TTs{IncludeMatchToken, DashMatchToken, PrefixMatchToken, SuffixMatchToken, SubstringMatchToken}},
{"||", TTs{ColumnToken}},
{"<!-- -->", TTs{CDOToken, CDCToken}},
{"U+1234", TTs{UnicodeRangeToken}},
{"5.2 .4 4e-22", TTs{NumberToken, NumberToken, NumberToken}},
{"--custom-variable", TTs{CustomPropertyNameToken}},
// unexpected ending
{"ident", TTs{IdentToken}},
{"123.", TTs{NumberToken, DelimToken}},
{"\"string", TTs{StringToken}},
{"123/*comment", TTs{NumberToken, CommentToken}},
{"U+1-", TTs{IdentToken, NumberToken, DelimToken}},
// unicode
{"fooδbar􀀀", TTs{IdentToken}},
{"foo\\æ\\†", TTs{IdentToken}},
// {"foo\x00bar", TTs{IdentToken}},
{"'foo\u554abar'", TTs{StringToken}},
{"\\000026B", TTs{IdentToken}},
{"\\26 B", TTs{IdentToken}},
// hacks
{`\-\mo\z\-b\i\nd\in\g:\url(//business\i\nfo.co.uk\/labs\/xbl\/xbl\.xml\#xss);`, TTs{IdentToken, ColonToken, URLToken, SemicolonToken}},
{"width/**/:/**/ 40em;", TTs{IdentToken, CommentToken, ColonToken, CommentToken, DimensionToken, SemicolonToken}},
{":root *> #quince", TTs{ColonToken, IdentToken, DelimToken, DelimToken, HashToken}},
{"html[xmlns*=\"\"]:root", TTs{IdentToken, LeftBracketToken, IdentToken, SubstringMatchToken, StringToken, RightBracketToken, ColonToken, IdentToken}},
{"body:nth-of-type(1)", TTs{IdentToken, ColonToken, FunctionToken, NumberToken, RightParenthesisToken}},
{"color/*\\**/: blue\\9;", TTs{IdentToken, CommentToken, ColonToken, IdentToken, SemicolonToken}},
{"color: blue !ie;", TTs{IdentToken, ColonToken, IdentToken, DelimToken, IdentToken, SemicolonToken}},
// escapes, null and replacement character
{"c\\\x00olor: white;", TTs{IdentToken, ColonToken, IdentToken, SemicolonToken}},
{"null\\0", TTs{IdentToken}},
{"eof\\", TTs{IdentToken}},
{"\"a\x00b\"", TTs{StringToken}},
{"a\\\x00b", TTs{IdentToken}},
{"url(a\x00b)", TTs{BadURLToken}}, // null character cannot be unquoted
{"/*a\x00b*/", TTs{CommentToken}},
// coverage
{" \n\r\n\r\"\\\r\n\\\r\"", TTs{StringToken}},
{"U+?????? U+ABCD?? U+ABC-DEF", TTs{UnicodeRangeToken, UnicodeRangeToken, UnicodeRangeToken}},
{"U+? U+A?", TTs{IdentToken, DelimToken, DelimToken, IdentToken, DelimToken, IdentToken, DelimToken}},
{"-5.23 -moz", TTs{NumberToken, IdentToken}},
{"()", TTs{LeftParenthesisToken, RightParenthesisToken}},
{"url( //url )", TTs{URLToken}},
{"url( ", TTs{URLToken}},
{"url( //url", TTs{URLToken}},
{"url(\")a", TTs{URLToken}},
{"url(a'\\\n)a", TTs{BadURLToken, IdentToken}},
{"url(\"\n)a", TTs{BadURLToken, IdentToken}},
{"url(a h)a", TTs{BadURLToken, IdentToken}},
{"<!- | @4 ## /2", TTs{DelimToken, DelimToken, DelimToken, DelimToken, DelimToken, NumberToken, DelimToken, DelimToken, DelimToken, NumberToken}},
{"\"s\\\n\"", TTs{StringToken}},
{"\"a\\\"b\"", TTs{StringToken}},
{"\"s\n", TTs{BadStringToken}},
// small
{"\"abcd", TTs{StringToken}},
{"/*comment", TTs{CommentToken}},
{"U+A-B", TTs{UnicodeRangeToken}},
{"url((", TTs{BadURLToken}},
{"id\u554a", TTs{IdentToken}},
}
for _, tt := range tokenTests {
t.Run(tt.css, func(t *testing.T) {
l := NewLexer(bytes.NewBufferString(tt.css))
i := 0
for {
token, _ := l.Next()
if token == ErrorToken {
test.T(t, l.Err(), io.EOF)
test.T(t, i, len(tt.expected), "when error occurred we must be at the end")
break
} else if token == WhitespaceToken {
continue
}
test.That(t, i < len(tt.expected), "index", i, "must not exceed expected token types size", len(tt.expected))
if i < len(tt.expected) {
test.T(t, token, tt.expected[i], "token types must match")
}
i++
}
})
}
test.T(t, WhitespaceToken.String(), "Whitespace")
test.T(t, EmptyToken.String(), "Empty")
test.T(t, CustomPropertyValueToken.String(), "CustomPropertyValue")
test.T(t, TokenType(100).String(), "Invalid(100)")
test.T(t, NewLexer(bytes.NewBufferString("x")).consumeBracket(), ErrorToken, "consumeBracket on 'x' must return error")
}
////////////////////////////////////////////////////////////////
func ExampleNewLexer() {
l := NewLexer(bytes.NewBufferString("color: red;"))
out := ""
for {
tt, data := l.Next()
if tt == ErrorToken {
break
} else if tt == WhitespaceToken || tt == CommentToken {
continue
}
out += string(data)
}
fmt.Println(out)
// Output: color:red;
}
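// Illustrative sketch (not part of the original vendored file): after the token
// loop, Err distinguishes a normal end of input (io.EOF) from real errors.
func ExampleLexer_Err() {
    l := NewLexer(bytes.NewBufferString("color: red;"))
    for {
        if tt, _ := l.Next(); tt == ErrorToken {
            break
        }
    }
    fmt.Println(l.Err())
    // Output: EOF
}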

398
vendor/github.com/tdewolff/parse/css/parse.go generated vendored Normal file
View file

@ -0,0 +1,398 @@
package css // import "github.com/tdewolff/parse/css"
import (
"bytes"
"io"
"strconv"
"github.com/tdewolff/parse"
)
var wsBytes = []byte(" ")
var endBytes = []byte("}")
var emptyBytes = []byte("")
// GrammarType determines the type of grammar.
type GrammarType uint32
// GrammarType values.
const (
ErrorGrammar GrammarType = iota // extra token when errors occur
CommentGrammar
AtRuleGrammar
BeginAtRuleGrammar
EndAtRuleGrammar
QualifiedRuleGrammar
BeginRulesetGrammar
EndRulesetGrammar
DeclarationGrammar
TokenGrammar
CustomPropertyGrammar
)
// String returns the string representation of a GrammarType.
func (tt GrammarType) String() string {
switch tt {
case ErrorGrammar:
return "Error"
case CommentGrammar:
return "Comment"
case AtRuleGrammar:
return "AtRule"
case BeginAtRuleGrammar:
return "BeginAtRule"
case EndAtRuleGrammar:
return "EndAtRule"
case QualifiedRuleGrammar:
return "QualifiedRule"
case BeginRulesetGrammar:
return "BeginRuleset"
case EndRulesetGrammar:
return "EndRuleset"
case DeclarationGrammar:
return "Declaration"
case TokenGrammar:
return "Token"
case CustomPropertyGrammar:
return "CustomProperty"
}
return "Invalid(" + strconv.Itoa(int(tt)) + ")"
}
////////////////////////////////////////////////////////////////
// State is the state function the parser currently is in.
type State func(*Parser) GrammarType
// Token is a single TokenType and its associated data.
type Token struct {
TokenType
Data []byte
}
// Parser is the state for the parser.
type Parser struct {
l *Lexer
state []State
err error
buf []Token
level int
tt TokenType
data []byte
prevWS bool
prevEnd bool
}
// NewParser returns a new CSS parser from an io.Reader. isInline specifies whether this is an inline style attribute.
func NewParser(r io.Reader, isInline bool) *Parser {
l := NewLexer(r)
p := &Parser{
l: l,
state: make([]State, 0, 4),
}
if isInline {
p.state = append(p.state, (*Parser).parseDeclarationList)
} else {
p.state = append(p.state, (*Parser).parseStylesheet)
}
return p
}
// Err returns the error encountered during parsing; this is often io.EOF, but other errors can be returned as well.
func (p *Parser) Err() error {
if p.err != nil {
return p.err
}
return p.l.Err()
}
// Restore restores the NULL byte at the end of the buffer.
func (p *Parser) Restore() {
p.l.Restore()
}
// Next returns the next Grammar. It returns ErrorGrammar when an error was encountered. Using Err() one can retrieve the error message.
func (p *Parser) Next() (GrammarType, TokenType, []byte) {
p.err = nil
if p.prevEnd {
p.tt, p.data = RightBraceToken, endBytes
p.prevEnd = false
} else {
p.tt, p.data = p.popToken(true)
}
gt := p.state[len(p.state)-1](p)
return gt, p.tt, p.data
}
// Values returns a slice of Tokens for the last Grammar. Only AtRuleGrammar, BeginAtRuleGrammar, BeginRulesetGrammar and DeclarationGrammar return values: the at-rule components, the ruleset selector and the declaration values respectively.
func (p *Parser) Values() []Token {
return p.buf
}
func (p *Parser) popToken(allowComment bool) (TokenType, []byte) {
p.prevWS = false
tt, data := p.l.Next()
for tt == WhitespaceToken || tt == CommentToken {
if tt == WhitespaceToken {
p.prevWS = true
} else if allowComment && len(p.state) == 1 {
break
}
tt, data = p.l.Next()
}
return tt, data
}
func (p *Parser) initBuf() {
p.buf = p.buf[:0]
}
func (p *Parser) pushBuf(tt TokenType, data []byte) {
p.buf = append(p.buf, Token{tt, data})
}
////////////////////////////////////////////////////////////////
func (p *Parser) parseStylesheet() GrammarType {
if p.tt == CDOToken || p.tt == CDCToken {
return TokenGrammar
} else if p.tt == AtKeywordToken {
return p.parseAtRule()
} else if p.tt == CommentToken {
return CommentGrammar
} else if p.tt == ErrorToken {
return ErrorGrammar
}
return p.parseQualifiedRule()
}
func (p *Parser) parseDeclarationList() GrammarType {
if p.tt == CommentToken {
p.tt, p.data = p.popToken(false)
}
for p.tt == SemicolonToken {
p.tt, p.data = p.popToken(false)
}
if p.tt == ErrorToken {
return ErrorGrammar
} else if p.tt == AtKeywordToken {
return p.parseAtRule()
} else if p.tt == IdentToken {
return p.parseDeclaration()
} else if p.tt == CustomPropertyNameToken {
return p.parseCustomProperty()
}
// parse error
p.initBuf()
p.err = parse.NewErrorLexer("unexpected token in declaration", p.l.r)
for {
tt, data := p.popToken(false)
if (tt == SemicolonToken || tt == RightBraceToken) && p.level == 0 || tt == ErrorToken {
p.prevEnd = (tt == RightBraceToken)
return ErrorGrammar
}
p.pushBuf(tt, data)
}
}
////////////////////////////////////////////////////////////////
func (p *Parser) parseAtRule() GrammarType {
p.initBuf()
parse.ToLower(p.data)
atRuleName := p.data
if len(atRuleName) > 1 && atRuleName[1] == '-' {
if i := bytes.IndexByte(atRuleName[2:], '-'); i != -1 {
atRuleName = atRuleName[i+2:] // skip vendor specific prefix
}
}
atRule := ToHash(atRuleName[1:])
first := true
skipWS := false
for {
tt, data := p.popToken(false)
if tt == LeftBraceToken && p.level == 0 {
if atRule == Font_Face || atRule == Page {
p.state = append(p.state, (*Parser).parseAtRuleDeclarationList)
} else if atRule == Document || atRule == Keyframes || atRule == Media || atRule == Supports {
p.state = append(p.state, (*Parser).parseAtRuleRuleList)
} else {
p.state = append(p.state, (*Parser).parseAtRuleUnknown)
}
return BeginAtRuleGrammar
} else if (tt == SemicolonToken || tt == RightBraceToken) && p.level == 0 || tt == ErrorToken {
p.prevEnd = (tt == RightBraceToken)
return AtRuleGrammar
} else if tt == LeftParenthesisToken || tt == LeftBraceToken || tt == LeftBracketToken || tt == FunctionToken {
p.level++
} else if tt == RightParenthesisToken || tt == RightBraceToken || tt == RightBracketToken {
p.level--
}
if first {
if tt == LeftParenthesisToken || tt == LeftBracketToken {
p.prevWS = false
}
first = false
}
if len(data) == 1 && (data[0] == ',' || data[0] == ':') {
skipWS = true
} else if p.prevWS && !skipWS && tt != RightParenthesisToken {
p.pushBuf(WhitespaceToken, wsBytes)
} else {
skipWS = false
}
if tt == LeftParenthesisToken {
skipWS = true
}
p.pushBuf(tt, data)
}
}
func (p *Parser) parseAtRuleRuleList() GrammarType {
if p.tt == RightBraceToken || p.tt == ErrorToken {
p.state = p.state[:len(p.state)-1]
return EndAtRuleGrammar
} else if p.tt == AtKeywordToken {
return p.parseAtRule()
} else {
return p.parseQualifiedRule()
}
}
func (p *Parser) parseAtRuleDeclarationList() GrammarType {
for p.tt == SemicolonToken {
p.tt, p.data = p.popToken(false)
}
if p.tt == RightBraceToken || p.tt == ErrorToken {
p.state = p.state[:len(p.state)-1]
return EndAtRuleGrammar
}
return p.parseDeclarationList()
}
func (p *Parser) parseAtRuleUnknown() GrammarType {
if p.tt == RightBraceToken && p.level == 0 || p.tt == ErrorToken {
p.state = p.state[:len(p.state)-1]
return EndAtRuleGrammar
}
if p.tt == LeftParenthesisToken || p.tt == LeftBraceToken || p.tt == LeftBracketToken || p.tt == FunctionToken {
p.level++
} else if p.tt == RightParenthesisToken || p.tt == RightBraceToken || p.tt == RightBracketToken {
p.level--
}
return TokenGrammar
}
func (p *Parser) parseQualifiedRule() GrammarType {
p.initBuf()
first := true
inAttrSel := false
skipWS := true
var tt TokenType
var data []byte
for {
if first {
tt, data = p.tt, p.data
p.tt = WhitespaceToken
p.data = emptyBytes
first = false
} else {
tt, data = p.popToken(false)
}
if tt == LeftBraceToken && p.level == 0 {
p.state = append(p.state, (*Parser).parseQualifiedRuleDeclarationList)
return BeginRulesetGrammar
} else if tt == ErrorToken {
p.err = parse.NewErrorLexer("unexpected ending in qualified rule, expected left brace token", p.l.r)
return ErrorGrammar
} else if tt == LeftParenthesisToken || tt == LeftBraceToken || tt == LeftBracketToken || tt == FunctionToken {
p.level++
} else if tt == RightParenthesisToken || tt == RightBraceToken || tt == RightBracketToken {
p.level--
}
if len(data) == 1 && (data[0] == ',' || data[0] == '>' || data[0] == '+' || data[0] == '~') {
if data[0] == ',' {
return QualifiedRuleGrammar
}
skipWS = true
} else if p.prevWS && !skipWS && !inAttrSel {
p.pushBuf(WhitespaceToken, wsBytes)
} else {
skipWS = false
}
if tt == LeftBracketToken {
inAttrSel = true
} else if tt == RightBracketToken {
inAttrSel = false
}
p.pushBuf(tt, data)
}
}
func (p *Parser) parseQualifiedRuleDeclarationList() GrammarType {
for p.tt == SemicolonToken {
p.tt, p.data = p.popToken(false)
}
if p.tt == RightBraceToken || p.tt == ErrorToken {
p.state = p.state[:len(p.state)-1]
return EndRulesetGrammar
}
return p.parseDeclarationList()
}
func (p *Parser) parseDeclaration() GrammarType {
p.initBuf()
parse.ToLower(p.data)
if tt, _ := p.popToken(false); tt != ColonToken {
p.err = parse.NewErrorLexer("unexpected token in declaration", p.l.r)
return ErrorGrammar
}
skipWS := true
for {
tt, data := p.popToken(false)
if (tt == SemicolonToken || tt == RightBraceToken) && p.level == 0 || tt == ErrorToken {
p.prevEnd = (tt == RightBraceToken)
return DeclarationGrammar
} else if tt == LeftParenthesisToken || tt == LeftBraceToken || tt == LeftBracketToken || tt == FunctionToken {
p.level++
} else if tt == RightParenthesisToken || tt == RightBraceToken || tt == RightBracketToken {
p.level--
}
if len(data) == 1 && (data[0] == ',' || data[0] == '/' || data[0] == ':' || data[0] == '!' || data[0] == '=') {
skipWS = true
} else if p.prevWS && !skipWS {
p.pushBuf(WhitespaceToken, wsBytes)
} else {
skipWS = false
}
p.pushBuf(tt, data)
}
}
func (p *Parser) parseCustomProperty() GrammarType {
p.initBuf()
if tt, _ := p.popToken(false); tt != ColonToken {
p.err = parse.NewErrorLexer("unexpected token in declaration", p.l.r)
return ErrorGrammar
}
val := []byte{}
for {
tt, data := p.l.Next()
if (tt == SemicolonToken || tt == RightBraceToken) && p.level == 0 || tt == ErrorToken {
p.prevEnd = (tt == RightBraceToken)
p.pushBuf(CustomPropertyValueToken, val)
return CustomPropertyGrammar
} else if tt == LeftParenthesisToken || tt == LeftBraceToken || tt == LeftBracketToken || tt == FunctionToken {
p.level++
} else if tt == RightParenthesisToken || tt == RightBraceToken || tt == RightBracketToken {
p.level--
}
val = append(val, data...)
}
}

248
vendor/github.com/tdewolff/parse/css/parse_test.go generated vendored Normal file
View file

@ -0,0 +1,248 @@
package css // import "github.com/tdewolff/parse/css"
import (
"bytes"
"fmt"
"io"
"testing"
"github.com/tdewolff/parse"
"github.com/tdewolff/test"
)
////////////////////////////////////////////////////////////////
func TestParse(t *testing.T) {
var parseTests = []struct {
inline bool
css string
expected string
}{
{true, " x : y ; ", "x:y;"},
{true, "color: red;", "color:red;"},
{true, "color : red;", "color:red;"},
{true, "color: red; border: 0;", "color:red;border:0;"},
{true, "color: red !important;", "color:red!important;"},
{true, "color: red ! important;", "color:red!important;"},
{true, "white-space: -moz-pre-wrap;", "white-space:-moz-pre-wrap;"},
{true, "display: -moz-inline-stack;", "display:-moz-inline-stack;"},
{true, "x: 10px / 1em;", "x:10px/1em;"},
{true, "x: 1em/1.5em \"Times New Roman\", Times, serif;", "x:1em/1.5em \"Times New Roman\",Times,serif;"},
{true, "x: hsla(100,50%, 75%, 0.5);", "x:hsla(100,50%,75%,0.5);"},
{true, "x: hsl(100,50%, 75%);", "x:hsl(100,50%,75%);"},
{true, "x: rgba(255, 238 , 221, 0.3);", "x:rgba(255,238,221,0.3);"},
{true, "x: 50vmax;", "x:50vmax;"},
{true, "color: linear-gradient(to right, black, white);", "color:linear-gradient(to right,black,white);"},
{true, "color: calc(100%/2 - 1em);", "color:calc(100%/2 - 1em);"},
{true, "color: calc(100%/2--1em);", "color:calc(100%/2--1em);"},
{false, "<!-- @charset; -->", "<!--@charset;-->"},
{false, "@media print, screen { }", "@media print,screen{}"},
{false, "@media { @viewport ; }", "@media{@viewport;}"},
{false, "@keyframes 'diagonal-slide' { from { left: 0; top: 0; } to { left: 100px; top: 100px; } }", "@keyframes 'diagonal-slide'{from{left:0;top:0;}to{left:100px;top:100px;}}"},
{false, "@keyframes movingbox{0%{left:90%;}50%{left:10%;}100%{left:90%;}}", "@keyframes movingbox{0%{left:90%;}50%{left:10%;}100%{left:90%;}}"},
{false, ".foo { color: #fff;}", ".foo{color:#fff;}"},
{false, ".foo { ; _color: #fff;}", ".foo{_color:#fff;}"},
{false, "a { color: red; border: 0; }", "a{color:red;border:0;}"},
{false, "a { color: red; border: 0; } b { padding: 0; }", "a{color:red;border:0;}b{padding:0;}"},
{false, "/* comment */", "/* comment */"},
// extraordinary
{true, "color: red;;", "color:red;"},
{true, "color:#c0c0c0", "color:#c0c0c0;"},
{true, "background:URL(x.png);", "background:URL(x.png);"},
{true, "filter: progid : DXImageTransform.Microsoft.BasicImage(rotation=1);", "filter:progid:DXImageTransform.Microsoft.BasicImage(rotation=1);"},
{true, "/*a*/\n/*c*/\nkey: value;", "key:value;"},
{true, "@-moz-charset;", "@-moz-charset;"},
{true, "--custom-variable: (0;) ;", "--custom-variable: (0;) ;"},
{false, "@import;@import;", "@import;@import;"},
{false, ".a .b#c, .d<.e { x:y; }", ".a .b#c,.d<.e{x:y;}"},
{false, ".a[b~=c]d { x:y; }", ".a[b~=c]d{x:y;}"},
// {false, "{x:y;}", "{x:y;}"},
{false, "a{}", "a{}"},
{false, "a,.b/*comment*/ {x:y;}", "a,.b{x:y;}"},
{false, "a,.b/*comment*/.c {x:y;}", "a,.b.c{x:y;}"},
{false, "a{x:; z:q;}", "a{x:;z:q;}"},
{false, "@font-face { x:y; }", "@font-face{x:y;}"},
{false, "a:not([controls]){x:y;}", "a:not([controls]){x:y;}"},
{false, "@document regexp('https:.*') { p { color: red; } }", "@document regexp('https:.*'){p{color:red;}}"},
{false, "@media all and ( max-width:400px ) { }", "@media all and (max-width:400px){}"},
{false, "@media (max-width:400px) { }", "@media(max-width:400px){}"},
{false, "@media (max-width:400px)", "@media(max-width:400px);"},
{false, "@font-face { ; font:x; }", "@font-face{font:x;}"},
{false, "@-moz-font-face { ; font:x; }", "@-moz-font-face{font:x;}"},
{false, "@unknown abc { {} lala }", "@unknown abc{{}lala}"},
{false, "a[x={}]{x:y;}", "a[x={}]{x:y;}"},
{false, "a[x=,]{x:y;}", "a[x=,]{x:y;}"},
{false, "a[x=+]{x:y;}", "a[x=+]{x:y;}"},
{false, ".cla .ss > #id { x:y; }", ".cla .ss>#id{x:y;}"},
{false, ".cla /*a*/ /*b*/ .ss{}", ".cla .ss{}"},
{false, "a{x:f(a(),b);}", "a{x:f(a(),b);}"},
{false, "a{x:y!z;}", "a{x:y!z;}"},
{false, "[class*=\"column\"]+[class*=\"column\"]:last-child{a:b;}", "[class*=\"column\"]+[class*=\"column\"]:last-child{a:b;}"},
{false, "@media { @viewport }", "@media{@viewport;}"},
{false, "table { @unknown }", "table{@unknown;}"},
// early endings
{false, "selector{", "selector{"},
{false, "@media{selector{", "@media{selector{"},
// bad grammar
{true, "~color:red", "~color:red;"},
{false, ".foo { *color: #fff;}", ".foo{*color:#fff;}"},
{true, "*color: red; font-size: 12pt;", "*color:red;font-size:12pt;"},
{true, "_color: red; font-size: 12pt;", "_color:red;font-size:12pt;"},
// issues
{false, "@media print {.class{width:5px;}}", "@media print{.class{width:5px;}}"}, // #6
{false, ".class{width:calc((50% + 2em)/2 + 14px);}", ".class{width:calc((50% + 2em)/2 + 14px);}"}, // #7
{false, ".class [c=y]{}", ".class [c=y]{}"}, // tdewolff/minify#16
{false, "table{font-family:Verdana}", "table{font-family:Verdana;}"}, // tdewolff/minify#22
// go-fuzz
{false, "@-webkit-", "@-webkit-;"},
}
for _, tt := range parseTests {
t.Run(tt.css, func(t *testing.T) {
output := ""
p := NewParser(bytes.NewBufferString(tt.css), tt.inline)
for {
grammar, _, data := p.Next()
data = parse.Copy(data)
if grammar == ErrorGrammar {
if err := p.Err(); err != io.EOF {
for _, val := range p.Values() {
data = append(data, val.Data...)
}
if perr, ok := err.(*parse.Error); ok && perr.Message == "unexpected token in declaration" {
data = append(data, ";"...)
}
} else {
test.T(t, err, io.EOF)
break
}
} else if grammar == AtRuleGrammar || grammar == BeginAtRuleGrammar || grammar == QualifiedRuleGrammar || grammar == BeginRulesetGrammar || grammar == DeclarationGrammar || grammar == CustomPropertyGrammar {
if grammar == DeclarationGrammar || grammar == CustomPropertyGrammar {
data = append(data, ":"...)
}
for _, val := range p.Values() {
data = append(data, val.Data...)
}
if grammar == BeginAtRuleGrammar || grammar == BeginRulesetGrammar {
data = append(data, "{"...)
} else if grammar == AtRuleGrammar || grammar == DeclarationGrammar || grammar == CustomPropertyGrammar {
data = append(data, ";"...)
} else if grammar == QualifiedRuleGrammar {
data = append(data, ","...)
}
}
output += string(data)
}
test.String(t, output, tt.expected)
})
}
test.T(t, ErrorGrammar.String(), "Error")
test.T(t, AtRuleGrammar.String(), "AtRule")
test.T(t, BeginAtRuleGrammar.String(), "BeginAtRule")
test.T(t, EndAtRuleGrammar.String(), "EndAtRule")
test.T(t, BeginRulesetGrammar.String(), "BeginRuleset")
test.T(t, EndRulesetGrammar.String(), "EndRuleset")
test.T(t, DeclarationGrammar.String(), "Declaration")
test.T(t, TokenGrammar.String(), "Token")
test.T(t, CommentGrammar.String(), "Comment")
test.T(t, CustomPropertyGrammar.String(), "CustomProperty")
test.T(t, GrammarType(100).String(), "Invalid(100)")
}
func TestParseError(t *testing.T) {
var parseErrorTests = []struct {
inline bool
css string
col int
}{
{false, "selector", 9},
{true, "color 0", 8},
{true, "--color 0", 10},
{true, "--custom-variable:0", 0},
}
for _, tt := range parseErrorTests {
t.Run(tt.css, func(t *testing.T) {
p := NewParser(bytes.NewBufferString(tt.css), tt.inline)
for {
grammar, _, _ := p.Next()
if grammar == ErrorGrammar {
if tt.col == 0 {
test.T(t, p.Err(), io.EOF)
} else if perr, ok := p.Err().(*parse.Error); ok {
test.T(t, perr.Col, tt.col)
} else {
test.Fail(t, "bad error:", p.Err())
}
break
}
}
})
}
}
func TestReader(t *testing.T) {
input := "x:a;"
p := NewParser(test.NewPlainReader(bytes.NewBufferString(input)), true)
for {
grammar, _, _ := p.Next()
if grammar == ErrorGrammar {
break
}
}
}
////////////////////////////////////////////////////////////////
type Obj struct{}
func (*Obj) F() {}
var f1 func(*Obj)
func BenchmarkFuncPtr(b *testing.B) {
for i := 0; i < b.N; i++ {
f1 = (*Obj).F
}
}
var f2 func()
func BenchmarkMemFuncPtr(b *testing.B) {
obj := &Obj{}
for i := 0; i < b.N; i++ {
f2 = obj.F
}
}
func ExampleNewParser() {
	p := NewParser(bytes.NewBufferString("color: red;"), true) // true because this is the content of an inline style attribute
out := ""
for {
gt, _, data := p.Next()
if gt == ErrorGrammar {
break
} else if gt == AtRuleGrammar || gt == BeginAtRuleGrammar || gt == BeginRulesetGrammar || gt == DeclarationGrammar {
out += string(data)
if gt == DeclarationGrammar {
out += ":"
}
for _, val := range p.Values() {
out += string(val.Data)
}
if gt == BeginAtRuleGrammar || gt == BeginRulesetGrammar {
out += "{"
} else if gt == AtRuleGrammar || gt == DeclarationGrammar {
out += ";"
}
} else {
out += string(data)
}
}
fmt.Println(out)
// Output: color:red;
}

47
vendor/github.com/tdewolff/parse/css/util.go generated vendored Normal file
View file

@ -0,0 +1,47 @@
package css // import "github.com/tdewolff/parse/css"
import "github.com/tdewolff/parse/buffer"
// IsIdent returns true if the bytes are a valid identifier.
func IsIdent(b []byte) bool {
l := NewLexer(buffer.NewReader(b))
l.consumeIdentToken()
l.r.Restore()
return l.r.Pos() == len(b)
}
// IsURLUnquoted returns true if the bytes are a valid unquoted URL.
func IsURLUnquoted(b []byte) bool {
l := NewLexer(buffer.NewReader(b))
l.consumeUnquotedURL()
l.r.Restore()
return l.r.Pos() == len(b)
}
// HSL2RGB converts HSL to RGB, with all values in the range [0,1],
// from http://www.w3.org/TR/css3-color/#hsl-color
func HSL2RGB(h, s, l float64) (float64, float64, float64) {
m2 := l * (s + 1)
if l > 0.5 {
m2 = l + s - l*s
}
m1 := l*2 - m2
return hue2rgb(m1, m2, h+1.0/3.0), hue2rgb(m1, m2, h), hue2rgb(m1, m2, h-1.0/3.0)
}
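// hue2rgb computes a single RGB channel from the intermediate values m1 and m2
// and a hue offset h, following the piecewise formula of the CSS3 spec.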
func hue2rgb(m1, m2, h float64) float64 {
if h < 0.0 {
h += 1.0
}
if h > 1.0 {
h -= 1.0
}
if h*6.0 < 1.0 {
return m1 + (m2-m1)*h*6.0
} else if h*2.0 < 1.0 {
return m2
} else if h*3.0 < 2.0 {
return m1 + (m2-m1)*(2.0/3.0-h)*6.0
}
return m1
}

34
vendor/github.com/tdewolff/parse/css/util_test.go generated vendored Normal file
View file

@ -0,0 +1,34 @@
package css // import "github.com/tdewolff/parse/css"
import (
"testing"
"github.com/tdewolff/test"
)
func TestIsIdent(t *testing.T) {
test.That(t, IsIdent([]byte("color")))
test.That(t, !IsIdent([]byte("4.5")))
}
func TestIsURLUnquoted(t *testing.T) {
test.That(t, IsURLUnquoted([]byte("http://x")))
test.That(t, !IsURLUnquoted([]byte(")")))
}
func TestHsl2Rgb(t *testing.T) {
r, g, b := HSL2RGB(0.0, 1.0, 0.5)
test.T(t, r, 1.0)
test.T(t, g, 0.0)
test.T(t, b, 0.0)
r, g, b = HSL2RGB(1.0, 1.0, 0.5)
test.T(t, r, 1.0)
test.T(t, g, 0.0)
test.T(t, b, 0.0)
r, g, b = HSL2RGB(0.66, 0.0, 1.0)
test.T(t, r, 1.0)
test.T(t, g, 1.0)
test.T(t, b, 1.0)
}

35
vendor/github.com/tdewolff/parse/error.go generated vendored Normal file
View file

@ -0,0 +1,35 @@
package parse
import (
"fmt"
"io"
"github.com/tdewolff/parse/buffer"
)
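// Error is a parse error with position information and surrounding context.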
type Error struct {
Message string
Line int
Col int
Context string
}
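// NewError returns an Error for the given reader at the given byte offset.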
func NewError(msg string, r io.Reader, offset int) *Error {
line, col, context, _ := Position(r, offset)
return &Error{
msg,
line,
col,
context,
}
}
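// NewErrorLexer returns an Error at the current position of the given lexer.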
func NewErrorLexer(msg string, l *buffer.Lexer) *Error {
r := buffer.NewReader(l.Bytes())
offset := l.Offset()
return NewError(msg, r, offset)
}
func (e *Error) Error() string {
return fmt.Sprintf("parse error:%d:%d: %s\n%s", e.Line, e.Col, e.Message, e.Context)
}

98
vendor/github.com/tdewolff/parse/html/README.md generated vendored Normal file
View file

@ -0,0 +1,98 @@
# HTML [![GoDoc](http://godoc.org/github.com/tdewolff/parse/html?status.svg)](http://godoc.org/github.com/tdewolff/parse/html) [![GoCover](http://gocover.io/_badge/github.com/tdewolff/parse/html)](http://gocover.io/github.com/tdewolff/parse/html)
This package is an HTML5 lexer written in [Go][1]. It follows the specification at [The HTML syntax](http://www.w3.org/TR/html5/syntax.html). The lexer takes an io.Reader and converts it into tokens until EOF.
## Installation
Run the following command
go get github.com/tdewolff/parse/html
or add the following import and run the project with `go get`
import "github.com/tdewolff/parse/html"
## Lexer
### Usage
The following initializes a new Lexer with io.Reader `r`:
``` go
l := html.NewLexer(r)
```
To tokenize until EOF or an error occurs, use:
``` go
for {
tt, data := l.Next()
switch tt {
case html.ErrorToken:
// error or EOF set in l.Err()
return
case html.StartTagToken:
// ...
for {
ttAttr, dataAttr := l.Next()
if ttAttr != html.AttributeToken {
break
}
// ...
}
// ...
}
}
```
All tokens:
``` go
ErrorToken TokenType = iota // extra token when errors occur
CommentToken
DoctypeToken
StartTagToken
StartTagCloseToken
StartTagVoidToken
EndTagToken
AttributeToken
	TextToken
	SvgToken
	MathToken
```
### Examples
``` go
package main
import (
	"fmt"
	"io"
	"os"
	"github.com/tdewolff/parse/html"
)
// Tokenize HTML from stdin.
func main() {
l := html.NewLexer(os.Stdin)
for {
tt, data := l.Next()
switch tt {
case html.ErrorToken:
if l.Err() != io.EOF {
fmt.Println("Error on line", l.Line(), ":", l.Err())
}
return
case html.StartTagToken:
fmt.Println("Tag", string(data))
for {
ttAttr, dataAttr := l.Next()
if ttAttr != html.AttributeToken {
break
}
key := dataAttr
val := l.AttrVal()
fmt.Println("Attribute", string(key), "=", string(val))
}
// ...
}
}
}
```
## License
Released under the [MIT license](https://github.com/tdewolff/parse/blob/master/LICENSE.md).
[1]: http://golang.org/ "Go Language"

831
vendor/github.com/tdewolff/parse/html/hash.go generated vendored Normal file
View file

@ -0,0 +1,831 @@
package html
// generated by hasher -type=Hash -file=hash.go; DO NOT EDIT, except for adding more constants to the list; then rerun go generate
// uses github.com/tdewolff/hasher
//go:generate hasher -type=Hash -file=hash.go
// Hash defines perfect hashes for a predefined list of strings
type Hash uint32
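// The upper bits of a Hash are the offset of its name in _Hash_text and the
// low byte is the name's length, as decoded by String.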
// Unique hash definitions to be used instead of strings
const (
A Hash = 0x1 // a
Abbr Hash = 0x4 // abbr
Accept Hash = 0x3206 // accept
Accept_Charset Hash = 0x320e // accept-charset
Accesskey Hash = 0x4409 // accesskey
Acronym Hash = 0xbb07 // acronym
Action Hash = 0x2ba06 // action
Address Hash = 0x67e07 // address
Align Hash = 0x1605 // align
Alink Hash = 0xd205 // alink
Allowfullscreen Hash = 0x23d0f // allowfullscreen
Alt Hash = 0xee03 // alt
Annotation Hash = 0x2070a // annotation
AnnotationXml Hash = 0x2070d // annotationXml
Applet Hash = 0x14506 // applet
Area Hash = 0x38d04 // area
Article Hash = 0x40e07 // article
Aside Hash = 0x8305 // aside
Async Hash = 0xfa05 // async
Audio Hash = 0x11605 // audio
Autocomplete Hash = 0x12e0c // autocomplete
Autofocus Hash = 0x13a09 // autofocus
Autoplay Hash = 0x14f08 // autoplay
Axis Hash = 0x15704 // axis
B Hash = 0x101 // b
Background Hash = 0x1e0a // background
Base Hash = 0x45404 // base
Basefont Hash = 0x45408 // basefont
Bdi Hash = 0xcb03 // bdi
Bdo Hash = 0x18403 // bdo
Bgcolor Hash = 0x19707 // bgcolor
Bgsound Hash = 0x19e07 // bgsound
Big Hash = 0x1a603 // big
Blink Hash = 0x1a905 // blink
Blockquote Hash = 0x1ae0a // blockquote
Body Hash = 0x4004 // body
Border Hash = 0x33806 // border
Br Hash = 0x202 // br
Button Hash = 0x1b806 // button
Canvas Hash = 0x7f06 // canvas
Caption Hash = 0x27f07 // caption
Center Hash = 0x62a06 // center
Challenge Hash = 0x1e509 // challenge
Charset Hash = 0x3907 // charset
Checked Hash = 0x3b407 // checked
Cite Hash = 0xfe04 // cite
Class Hash = 0x1c305 // class
Classid Hash = 0x1c307 // classid
Clear Hash = 0x41205 // clear
Code Hash = 0x1d604 // code
Codebase Hash = 0x45008 // codebase
Codetype Hash = 0x1d608 // codetype
Col Hash = 0x19903 // col
Colgroup Hash = 0x1ee08 // colgroup
Color Hash = 0x19905 // color
Cols Hash = 0x20204 // cols
Colspan Hash = 0x20207 // colspan
Command Hash = 0x21407 // command
Compact Hash = 0x21b07 // compact
Content Hash = 0x4a907 // content
Contenteditable Hash = 0x4a90f // contenteditable
Contextmenu Hash = 0x3bd0b // contextmenu
Controls Hash = 0x22a08 // controls
Coords Hash = 0x23606 // coords
Crossorigin Hash = 0x25b0b // crossorigin
Data Hash = 0x4c004 // data
Datalist Hash = 0x4c008 // datalist
Datetime Hash = 0x2ea08 // datetime
Dd Hash = 0x31602 // dd
Declare Hash = 0x8607 // declare
Default Hash = 0x5407 // default
DefaultChecked Hash = 0x5040e // defaultChecked
DefaultMuted Hash = 0x5650c // defaultMuted
DefaultSelected Hash = 0x540f // defaultSelected
Defer Hash = 0x6205 // defer
Del Hash = 0x7203 // del
Desc Hash = 0x7c04 // desc
Details Hash = 0x9207 // details
Dfn Hash = 0xab03 // dfn
Dialog Hash = 0xcc06 // dialog
Dir Hash = 0xd903 // dir
Dirname Hash = 0xd907 // dirname
Disabled Hash = 0x10408 // disabled
Div Hash = 0x10b03 // div
Dl Hash = 0x1a402 // dl
Download Hash = 0x48608 // download
Draggable Hash = 0x1c909 // draggable
Dropzone Hash = 0x41908 // dropzone
Dt Hash = 0x60602 // dt
Em Hash = 0x6e02 // em
Embed Hash = 0x6e05 // embed
Enabled Hash = 0x4e07 // enabled
Enctype Hash = 0x2cf07 // enctype
Face Hash = 0x62804 // face
Fieldset Hash = 0x26c08 // fieldset
Figcaption Hash = 0x27c0a // figcaption
Figure Hash = 0x29006 // figure
Font Hash = 0x45804 // font
Footer Hash = 0xf106 // footer
For Hash = 0x29c03 // for
ForeignObject Hash = 0x29c0d // foreignObject
Foreignobject Hash = 0x2a90d // foreignobject
Form Hash = 0x2b604 // form
Formaction Hash = 0x2b60a // formaction
Formenctype Hash = 0x2cb0b // formenctype
Formmethod Hash = 0x2d60a // formmethod
Formnovalidate Hash = 0x2e00e // formnovalidate
Formtarget Hash = 0x2f50a // formtarget
Frame Hash = 0xa305 // frame
Frameborder Hash = 0x3330b // frameborder
Frameset Hash = 0xa308 // frameset
H1 Hash = 0x19502 // h1
H2 Hash = 0x32402 // h2
H3 Hash = 0x34902 // h3
H4 Hash = 0x38602 // h4
H5 Hash = 0x60802 // h5
H6 Hash = 0x2ff02 // h6
Head Hash = 0x37204 // head
Header Hash = 0x37206 // header
Headers Hash = 0x37207 // headers
Height Hash = 0x30106 // height
Hgroup Hash = 0x30906 // hgroup
Hidden Hash = 0x31406 // hidden
High Hash = 0x32104 // high
Hr Hash = 0xaf02 // hr
Href Hash = 0xaf04 // href
Hreflang Hash = 0xaf08 // hreflang
Html Hash = 0x30504 // html
Http_Equiv Hash = 0x3260a // http-equiv
I Hash = 0x601 // i
Icon Hash = 0x4a804 // icon
Id Hash = 0x8502 // id
Iframe Hash = 0x33206 // iframe
Image Hash = 0x33e05 // image
Img Hash = 0x34303 // img
Inert Hash = 0x55005 // inert
Input Hash = 0x47305 // input
Ins Hash = 0x26403 // ins
Isindex Hash = 0x15907 // isindex
Ismap Hash = 0x34b05 // ismap
Itemid Hash = 0xff06 // itemid
Itemprop Hash = 0x58808 // itemprop
Itemref Hash = 0x62207 // itemref
Itemscope Hash = 0x35609 // itemscope
Itemtype Hash = 0x36008 // itemtype
Kbd Hash = 0xca03 // kbd
Keygen Hash = 0x4a06 // keygen
Keytype Hash = 0x68807 // keytype
Kind Hash = 0xd604 // kind
Label Hash = 0x7405 // label
Lang Hash = 0xb304 // lang
Language Hash = 0xb308 // language
Legend Hash = 0x1d006 // legend
Li Hash = 0x1702 // li
Link Hash = 0xd304 // link
List Hash = 0x4c404 // list
Listing Hash = 0x4c407 // listing
Longdesc Hash = 0x7808 // longdesc
Loop Hash = 0x12104 // loop
Low Hash = 0x23f03 // low
Main Hash = 0x1004 // main
Malignmark Hash = 0xc10a // malignmark
Manifest Hash = 0x65e08 // manifest
Map Hash = 0x14403 // map
Mark Hash = 0xc704 // mark
Marquee Hash = 0x36807 // marquee
Math Hash = 0x36f04 // math
Max Hash = 0x37e03 // max
Maxlength Hash = 0x37e09 // maxlength
Media Hash = 0xde05 // media
Mediagroup Hash = 0xde0a // mediagroup
Menu Hash = 0x3c404 // menu
Meta Hash = 0x4d304 // meta
Meter Hash = 0x2f005 // meter
Method Hash = 0x2da06 // method
Mglyph Hash = 0x34406 // mglyph
Mi Hash = 0x2c02 // mi
Min Hash = 0x2c03 // min
Mn Hash = 0x2e302 // mn
Mo Hash = 0x4f702 // mo
Ms Hash = 0x35902 // ms
Mtext Hash = 0x38805 // mtext
Multiple Hash = 0x39608 // multiple
Muted Hash = 0x39e05 // muted
Name Hash = 0xdc04 // name
Nav Hash = 0x1303 // nav
Nobr Hash = 0x1a04 // nobr
Noembed Hash = 0x6c07 // noembed
Noframes Hash = 0xa108 // noframes
Nohref Hash = 0xad06 // nohref
Noresize Hash = 0x24b08 // noresize
Noscript Hash = 0x31908 // noscript
Noshade Hash = 0x4ff07 // noshade
Novalidate Hash = 0x2e40a // novalidate
Nowrap Hash = 0x59106 // nowrap
Object Hash = 0x2b006 // object
Ol Hash = 0x17102 // ol
Onabort Hash = 0x1bc07 // onabort
Onafterprint Hash = 0x2840c // onafterprint
Onbeforeprint Hash = 0x2be0d // onbeforeprint
Onbeforeunload Hash = 0x6720e // onbeforeunload
Onblur Hash = 0x17e06 // onblur
Oncancel Hash = 0x11a08 // oncancel
Oncanplay Hash = 0x18609 // oncanplay
Oncanplaythrough Hash = 0x18610 // oncanplaythrough
Onchange Hash = 0x42f08 // onchange
Onclick Hash = 0x6b607 // onclick
Onclose Hash = 0x3a307 // onclose
Oncontextmenu Hash = 0x3bb0d // oncontextmenu
Oncuechange Hash = 0x3c80b // oncuechange
Ondblclick Hash = 0x3d30a // ondblclick
Ondrag Hash = 0x3dd06 // ondrag
Ondragend Hash = 0x3dd09 // ondragend
Ondragenter Hash = 0x3e60b // ondragenter
Ondragleave Hash = 0x3f10b // ondragleave
Ondragover Hash = 0x3fc0a // ondragover
Ondragstart Hash = 0x4060b // ondragstart
Ondrop Hash = 0x41706 // ondrop
Ondurationchange Hash = 0x42710 // ondurationchange
Onemptied Hash = 0x41e09 // onemptied
Onended Hash = 0x43707 // onended
Onerror Hash = 0x43e07 // onerror
Onfocus Hash = 0x44507 // onfocus
Onhashchange Hash = 0x4650c // onhashchange
Oninput Hash = 0x47107 // oninput
Oninvalid Hash = 0x47809 // oninvalid
Onkeydown Hash = 0x48109 // onkeydown
Onkeypress Hash = 0x48e0a // onkeypress
Onkeyup Hash = 0x49e07 // onkeyup
Onload Hash = 0x4b806 // onload
Onloadeddata Hash = 0x4b80c // onloadeddata
Onloadedmetadata Hash = 0x4cb10 // onloadedmetadata
Onloadstart Hash = 0x4e10b // onloadstart
Onmessage Hash = 0x4ec09 // onmessage
Onmousedown Hash = 0x4f50b // onmousedown
Onmousemove Hash = 0x5120b // onmousemove
Onmouseout Hash = 0x51d0a // onmouseout
Onmouseover Hash = 0x52a0b // onmouseover
Onmouseup Hash = 0x53509 // onmouseup
Onmousewheel Hash = 0x53e0c // onmousewheel
Onoffline Hash = 0x54a09 // onoffline
Ononline Hash = 0x55508 // ononline
Onpagehide Hash = 0x55d0a // onpagehide
Onpageshow Hash = 0x5710a // onpageshow
Onpause Hash = 0x57d07 // onpause
Onplay Hash = 0x59c06 // onplay
Onplaying Hash = 0x59c09 // onplaying
Onpopstate Hash = 0x5a50a // onpopstate
Onprogress Hash = 0x5af0a // onprogress
Onratechange Hash = 0x5be0c // onratechange
Onreset Hash = 0x5ca07 // onreset
Onresize Hash = 0x5d108 // onresize
Onscroll Hash = 0x5d908 // onscroll
Onseeked Hash = 0x5e408 // onseeked
Onseeking Hash = 0x5ec09 // onseeking
Onselect Hash = 0x5f508 // onselect
Onshow Hash = 0x5ff06 // onshow
Onstalled Hash = 0x60a09 // onstalled
Onstorage Hash = 0x61309 // onstorage
Onsubmit Hash = 0x61c08 // onsubmit
Onsuspend Hash = 0x63009 // onsuspend
Ontimeupdate Hash = 0x4590c // ontimeupdate
Onunload Hash = 0x63908 // onunload
Onvolumechange Hash = 0x6410e // onvolumechange
Onwaiting Hash = 0x64f09 // onwaiting
Open Hash = 0x58e04 // open
Optgroup Hash = 0x12308 // optgroup
Optimum Hash = 0x65807 // optimum
Option Hash = 0x66e06 // option
Output Hash = 0x52406 // output
P Hash = 0xc01 // p
Param Hash = 0xc05 // param
Pattern Hash = 0x9b07 // pattern
Pauseonexit Hash = 0x57f0b // pauseonexit
Picture Hash = 0xe707 // picture
Ping Hash = 0x12a04 // ping
Placeholder Hash = 0x16b0b // placeholder
Plaintext Hash = 0x1f509 // plaintext
Poster Hash = 0x30e06 // poster
Pre Hash = 0x34f03 // pre
Preload Hash = 0x34f07 // preload
Profile Hash = 0x66707 // profile
Progress Hash = 0x5b108 // progress
Prompt Hash = 0x59606 // prompt
Public Hash = 0x4a406 // public
Q Hash = 0x8d01 // q
Radiogroup Hash = 0x30a // radiogroup
Rb Hash = 0x1d02 // rb
Readonly Hash = 0x38e08 // readonly
Rel Hash = 0x35003 // rel
Required Hash = 0x8b08 // required
Rev Hash = 0x29403 // rev
Reversed Hash = 0x29408 // reversed
Rows Hash = 0x6604 // rows
Rowspan Hash = 0x6607 // rowspan
Rp Hash = 0x28a02 // rp
Rt Hash = 0x1c102 // rt
Rtc Hash = 0x1c103 // rtc
Ruby Hash = 0xf604 // ruby
Rules Hash = 0x17505 // rules
S Hash = 0x3d01 // s
Samp Hash = 0x9804 // samp
Sandbox Hash = 0x16307 // sandbox
Scope Hash = 0x35a05 // scope
Scoped Hash = 0x35a06 // scoped
Script Hash = 0x31b06 // script
Scrolling Hash = 0x5db09 // scrolling
Seamless Hash = 0x3a808 // seamless
Section Hash = 0x17907 // section
Select Hash = 0x5f706 // select
Selected Hash = 0x5f708 // selected
Shape Hash = 0x23105 // shape
Size Hash = 0x24f04 // size
Sizes Hash = 0x24f05 // sizes
Small Hash = 0x23b05 // small
Sortable Hash = 0x25308 // sortable
Source Hash = 0x26606 // source
Spacer Hash = 0x37806 // spacer
Span Hash = 0x6904 // span
Spellcheck Hash = 0x3af0a // spellcheck
Src Hash = 0x44b03 // src
Srcdoc Hash = 0x44b06 // srcdoc
Srclang Hash = 0x49707 // srclang
Srcset Hash = 0x5b806 // srcset
Start Hash = 0x40c05 // start
Step Hash = 0x66404 // step
Strike Hash = 0x68406 // strike
Strong Hash = 0x68f06 // strong
Style Hash = 0x69505 // style
Sub Hash = 0x61e03 // sub
Summary Hash = 0x69a07 // summary
Sup Hash = 0x6a103 // sup
Svg Hash = 0x6a403 // svg
System Hash = 0x6a706 // system
Tabindex Hash = 0x4d908 // tabindex
Table Hash = 0x25605 // table
Target Hash = 0x2f906 // target
Tbody Hash = 0x3f05 // tbody
Td Hash = 0xaa02 // td
Template Hash = 0x6aa08 // template
Text Hash = 0x1fa04 // text
Textarea Hash = 0x38908 // textarea
Tfoot Hash = 0xf005 // tfoot
Th Hash = 0x18f02 // th
Thead Hash = 0x37105 // thead
Time Hash = 0x2ee04 // time
Title Hash = 0x14a05 // title
Tr Hash = 0x1fd02 // tr
Track Hash = 0x1fd05 // track
Translate Hash = 0x22109 // translate
Truespeed Hash = 0x27309 // truespeed
Tt Hash = 0x9d02 // tt
Type Hash = 0x11204 // type
Typemustmatch Hash = 0x1da0d // typemustmatch
U Hash = 0xb01 // u
Ul Hash = 0x5802 // ul
Undeterminate Hash = 0x250d // undeterminate
Usemap Hash = 0x14106 // usemap
Valign Hash = 0x1506 // valign
Value Hash = 0x10d05 // value
Valuetype Hash = 0x10d09 // valuetype
Var Hash = 0x32f03 // var
Video Hash = 0x6b205 // video
Visible Hash = 0x6bd07 // visible
Vlink Hash = 0x6c405 // vlink
Wbr Hash = 0x57a03 // wbr
Width Hash = 0x60405 // width
Wrap Hash = 0x59304 // wrap
Xmlns Hash = 0x15f05 // xmlns
Xmp Hash = 0x16903 // xmp
)
// String returns the hash's name.
func (i Hash) String() string {
start := uint32(i >> 8)
n := uint32(i & 0xff)
if start+n > uint32(len(_Hash_text)) {
return ""
}
return _Hash_text[start : start+n]
}
// ToHash returns the hash whose name is s. It returns zero if there is no
// such hash. It is case sensitive.
func ToHash(s []byte) Hash {
if len(s) == 0 || len(s) > _Hash_maxLen {
return 0
}
h := uint32(_Hash_hash0)
for i := 0; i < len(s); i++ {
h ^= uint32(s[i])
h *= 16777619
}
if i := _Hash_table[h&uint32(len(_Hash_table)-1)]; int(i&0xff) == len(s) {
t := _Hash_text[i>>8 : i>>8+i&0xff]
for i := 0; i < len(s); i++ {
if t[i] != s[i] {
goto NEXT
}
}
return i
}
NEXT:
if i := _Hash_table[(h>>16)&uint32(len(_Hash_table)-1)]; int(i&0xff) == len(s) {
t := _Hash_text[i>>8 : i>>8+i&0xff]
for i := 0; i < len(s); i++ {
if t[i] != s[i] {
return 0
}
}
return i
}
return 0
}
const _Hash_hash0 = 0x5334b67c
const _Hash_maxLen = 16
const _Hash_text = "abbradiogrouparamainavalignobrbackgroundeterminateaccept-cha" +
"rsetbodyaccesskeygenabledefaultSelectedeferowspanoembedelabe" +
"longdescanvasideclarequiredetailsampatternoframesetdfnohrefl" +
"anguageacronymalignmarkbdialogalinkindirnamediagroupictureal" +
"tfooterubyasyncitemidisabledivaluetypeaudioncancelooptgroupi" +
"ngautocompleteautofocusemappletitleautoplayaxisindexmlnsandb" +
"oxmplaceholderulesectionblurbdoncanplaythrough1bgcolorbgsoun" +
"dlbigblinkblockquotebuttonabortclassidraggablegendcodetypemu" +
"stmatchallengecolgrouplaintextrackcolspannotationXmlcommandc" +
"ompactranslatecontrolshapecoordsmallowfullscreenoresizesorta" +
"blecrossoriginsourcefieldsetruespeedfigcaptionafterprintfigu" +
"reversedforeignObjectforeignobjectformactionbeforeprintforme" +
"nctypeformmethodformnovalidatetimeterformtargeth6heightmlhgr" +
"ouposterhiddenoscripthigh2http-equivariframeborderimageimgly" +
"ph3ismapreloaditemscopeditemtypemarqueematheaderspacermaxlen" +
"gth4mtextareadonlymultiplemutedoncloseamlesspellcheckedoncon" +
"textmenuoncuechangeondblclickondragendondragenterondragleave" +
"ondragoverondragstarticlearondropzonemptiedondurationchangeo" +
"nendedonerroronfocusrcdocodebasefontimeupdateonhashchangeoni" +
"nputoninvalidonkeydownloadonkeypressrclangonkeyupublicontent" +
"editableonloadeddatalistingonloadedmetadatabindexonloadstart" +
"onmessageonmousedownoshadefaultCheckedonmousemoveonmouseoutp" +
"utonmouseoveronmouseuponmousewheelonofflinertononlineonpageh" +
"idefaultMutedonpageshowbronpauseonexitempropenowrapromptonpl" +
"ayingonpopstateonprogressrcsetonratechangeonresetonresizeons" +
"crollingonseekedonseekingonselectedonshowidth5onstalledonsto" +
"rageonsubmitemrefacenteronsuspendonunloadonvolumechangeonwai" +
"tingoptimumanifesteprofileoptionbeforeunloaddresstrikeytypes" +
"trongstylesummarysupsvgsystemplatevideonclickvisiblevlink"
var _Hash_table = [1 << 9]Hash{
0x0: 0x2cb0b, // formenctype
0x1: 0x2d60a, // formmethod
0x2: 0x3c80b, // oncuechange
0x3: 0x3dd06, // ondrag
0x6: 0x68406, // strike
0x7: 0x6b205, // video
0x9: 0x4a907, // content
0xa: 0x4e07, // enabled
0xb: 0x59106, // nowrap
0xc: 0xd304, // link
0xe: 0x28a02, // rp
0xf: 0x2840c, // onafterprint
0x10: 0x14506, // applet
0x11: 0xf005, // tfoot
0x12: 0x5040e, // defaultChecked
0x13: 0x3330b, // frameborder
0x14: 0xf106, // footer
0x15: 0x5f708, // selected
0x16: 0x49707, // srclang
0x18: 0x52a0b, // onmouseover
0x19: 0x1d604, // code
0x1b: 0x47809, // oninvalid
0x1c: 0x62804, // face
0x1e: 0x3bd0b, // contextmenu
0x1f: 0xa308, // frameset
0x21: 0x5650c, // defaultMuted
0x22: 0x19905, // color
0x23: 0x59c06, // onplay
0x25: 0x2f005, // meter
0x26: 0x61309, // onstorage
0x27: 0x38e08, // readonly
0x29: 0x66707, // profile
0x2a: 0x8607, // declare
0x2b: 0xb01, // u
0x2c: 0x31908, // noscript
0x2d: 0x65e08, // manifest
0x2e: 0x1b806, // button
0x2f: 0x2ea08, // datetime
0x30: 0x47305, // input
0x31: 0x5407, // default
0x32: 0x1d608, // codetype
0x33: 0x2a90d, // foreignobject
0x34: 0x36807, // marquee
0x36: 0x19707, // bgcolor
0x37: 0x19502, // h1
0x39: 0x1e0a, // background
0x3b: 0x2f50a, // formtarget
0x41: 0x2f906, // target
0x43: 0x23b05, // small
0x44: 0x45008, // codebase
0x45: 0x55005, // inert
0x47: 0x38805, // mtext
0x48: 0x6607, // rowspan
0x49: 0x2be0d, // onbeforeprint
0x4a: 0x55508, // ononline
0x4c: 0x29006, // figure
0x4d: 0x4cb10, // onloadedmetadata
0x4e: 0xbb07, // acronym
0x50: 0x39608, // multiple
0x51: 0x320e, // accept-charset
0x52: 0x24f05, // sizes
0x53: 0x29c0d, // foreignObject
0x55: 0x2e40a, // novalidate
0x56: 0x55d0a, // onpagehide
0x57: 0x2e302, // mn
0x58: 0x38602, // h4
0x5a: 0x1c102, // rt
0x5b: 0xd205, // alink
0x5e: 0x59606, // prompt
0x5f: 0x17102, // ol
0x61: 0x5d108, // onresize
0x64: 0x69a07, // summary
0x65: 0x5a50a, // onpopstate
0x66: 0x38d04, // area
0x68: 0x64f09, // onwaiting
0x6b: 0xdc04, // name
0x6c: 0x23606, // coords
0x6d: 0x34303, // img
0x6e: 0x66404, // step
0x6f: 0x5ec09, // onseeking
0x70: 0x32104, // high
0x71: 0x49e07, // onkeyup
0x72: 0x5f706, // select
0x73: 0x1fd05, // track
0x74: 0x34b05, // ismap
0x76: 0x47107, // oninput
0x77: 0x8d01, // q
0x78: 0x48109, // onkeydown
0x79: 0x33e05, // image
0x7a: 0x2b604, // form
0x7b: 0x60a09, // onstalled
0x7c: 0xe707, // picture
0x7d: 0x42f08, // onchange
0x7e: 0x1a905, // blink
0x7f: 0xee03, // alt
0x80: 0xfa05, // async
0x82: 0x1702, // li
0x84: 0x2c02, // mi
0x85: 0xff06, // itemid
0x86: 0x11605, // audio
0x87: 0x31b06, // script
0x8b: 0x44b06, // srcdoc
0x8e: 0xc704, // mark
0x8f: 0x18403, // bdo
0x91: 0x5120b, // onmousemove
0x93: 0x3c404, // menu
0x94: 0x45804, // font
0x95: 0x14f08, // autoplay
0x96: 0x6c405, // vlink
0x98: 0x6e02, // em
0x9a: 0x5b806, // srcset
0x9b: 0x1ee08, // colgroup
0x9c: 0x58e04, // open
0x9d: 0x1d006, // legend
0x9e: 0x4e10b, // onloadstart
0xa2: 0x22109, // translate
0xa3: 0x6e05, // embed
0xa4: 0x1c305, // class
0xa6: 0x6aa08, // template
0xa7: 0x37206, // header
0xa9: 0x4b806, // onload
0xaa: 0x37105, // thead
0xab: 0x5db09, // scrolling
0xac: 0xc05, // param
0xae: 0x9b07, // pattern
0xaf: 0x9207, // details
0xb1: 0x4a406, // public
0xb3: 0x4f50b, // onmousedown
0xb4: 0x14403, // map
0xb6: 0x25b0b, // crossorigin
0xb7: 0x1506, // valign
0xb9: 0x1bc07, // onabort
0xba: 0x66e06, // option
0xbb: 0x26606, // source
0xbc: 0x6205, // defer
0xbd: 0x1e509, // challenge
0xbf: 0x10d05, // value
0xc0: 0x23d0f, // allowfullscreen
0xc1: 0xca03, // kbd
0xc2: 0x2070d, // annotationXml
0xc3: 0x5be0c, // onratechange
0xc4: 0x4f702, // mo
0xc6: 0x3af0a, // spellcheck
0xc7: 0x2c03, // min
0xc8: 0x4b80c, // onloadeddata
0xc9: 0x41205, // clear
0xca: 0x42710, // ondurationchange
0xcb: 0x1a04, // nobr
0xcd: 0x27309, // truespeed
0xcf: 0x30906, // hgroup
0xd0: 0x40c05, // start
0xd3: 0x41908, // dropzone
0xd5: 0x7405, // label
0xd8: 0xde0a, // mediagroup
0xd9: 0x17e06, // onblur
0xdb: 0x27f07, // caption
0xdd: 0x7c04, // desc
0xde: 0x15f05, // xmlns
0xdf: 0x30106, // height
0xe0: 0x21407, // command
0xe2: 0x57f0b, // pauseonexit
0xe3: 0x68f06, // strong
0xe4: 0x43e07, // onerror
0xe5: 0x61c08, // onsubmit
0xe6: 0xb308, // language
0xe7: 0x48608, // download
0xe9: 0x53509, // onmouseup
0xec: 0x2cf07, // enctype
0xed: 0x5f508, // onselect
0xee: 0x2b006, // object
0xef: 0x1f509, // plaintext
0xf0: 0x3d30a, // ondblclick
0xf1: 0x18610, // oncanplaythrough
0xf2: 0xd903, // dir
0xf3: 0x38908, // textarea
0xf4: 0x12a04, // ping
0xf5: 0x2da06, // method
0xf6: 0x22a08, // controls
0xf7: 0x37806, // spacer
0xf8: 0x6a403, // svg
0xf9: 0x30504, // html
0xfa: 0x3d01, // s
0xfc: 0xcc06, // dialog
0xfe: 0x1da0d, // typemustmatch
0xff: 0x3b407, // checked
0x101: 0x30e06, // poster
0x102: 0x3260a, // http-equiv
0x103: 0x44b03, // src
0x104: 0x10408, // disabled
0x105: 0x37207, // headers
0x106: 0x5af0a, // onprogress
0x107: 0x26c08, // fieldset
0x108: 0x32f03, // var
0x10a: 0xa305, // frame
0x10b: 0x36008, // itemtype
0x10c: 0x3fc0a, // ondragover
0x10d: 0x13a09, // autofocus
0x10f: 0x601, // i
0x110: 0x35902, // ms
0x111: 0x45404, // base
0x113: 0x35a05, // scope
0x114: 0x3206, // accept
0x115: 0x58808, // itemprop
0x117: 0xfe04, // cite
0x118: 0x3907, // charset
0x119: 0x14a05, // title
0x11a: 0x68807, // keytype
0x11b: 0x1fa04, // text
0x11c: 0x65807, // optimum
0x11e: 0x37204, // head
0x121: 0x21b07, // compact
0x123: 0x63009, // onsuspend
0x124: 0x4c404, // list
0x125: 0x4590c, // ontimeupdate
0x126: 0x62a06, // center
0x127: 0x31406, // hidden
0x129: 0x35609, // itemscope
0x12c: 0x1a402, // dl
0x12d: 0x17907, // section
0x12e: 0x11a08, // oncancel
0x12f: 0x6b607, // onclick
0x130: 0xde05, // media
0x131: 0x52406, // output
0x132: 0x4c008, // datalist
0x133: 0x53e0c, // onmousewheel
0x134: 0x45408, // basefont
0x135: 0x37e09, // maxlength
0x136: 0x6bd07, // visible
0x137: 0x2e00e, // formnovalidate
0x139: 0x16903, // xmp
0x13a: 0x101, // b
0x13b: 0x5710a, // onpageshow
0x13c: 0xf604, // ruby
0x13d: 0x16b0b, // placeholder
0x13e: 0x4c407, // listing
0x140: 0x26403, // ins
0x141: 0x62207, // itemref
0x144: 0x540f, // defaultSelected
0x146: 0x3f10b, // ondragleave
0x147: 0x1ae0a, // blockquote
0x148: 0x59304, // wrap
0x14a: 0x1a603, // big
0x14b: 0x35003, // rel
0x14c: 0x41706, // ondrop
0x14e: 0x6a706, // system
0x14f: 0x30a, // radiogroup
0x150: 0x25605, // table
0x152: 0x57a03, // wbr
0x153: 0x3bb0d, // oncontextmenu
0x155: 0x250d, // undeterminate
0x157: 0x20204, // cols
0x158: 0x16307, // sandbox
0x159: 0x1303, // nav
0x15a: 0x37e03, // max
0x15b: 0x7808, // longdesc
0x15c: 0x60405, // width
0x15d: 0x34902, // h3
0x15e: 0x19e07, // bgsound
0x161: 0x10d09, // valuetype
0x162: 0x69505, // style
0x164: 0x3f05, // tbody
0x165: 0x40e07, // article
0x169: 0xcb03, // bdi
0x16a: 0x67e07, // address
0x16b: 0x23105, // shape
0x16c: 0x2ba06, // action
0x16e: 0x1fd02, // tr
0x16f: 0xaa02, // td
0x170: 0x3dd09, // ondragend
0x171: 0x5802, // ul
0x172: 0x33806, // border
0x174: 0x4a06, // keygen
0x175: 0x4004, // body
0x177: 0x1c909, // draggable
0x178: 0x2b60a, // formaction
0x17b: 0x34406, // mglyph
0x17d: 0x1d02, // rb
0x17e: 0x2ff02, // h6
0x17f: 0x41e09, // onemptied
0x180: 0x5ca07, // onreset
0x181: 0x1004, // main
0x182: 0x12104, // loop
0x183: 0x48e0a, // onkeypress
0x184: 0x9d02, // tt
0x186: 0x20207, // colspan
0x188: 0x36f04, // math
0x189: 0x1605, // align
0x18a: 0xa108, // noframes
0x18b: 0xaf02, // hr
0x18c: 0xc10a, // malignmark
0x18e: 0x23f03, // low
0x18f: 0x8502, // id
0x190: 0x6604, // rows
0x191: 0x29403, // rev
0x192: 0x63908, // onunload
0x193: 0x39e05, // muted
0x194: 0x35a06, // scoped
0x195: 0x31602, // dd
0x196: 0x60602, // dt
0x197: 0x6720e, // onbeforeunload
0x199: 0x2070a, // annotation
0x19a: 0x29408, // reversed
0x19c: 0x11204, // type
0x19d: 0x57d07, // onpause
0x19e: 0xd604, // kind
0x19f: 0x4c004, // data
0x1a0: 0x4ff07, // noshade
0x1a3: 0x17505, // rules
0x1a4: 0x12308, // optgroup
0x1a5: 0x202, // br
0x1a7: 0x1, // a
0x1a8: 0x51d0a, // onmouseout
0x1aa: 0x54a09, // onoffline
0x1ab: 0x6410e, // onvolumechange
0x1ae: 0x61e03, // sub
0x1b3: 0x29c03, // for
0x1b5: 0x8b08, // required
0x1b6: 0x5b108, // progress
0x1b7: 0x14106, // usemap
0x1b8: 0x7f06, // canvas
0x1b9: 0x4a804, // icon
0x1bb: 0x1c103, // rtc
0x1bc: 0x8305, // aside
0x1bd: 0x2ee04, // time
0x1be: 0x4060b, // ondragstart
0x1c0: 0x27c0a, // figcaption
0x1c1: 0xaf04, // href
0x1c2: 0x33206, // iframe
0x1c3: 0x18609, // oncanplay
0x1c4: 0x6904, // span
0x1c5: 0x34f03, // pre
0x1c6: 0x6c07, // noembed
0x1c8: 0x5e408, // onseeked
0x1c9: 0x4d304, // meta
0x1ca: 0x32402, // h2
0x1cb: 0x3a808, // seamless
0x1cc: 0xab03, // dfn
0x1cd: 0x15704, // axis
0x1cf: 0x3e60b, // ondragenter
0x1d0: 0x18f02, // th
0x1d1: 0x4650c, // onhashchange
0x1d2: 0xb304, // lang
0x1d3: 0x44507, // onfocus
0x1d5: 0x24f04, // size
0x1d8: 0x12e0c, // autocomplete
0x1d9: 0xaf08, // hreflang
0x1da: 0x9804, // samp
0x1de: 0x19903, // col
0x1df: 0x10b03, // div
0x1e0: 0x25308, // sortable
0x1e1: 0x7203, // del
0x1e3: 0x3a307, // onclose
0x1e6: 0xd907, // dirname
0x1e8: 0x1c307, // classid
0x1e9: 0x34f07, // preload
0x1ea: 0x4d908, // tabindex
0x1eb: 0x60802, // h5
0x1ec: 0x5d908, // onscroll
0x1ed: 0x4a90f, // contenteditable
0x1ee: 0x4ec09, // onmessage
0x1ef: 0x4, // abbr
0x1f0: 0x15907, // isindex
0x1f1: 0x6a103, // sup
0x1f3: 0x24b08, // noresize
0x1f5: 0x59c09, // onplaying
0x1f6: 0x4409, // accesskey
0x1fa: 0xc01, // p
0x1fb: 0x43707, // onended
0x1fc: 0x5ff06, // onshow
0x1fe: 0xad06, // nohref
}

58
vendor/github.com/tdewolff/parse/html/hash_test.go generated vendored Normal file
View file

@ -0,0 +1,58 @@
package html // import "github.com/tdewolff/parse/html"
import (
"bytes"
"testing"
"github.com/tdewolff/test"
)
func TestHashTable(t *testing.T) {
test.T(t, ToHash([]byte("address")), Address, "'address' must resolve to Address")
test.T(t, Address.String(), "address")
test.T(t, Accept_Charset.String(), "accept-charset")
test.T(t, ToHash([]byte("")), Hash(0), "empty string must resolve to zero")
test.T(t, Hash(0xffffff).String(), "")
test.T(t, ToHash([]byte("iter")), Hash(0), "'iter' must resolve to zero")
test.T(t, ToHash([]byte("test")), Hash(0), "'test' must resolve to zero")
}
////////////////////////////////////////////////////////////////
var result int
// naive scenario
func BenchmarkCompareBytes(b *testing.B) {
var r int
val := []byte("span")
for n := 0; n < b.N; n++ {
if bytes.Equal(val, []byte("span")) {
r++
}
}
result = r
}
// using-atoms scenario
func BenchmarkFindAndCompareAtom(b *testing.B) {
var r int
val := []byte("span")
for n := 0; n < b.N; n++ {
if ToHash(val) == Span {
r++
}
}
result = r
}
// using-atoms worst-case scenario
func BenchmarkFindAtomCompareBytes(b *testing.B) {
var r int
val := []byte("zzzz")
for n := 0; n < b.N; n++ {
if h := ToHash(val); h == 0 && bytes.Equal(val, []byte("zzzz")) {
r++
}
}
result = r
}

485
vendor/github.com/tdewolff/parse/html/lex.go generated vendored Normal file
View file

@ -0,0 +1,485 @@
// Package html is an HTML5 lexer following the specifications at http://www.w3.org/TR/html5/syntax.html.
package html // import "github.com/tdewolff/parse/html"
import (
"io"
"strconv"
"github.com/tdewolff/parse"
"github.com/tdewolff/parse/buffer"
)
// TokenType determines the type of token, e.g. a start tag or an attribute.
type TokenType uint32
// TokenType values.
const (
ErrorToken TokenType = iota // extra token when errors occur
CommentToken
DoctypeToken
StartTagToken
StartTagCloseToken
StartTagVoidToken
EndTagToken
AttributeToken
TextToken
SvgToken
MathToken
)
// String returns the string representation of a TokenType.
func (tt TokenType) String() string {
switch tt {
case ErrorToken:
return "Error"
case CommentToken:
return "Comment"
case DoctypeToken:
return "Doctype"
case StartTagToken:
return "StartTag"
case StartTagCloseToken:
return "StartTagClose"
case StartTagVoidToken:
return "StartTagVoid"
case EndTagToken:
return "EndTag"
case AttributeToken:
return "Attribute"
case TextToken:
return "Text"
case SvgToken:
return "Svg"
case MathToken:
return "Math"
}
return "Invalid(" + strconv.Itoa(int(tt)) + ")"
}
////////////////////////////////////////////////////////////////
// Lexer is the state for the lexer.
type Lexer struct {
r *buffer.Lexer
err error
rawTag Hash
inTag bool
text []byte
attrVal []byte
}
// NewLexer returns a new Lexer for a given io.Reader.
func NewLexer(r io.Reader) *Lexer {
return &Lexer{
r: buffer.NewLexer(r),
}
}
// Err returns the error encountered during lexing; this is often io.EOF, but other errors can be returned as well.
func (l *Lexer) Err() error {
if err := l.r.Err(); err != nil {
return err
}
return l.err
}
// Restore restores the NULL byte at the end of the buffer.
func (l *Lexer) Restore() {
l.r.Restore()
}
// Next returns the next Token. It returns ErrorToken when an error was encountered; use Err() to retrieve the cause.
func (l *Lexer) Next() (TokenType, []byte) {
l.text = nil
var c byte
if l.inTag {
l.attrVal = nil
for { // before attribute name state
if c = l.r.Peek(0); c == ' ' || c == '\t' || c == '\n' || c == '\r' || c == '\f' {
l.r.Move(1)
continue
}
break
}
if c == 0 {
l.err = parse.NewErrorLexer("unexpected null character", l.r)
return ErrorToken, nil
} else if c != '>' && (c != '/' || l.r.Peek(1) != '>') {
return AttributeToken, l.shiftAttribute()
}
start := l.r.Pos()
l.inTag = false
if c == '/' {
l.r.Move(2)
l.text = l.r.Lexeme()[start:]
return StartTagVoidToken, l.r.Shift()
}
l.r.Move(1)
l.text = l.r.Lexeme()[start:]
return StartTagCloseToken, l.r.Shift()
}
if l.rawTag != 0 {
if rawText := l.shiftRawText(); len(rawText) > 0 {
l.rawTag = 0
return TextToken, rawText
}
l.rawTag = 0
}
for {
c = l.r.Peek(0)
if c == '<' {
c = l.r.Peek(1)
if l.r.Pos() > 0 {
if c == '/' && l.r.Peek(2) != 0 || 'a' <= c && c <= 'z' || 'A' <= c && c <= 'Z' || c == '!' || c == '?' {
return TextToken, l.r.Shift()
}
} else if c == '/' && l.r.Peek(2) != 0 {
l.r.Move(2)
if c = l.r.Peek(0); c != '>' && !('a' <= c && c <= 'z' || 'A' <= c && c <= 'Z') {
return CommentToken, l.shiftBogusComment()
}
return EndTagToken, l.shiftEndTag()
} else if 'a' <= c && c <= 'z' || 'A' <= c && c <= 'Z' {
l.r.Move(1)
l.inTag = true
return l.shiftStartTag()
} else if c == '!' {
l.r.Move(2)
return l.readMarkup()
} else if c == '?' {
l.r.Move(1)
return CommentToken, l.shiftBogusComment()
}
} else if c == 0 {
if l.r.Pos() > 0 {
return TextToken, l.r.Shift()
}
l.err = parse.NewErrorLexer("unexpected null character", l.r)
return ErrorToken, nil
}
l.r.Move(1)
}
}
// Text returns the textual representation of a token. This excludes delimiters and additional leading/trailing characters.
func (l *Lexer) Text() []byte {
return l.text
}
// AttrVal returns the attribute value when an AttributeToken was returned from Next.
func (l *Lexer) AttrVal() []byte {
return l.attrVal
}
////////////////////////////////////////////////////////////////
// The following functions follow the specifications at http://www.w3.org/html/wg/drafts/html/master/syntax.html
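// shiftRawText consumes raw text (plaintext, RCDATA, RAWTEXT, and script data)
// up to the matching end tag, handling <!-- --> escaped sections inside script data.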
func (l *Lexer) shiftRawText() []byte {
if l.rawTag == Plaintext {
for {
if l.r.Peek(0) == 0 {
return l.r.Shift()
}
l.r.Move(1)
}
} else { // RCDATA, RAWTEXT and SCRIPT
for {
c := l.r.Peek(0)
if c == '<' {
if l.r.Peek(1) == '/' {
mark := l.r.Pos()
l.r.Move(2)
for {
if c = l.r.Peek(0); !('a' <= c && c <= 'z' || 'A' <= c && c <= 'Z') {
break
}
l.r.Move(1)
}
if h := ToHash(parse.ToLower(parse.Copy(l.r.Lexeme()[mark+2:]))); h == l.rawTag { // copy so that ToLower doesn't change the case of the underlying slice
l.r.Rewind(mark)
return l.r.Shift()
}
} else if l.rawTag == Script && l.r.Peek(1) == '!' && l.r.Peek(2) == '-' && l.r.Peek(3) == '-' {
l.r.Move(4)
inScript := false
for {
c := l.r.Peek(0)
if c == '-' && l.r.Peek(1) == '-' && l.r.Peek(2) == '>' {
l.r.Move(3)
break
} else if c == '<' {
isEnd := l.r.Peek(1) == '/'
if isEnd {
l.r.Move(2)
} else {
l.r.Move(1)
}
mark := l.r.Pos()
for {
if c = l.r.Peek(0); !('a' <= c && c <= 'z' || 'A' <= c && c <= 'Z') {
break
}
l.r.Move(1)
}
if h := ToHash(parse.ToLower(parse.Copy(l.r.Lexeme()[mark:]))); h == Script { // copy so that ToLower doesn't change the case of the underlying slice
if !isEnd {
inScript = true
} else {
if !inScript {
l.r.Rewind(mark - 2)
return l.r.Shift()
}
inScript = false
}
}
} else if c == 0 {
return l.r.Shift()
}
l.r.Move(1)
}
} else {
l.r.Move(1)
}
} else if c == 0 {
return l.r.Shift()
} else {
l.r.Move(1)
}
}
}
}
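// readMarkup handles markup declarations: comments, CDATA sections, and
// doctypes; anything else is treated as a bogus comment.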
func (l *Lexer) readMarkup() (TokenType, []byte) {
if l.at('-', '-') {
l.r.Move(2)
for {
if l.r.Peek(0) == 0 {
return CommentToken, l.r.Shift()
} else if l.at('-', '-', '>') {
l.text = l.r.Lexeme()[4:]
l.r.Move(3)
return CommentToken, l.r.Shift()
} else if l.at('-', '-', '!', '>') {
l.text = l.r.Lexeme()[4:]
l.r.Move(4)
return CommentToken, l.r.Shift()
}
l.r.Move(1)
}
} else if l.at('[', 'C', 'D', 'A', 'T', 'A', '[') {
l.r.Move(7)
for {
if l.r.Peek(0) == 0 {
return TextToken, l.r.Shift()
} else if l.at(']', ']', '>') {
l.r.Move(3)
return TextToken, l.r.Shift()
}
l.r.Move(1)
}
} else {
if l.atCaseInsensitive('d', 'o', 'c', 't', 'y', 'p', 'e') {
l.r.Move(7)
if l.r.Peek(0) == ' ' {
l.r.Move(1)
}
for {
if c := l.r.Peek(0); c == '>' || c == 0 {
l.text = l.r.Lexeme()[9:]
if c == '>' {
l.r.Move(1)
}
return DoctypeToken, l.r.Shift()
}
l.r.Move(1)
}
}
}
return CommentToken, l.shiftBogusComment()
}
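// shiftBogusComment consumes everything up to '>' (or the end of input) and
// returns it as a bogus comment.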
func (l *Lexer) shiftBogusComment() []byte {
for {
c := l.r.Peek(0)
if c == '>' {
l.text = l.r.Lexeme()[2:]
l.r.Move(1)
return l.r.Shift()
} else if c == 0 {
l.text = l.r.Lexeme()[2:]
return l.r.Shift()
}
l.r.Move(1)
}
}
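// shiftStartTag reads the tag name, lower-cases it, and switches the lexer to
// raw-text or foreign-content (svg/math) mode for special elements.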
func (l *Lexer) shiftStartTag() (TokenType, []byte) {
for {
if c := l.r.Peek(0); c == ' ' || c == '>' || c == '/' && l.r.Peek(1) == '>' || c == '\t' || c == '\n' || c == '\r' || c == '\f' || c == 0 {
break
}
l.r.Move(1)
}
l.text = parse.ToLower(l.r.Lexeme()[1:])
if h := ToHash(l.text); h == Textarea || h == Title || h == Style || h == Xmp || h == Iframe || h == Script || h == Plaintext || h == Svg || h == Math {
if h == Svg {
l.inTag = false
return SvgToken, l.shiftXml(h)
} else if h == Math {
l.inTag = false
return MathToken, l.shiftXml(h)
}
l.rawTag = h
}
return StartTagToken, l.r.Shift()
}
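// shiftAttribute reads a single attribute, storing its lower-cased name in
// l.text and its raw (possibly quoted) value in l.attrVal.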
func (l *Lexer) shiftAttribute() []byte {
nameStart := l.r.Pos()
var c byte
for { // attribute name state
if c = l.r.Peek(0); c == ' ' || c == '=' || c == '>' || c == '/' && l.r.Peek(1) == '>' || c == '\t' || c == '\n' || c == '\r' || c == '\f' || c == 0 {
break
}
l.r.Move(1)
}
nameEnd := l.r.Pos()
for { // after attribute name state
if c = l.r.Peek(0); c == ' ' || c == '\t' || c == '\n' || c == '\r' || c == '\f' {
l.r.Move(1)
continue
}
break
}
if c == '=' {
l.r.Move(1)
for { // before attribute value state
if c = l.r.Peek(0); c == ' ' || c == '\t' || c == '\n' || c == '\r' || c == '\f' {
l.r.Move(1)
continue
}
break
}
attrPos := l.r.Pos()
delim := c
if delim == '"' || delim == '\'' { // attribute value single- and double-quoted state
l.r.Move(1)
for {
c := l.r.Peek(0)
if c == delim {
l.r.Move(1)
break
} else if c == 0 {
break
}
l.r.Move(1)
}
} else { // attribute value unquoted state
for {
if c := l.r.Peek(0); c == ' ' || c == '>' || c == '\t' || c == '\n' || c == '\r' || c == '\f' || c == 0 {
break
}
l.r.Move(1)
}
}
l.attrVal = l.r.Lexeme()[attrPos:]
} else {
l.r.Rewind(nameEnd)
l.attrVal = nil
}
l.text = parse.ToLower(l.r.Lexeme()[nameStart:nameEnd])
return l.r.Shift()
}
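// shiftEndTag reads an end tag up to '>', trimming trailing whitespace from
// the stored tag name.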
func (l *Lexer) shiftEndTag() []byte {
for {
c := l.r.Peek(0)
if c == '>' {
l.text = l.r.Lexeme()[2:]
l.r.Move(1)
break
} else if c == 0 {
l.text = l.r.Lexeme()[2:]
break
}
l.r.Move(1)
}
end := len(l.text)
for end > 0 {
if c := l.text[end-1]; c == ' ' || c == '\t' || c == '\n' || c == '\r' {
end--
continue
}
break
}
l.text = l.text[:end]
return parse.ToLower(l.r.Shift())
}
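// shiftXml consumes a foreign-content element (svg or math) verbatim up to and
// including its matching end tag, ignoring '<' inside double-quoted strings.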
func (l *Lexer) shiftXml(rawTag Hash) []byte {
inQuote := false
for {
c := l.r.Peek(0)
if c == '"' {
inQuote = !inQuote
l.r.Move(1)
} else if c == '<' && !inQuote {
if l.r.Peek(1) == '/' {
mark := l.r.Pos()
l.r.Move(2)
for {
if c = l.r.Peek(0); !('a' <= c && c <= 'z' || 'A' <= c && c <= 'Z') {
break
}
l.r.Move(1)
}
if h := ToHash(parse.ToLower(parse.Copy(l.r.Lexeme()[mark+2:]))); h == rawTag { // copy so that ToLower doesn't change the case of the underlying slice
break
}
} else {
l.r.Move(1)
}
} else if c == 0 {
return l.r.Shift()
}
l.r.Move(1)
}
for {
c := l.r.Peek(0)
if c == '>' {
l.r.Move(1)
break
} else if c == 0 {
break
}
l.r.Move(1)
}
return l.r.Shift()
}
////////////////////////////////////////////////////////////////
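// at reports whether the upcoming bytes equal b exactly.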
func (l *Lexer) at(b ...byte) bool {
for i, c := range b {
if l.r.Peek(i) != c {
return false
}
}
return true
}
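// atCaseInsensitive reports whether the upcoming bytes match b, treating ASCII
// upper-case input as equal to the lower-case bytes in b.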
func (l *Lexer) atCaseInsensitive(b ...byte) bool {
for i, c := range b {
if l.r.Peek(i) != c && (l.r.Peek(i)+('a'-'A')) != c {
return false
}
}
return true
}

262
vendor/github.com/tdewolff/parse/html/lex_test.go generated vendored Normal file
View file

@ -0,0 +1,262 @@
package html // import "github.com/tdewolff/parse/html"
import (
"bytes"
"fmt"
"io"
"testing"
"github.com/tdewolff/parse"
"github.com/tdewolff/test"
)
type TTs []TokenType
func TestTokens(t *testing.T) {
var tokenTests = []struct {
html string
expected []TokenType
}{
{"<html></html>", TTs{StartTagToken, StartTagCloseToken, EndTagToken}},
{"<img/>", TTs{StartTagToken, StartTagVoidToken}},
{"<!-- comment -->", TTs{CommentToken}},
{"<!-- comment --!>", TTs{CommentToken}},
{"<p>text</p>", TTs{StartTagToken, StartTagCloseToken, TextToken, EndTagToken}},
{"<input type='button'/>", TTs{StartTagToken, AttributeToken, StartTagVoidToken}},
{"<input type='button' value=''/>", TTs{StartTagToken, AttributeToken, AttributeToken, StartTagVoidToken}},
{"<input type='=/>' \r\n\t\f value=\"'\" name=x checked />", TTs{StartTagToken, AttributeToken, AttributeToken, AttributeToken, AttributeToken, StartTagVoidToken}},
{"<!doctype>", TTs{DoctypeToken}},
{"<!doctype html>", TTs{DoctypeToken}},
{"<?bogus>", TTs{CommentToken}},
{"</0bogus>", TTs{CommentToken}},
{"<!bogus>", TTs{CommentToken}},
{"< ", TTs{TextToken}},
{"</", TTs{TextToken}},
// raw tags
{"<title><p></p></title>", TTs{StartTagToken, StartTagCloseToken, TextToken, EndTagToken}},
{"<TITLE><p></p></TITLE>", TTs{StartTagToken, StartTagCloseToken, TextToken, EndTagToken}},
{"<plaintext></plaintext>", TTs{StartTagToken, StartTagCloseToken, TextToken}},
{"<script></script>", TTs{StartTagToken, StartTagCloseToken, EndTagToken}},
{"<script>var x='</script>';</script>", TTs{StartTagToken, StartTagCloseToken, TextToken, EndTagToken, TextToken, EndTagToken}},
{"<script><!--var x='</script>';--></script>", TTs{StartTagToken, StartTagCloseToken, TextToken, EndTagToken, TextToken, EndTagToken}},
{"<script><!--var x='<script></script>';--></script>", TTs{StartTagToken, StartTagCloseToken, TextToken, EndTagToken}},
{"<script><!--var x='<script>';--></script>", TTs{StartTagToken, StartTagCloseToken, TextToken, EndTagToken}},
{"<![CDATA[ test ]]>", TTs{TextToken}},
{"<svg>text</svg>", TTs{SvgToken}},
{"<math>text</math>", TTs{MathToken}},
{`<svg>text<x a="</svg>"></x></svg>`, TTs{SvgToken}},
{"<a><svg>text</svg></a>", TTs{StartTagToken, StartTagCloseToken, SvgToken, EndTagToken}},
// early endings
{"<!-- comment", TTs{CommentToken}},
{"<? bogus comment", TTs{CommentToken}},
{"<foo", TTs{StartTagToken}},
{"</foo", TTs{EndTagToken}},
{"<foo x", TTs{StartTagToken, AttributeToken}},
{"<foo x=", TTs{StartTagToken, AttributeToken}},
{"<foo x='", TTs{StartTagToken, AttributeToken}},
{"<foo x=''", TTs{StartTagToken, AttributeToken}},
{"<!DOCTYPE note SYSTEM", TTs{DoctypeToken}},
{"<![CDATA[ test", TTs{TextToken}},
{"<script>", TTs{StartTagToken, StartTagCloseToken}},
{"<script><!--", TTs{StartTagToken, StartTagCloseToken, TextToken}},
{"<script><!--var x='<script></script>';-->", TTs{StartTagToken, StartTagCloseToken, TextToken}},
// go-fuzz
{"</>", TTs{EndTagToken}},
}
for _, tt := range tokenTests {
t.Run(tt.html, func(t *testing.T) {
l := NewLexer(bytes.NewBufferString(tt.html))
i := 0
for {
token, _ := l.Next()
if token == ErrorToken {
test.T(t, l.Err(), io.EOF)
					test.T(t, i, len(tt.expected), "when an error occurs we must be at the end")
break
}
test.That(t, i < len(tt.expected), "index", i, "must not exceed expected token types size", len(tt.expected))
if i < len(tt.expected) {
test.T(t, token, tt.expected[i], "token types must match")
}
i++
}
})
}
test.T(t, TokenType(100).String(), "Invalid(100)")
}
func TestTags(t *testing.T) {
var tagTests = []struct {
html string
expected string
}{
{"<foo:bar.qux-norf/>", "foo:bar.qux-norf"},
{"<foo?bar/qux>", "foo?bar/qux"},
{"<!DOCTYPE note SYSTEM \"Note.dtd\">", " note SYSTEM \"Note.dtd\""},
{"</foo >", "foo"},
// early endings
{"<foo ", "foo"},
}
for _, tt := range tagTests {
t.Run(tt.html, func(t *testing.T) {
l := NewLexer(bytes.NewBufferString(tt.html))
for {
token, _ := l.Next()
if token == ErrorToken {
test.T(t, l.Err(), io.EOF)
test.Fail(t, "when error occurred we must be at the end")
break
} else if token == StartTagToken || token == EndTagToken || token == DoctypeToken {
test.String(t, string(l.Text()), tt.expected)
break
}
}
})
}
}
func TestAttributes(t *testing.T) {
var attributeTests = []struct {
attr string
expected []string
}{
{"<foo a=\"b\" />", []string{"a", "\"b\""}},
{"<foo \nchecked \r\n value\r=\t'=/>\"' />", []string{"checked", "", "value", "'=/>\"'"}},
{"<foo bar=\" a \n\t\r b \" />", []string{"bar", "\" a \n\t\r b \""}},
{"<foo a/>", []string{"a", ""}},
{"<foo /=/>", []string{"/", "/"}},
// early endings
{"<foo x", []string{"x", ""}},
{"<foo x=", []string{"x", ""}},
{"<foo x='", []string{"x", "'"}},
}
for _, tt := range attributeTests {
t.Run(tt.attr, func(t *testing.T) {
l := NewLexer(bytes.NewBufferString(tt.attr))
i := 0
for {
token, _ := l.Next()
if token == ErrorToken {
test.T(t, l.Err(), io.EOF)
					test.T(t, i, len(tt.expected), "when an error occurs we must be at the end")
break
} else if token == AttributeToken {
test.That(t, i+1 < len(tt.expected), "index", i+1, "must not exceed expected attributes size", len(tt.expected))
if i+1 < len(tt.expected) {
test.String(t, string(l.Text()), tt.expected[i], "attribute keys must match")
						test.String(t, string(l.AttrVal()), tt.expected[i+1], "attribute values must match")
i += 2
}
}
}
})
}
}
func TestErrors(t *testing.T) {
var errorTests = []struct {
html string
col int
}{
{"a\x00b", 2},
}
for _, tt := range errorTests {
t.Run(tt.html, func(t *testing.T) {
l := NewLexer(bytes.NewBufferString(tt.html))
for {
token, _ := l.Next()
if token == ErrorToken {
if tt.col == 0 {
test.T(t, l.Err(), io.EOF)
} else if perr, ok := l.Err().(*parse.Error); ok {
test.T(t, perr.Col, tt.col)
} else {
test.Fail(t, "bad error:", l.Err())
}
break
}
}
})
}
}
////////////////////////////////////////////////////////////////
var J int
var ss = [][]byte{
[]byte(" style"),
[]byte("style"),
[]byte(" \r\n\tstyle"),
[]byte(" style"),
[]byte(" x"),
[]byte("x"),
}
func BenchmarkWhitespace1(b *testing.B) {
for i := 0; i < b.N; i++ {
for _, s := range ss {
j := 0
for {
if c := s[j]; c == ' ' || c == '\t' || c == '\n' || c == '\r' || c == '\f' {
j++
} else {
break
}
}
J += j
}
}
}
func BenchmarkWhitespace2(b *testing.B) {
for i := 0; i < b.N; i++ {
for _, s := range ss {
j := 0
for {
if c := s[j]; c == ' ' || c == '\t' || c == '\n' || c == '\r' || c == '\f' {
j++
continue
}
break
}
J += j
}
}
}
func BenchmarkWhitespace3(b *testing.B) {
for i := 0; i < b.N; i++ {
for _, s := range ss {
j := 0
for {
if c := s[j]; c != ' ' && c != '\t' && c != '\n' && c != '\r' && c != '\f' {
break
}
j++
}
J += j
}
}
}
////////////////////////////////////////////////////////////////
func ExampleNewLexer() {
l := NewLexer(bytes.NewBufferString("<span class='user'>John Doe</span>"))
out := ""
for {
tt, data := l.Next()
if tt == ErrorToken {
break
}
out += string(data)
}
fmt.Println(out)
// Output: <span class='user'>John Doe</span>
}

129
vendor/github.com/tdewolff/parse/html/util.go generated vendored Normal file
View file

@ -0,0 +1,129 @@
package html // import "github.com/tdewolff/parse/html"
import "github.com/tdewolff/parse"
var (
singleQuoteEntityBytes = []byte("&#39;")
doubleQuoteEntityBytes = []byte("&#34;")
)
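// charTable marks the bytes that prevent an attribute value from being left
// unquoted: whitespace, quotes, '&', '<', '=', '>' and the backtick.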
var charTable = [256]bool{
// ASCII
false, false, false, false, false, false, false, false,
false, true, true, true, true, true, false, false, // tab, new line, vertical tab, form feed, carriage return
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
true, false, true, false, false, false, true, true, // space, ", &, '
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, true, true, true, false, // <, =, >
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
true, false, false, false, false, false, false, false, // `
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
// non-ASCII
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
}
// EscapeAttrVal escapes an attribute value and returns it, surrounded by quotes when necessary; orig is the original value including any surrounding quotes, b the value without them.
func EscapeAttrVal(buf *[]byte, orig, b []byte) []byte {
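// First pass: count single and double quotes and detect character entities to decide whether the value can stay unquoted, whether the original quoting can be reused, and which quote character needs the least escaping.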
singles := 0
doubles := 0
unquoted := true
entities := false
for i, c := range b {
if charTable[c] {
if c == '&' {
entities = true
if quote, n := parse.QuoteEntity(b[i:]); n > 0 {
if quote == '"' {
unquoted = false
doubles++
} else {
unquoted = false
singles++
}
}
} else {
unquoted = false
if c == '"' {
doubles++
} else if c == '\'' {
singles++
}
}
}
}
if unquoted {
return b
} else if !entities && len(orig) == len(b)+2 && (singles == 0 && orig[0] == '\'' || doubles == 0 && orig[0] == '"') {
return orig
}
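// Second pass: surround the value with the quote that needs fewer escapes; every escaped quote costs 4 extra bytes (&#39; or &#34; instead of a single character).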
n := len(b) + 2
var quote byte
var escapedQuote []byte
if doubles > singles {
n += singles * 4
quote = '\''
escapedQuote = singleQuoteEntityBytes
} else {
n += doubles * 4
quote = '"'
escapedQuote = doubleQuoteEntityBytes
}
if n > cap(*buf) {
*buf = make([]byte, 0, n) // maximum size, not actual size
}
t := (*buf)[:n] // maximum size, not actual size
t[0] = quote
j := 1
start := 0
for i, c := range b {
if c == '&' {
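// A quote entity may be replaced by the literal other quote character; an entity matching the chosen surrounding quote must stay escaped.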
if entityQuote, n := parse.QuoteEntity(b[i:]); n > 0 {
j += copy(t[j:], b[start:i])
if entityQuote != quote {
t[j] = entityQuote
j++
} else {
j += copy(t[j:], escapedQuote)
}
start = i + n
}
} else if c == quote {
j += copy(t[j:], b[start:i])
j += copy(t[j:], escapedQuote)
start = i + 1
}
}
j += copy(t[j:], b[start:])
t[j] = quote
return t[:j+1]
}

43
vendor/github.com/tdewolff/parse/html/util_test.go generated vendored Normal file
View file

@ -0,0 +1,43 @@
package html // import "github.com/tdewolff/parse/html"
import (
"testing"
"github.com/tdewolff/test"
)
func TestEscapeAttrVal(t *testing.T) {
var escapeAttrValTests = []struct {
attrVal string
expected string
}{
{"xyz", "xyz"},
{"", ""},
{"x&amp;z", "x&amp;z"},
{"x/z", "x/z"},
{"x'z", "\"x'z\""},
{"x\"z", "'x\"z'"},
{"'x\"z'", "'x\"z'"},
{"'x&#39;\"&#39;z'", "\"x'&#34;'z\""},
{"\"x&#34;'&#34;z\"", "'x\"&#39;\"z'"},
{"\"x&#x27;z\"", "\"x'z\""},
{"'x&#x00022;z'", "'x\"z'"},
{"'x\"&gt;'", "'x\"&gt;'"},
{"You&#039;re encouraged to log in; however, it&#039;s not mandatory. [o]", "\"You're encouraged to log in; however, it's not mandatory. [o]\""},
{"a'b=\"\"", "'a&#39;b=\"\"'"},
{"x<z", "\"x<z\""},
{"'x\"&#39;\"z'", "'x\"&#39;\"z'"},
}
var buf []byte
for _, tt := range escapeAttrValTests {
t.Run(tt.attrVal, func(t *testing.T) {
b := []byte(tt.attrVal)
orig := b
if len(b) > 1 && (b[0] == '"' || b[0] == '\'') && b[0] == b[len(b)-1] {
b = b[1 : len(b)-1]
}
val := EscapeAttrVal(&buf, orig, b)
test.String(t, string(val), tt.expected)
})
}
}

89
vendor/github.com/tdewolff/parse/js/README.md generated vendored Normal file
View file

@ -0,0 +1,89 @@
# JS [![GoDoc](http://godoc.org/github.com/tdewolff/parse/js?status.svg)](http://godoc.org/github.com/tdewolff/parse/js) [![GoCover](http://gocover.io/_badge/github.com/tdewolff/parse/js)](http://gocover.io/github.com/tdewolff/parse/js)
This package is a JS lexer (ECMA-262, edition 6.0) written in [Go][1]. It follows the specification at [ECMAScript Language Specification](http://www.ecma-international.org/ecma-262/6.0/). The lexer takes an io.Reader and converts it into tokens until EOF.
## Installation
Run the following command
go get github.com/tdewolff/parse/js
or add the following import and run your project with `go get`
import "github.com/tdewolff/parse/js"
## Lexer
### Usage
The following initializes a new Lexer with io.Reader `r`:
``` go
l := js.NewLexer(r)
```
To tokenize until EOF or an error occurs, use:
``` go
for {
tt, text := l.Next()
switch tt {
case js.ErrorToken:
// error or EOF set in l.Err()
return
// ...
}
}
```
All tokens (see [ECMAScript Language Specification](http://www.ecma-international.org/ecma-262/6.0/)):
``` go
ErrorToken TokenType = iota // extra token when errors occur
UnknownToken // extra token when no token can be matched
WhitespaceToken // space \t \v \f
LineTerminatorToken // \r \n \r\n
CommentToken
IdentifierToken // also: null true false
PunctuatorToken /* { } ( ) [ ] . ; , < > <= >= == != === !== + - * % ++ -- << >>
>>> & | ^ ! ~ && || ? : = += -= *= %= <<= >>= >>>= &= |= ^= / /= => */
NumericToken
StringToken
RegexpToken
TemplateToken
```
### Quirks
Because the ECMAScript specification relies on parser state to differentiate between `PunctuatorToken` (which includes the `/` and `/=` symbols) and `RegexpToken`, the lexer (to remain modular) uses different rules. It aims to correctly disambiguate contexts and returns `RegexpToken` or `PunctuatorToken` where appropriate, with only a few exceptions that make little sense at runtime and so don't occur in real-world code: function literal division (`x = function y(){} / z`) and object literal division (`x = {y:1} / z`).
Another interesting case, introduced by ES2015, is the `yield` operator in generator functions versus `yield` as an identifier in regular functions. This distinction exists for backward compatibility, but it is very hard to disambiguate correctly at the lexer level without essentially implementing the entire parsing spec as a state machine, hurting performance, code readability and maintainability. Instead, `yield` is always assumed to be an operator. Combined with the paragraph above, this means that, for example, `yield /x/i` is always parsed as `yield`-ing a regular expression and not as the identifier `yield` divided by `x` and then `i`. There is no evidence, though, that this pattern occurs in any popular library.
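The following is a minimal sketch against the lexer API shown above illustrating this quirk (it assumes `TokenType` prints its name via its `String` method):
``` go
package main

import (
	"bytes"
	"fmt"

	"github.com/tdewolff/parse/js"
)

// Prints one line per token for "yield /x/i"; the /x/i part comes out
// as a single RegexpToken, never as division punctuators.
func main() {
	l := js.NewLexer(bytes.NewBufferString("yield /x/i"))
	for {
		tt, text := l.Next()
		if tt == js.ErrorToken {
			return // l.Err() is io.EOF here
		}
		fmt.Printf("%v %q\n", tt, text)
	}
}
```
Expected output is an `Identifier` token for `yield`, a `Whitespace` token, and one `Regexp` token for `/x/i`.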
### Examples
``` go
package main
import (
"fmt"
"io"
"os"
"github.com/tdewolff/parse/js"
)
// Tokenize JS from stdin.
func main() {
l := js.NewLexer(os.Stdin)
for {
tt, text := l.Next()
switch tt {
case js.ErrorToken:
if l.Err() != io.EOF {
fmt.Println("Error on line", l.Line(), ":", l.Err())
}
return
case js.IdentifierToken:
fmt.Println("Identifier", string(text))
case js.NumericToken:
fmt.Println("Numeric", string(text))
// ...
}
}
}
```
## License
Released under the [MIT license](https://github.com/tdewolff/parse/blob/master/LICENSE.md).
[1]: http://golang.org/ "Go Language"

156
vendor/github.com/tdewolff/parse/js/hash.go generated vendored Normal file
View file

@ -0,0 +1,156 @@
package js
// generated by hasher -file hash.go -type Hash; DO NOT EDIT, except for adding more constants to the list and rerun go generate
// uses github.com/tdewolff/hasher
//go:generate hasher -type=Hash -file=hash.go
// Hash defines perfect hashes for a predefined list of strings
type Hash uint32
// Unique hash definitions to be used instead of strings
const (
Break Hash = 0x5 // break
Case Hash = 0x3404 // case
Catch Hash = 0xba05 // catch
Class Hash = 0x505 // class
Const Hash = 0x2c05 // const
Continue Hash = 0x3e08 // continue
Debugger Hash = 0x8408 // debugger
Default Hash = 0xab07 // default
Delete Hash = 0xcd06 // delete
Do Hash = 0x4c02 // do
Else Hash = 0x3704 // else
Enum Hash = 0x3a04 // enum
Export Hash = 0x1806 // export
Extends Hash = 0x4507 // extends
False Hash = 0x5a05 // false
Finally Hash = 0x7a07 // finally
For Hash = 0xc403 // for
Function Hash = 0x4e08 // function
If Hash = 0x5902 // if
Implements Hash = 0x5f0a // implements
Import Hash = 0x6906 // import
In Hash = 0x4202 // in
Instanceof Hash = 0x710a // instanceof
Interface Hash = 0x8c09 // interface
Let Hash = 0xcf03 // let
New Hash = 0x1203 // new
Null Hash = 0x5504 // null
Package Hash = 0x9507 // package
Private Hash = 0x9c07 // private
Protected Hash = 0xa309 // protected
Public Hash = 0xb506 // public
Return Hash = 0xd06 // return
Static Hash = 0x2f06 // static
Super Hash = 0x905 // super
Switch Hash = 0x2606 // switch
This Hash = 0x2304 // this
Throw Hash = 0x1d05 // throw
True Hash = 0xb104 // true
Try Hash = 0x6e03 // try
Typeof Hash = 0xbf06 // typeof
Var Hash = 0xc703 // var
Void Hash = 0xca04 // void
While Hash = 0x1405 // while
With Hash = 0x2104 // with
Yield Hash = 0x8005 // yield
)
// String returns the hash's name.
func (i Hash) String() string {
start := uint32(i >> 8)
n := uint32(i & 0xff)
if start+n > uint32(len(_Hash_text)) {
return ""
}
return _Hash_text[start : start+n]
}
// ToHash returns the hash whose name is s. It returns zero if there is no
// such hash. It is case sensitive.
func ToHash(s []byte) Hash {
if len(s) == 0 || len(s) > _Hash_maxLen {
return 0
}
h := uint32(_Hash_hash0)
for i := 0; i < len(s); i++ {
h ^= uint32(s[i])
h *= 16777619
}
if i := _Hash_table[h&uint32(len(_Hash_table)-1)]; int(i&0xff) == len(s) {
t := _Hash_text[i>>8 : i>>8+i&0xff]
for i := 0; i < len(s); i++ {
if t[i] != s[i] {
goto NEXT
}
}
return i
}
NEXT:
if i := _Hash_table[(h>>16)&uint32(len(_Hash_table)-1)]; int(i&0xff) == len(s) {
t := _Hash_text[i>>8 : i>>8+i&0xff]
for i := 0; i < len(s); i++ {
if t[i] != s[i] {
return 0
}
}
return i
}
return 0
}
const _Hash_hash0 = 0x9acb0442
const _Hash_maxLen = 10
const _Hash_text = "breakclassupereturnewhilexporthrowithiswitchconstaticaselsen" +
"umcontinuextendsdofunctionullifalseimplementsimportryinstanc" +
"eofinallyieldebuggerinterfacepackageprivateprotectedefaultru" +
"epublicatchtypeoforvarvoidelete"
var _Hash_table = [1 << 6]Hash{
0x0: 0x2f06, // static
0x1: 0x9c07, // private
0x3: 0xb104, // true
0x6: 0x5a05, // false
0x7: 0x4c02, // do
0x9: 0x2c05, // const
0xa: 0x2606, // switch
0xb: 0x6e03, // try
0xc: 0x1203, // new
0xd: 0x4202, // in
0xf: 0x8005, // yield
0x10: 0x5f0a, // implements
0x11: 0xc403, // for
0x12: 0x505, // class
0x13: 0x3a04, // enum
0x16: 0xc703, // var
0x17: 0x5902, // if
0x19: 0xcf03, // let
0x1a: 0x9507, // package
0x1b: 0xca04, // void
0x1c: 0xcd06, // delete
0x1f: 0x5504, // null
0x20: 0x1806, // export
0x21: 0xd06, // return
0x23: 0x4507, // extends
0x25: 0x2304, // this
0x26: 0x905, // super
0x27: 0x1405, // while
0x29: 0x5, // break
0x2b: 0x3e08, // continue
0x2e: 0x3404, // case
0x2f: 0xab07, // default
0x31: 0x8408, // debugger
0x32: 0x1d05, // throw
0x33: 0xbf06, // typeof
0x34: 0x2104, // with
0x35: 0xba05, // catch
0x36: 0x4e08, // function
0x37: 0x710a, // instanceof
0x38: 0xa309, // protected
0x39: 0x8c09, // interface
0x3b: 0xb506, // public
0x3c: 0x3704, // else
0x3d: 0x7a07, // finally
0x3f: 0x6906, // import
}

18
vendor/github.com/tdewolff/parse/js/hash_test.go generated vendored Normal file
View file

@ -0,0 +1,18 @@
package js // import "github.com/tdewolff/parse/js"
import (
"testing"
"github.com/tdewolff/test"
)
func TestHashTable(t *testing.T) {
test.T(t, ToHash([]byte("break")), Break, "'break' must resolve to hash.Break")
test.T(t, ToHash([]byte("var")), Var, "'var' must resolve to hash.Var")
test.T(t, Break.String(), "break")
test.T(t, ToHash([]byte("")), Hash(0), "empty string must resolve to zero")
test.T(t, Hash(0xffffff).String(), "")
test.T(t, ToHash([]byte("breaks")), Hash(0), "'breaks' must resolve to zero")
test.T(t, ToHash([]byte("sdf")), Hash(0), "'sdf' must resolve to zero")
test.T(t, ToHash([]byte("uio")), Hash(0), "'uio' must resolve to zero")
}
