mirror of https://github.com/miniflux/v2.git synced 2025-08-01 17:38:37 +00:00

First commit

This commit is contained in:
Frédéric Guillot 2017-11-19 21:10:04 -08:00
commit 8ffb773f43
2121 changed files with 1118910 additions and 0 deletions

vendor/github.com/tdewolff/minify/.gitattributes generated vendored Normal file

@@ -0,0 +1 @@
benchmarks/sample_* linguist-generated=true

vendor/github.com/tdewolff/minify/.gitignore generated vendored Normal file

@@ -0,0 +1,4 @@
dist/
benchmarks/*
!benchmarks/*.go
!benchmarks/sample_*

vendor/github.com/tdewolff/minify/.goreleaser.yml generated vendored Normal file

@@ -0,0 +1,26 @@
builds:
- binary: minify
main: ./cmd/minify/
ldflags: -s -w -X main.Version={{.Version}} -X main.Commit={{.Commit}} -X main.Date={{.Date}}
goos:
- windows
- linux
- darwin
goarch:
- amd64
- 386
- arm
- arm64
archive:
format: tar.gz
format_overrides:
- goos: windows
format: zip
name_template: "{{.Binary}}_{{.Version}}_{{.Os}}_{{.Arch}}"
files:
- README.md
- LICENSE.md
snapshot:
name_template: "devel"
release:
draft: true

vendor/github.com/tdewolff/minify/.travis.yml generated vendored Normal file

@@ -0,0 +1,5 @@
language: go
before_install:
- go get github.com/mattn/goveralls
script:
- goveralls -v -service travis-ci -repotoken $COVERALLS_TOKEN -ignore=cmd/minify/* || go test -v ./...

vendor/github.com/tdewolff/minify/LICENSE.md generated vendored Normal file

@@ -0,0 +1,22 @@
Copyright (c) 2015 Taco de Wolff
Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.

vendor/github.com/tdewolff/minify/README.md generated vendored Normal file

@@ -0,0 +1,590 @@
# Minify <a name="minify"></a> [![Build Status](https://travis-ci.org/tdewolff/minify.svg?branch=master)](https://travis-ci.org/tdewolff/minify) [![GoDoc](http://godoc.org/github.com/tdewolff/minify?status.svg)](http://godoc.org/github.com/tdewolff/minify) [![Coverage Status](https://coveralls.io/repos/github/tdewolff/minify/badge.svg?branch=master)](https://coveralls.io/github/tdewolff/minify?branch=master) [![Join the chat at https://gitter.im/tdewolff/minify](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/tdewolff/minify?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
**[Online demo](http://go.tacodewolff.nl/minify) if you need to minify files *now*.**
**[Command line tool](https://github.com/tdewolff/minify/tree/master/cmd/minify) that minifies concurrently and supports watching file changes.**
**[All releases](https://github.com/tdewolff/minify/releases) for various platforms.**
---
Minify is a minifier package written in [Go][1]. It provides HTML5, CSS3, JS, JSON, SVG and XML minifiers and an interface to implement any other minifier. Minification is the process of removing bytes from a file (such as whitespace) without changing its output, thereby shrinking its size and speeding up transmission over the internet and possibly parsing. The implemented minifiers are high-performance and streaming, which implies O(n) complexity.
The core functionality associates mimetypes with minification functions, allowing embedded resources (like CSS or JS within HTML files) to be minified as well. Users can add new implementations that are triggered based on a mimetype (or pattern), or redirect to an external command (like ClosureCompiler, UglifyCSS, ...).
#### Table of Contents
- [Minify](#minify)
- [Prologue](#prologue)
- [Installation](#installation)
- [API stability](#api-stability)
- [Testing](#testing)
- [Performance](#performance)
- [HTML](#html)
- [Whitespace removal](#whitespace-removal)
- [CSS](#css)
- [JS](#js)
- [JSON](#json)
- [SVG](#svg)
- [XML](#xml)
- [Usage](#usage)
- [New](#new)
- [From reader](#from-reader)
- [From bytes](#from-bytes)
- [From string](#from-string)
- [To reader](#to-reader)
- [To writer](#to-writer)
- [Middleware](#middleware)
- [Custom minifier](#custom-minifier)
- [Mediatypes](#mediatypes)
- [Examples](#examples)
- [Common minifiers](#common-minifiers)
- [Custom minifier](#custom-minifier-example)
- [ResponseWriter](#responsewriter)
- [Templates](#templates)
- [License](#license)
### Status
* CSS: **fully implemented**
* HTML: **fully implemented**
* JS: improved JSmin implementation
* JSON: **fully implemented**
* SVG: partially implemented; in development
* XML: **fully implemented**
### Roadmap
- [ ] General speed-up of all minifiers (use ASM for whitespace funcs)
- [ ] Improve JS minifiers by shortening variables and proper semicolon omission
- [ ] Speed-up SVG minifier, it is very slow
- [ ] Proper parser error reporting and line number + column information
- [ ] Generation of source maps (uncertain, might slow down parsers too much if it cannot run separately nicely)
- [ ] Look into compression of images, fonts and other web resources (into package `compress`?)
- [ ] Create a cmd to pack webfiles (much like webpack), i.e. merging CSS and JS files, inlining small external files, minification and gzipping. This would work on HTML files.
- [ ] Create a package to format files, much like `gofmt` for Go files
## Prologue
Minifiers or bindings to minifiers exist in almost all programming languages. Some implementations merely use several regular expressions to trim whitespace and comments (even though regex for parsing HTML/XML is ill-advised; for a good read, see [Regular Expressions: Now You Have Two Problems](http://blog.codinghorror.com/regular-expressions-now-you-have-two-problems/)). Some implementations are much more thorough, such as the [YUI Compressor](http://yui.github.io/yuicompressor/) and [Google Closure Compiler](https://github.com/google/closure-compiler) for JS. As most existing implementations use Java or JavaScript and don't focus on performance, they are pretty slow. Additionally, loading the whole file into memory at once is bad for really large files (and impossible for streams).
This minifier aims to be that fast and extensive minifier: one that can handle HTML and any other filetype it may contain (CSS, JS, ...). It streams the input and output and can minify files concurrently.
## Installation
Run the following command:
go get github.com/tdewolff/minify
or add the following imports and run your project with `go get`:
``` go
import (
	"github.com/tdewolff/minify"
	"github.com/tdewolff/minify/css"
	"github.com/tdewolff/minify/html"
	"github.com/tdewolff/minify/js"
	"github.com/tdewolff/minify/json"
	"github.com/tdewolff/minify/svg"
	"github.com/tdewolff/minify/xml"
)
```
## API stability
There is no guarantee of absolute stability, but I take issues and bugs seriously and don't take API changes lightly. The library will be maintained in a compatible way unless vital bugs require otherwise. There has been one API change since v1, which added options support; I took the opportunity to push through some more API cleanup as well. There are no plans whatsoever for future API changes.
## Testing
For all subpackages and the imported `parse` and `buffer` packages, test coverage of 100% is pursued. Besides full coverage, the minifiers are [fuzz tested](https://github.com/tdewolff/fuzz) using [github.com/dvyukov/go-fuzz](http://www.github.com/dvyukov/go-fuzz); see [the wiki](https://github.com/tdewolff/minify/wiki) for the most important bugs found by fuzz testing. Furthermore, I am working on adding visual testing to ensure that minification doesn't change anything visually: by using the WebKit browser to render the original and minified pages, we can check whether any pixel differs.
These tests ensure that everything works as intended, that the code does not crash (whatever the input) and that it doesn't change the final result visually. If you still encounter a bug, please report it [here](https://github.com/tdewolff/minify/issues)!
## Performance
The benchmarks directory contains a number of standardized samples used to compare performance between changes. To give an indication of the speed of this library, I've run the tests on my Thinkpad T460 (i5-6300U quad-core 2.4GHz running Arch Linux) using Go 1.9.2.
```
name time/op
CSS/sample_bootstrap.css-4 3.05ms ± 1%
CSS/sample_gumby.css-4 4.25ms ± 1%
HTML/sample_amazon.html-4 3.33ms ± 0%
HTML/sample_bbc.html-4 1.39ms ± 7%
HTML/sample_blogpost.html-4 222µs ± 1%
HTML/sample_es6.html-4 18.0ms ± 1%
HTML/sample_stackoverflow.html-4 3.08ms ± 1%
HTML/sample_wikipedia.html-4 6.06ms ± 1%
JS/sample_ace.js-4 9.92ms ± 1%
JS/sample_dot.js-4 91.4µs ± 4%
JS/sample_jquery.js-4 4.00ms ± 1%
JS/sample_jqueryui.js-4 7.93ms ± 0%
JS/sample_moment.js-4 1.46ms ± 1%
JSON/sample_large.json-4 5.07ms ± 4%
JSON/sample_testsuite.json-4 2.96ms ± 0%
JSON/sample_twitter.json-4 11.3µs ± 0%
SVG/sample_arctic.svg-4 64.7ms ± 0%
SVG/sample_gopher.svg-4 227µs ± 0%
SVG/sample_usa.svg-4 35.9ms ± 6%
XML/sample_books.xml-4 48.1µs ± 4%
XML/sample_catalog.xml-4 20.2µs ± 0%
XML/sample_omg.xml-4 9.02ms ± 0%
name speed
CSS/sample_bootstrap.css-4 45.0MB/s ± 1%
CSS/sample_gumby.css-4 43.8MB/s ± 1%
HTML/sample_amazon.html-4 142MB/s ± 0%
HTML/sample_bbc.html-4 83.0MB/s ± 7%
HTML/sample_blogpost.html-4 94.5MB/s ± 1%
HTML/sample_es6.html-4 56.8MB/s ± 1%
HTML/sample_stackoverflow.html-4 66.7MB/s ± 1%
HTML/sample_wikipedia.html-4 73.5MB/s ± 1%
JS/sample_ace.js-4 64.9MB/s ± 1%
JS/sample_dot.js-4 56.4MB/s ± 4%
JS/sample_jquery.js-4 61.8MB/s ± 1%
JS/sample_jqueryui.js-4 59.2MB/s ± 0%
JS/sample_moment.js-4 67.8MB/s ± 1%
JSON/sample_large.json-4 150MB/s ± 4%
JSON/sample_testsuite.json-4 233MB/s ± 0%
JSON/sample_twitter.json-4 134MB/s ± 0%
SVG/sample_arctic.svg-4 22.7MB/s ± 0%
SVG/sample_gopher.svg-4 25.6MB/s ± 0%
SVG/sample_usa.svg-4 28.6MB/s ± 6%
XML/sample_books.xml-4 92.1MB/s ± 4%
XML/sample_catalog.xml-4 95.6MB/s ± 0%
```
## HTML
HTML (with JS and CSS) minification typically shaves off about 10%.
The HTML5 minifier uses these minifications:
- strip unnecessary whitespace and otherwise collapse it to one space (or newline if it originally contained a newline)
- strip superfluous quotes, or use single/double quotes, whichever requires fewer escapes
- strip default attribute values and attribute boolean values
- strip some empty attributes
- strip unrequired tags (`html`, `head`, `body`, ...)
- strip unrequired end tags (`tr`, `td`, `li`, ... and often `p`)
- strip default protocols (`http:`, `https:` and `javascript:`)
- strip all comments (including conditional comments, old IE versions are not supported anymore by Microsoft)
- shorten `doctype` and `meta` charset
- lowercase tags, attributes and some values to enhance gzip compression
Options:
- `KeepConditionalComments` preserve all IE conditional comments such as `<!--[if IE 6]><![endif]-->` and `<![if IE 6]><![endif]>`, see https://msdn.microsoft.com/en-us/library/ms537512(v=vs.85).aspx#syntax
- `KeepDefaultAttrVals` preserve default attribute values such as `<script type="text/javascript">`
- `KeepDocumentTags` preserve `html`, `head` and `body` tags
- `KeepEndTags` preserve all end tags
- `KeepWhitespace` preserve whitespace between inline tags but still collapse multiple whitespace characters into one
After recent benchmarking and profiling it has become really fast and minifies pages in the 10ms range, making it viable for on-the-fly minification.
However, be careful when doing on-the-fly minification. Minification typically trims off about 10% and does so, at worst, at around 20MB/s. This means on-the-fly minification is only worthwhile for users downloading slower than 2MB/s. This may or may not apply to your situation; prefer caching the results!
### Whitespace removal
The whitespace removal mechanism collapses all sequences of whitespace (spaces, newlines, tabs) into a single space. If the sequence contained a newline or carriage return, it collapses into a newline character instead. Each text part (between tags) is trimmed depending on whether it was preceded by a space in the previous piece of text and whether it is followed by a block element or an inline element: before block elements spaces can be omitted, while for inline elements whitespace is significant.
Make sure your HTML doesn't depend on whitespace between `block` elements that have been changed to `inline` or `inline-block` elements using CSS. Your layout *should not* depend on that whitespace, because the minifier will remove it. An example is a menu consisting of multiple `<li>` elements with `display:inline-block` applied and whitespace in between them. It is bad practice to rely on whitespace for element positioning anyway!
## CSS
Minification typically shaves off about 10%-15%.
The CSS minifier will only use safe minifications:
- remove comments and unnecessary whitespace
- remove trailing semicolons
- optimize `margin`, `padding` and `border-width` number of sides
- shorten numbers by removing unnecessary `+` and zeros and rewriting with/without exponent
- remove dimension and percentage for zero values
- remove quotes for URLs
- remove quotes for font families and make lowercase
- rewrite hex colors to/from color names, or to 3 digit hex
- rewrite `rgb(`, `rgba(`, `hsl(` and `hsla(` colors to hex or name
- replace `normal` and `bold` by numbers for `font-weight` and `font`
- replace `none` &#8594; `0` for `border`, `background` and `outline`
- lowercase all identifiers except classes, IDs and URLs to enhance gzip compression
- shorten MS alpha function
- rewrite data URIs with base64 or ASCII whichever is shorter
- calls the minifier for data URI mediatypes, so you can compress embedded SVG files if that minifier is attached
It does purposely not use the following techniques:
- (partially) merge rulesets
- (partially) split rulesets
- collapse multiple declarations when main declaration is defined within a ruleset (don't put `font-weight` within an already existing `font`, too complex)
- remove overwritten properties in a ruleset (a later declaration does not always overwrite an earlier one, for example with `!important`)
- rewrite properties into one ruleset if possible (like `margin-top`, `margin-right`, `margin-bottom` and `margin-left` &#8594; `margin`)
- put nested ID selector at the front (`body > div#elem p` &#8594; `#elem p`)
- rewrite attribute selectors for IDs and classes (`div[id=a]` &#8594; `div#a`)
- put space after pseudo-selectors (IE6 is old, move on!)
It's great that so many other tools make comparison tables: [CSS Minifier Comparison](http://www.codenothing.com/benchmarks/css-compressor-3.0/full.html), [CSS minifiers comparison](http://www.phpied.com/css-minifiers-comparison/) and [CleanCSS tests](http://goalsmashers.github.io/css-minification-benchmark/). Judging from the last link, this CSS minifier is almost without doubt the fastest and has near-perfect minification rates. It falls short only on the purposely unimplemented, often unsafe, techniques.
Options:
- `Decimals` number of decimals to preserve for numbers, `-1` means no trimming
## JS
The JS minifier is pretty basic. It removes comments, whitespace and line breaks whenever it can. It employs all the rules that [JSMin](http://www.crockford.com/javascript/jsmin.html) does, with additional improvements; for example, the prefix-postfix bug is fixed.
Common speeds of PHP and JS implementations are about 100-300kB/s (see [Uglify2](http://lisperator.net/uglifyjs/), [Adventures in PHP web asset minimization](https://www.happyassassin.net/2014/12/29/adventures-in-php-web-asset-minimization/)). This implementation is orders of magnitude faster, at around 50MB/s.
TODO:
- shorten local variables / function parameters names
- precise semicolon and newline omission
## JSON
Minification typically shaves off about 15% of filesize for common indented JSON such as generated by [JSON Generator](http://www.json-generator.com/).
The JSON minifier only removes whitespace, which is the only thing that can be left out.
## SVG
The SVG minifier uses these minifications:
- trim and collapse whitespace between all tags
- strip comments, empty `doctype`, XML prelude, `metadata`
- strip SVG version
- strip CDATA sections wherever possible
- collapse tags with no content to a void tag
- collapse empty container tags (`g`, `svg`, ...)
- minify style tag and attributes with the CSS minifier
- minify colors
- shorten lengths and numbers and remove default `px` unit
- shorten `path` data
- convert `rect`, `line`, `polygon`, `polyline` to `path`
- use relative or absolute positions in path data whichever is shorter
TODO:
- convert attributes to style attribute whenever shorter
- merge path data? (same style and no intersection -- the latter is difficult)
Options:
- `Decimals` number of decimals to preserve for numbers, `-1` means no trimming
## XML
The XML minifier uses these minifications:
- strip unnecessary whitespace and otherwise collapse it to one space (or newline if it originally contained a newline)
- strip comments
- collapse tags with no content to a void tag
- strip CDATA sections wherever possible
Options:
- `KeepWhitespace` preserve whitespace between inline tags but still collapse multiple whitespace characters into one
## Usage
Any input stream is buffered by the minification functions; this is how the underlying buffer package inherently works to ensure high performance. The output stream, however, is not buffered. It is wise to preallocate a buffer as big as the input for the output, or otherwise use `bufio` to buffer writes to a streaming writer.
### New
Retrieve a minifier struct which holds a map of mediatype &#8594; minifier functions.
``` go
m := minify.New()
```
The following loads all provided minifiers.
``` go
m := minify.New()
m.AddFunc("text/css", css.Minify)
m.AddFunc("text/html", html.Minify)
m.AddFunc("text/javascript", js.Minify)
m.AddFunc("image/svg+xml", svg.Minify)
m.AddFuncRegexp(regexp.MustCompile("[/+]json$"), json.Minify)
m.AddFuncRegexp(regexp.MustCompile("[/+]xml$"), xml.Minify)
```
You can set options to several minifiers.
``` go
m.Add("text/html", &html.Minifier{
	KeepDefaultAttrVals: true,
	KeepWhitespace:      true,
})
```
### From reader
Minify from an `io.Reader` to an `io.Writer` for a specific mediatype.
``` go
if err := m.Minify(mediatype, w, r); err != nil {
	panic(err)
}
```
### From bytes
Minify from and to a `[]byte` for a specific mediatype.
``` go
b, err = m.Bytes(mediatype, b)
if err != nil {
	panic(err)
}
```
### From string
Minify from and to a `string` for a specific mediatype.
``` go
s, err = m.String(mediatype, s)
if err != nil {
	panic(err)
}
```
### To reader
Get a minifying reader for a specific mediatype.
``` go
mr := m.Reader(mediatype, r)
if _, err := mr.Read(b); err != nil {
	panic(err)
}
```
### To writer
Get a minifying writer for a specific mediatype. Must be explicitly closed because it uses an `io.Pipe` underneath.
``` go
mw := m.Writer(mediatype, w)
if _, err := mw.Write([]byte("input")); err != nil {
	panic(err)
}
if err := mw.Close(); err != nil {
	panic(err)
}
```
### Middleware
Minify resources on the fly using middleware. It passes a wrapped response writer to the handler that removes the Content-Length header. The minifier is chosen based on the Content-Type header or, if that header is empty, on the request URI's file extension. This is on-the-fly processing; preferably cache the results!
``` go
fs := http.FileServer(http.Dir("www/"))
http.Handle("/", m.Middleware(fs))
```
### Custom minifier
Add a minifier for a specific mimetype.
``` go
type CustomMinifier struct {
	KeepLineBreaks bool
}

func (c *CustomMinifier) Minify(m *minify.M, w io.Writer, r io.Reader, params map[string]string) error {
	// ...
	return nil
}

m.Add(mimetype, &CustomMinifier{KeepLineBreaks: true})
// or
m.AddRegexp(regexp.MustCompile("/x-custom$"), &CustomMinifier{KeepLineBreaks: true})
```
Add a minify function for a specific mimetype.
``` go
m.AddFunc(mimetype, func(m *minify.M, w io.Writer, r io.Reader, params map[string]string) error {
	// ...
	return nil
})

m.AddFuncRegexp(regexp.MustCompile("/x-custom$"), func(m *minify.M, w io.Writer, r io.Reader, params map[string]string) error {
	// ...
	return nil
})
```
Add a command `cmd` with arguments `args` for a specific mimetype.
``` go
m.AddCmd(mimetype, exec.Command(cmd, args...))
m.AddCmdRegexp(regexp.MustCompile("/x-custom$"), exec.Command(cmd, args...))
```
### Mediatypes
Using the `params map[string]string` argument, one can pass parameters to the minifier, as seen in mediatypes (`type/subtype; key1=val1; key2=val2`). Examples are the encoding or charset of the data. Calling `Minify` will split the mediatype and parameters for the minifiers for you, but `MinifyMimetype` can be used if you already have them split up.
Minifiers can also be added using a regular expression. For example, a minifier registered with `image/.*` will match any image mediatype.
## Examples
### Common minifiers
Basic example that minifies from stdin to stdout and loads the default HTML, CSS and JS minifiers. Optionally, one can run `java -jar build/compiler.jar` for JS (for example the [ClosureCompiler](https://code.google.com/p/closure-compiler/)). Note that reading the file into a buffer first and writing to a preallocated buffer would be faster (but would disable streaming).
``` go
package main

import (
	"os"
	"regexp"

	"github.com/tdewolff/minify"
	"github.com/tdewolff/minify/css"
	"github.com/tdewolff/minify/html"
	"github.com/tdewolff/minify/js"
	"github.com/tdewolff/minify/json"
	"github.com/tdewolff/minify/svg"
	"github.com/tdewolff/minify/xml"
)

func main() {
	m := minify.New()
	m.AddFunc("text/css", css.Minify)
	m.AddFunc("text/html", html.Minify)
	m.AddFunc("text/javascript", js.Minify)
	m.AddFunc("image/svg+xml", svg.Minify)
	m.AddFuncRegexp(regexp.MustCompile("[/+]json$"), json.Minify)
	m.AddFuncRegexp(regexp.MustCompile("[/+]xml$"), xml.Minify)

	// Or use the following for better minification of JS but lower speed
	// (requires importing "os/exec"):
	// m.AddCmd("text/javascript", exec.Command("java", "-jar", "build/compiler.jar"))

	if err := m.Minify("text/html", os.Stdout, os.Stdin); err != nil {
		panic(err)
	}
}
```
### <a name="custom-minifier-example"></a> Custom minifier
Custom minifier showing an example that implements the minifier function interface. Within a custom minifier, any other registered minifier can be called recursively (through the `*minify.M` parameter) when dealing with embedded resources.
``` go
package main

import (
	"bufio"
	"fmt"
	"io"
	"strings"

	"github.com/tdewolff/minify"
)

func main() {
	m := minify.New()
	m.AddFunc("text/plain", func(m *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
		// remove newlines and spaces
		rb := bufio.NewReader(r)
		for {
			line, err := rb.ReadString('\n')
			if err != nil && err != io.EOF {
				return err
			}
			if _, errws := io.WriteString(w, strings.Replace(line, " ", "", -1)); errws != nil {
				return errws
			}
			if err == io.EOF {
				break
			}
		}
		return nil
	})

	in := "Because my coffee was too cold, I heated it in the microwave."
	out, err := m.String("text/plain", in)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
	// Output: Becausemycoffeewastoocold,Iheateditinthemicrowave.
}
```
### ResponseWriter
#### Middleware
``` go
func main() {
	m := minify.New()
	m.AddFunc("text/css", css.Minify)
	m.AddFunc("text/html", html.Minify)
	m.AddFunc("text/javascript", js.Minify)
	m.AddFunc("image/svg+xml", svg.Minify)
	m.AddFuncRegexp(regexp.MustCompile("[/+]json$"), json.Minify)
	m.AddFuncRegexp(regexp.MustCompile("[/+]xml$"), xml.Minify)

	fs := http.FileServer(http.Dir("www/"))
	http.Handle("/", m.Middleware(fs))
}
```
#### ResponseWriter
``` go
func Serve(w http.ResponseWriter, r *http.Request) {
	mw := m.ResponseWriter(w, r)
	defer mw.Close()
	w = mw

	http.ServeFile(w, r, path.Join("www", r.URL.Path))
}
```
#### Custom response writer
ResponseWriter example which returns a ResponseWriter that minifies the content and then writes to the original ResponseWriter. Any write after applying this filter will be minified.
``` go
type MinifyResponseWriter struct {
	http.ResponseWriter
	io.WriteCloser
}

func (m MinifyResponseWriter) Write(b []byte) (int, error) {
	return m.WriteCloser.Write(b)
}

// MinifyResponseWriter must be closed explicitly by the calling site.
func MinifyFilter(mediatype string, res http.ResponseWriter) MinifyResponseWriter {
	m := minify.New()
	// add minifiers
	mw := m.Writer(mediatype, res)
	return MinifyResponseWriter{res, mw}
}
```
``` go
// Usage
func(w http.ResponseWriter, req *http.Request) {
	mw := MinifyFilter("text/html", w)
	if _, err := io.WriteString(mw, `<p class="message"> This HTTP response will be minified. </p>`); err != nil {
		panic(err)
	}
	if err := mw.Close(); err != nil {
		panic(err)
	}
	// Output: <p class=message>This HTTP response will be minified.
}
```
### Templates
Here's an example of a replacement for `template.ParseFiles` from `html/template`, which automatically minifies each template before parsing it.
Be aware that minifying templates will work in most cases, but not all. Because the HTML minifier only works on valid HTML5, your template must itself be valid HTML5. Template tags are parsed as regular text by the minifier.
``` go
func compileTemplates(filenames ...string) (*template.Template, error) {
	m := minify.New()
	m.AddFunc("text/html", html.Minify)

	var tmpl *template.Template
	for _, filename := range filenames {
		name := filepath.Base(filename)
		if tmpl == nil {
			tmpl = template.New(name)
		} else {
			tmpl = tmpl.New(name)
		}

		b, err := ioutil.ReadFile(filename)
		if err != nil {
			return nil, err
		}

		mb, err := m.Bytes("text/html", b)
		if err != nil {
			return nil, err
		}
		if _, err := tmpl.Parse(string(mb)); err != nil {
			return nil, err
		}
	}
	return tmpl, nil
}
```
Example usage:
``` go
templates := template.Must(compileTemplates("view.html", "home.html"))
```
## License
Released under the [MIT license](LICENSE.md).
[1]: http://golang.org/ "Go Language"


@@ -0,0 +1,32 @@
package benchmarks

import (
	"testing"

	"github.com/tdewolff/minify/css"
)

var cssSamples = []string{
	"sample_bootstrap.css",
	"sample_gumby.css",
}

func init() {
	for _, sample := range cssSamples {
		load(sample)
	}
}

func BenchmarkCSS(b *testing.B) {
	for _, sample := range cssSamples {
		b.Run(sample, func(b *testing.B) {
			b.SetBytes(int64(r[sample].Len()))
			for i := 0; i < b.N; i++ {
				r[sample].Reset()
				w[sample].Reset()
				css.Minify(m, w[sample], r[sample], nil)
			}
		})
	}
}


@@ -0,0 +1,36 @@
package benchmarks

import (
	"testing"

	"github.com/tdewolff/minify/html"
)

var htmlSamples = []string{
	"sample_amazon.html",
	"sample_bbc.html",
	"sample_blogpost.html",
	"sample_es6.html",
	"sample_stackoverflow.html",
	"sample_wikipedia.html",
}

func init() {
	for _, sample := range htmlSamples {
		load(sample)
	}
}

func BenchmarkHTML(b *testing.B) {
	for _, sample := range htmlSamples {
		b.Run(sample, func(b *testing.B) {
			b.SetBytes(int64(r[sample].Len()))
			for i := 0; i < b.N; i++ {
				r[sample].Reset()
				w[sample].Reset()
				html.Minify(m, w[sample], r[sample], nil)
			}
		})
	}
}


@@ -0,0 +1,35 @@
package benchmarks

import (
	"testing"

	"github.com/tdewolff/minify/js"
)

var jsSamples = []string{
	"sample_ace.js",
	"sample_dot.js",
	"sample_jquery.js",
	"sample_jqueryui.js",
	"sample_moment.js",
}

func init() {
	for _, sample := range jsSamples {
		load(sample)
	}
}

func BenchmarkJS(b *testing.B) {
	for _, sample := range jsSamples {
		b.Run(sample, func(b *testing.B) {
			b.SetBytes(int64(r[sample].Len()))
			for i := 0; i < b.N; i++ {
				r[sample].Reset()
				w[sample].Reset()
				js.Minify(m, w[sample], r[sample], nil)
			}
		})
	}
}


@@ -0,0 +1,33 @@
package benchmarks

import (
	"testing"

	"github.com/tdewolff/minify/json"
)

var jsonSamples = []string{
	"sample_large.json",
	"sample_testsuite.json",
	"sample_twitter.json",
}

func init() {
	for _, sample := range jsonSamples {
		load(sample)
	}
}

func BenchmarkJSON(b *testing.B) {
	for _, sample := range jsonSamples {
		b.Run(sample, func(b *testing.B) {
			b.SetBytes(int64(r[sample].Len()))
			for i := 0; i < b.N; i++ {
				r[sample].Reset()
				w[sample].Reset()
				json.Minify(m, w[sample], r[sample], nil)
			}
		})
	}
}


@@ -0,0 +1,18 @@
package benchmarks

import (
	"io/ioutil"

	"github.com/tdewolff/minify"
	"github.com/tdewolff/parse/buffer"
)

var m = minify.New()
var r = map[string]*buffer.Reader{}
var w = map[string]*buffer.Writer{}

func load(filename string) {
	sample, _ := ioutil.ReadFile(filename)
	r[filename] = buffer.NewReader(sample)
	w[filename] = buffer.NewWriter(make([]byte, 0, len(sample)))
}

18655
vendor/github.com/tdewolff/minify/benchmarks/sample_ace.js generated vendored Normal file

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large


File diff suppressed because one or more lines are too long


@@ -0,0 +1,580 @@
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>research!rsc: My Go Resolutions for 2017</title>
<link rel="alternate" type="application/atom+xml" title="research!rsc - Atom" href="http://research.swtch.com/feed.atom" />
<link href='https://fonts.googleapis.com/css?family=Inconsolata:400,700' rel='stylesheet' type='text/css'>
<script type="text/javascript" src="https://use.typekit.com/skm6yij.js"></script>
<script type="text/javascript">try{Typekit.load();}catch(e){}</script>
<style>
body {
padding: 0;
margin: 0;
font-size: 100%;
}
.header {
height: 1.25em;
background-color: #dff;
margin: 0;
padding: 0.1em 0.1em 0.2em;
border-top: 1px solid black;
border-bottom: 1px solid #8ff;
}
.header h3 {
margin: 0;
padding: 0 2em;
display: inline-block;
padding-right: 2em;
font-style: italic;
font-family: "adobe-text-pro" !important;
font-size: 90%;
}
.rss {
float: right;
padding-top: 0.2em;
padding-right: 2em;
display: none;
}
.toc {
margin-top: 2em;
}
.toc-title {
font-family: "caflisch-script-pro";
font-size: 300%;
line-height: 50%;
}
.toc-subtitle {
display: block;
margin-bottom: 1em;
font-size: 83%;
}
@media only screen and (max-width: 550px) { .toc-subtitle { display: none; } }
.header h3 a {
color: black;
}
.header h4 {
margin: 0;
padding: 0;
display: inline-block;
font-weight: normal;
font-size: 83%;
}
@media only screen and (max-width: 550px) { .header h4 { display: none; } }
.main {
padding: 0 2em;
}
@media only screen and (max-width: 479px) { .article { font-size: 120%; } }
.article h1 {
text-align: center;
}
.article h1, .article h2, .article h3 {
font-family: 'Myriad Pro';
}
.normal {
font-size: medium;
font-weight: normal;
}
.when {
text-align: center;
font-size: 100%;
margin: 0;
padding: 0;
}
.when p {
margin: 0;
padding: 0;
}
.article h2 {
font-size: 100%;
padding-top: 0.25em;
}
pre {
margin-left: 4em;
margin-right: 4em;
}
pre, code {
font-family: 'Inconsolata', monospace;
font-size: 100%;
}
.footer {
margin-top: 10px;
font-size: 83%;
font-family: sans-serif;
}
.comments {
margin-top: 2em;
background-color: #ffe;
border-top: 1px solid #aa4;
border-left: 1px solid #aa4;
border-right: 1px solid #aa4;
}
.comments-header {
padding: 0 5px 0 5px;
}
.comments-header p {
padding: 0;
margin: 3px 0 0 0;
}
.comments-body {
padding: 5px 5px 5px 5px;
}
#plus-comments {
border-bottom: 1px dotted #ccc;
}
.plus-comment {
width: 100%;
font-size: 14px;
border-top: 1px dotted #ccc;
}
.me {
background-color: #eec;
}
.plus-comment ul {
margin: 0;
padding: 0;
list-style: none;
width: 100%;
display: inline-block;
}
.comment-when {
color:#999;
width:auto;
padding:0 5px;
}
.old {
font-size: 83%;
}
.plus-comment ul li {
display: inline-block;
vertical-align: top;
margin-top: 5px;
margin-bottom: 5px;
padding: 0;
}
.plus-icon {
width: 45px;
}
.plus-img {
float: left;
margin: 4px 4px 4px 4px;
width: 32px;
height: 32px;
}
.plus-comment p {
margin: 0;
padding: 0;
}
.plus-clear {
clear: left;
}
.toc-when {
font-size: 83%;
color: #ccc;
}
.toc {
list-style: none;
}
.toc li {
margin-bottom: 0.5em;
}
.toc-head {
margin-bottom: 1em !important;
font-size: 117%;
}
.toc-summary {
margin-left: 2em;
}
.favorite {
font-weight: bold;
}
.article p {
line-height: 144%;
}
sup, sub {
vertical-align: baseline;
position: relative;
font-size: 83%;
}
sup {
bottom: 1ex;
}
sub {
top: 0.8ex;
}
.main {
position: relative;
margin: 0 auto;
padding: 0;
width: 900px;
}
@media only screen and (min-width: 768px) and (max-width: 959px) { .main { width: 708px; } }
@media only screen and (min-width: 640px) and (max-width: 767px) { .main { width: 580px; } }
@media only screen and (min-width: 480px) and (max-width: 639px) { .main { width: 420px; } }
@media only screen and (max-width: 479px) { .main { width: 300px; } }
</style>
</head>
<body>
<div class="header">
<h3><a href="/">research!rsc</a></h3>
<h4>Thoughts and links about programming,
by <a href="https://swtch.com/~rsc/" rel="author">Russ Cox</a> </h4>
<a class="rss" href="/feed.atom"><img src="/feed-icon-14x14.png" /></a>
</div>
<div class="main">
<div class="article">
<h1>My Go Resolutions for 2017
<div class="normal">
<div class="when">
Posted on Wednesday, January 18, 2017.
</div>
</div>
</h1>
<p class=lp>’Tis the season for resolutions,
and I thought it would make sense to write a little
about what I hope to work on this year as far as Go is concerned.</p>
<p class=pp>My goal every year is to <em>help Go developers</em>.
I want to make sure that the work we do on the Go team
has a significant, positive impact on Go developers.
That may sound obvious, but there are a variety of common ways to fail to achieve that:
for example, spending too much time cleaning up or optimizing code that doesn’t need it;
responding only to the most common or recent complaints or requests;
or focusing too much on short-term improvements.
It’s important to step back and make sure we’re focusing
our development work where it does the most good.</p>
<p class=pp>This post outlines a few of my own major focuses for this year.
This is only my personal list, not the Go teams list.</p>
<p class=pp>One reason for posting this is to gather feedback.
If these spark any ideas or suggestions of your own,
please feel free to comment below or on the linked GitHub issues.</p>
<p class=pp>Another reason is to make clear that Im aware of these issues as important.
I think too often people interpret lack of action by the Go team
as a signal that we think everything is perfect, when instead
there is simply other, higher priority work to do first.</p>
<h2><a name="alias"></a>Type aliases</h2>
<p class=lp>There is a recurring problem with moving types
from one package to another during large codebase refactorings.
We tried to solve it last year with <a href="https://golang.org/issue/16339">general aliases</a>,
which didn’t work for at least two reasons: we didn’t explain the change well enough,
and we didn’t deliver it on time, so it wasn’t ready for Go 1.8.
Learning from that experience,
I <a href="https://www.youtube.com/watch?v=h6Cw9iCDVcU">gave a talk</a>
and <a href="https://talks.golang.org/2016/refactor.article">wrote an article</a>
about the underlying problem,
and that started a <a href="https://golang.org/issue/18130">productive discussion</a>
on the Go issue tracker about the solution space.
It looks like more limited <a href="https://golang.org/design/18130-type-alias">type aliases</a>
are the right next step.
I want to make sure those land smoothly in Go 1.9. <a href="https://golang.org/issue/18130">#18130</a>.</p>
<h2><a name="package"></a>Package management</h2>
<p class=lp>I designed the Go support for downloading published packages
(“goinstall”, which became “go get”) in February 2010.
A lot has happened since then.
In particular, other language ecosystems have really raised the bar
for what people expect from package management,
and the open source world has mostly agreed on
<a href="http://semver.org/">semantic versioning</a>, which provides a useful base
for inferring version compatibility.
Go needs to do better here, and a group of contributors have been
<a href="https://blog.gopheracademy.com/advent-2016/saga-go-dependency-management/">working on a solution</a>.
I want to make sure these ideas are integrated well
into the standard Go toolchain and to make package management
a reason that people love Go.</p>
<h2><a name="build"></a>Build improvements</h2>
<p class=lp>There are a handful of shortcomings in the design of
the go command’s build system that are overdue to be fixed.
Here are three representative examples that I intend to
address with a bit of a redesign of the internals of the go command.</p>
<p class=pp>Builds can be too slow,
because the go command doesn’t cache build results as aggressively as it should.
Many people don’t realize that <code>go</code> <code>install</code> saves its work while <code>go</code> <code>build</code> does not,
and then they run repeated <code>go</code> <code>build</code> commands that are slow
because the later builds do more work than they should need to.
The same for repeated <code>go</code> <code>test</code> without <code>go</code> <code>test</code> <code>-i</code> when dependencies are modified.
All builds should be as incremental as possible.
<a href="https://golang.org/issue/4719">#4719</a>.</p>
<p class=pp>Test results should be cached too:
if none of the inputs to a test have changed,
then usually there is no need to rerun the test.
This will make it very cheap to run “all tests” when little or nothing has changed.
<a href="https://golang.org/issue/11193">#11193</a>.</p>
<p class=pp>Work outside GOPATH should be supported nearly as well
as work inside GOPATH.
In particular, it should be possible to <code>git</code> <code>clone</code> a repo,
<code>cd</code> into it, and run <code>go</code> commands and have them work fine.
Package management only makes that more important:
you’ll need to be able to work on different versions of a package (say, v1 and v2)
without having entirely separate GOPATHs for them.
<a href="https://golang.org/issue/17271">#17271</a>.</p>
<h2><a name="corpus"></a>Code corpus</h2>
<p class=lp>I think it helped to have concrete examples from real projects
in the talk and article I prepared about codebase refactoring (see <a href="#alias">above</a>).
We&rsquo;ve also defined that <a href="https://golang.org/src/cmd/vet/README">additions to vet</a>
must target problems that happen frequently in real programs.
I&rsquo;d like to see that kind of analysis of actual practice—examining
the effects on and possible improvements to real programs—become a
standard way we discuss and evaluate changes to Go.</p>
<p class=pp>Right now there&rsquo;s not an agreed-upon representative corpus of code to use for
those analyses: everyone must first create their own, which is too much work.
I&rsquo;d like to put together a single, self-contained Git repo people can check out that
contains our official baseline corpus for those analyses.
A possible starting point could be the top 100 Go language repos
on GitHub by stars or forks or both.</p>
<h2><a name="vet"></a>Automatic vet</h2>
<p class=lp>The Go distribution ships with this powerful tool,
<a href="https://golang.org/cmd/vet/"><code>go</code> <code>vet</code></a>,
that points out correctness bugs.
We have a high bar for checks, so that when vet speaks, you should listen.
But everyone has to remember to run it.
It would be better if you didn’t have to remember.
In particular, I think we could probably run vet
in parallel with the final compile and link of the test binary
during <code>go</code> <code>test</code> without slowing the compile-edit-test cycle at all.
If we can do that, and if we limit the enabled vet checks to a subset
that is essentially 100% accurate,
we can make passing vet a precondition for running a test at all.
Then developers don’t need to remember to run <code>go</code> <code>vet</code>.
They run <code>go</code> <code>test</code>,
and once in a while vet speaks up with something important
and avoids a debugging session.
<a href="https://golang.org/issue/18084">#18084</a>,
<a href="https://golang.org/issue/18085">#18085</a>.</p>
<h2><a name="error"></a>Errors &amp; best practices</h2>
<p class=lp>Part of the intended contract for error reporting in Go is that functions
include relevant available context, including the operation being attempted
(such as the function name and its arguments).
For example, this program:</p>
<pre><code>err := os.Remove(&quot;/tmp/nonexist&quot;)
fmt.Println(err)
</code></pre>
<p class=lp>prints this output:</p>
<pre><code>remove /tmp/nonexist: no such file or directory
</code></pre>
<p class=lp>Not enough Go code adds context like <code>os.Remove</code> does. Too much code does only</p>
<pre><code>if err != nil {
return err
}
</code></pre>
<p class=lp>all the way up the call stack,
discarding useful context that should be reported
(like <code>remove</code> <code>/tmp/nonexist:</code> above).
I would like to try to understand whether our expectations
for including context are wrong, or if there is something
we can do to make it easier to write code that returns better errors.</p>
<p class=pp>There are also various discussions in the community about
agreed-upon interfaces for stripping error context.
I would like to try to understand when that makes sense and
whether we should adopt an official recommendation.</p>
<h2><a name="context"></a>Context &amp; best practices</h2>
<p class=lp>We added the new <a href="https://golang.org/pkg/context/">context package</a>
in Go 1.7 for holding request-scoped information like
<a href="https://blog.golang.org/context">timeouts, cancellation state, and credentials</a>.
An individual context is immutable (like an individual string or int):
it is only possible to derive a new, updated context and
pass that context explicitly further down the call stack or
(less commonly) back up to the caller.
The context is now carried through APIs such as
<a href="https://golang.org/pkg/database/sql">database/sql</a>
and
<a href="https://golang.org/pkg/net/http">net/http</a>,
mainly so that those can stop processing a request when the caller
is no longer interested in the result.
Timeout information is appropriate to carry in a context,
but—to use a <a href="https://golang.org/issue/18284">real example we removed</a>—database options
are not, because they are unlikely to apply equally well to all possible
database operations carried out during a request.
What about the current clock source, or logging sink?
Is either of those appropriate to store in a context?
I would like to try to understand and characterize the
criteria for what is and is not an appropriate use of context.</p>
<h2><a name="memory"></a>Memory model</h2>
<p class=lp>Go’s <a href="https://golang.org/ref/mem">memory model</a> is intentionally low-key,
making few promises to users, compared to other languages.
In fact it starts by discouraging people from reading the rest of the document.
At the same time, it demands more of the compiler than other languages:
in particular, a race on an integer value is not sufficient license
for your program to misbehave in arbitrary ways.
But there are some complete gaps, in particular no mention of
the <a href="https://golang.org/pkg/sync/atomic/">sync/atomic package</a>.
I think the core compiler and runtime developers all agree
that the behavior of those atomics should be roughly the same as
C++ seq_cst atomics or Java volatiles,
but we still need to write that down carefully in the memory model,
and probably also in a long blog post.
<a href="https://golang.org/issue/5045">#5045</a>,
<a href="https://golang.org/issue/7948">#7948</a>,
<a href="https://golang.org/issue/9442">#9442</a>.</p>
<h2><a name="immutability"></a>Immutability</h2>
<p class=lp>The <a href="https://golang.org/doc/articles/race_detector.html">race detector</a>
is one of Go’s most loved features.
But not having races would be even better.
I would love it if there were some reasonable way to integrate
<a href="https://www.google.com/search?q=%22reference+immutability%22">reference immutability</a> into Go,
so that programmers can make clear, checked assertions about what can and cannot
be written and thereby eliminate certain races at compile time.
Go already has one immutable type, <code>string</code>; it would
be nice to retroactively define that
<code>string</code> is a named type (or type alias) for <code>immutable</code> <code>[]byte</code>.
I don’t think that will happen this year,
but I’d like to understand the solution space better.
Javari, Midori, Pony, and Rust have all staked out interesting points
in the solution space, and there are plenty of research papers
beyond those.</p>
<p class=pp>In the long-term, if we could statically eliminate the possibility of races,
that would eliminate the need for most of the memory model.
That may well be an impossible dream,
but again Id like to understand the solution space better.</p>
<h2><a name="generics"></a>Generics</h2>
<p class=lp>Nothing sparks more <a href="https://research.swtch.com/dogma">heated arguments</a>
among Go and non-Go developers than the question of whether Go should
have support for generics (or how many years ago that should have happened).
I don’t believe the Go team has ever said “Go does not need generics.”
What we <em>have</em> said is that there are higher-priority issues facing Go.
For example, I believe that better support for package management
would have a much larger immediate positive impact on most Go developers
than adding generics.
But we do certainly understand that for a certain subset of Go use cases,
the lack of parametric polymorphism is a significant hindrance.</p>
<p class=pp>Personally, I would like to be able to write general channel-processing
functions like:</p>
<pre><code>// Join makes all messages received on the input channels
// available for receiving from the returned channel.
func Join(inputs ...&lt;-chan T) &lt;-chan T
// Dup duplicates messages received on c to both c1 and c2.
func Dup(c &lt;-chan T) (c1, c2 &lt;-chan T)
</code></pre>
<p class=lp>I would also like to be able to write
Go support for high-level data processing abstractions,
analogous to
<a href="https://research.google.com/pubs/archive/35650.pdf">FlumeJava</a> or
C#’s <a href="https://en.wikipedia.org/wiki/Language_Integrated_Query">LINQ</a>,
in a way that catches type errors at compile time instead of at run time.
There are also any number of data structures or generic algorithms
that might be written,
but I personally find these broader applications more compelling.</p>
<p class=pp>We’ve <a href="https://research.swtch.com/generic">struggled</a> off and on
<a href="https://golang.org/design/15292-generics">for years</a>
to find the right way to add generics to Go.
At least a few of the past proposals got hung up on trying to design
something that provided both general parametric polymorphism
(like <code>chan</code> <code>T</code>) and also a unification of <code>string</code> and <code>[]byte</code>.
If the latter is handled by parameterization over immutability,
as described in the previous section, then maybe that simplifies
the demands on a design for generics.</p>
<p class=pp>When I first started thinking about generics for Go in 2008,
the main examples to learn from were C#, Java, Haskell, and ML.
None of the approaches in those languages seemed like a
perfect fit for Go.
Today, there are newer attempts to learn from as well,
including Dart, Midori, Rust, and Swift.</p>
<p class=pp>It’s been a few years since we ventured out and explored the design space.
It is probably time to look around again,
especially in light of the insight about mutability and
the additional examples set by newer languages.
I don’t think generics will happen this year,
but I’d like to be able to say I understand the solution space better.
</div>
<div id="disqus_thread"></div>
<script>
var disqus_config = function () {
this.page.url = "https://research.swtch.com/go2017";
this.page.identifier = "blog/go2017";
};
(function() {
var d = document, s = d.createElement('script');
s.src = '//swtch.disqus.com/embed.js';
s.setAttribute('data-timestamp', +new Date());
(d.head || d.body).appendChild(s);
})();
</script>
<noscript>Please enable JavaScript to view the <a href="https://disqus.com/?ref_noscript">comments powered by Disqus.</a></noscript>
</div>
</div>
<script type="text/javascript">
var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));
</script>
<script type="text/javascript">
var pageTracker = _gat._getTracker("UA-3319603-2");
pageTracker._initData();
pageTracker._trackPageview();
</script>
</body>
</html>

View file

@ -0,0 +1,120 @@
<?xml version="1.0"?>
<catalog>
<book id="bk101">
<author>Gambardella, Matthew</author>
<title>XML Developer's Guide</title>
<genre>Computer</genre>
<price>44.95</price>
<publish_date>2000-10-01</publish_date>
<description>An in-depth look at creating applications
with XML.</description>
</book>
<book id="bk102">
<author>Ralls, Kim</author>
<title>Midnight Rain</title>
<genre>Fantasy</genre>
<price>5.95</price>
<publish_date>2000-12-16</publish_date>
<description>A former architect battles corporate zombies,
an evil sorceress, and her own childhood to become queen
of the world.</description>
</book>
<book id="bk103">
<author>Corets, Eva</author>
<title>Maeve Ascendant</title>
<genre>Fantasy</genre>
<price>5.95</price>
<publish_date>2000-11-17</publish_date>
<description>After the collapse of a nanotechnology
society in England, the young survivors lay the
foundation for a new society.</description>
</book>
<book id="bk104">
<author>Corets, Eva</author>
<title>Oberon's Legacy</title>
<genre>Fantasy</genre>
<price>5.95</price>
<publish_date>2001-03-10</publish_date>
<description>In post-apocalypse England, the mysterious
agent known only as Oberon helps to create a new life
for the inhabitants of London. Sequel to Maeve
Ascendant.</description>
</book>
<book id="bk105">
<author>Corets, Eva</author>
<title>The Sundered Grail</title>
<genre>Fantasy</genre>
<price>5.95</price>
<publish_date>2001-09-10</publish_date>
<description>The two daughters of Maeve, half-sisters,
battle one another for control of England. Sequel to
Oberon's Legacy.</description>
</book>
<book id="bk106">
<author>Randall, Cynthia</author>
<title>Lover Birds</title>
<genre>Romance</genre>
<price>4.95</price>
<publish_date>2000-09-02</publish_date>
<description>When Carla meets Paul at an ornithology
conference, tempers fly as feathers get ruffled.</description>
</book>
<book id="bk107">
<author>Thurman, Paula</author>
<title>Splish Splash</title>
<genre>Romance</genre>
<price>4.95</price>
<publish_date>2000-11-02</publish_date>
<description>A deep sea diver finds true love twenty
thousand leagues beneath the sea.</description>
</book>
<book id="bk108">
<author>Knorr, Stefan</author>
<title>Creepy Crawlies</title>
<genre>Horror</genre>
<price>4.95</price>
<publish_date>2000-12-06</publish_date>
<description>An anthology of horror stories about roaches,
centipedes, scorpions and other insects.</description>
</book>
<book id="bk109">
<author>Kress, Peter</author>
<title>Paradox Lost</title>
<genre>Science Fiction</genre>
<price>6.95</price>
<publish_date>2000-11-02</publish_date>
      <description>After an inadvertent trip through a Heisenberg
Uncertainty Device, James Salway discovers the problems
of being quantum.</description>
</book>
<book id="bk110">
<author>O'Brien, Tim</author>
<title>Microsoft .NET: The Programming Bible</title>
<genre>Computer</genre>
<price>36.95</price>
<publish_date>2000-12-09</publish_date>
<description>Microsoft's .NET initiative is explored in
detail in this deep programmer's reference.</description>
</book>
<book id="bk111">
<author>O'Brien, Tim</author>
<title>MSXML3: A Comprehensive Guide</title>
<genre>Computer</genre>
<price>36.95</price>
<publish_date>2000-12-01</publish_date>
<description>The Microsoft MSXML3 parser is covered in
detail, with attention to XML DOM interfaces, XSLT processing,
SAX and more.</description>
</book>
<book id="bk112">
<author>Galos, Mike</author>
<title>Visual Studio 7: A Comprehensive Guide</title>
<genre>Computer</genre>
<price>49.95</price>
<publish_date>2001-04-16</publish_date>
<description>Microsoft Visual Studio 7 is explored in depth,
looking at how Visual Basic, Visual C++, C#, and ASP+ are
integrated into a comprehensive development
environment.</description>
</book>
</catalog>

File diff suppressed because it is too large

View file

@ -0,0 +1,42 @@
<?xml version="1.0"?>
<?xml-stylesheet href="catalog.xsl" type="text/xsl"?>
<!DOCTYPE catalog SYSTEM "catalog.dtd">
<catalog>
<product description="Cardigan Sweater" product_image="cardigan.jpg">
<catalog_item gender="Men's">
<item_number>QWZ5671</item_number>
<price>39.95</price>
<size description="Medium">
<color_swatch image="red_cardigan.jpg">Red</color_swatch>
<color_swatch image="burgundy_cardigan.jpg">Burgundy</color_swatch>
</size>
<size description="Large">
<color_swatch image="red_cardigan.jpg">Red</color_swatch>
<color_swatch image="burgundy_cardigan.jpg">Burgundy</color_swatch>
</size>
</catalog_item>
<catalog_item gender="Women's">
<item_number>RRX9856</item_number>
<price>42.50</price>
<size description="Small">
<color_swatch image="red_cardigan.jpg">Red</color_swatch>
<color_swatch image="navy_cardigan.jpg">Navy</color_swatch>
<color_swatch image="burgundy_cardigan.jpg">Burgundy</color_swatch>
</size>
<size description="Medium">
<color_swatch image="red_cardigan.jpg">Red</color_swatch>
<color_swatch image="navy_cardigan.jpg">Navy</color_swatch>
<color_swatch image="burgundy_cardigan.jpg">Burgundy</color_swatch>
<color_swatch image="black_cardigan.jpg">Black</color_swatch>
</size>
<size description="Large">
<color_swatch image="navy_cardigan.jpg">Navy</color_swatch>
<color_swatch image="black_cardigan.jpg">Black</color_swatch>
</size>
<size description="Extra Large">
<color_swatch image="burgundy_cardigan.jpg">Burgundy</color_swatch>
<color_swatch image="black_cardigan.jpg">Black</color_swatch>
</size>
</catalog_item>
</product>
</catalog>

View file

@ -0,0 +1,140 @@
// doT.js
// 2011-2014, Laura Doktorova, https://github.com/olado/doT
// Licensed under the MIT license.
(function() {
"use strict";
var doT = {
version: "1.0.3",
templateSettings: {
evaluate: /\{\{([\s\S]+?(\}?)+)\}\}/g,
interpolate: /\{\{=([\s\S]+?)\}\}/g,
encode: /\{\{!([\s\S]+?)\}\}/g,
use: /\{\{#([\s\S]+?)\}\}/g,
useParams: /(^|[^\w$])def(?:\.|\[[\'\"])([\w$\.]+)(?:[\'\"]\])?\s*\:\s*([\w$\.]+|\"[^\"]+\"|\'[^\']+\'|\{[^\}]+\})/g,
define: /\{\{##\s*([\w\.$]+)\s*(\:|=)([\s\S]+?)#\}\}/g,
defineParams:/^\s*([\w$]+):([\s\S]+)/,
conditional: /\{\{\?(\?)?\s*([\s\S]*?)\s*\}\}/g,
iterate: /\{\{~\s*(?:\}\}|([\s\S]+?)\s*\:\s*([\w$]+)\s*(?:\:\s*([\w$]+))?\s*\}\})/g,
varname: "it",
strip: true,
append: true,
selfcontained: false,
doNotSkipEncoded: false
},
template: undefined, //fn, compile template
compile: undefined //fn, for express
}, _globals;
doT.encodeHTMLSource = function(doNotSkipEncoded) {
var encodeHTMLRules = { "&": "&#38;", "<": "&#60;", ">": "&#62;", '"': "&#34;", "'": "&#39;", "/": "&#47;" },
matchHTML = doNotSkipEncoded ? /[&<>\/]/g : /&(?!#?\w+;)|<|>|\//g;
return function(code) {
return code ? code.toString().replace(matchHTML, function(m) {return encodeHTMLRules[m] || m;}) : "";
};
};
_globals = (function(){ return this || (0,eval)("this"); }());
if (typeof module !== "undefined" && module.exports) {
module.exports = doT;
} else if (typeof define === "function" && define.amd) {
define(function(){return doT;});
} else {
_globals.doT = doT;
}
var startend = {
append: { start: "'+(", end: ")+'", startencode: "'+encodeHTML(" },
split: { start: "';out+=(", end: ");out+='", startencode: "';out+=encodeHTML(" }
}, skip = /$^/;
function resolveDefs(c, block, def) {
return ((typeof block === "string") ? block : block.toString())
.replace(c.define || skip, function(m, code, assign, value) {
if (code.indexOf("def.") === 0) {
code = code.substring(4);
}
if (!(code in def)) {
if (assign === ":") {
if (c.defineParams) value.replace(c.defineParams, function(m, param, v) {
def[code] = {arg: param, text: v};
});
if (!(code in def)) def[code]= value;
} else {
new Function("def", "def['"+code+"']=" + value)(def);
}
}
return "";
})
.replace(c.use || skip, function(m, code) {
if (c.useParams) code = code.replace(c.useParams, function(m, s, d, param) {
if (def[d] && def[d].arg && param) {
var rw = (d+":"+param).replace(/'|\\/g, "_");
def.__exp = def.__exp || {};
def.__exp[rw] = def[d].text.replace(new RegExp("(^|[^\\w$])" + def[d].arg + "([^\\w$])", "g"), "$1" + param + "$2");
return s + "def.__exp['"+rw+"']";
}
});
var v = new Function("def", "return " + code)(def);
return v ? resolveDefs(c, v, def) : v;
});
}
function unescape(code) {
return code.replace(/\\('|\\)/g, "$1").replace(/[\r\t\n]/g, " ");
}
doT.template = function(tmpl, c, def) {
c = c || doT.templateSettings;
var cse = c.append ? startend.append : startend.split, needhtmlencode, sid = 0, indv,
str = (c.use || c.define) ? resolveDefs(c, tmpl, def || {}) : tmpl;
str = ("var out='" + (c.strip ? str.replace(/(^|\r|\n)\t* +| +\t*(\r|\n|$)/g," ")
.replace(/\r|\n|\t|\/\*[\s\S]*?\*\//g,""): str)
.replace(/'|\\/g, "\\$&")
.replace(c.interpolate || skip, function(m, code) {
return cse.start + unescape(code) + cse.end;
})
.replace(c.encode || skip, function(m, code) {
needhtmlencode = true;
return cse.startencode + unescape(code) + cse.end;
})
.replace(c.conditional || skip, function(m, elsecase, code) {
return elsecase ?
(code ? "';}else if(" + unescape(code) + "){out+='" : "';}else{out+='") :
(code ? "';if(" + unescape(code) + "){out+='" : "';}out+='");
})
.replace(c.iterate || skip, function(m, iterate, vname, iname) {
if (!iterate) return "';} } out+='";
sid+=1; indv=iname || "i"+sid; iterate=unescape(iterate);
return "';var arr"+sid+"="+iterate+";if(arr"+sid+"){var "+vname+","+indv+"=-1,l"+sid+"=arr"+sid+".length-1;while("+indv+"<l"+sid+"){"
+vname+"=arr"+sid+"["+indv+"+=1];out+='";
})
.replace(c.evaluate || skip, function(m, code) {
return "';" + unescape(code) + "out+='";
})
+ "';return out;")
.replace(/\n/g, "\\n").replace(/\t/g, '\\t').replace(/\r/g, "\\r")
.replace(/(\s|;|\}|^|\{)out\+='';/g, '$1').replace(/\+''/g, "");
//.replace(/(\s|;|\}|^|\{)out\+=''\+/g,'$1out+=');
if (needhtmlencode) {
if (!c.selfcontained && _globals && !_globals._encodeHTML) _globals._encodeHTML = doT.encodeHTMLSource(c.doNotSkipEncoded);
str = "var encodeHTML = typeof _encodeHTML !== 'undefined' ? _encodeHTML : ("
+ doT.encodeHTMLSource.toString() + "(" + (c.doNotSkipEncoded || '') + "));"
+ str;
}
try {
return new Function(c.varname, str);
} catch (e) {
if (typeof console !== "undefined") console.log("Could not create a template function: " + str);
throw e;
}
};
doT.compile = function(tmpl, def) {
return doT.template(tmpl, null, def);
};
}());

File diff suppressed because it is too large

View file

@ -0,0 +1,68 @@
<?xml version="1.0" encoding="utf-8"?>
<!-- Generator: Adobe Illustrator 15.0.0, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg version="1.1" id="レイヤー_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px"
y="0px" width="401.98px" height="559.472px" viewBox="0 0 401.98 559.472" enable-background="new 0 0 401.98 559.472"
xml:space="preserve">
<path fill-rule="evenodd" clip-rule="evenodd" fill="#F6D2A2" stroke="#000000" stroke-width="3" stroke-linecap="round" d="
M10.634,300.493c0.764,15.751,16.499,8.463,23.626,3.539c6.765-4.675,8.743-0.789,9.337-10.015
c0.389-6.064,1.088-12.128,0.744-18.216c-10.23-0.927-21.357,1.509-29.744,7.602C10.277,286.542,2.177,296.561,10.634,300.493"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#C6B198" stroke="#000000" stroke-width="3" stroke-linecap="round" d="
M10.634,300.493c2.29-0.852,4.717-1.457,6.271-3.528"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#6AD7E5" stroke="#000000" stroke-width="3" stroke-linecap="round" d="
M46.997,112.853C-13.3,95.897,31.536,19.189,79.956,50.74L46.997,112.853z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#6AD7E5" stroke="#000000" stroke-width="3" stroke-linecap="round" d="
M314.895,44.984c47.727-33.523,90.856,42.111,35.388,61.141L314.895,44.984z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#F6D2A2" stroke="#000000" stroke-width="3" stroke-linecap="round" d="
M325.161,494.343c12.123,7.501,34.282,30.182,16.096,41.18c-17.474,15.999-27.254-17.561-42.591-22.211
C305.271,504.342,313.643,496.163,325.161,494.343z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="none" stroke="#000000" stroke-width="3" stroke-linecap="round" d="
M341.257,535.522c-2.696-5.361-3.601-11.618-8.102-15.939"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#F6D2A2" stroke="#000000" stroke-width="3" stroke-linecap="round" d="
M108.579,519.975c-14.229,2.202-22.238,15.039-34.1,21.558c-11.178,6.665-15.454-2.134-16.461-3.92
c-1.752-0.799-1.605,0.744-4.309-1.979c-10.362-16.354,10.797-28.308,21.815-36.432C90.87,496.1,100.487,509.404,108.579,519.975z"
/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="none" stroke="#000000" stroke-width="3" stroke-linecap="round" d="
M58.019,537.612c0.542-6.233,5.484-10.407,7.838-15.677"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M49.513,91.667c-7.955-4.208-13.791-9.923-8.925-19.124
c4.505-8.518,12.874-7.593,20.83-3.385L49.513,91.667z"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M337.716,83.667c7.955-4.208,13.791-9.923,8.925-19.124
c-4.505-8.518-12.874-7.593-20.83-3.385L337.716,83.667z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#F6D2A2" stroke="#000000" stroke-width="3" stroke-linecap="round" d="
M392.475,298.493c-0.764,15.751-16.499,8.463-23.626,3.539c-6.765-4.675-8.743-0.789-9.337-10.015
c-0.389-6.064-1.088-12.128-0.744-18.216c10.23-0.927,21.357,1.509,29.744,7.602C392.831,284.542,400.932,294.561,392.475,298.493"
/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#C6B198" stroke="#000000" stroke-width="3" stroke-linecap="round" d="
M392.475,298.493c-2.29-0.852-4.717-1.457-6.271-3.528"/>
<g>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#6AD7E5" stroke="#000000" stroke-width="3" stroke-linecap="round" d="
M195.512,13.124c60.365,0,116.953,8.633,146.452,66.629c26.478,65.006,17.062,135.104,21.1,203.806
c3.468,58.992,11.157,127.145-16.21,181.812c-28.79,57.514-100.73,71.982-160,69.863c-46.555-1.666-102.794-16.854-129.069-59.389
c-30.826-49.9-16.232-124.098-13.993-179.622c2.652-65.771-17.815-131.742,3.792-196.101
C69.999,33.359,130.451,18.271,195.512,13.124"/>
</g>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" stroke="#000000" stroke-width="2.9081" stroke-linecap="round" d="
M206.169,94.16c10.838,63.003,113.822,46.345,99.03-17.197C291.935,19.983,202.567,35.755,206.169,94.16"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" stroke="#000000" stroke-width="2.8214" stroke-linecap="round" d="
M83.103,104.35c14.047,54.85,101.864,40.807,98.554-14.213C177.691,24.242,69.673,36.957,83.103,104.35"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" stroke="#000000" stroke-width="3" stroke-linecap="round" d="
M218.594,169.762c0.046,8.191,1.861,17.387,0.312,26.101c-2.091,3.952-6.193,4.37-9.729,5.967c-4.89-0.767-9.002-3.978-10.963-8.552
c-1.255-9.946,0.468-19.576,0.785-29.526L218.594,169.762z"/>
<g>
<ellipse fill-rule="evenodd" clip-rule="evenodd" cx="107.324" cy="95.404" rx="14.829" ry="16.062"/>
<ellipse fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" cx="114.069" cy="99.029" rx="3.496" ry="4.082"/>
</g>
<g>
<ellipse fill-rule="evenodd" clip-rule="evenodd" cx="231.571" cy="91.404" rx="14.582" ry="16.062"/>
<ellipse fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" cx="238.204" cy="95.029" rx="3.438" ry="4.082"/>
</g>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" stroke="#000000" stroke-width="3" stroke-linecap="round" d="
M176.217,168.87c-6.47,15.68,3.608,47.035,21.163,23.908c-1.255-9.946,0.468-19.576,0.785-29.526L176.217,168.87z"/>
<g>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#F6D2A2" stroke="#231F20" stroke-width="3" stroke-linecap="round" d="
M178.431,138.673c-12.059,1.028-21.916,15.366-15.646,26.709c8.303,15.024,26.836-1.329,38.379,0.203
c13.285,0.272,24.17,14.047,34.84,2.49c11.867-12.854-5.109-25.373-18.377-30.97L178.431,138.673z"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M176.913,138.045c-0.893-20.891,38.938-23.503,43.642-6.016
C225.247,149.475,178.874,153.527,176.913,138.045C175.348,125.682,176.913,138.045,176.913,138.045z"/>
</g>
</svg>


@@ -0,0 +1,52 @@
[{
"created_at": "Thu Jun 22 21:00:00 +0000 2017",
"id": 877994604561387500,
"id_str": "877994604561387520",
"text": "Creating a Grocery List Manager Using Angular, Part 1: Add &amp; Display Items https://t.co/xFox78juL1 #Angular",
"truncated": false,
"entities": {
"hashtags": [{
"text": "Angular",
"indices": [103, 111]
}],
"symbols": [],
"user_mentions": [],
"urls": [{
"url": "https://t.co/xFox78juL1",
"expanded_url": "http://buff.ly/2sr60pf",
"display_url": "buff.ly/2sr60pf",
"indices": [79, 102]
}]
},
"source": "<a href=\"http://bufferapp.com\" rel=\"nofollow\">Buffer</a>",
"user": {
"id": 772682964,
"id_str": "772682964",
"name": "SitePoint JavaScript",
"screen_name": "SitePointJS",
"location": "Melbourne, Australia",
"description": "Keep up with JavaScript tutorials, tips, tricks and articles at SitePoint.",
"url": "http://t.co/cCH13gqeUK",
"entities": {
"url": {
"urls": [{
"url": "http://t.co/cCH13gqeUK",
"expanded_url": "http://sitepoint.com/javascript",
"display_url": "sitepoint.com/javascript",
"indices": [0, 22]
}]
},
"description": {
"urls": []
}
},
"protected": false,
"followers_count": 2145,
"friends_count": 18,
"listed_count": 328,
"created_at": "Wed Aug 22 02:06:33 +0000 2012",
"favourites_count": 57,
"utc_offset": 43200,
"time_zone": "Wellington"
}
}]


@@ -0,0 +1,33 @@
package benchmarks
import (
"testing"
"github.com/tdewolff/minify/svg"
)
var svgSamples = []string{
"sample_arctic.svg",
"sample_gopher.svg",
"sample_usa.svg",
}
func init() {
for _, sample := range svgSamples {
load(sample)
}
}
func BenchmarkSVG(b *testing.B) {
for _, sample := range svgSamples {
b.Run(sample, func(b *testing.B) {
b.SetBytes(int64(r[sample].Len()))
for i := 0; i < b.N; i++ {
r[sample].Reset()
w[sample].Reset()
svg.Minify(m, w[sample], r[sample], nil)
}
})
}
}


@@ -0,0 +1,33 @@
package benchmarks
import (
"testing"
"github.com/tdewolff/minify/xml"
)
var xmlSamples = []string{
"sample_books.xml",
"sample_catalog.xml",
"sample_omg.xml",
}
func init() {
for _, sample := range xmlSamples {
load(sample)
}
}
func BenchmarkXML(b *testing.B) {
for _, sample := range xmlSamples {
b.Run(sample, func(b *testing.B) {
b.SetBytes(int64(r[sample].Len()))
for i := 0; i < b.N; i++ {
r[sample].Reset()
w[sample].Reset()
xml.Minify(m, w[sample], r[sample], nil)
}
})
}
}

149
vendor/github.com/tdewolff/minify/cmd/minify/README.md generated vendored Normal file

@@ -0,0 +1,149 @@
# Minify [![Join the chat at https://gitter.im/tdewolff/minify](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/tdewolff/minify?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
**[Download binaries](https://github.com/tdewolff/minify/releases) for Windows, Linux and macOS**
Minify is a CLI implementation of the minify [library package](https://github.com/tdewolff/minify).
## Installation
Make sure you have [Go](http://golang.org/) and [Git](http://git-scm.com/) installed.
Run the following command:
go get github.com/tdewolff/minify/cmd/minify
and the `minify` command will be in your `$GOPATH/bin`.
## Usage
Usage: minify [options] [input]
Options:
-a, --all
Minify all files, including hidden files and files in hidden directories
-l, --list
List all accepted filetypes
--match string
Filename pattern matching using regular expressions, see https://github.com/google/re2/wiki/Syntax
--mime string
Mimetype (text/css, application/javascript, ...), optional for input filenames, has precedence over -type
-o, --output string
Output file or directory (must have trailing slash), leave blank to use stdout
-r, --recursive
Recursively minify directories
--type string
Filetype (css, html, js, ...), optional for input filenames
-u, --update
Update binary
--url string
URL of file to enable URL minification
-v, --verbose
Verbose
-w, --watch
Watch files and minify upon changes
--css-decimals
Number of decimals to preserve in numbers, -1 is all
--html-keep-conditional-comments
Preserve all IE conditional comments
--html-keep-default-attrvals
Preserve default attribute values
--html-keep-document-tags
Preserve html, head and body tags
--html-keep-end-tags
Preserve all end tags
--html-keep-whitespace
Preserve whitespace characters but still collapse multiple into one
--svg-decimals
Number of decimals to preserve in numbers, -1 is all
--xml-keep-whitespace
Preserve whitespace characters but still collapse multiple into one
Input:
Files or directories, leave blank to use stdin
### Types
css text/css
htm text/html
html text/html
js text/javascript
json application/json
svg image/svg+xml
xml text/xml
## Examples
Minify **index.html** to **index-min.html**:
```sh
$ minify -o index-min.html index.html
```
Minify **index.html** to standard output (leave `-o` blank):
```sh
$ minify index.html
```
Normally the mimetype is inferred from the file extension; to set the mimetype explicitly:
```sh
$ minify --type=html -o index-min.tpl index.tpl
```
You need to set the type or the mimetype option when using standard input:
```sh
$ minify --mime=text/javascript < script.js > script-min.js
$ cat script.js | minify --type=js > script-min.js
```
### Directories
You can also give directories as input, and these directories can be minified recursively.
Minify files in the current working directory to **out/** (no subdirectories):
```sh
$ minify -o out/ .
```
Minify files recursively in **src/**:
```sh
$ minify -r -o out/ src
```
Minify only javascript files in **src/**:
```sh
$ minify -r -o out/ --match=\.js src
```
### Concatenate
When multiple inputs are given and the output is either standard output or a single file, the inputs are concatenated together.
Concatenate **one.css** and **two.css** into **style.css**:
```sh
$ minify -o style.css one.css two.css
```
Concatenate all files in **styles/** into **style.css**:
```sh
$ minify -o style.css styles
```
You can also concatenate files with `cat` on standard input and, for example, pipe the result through gzip:
```sh
$ cat one.css two.css three.css | minify --type=css | gzip -9 -c > style.css.gz
```
### Watching
To watch for file changes and re-minify automatically, use the `-w` or `--watch` option.
Minify **style.css** to itself and watch changes:
```sh
$ minify -w -o style.css style.css
```
Minify and concatenate **one.css** and **two.css** to **style.css** and watch changes:
```sh
$ minify -w -o style.css one.css two.css
```
Minify files in **src/** and subdirectories to **out/** and watch changes:
```sh
$ minify -w -r -o out/ src
```

648
vendor/github.com/tdewolff/minify/cmd/minify/main.go generated vendored Normal file

@@ -0,0 +1,648 @@
package main
import (
"bufio"
"fmt"
"io"
"io/ioutil"
"log"
"net/url"
"os"
"os/signal"
"path"
"path/filepath"
"regexp"
"runtime"
"sort"
"strings"
"sync/atomic"
"time"
humanize "github.com/dustin/go-humanize"
"github.com/matryer/try"
flag "github.com/spf13/pflag"
min "github.com/tdewolff/minify"
"github.com/tdewolff/minify/css"
"github.com/tdewolff/minify/html"
"github.com/tdewolff/minify/js"
"github.com/tdewolff/minify/json"
"github.com/tdewolff/minify/svg"
"github.com/tdewolff/minify/xml"
)
var Version = "master"
var Commit = ""
var Date = ""
var filetypeMime = map[string]string{
"css": "text/css",
"htm": "text/html",
"html": "text/html",
"js": "text/javascript",
"json": "application/json",
"svg": "image/svg+xml",
"xml": "text/xml",
}
var (
hidden bool
list bool
m *min.M
pattern *regexp.Regexp
recursive bool
verbose bool
version bool
watch bool
)
type task struct {
srcs []string
srcDir string
dst string
}
var (
Error *log.Logger
Info *log.Logger
)
func main() {
output := ""
mimetype := ""
filetype := ""
match := ""
siteurl := ""
cssMinifier := &css.Minifier{}
htmlMinifier := &html.Minifier{}
jsMinifier := &js.Minifier{}
jsonMinifier := &json.Minifier{}
svgMinifier := &svg.Minifier{}
xmlMinifier := &xml.Minifier{}
flag.Usage = func() {
fmt.Fprintf(os.Stderr, "Usage: %s [options] [input]\n\nOptions:\n", os.Args[0])
flag.PrintDefaults()
fmt.Fprintf(os.Stderr, "\nInput:\n Files or directories, leave blank to use stdin\n")
}
flag.StringVarP(&output, "output", "o", "", "Output file or directory (must have trailing slash), leave blank to use stdout")
flag.StringVar(&mimetype, "mime", "", "Mimetype (text/css, application/javascript, ...), optional for input filenames, has precedence over -type")
flag.StringVar(&filetype, "type", "", "Filetype (css, html, js, ...), optional for input filenames")
flag.StringVar(&match, "match", "", "Filename pattern matching using regular expressions, see https://github.com/google/re2/wiki/Syntax")
flag.BoolVarP(&recursive, "recursive", "r", false, "Recursively minify directories")
flag.BoolVarP(&hidden, "all", "a", false, "Minify all files, including hidden files and files in hidden directories")
flag.BoolVarP(&list, "list", "l", false, "List all accepted filetypes")
flag.BoolVarP(&verbose, "verbose", "v", false, "Verbose")
flag.BoolVarP(&watch, "watch", "w", false, "Watch files and minify upon changes")
flag.BoolVarP(&version, "version", "", false, "Version")
flag.StringVar(&siteurl, "url", "", "URL of file to enable URL minification")
flag.IntVar(&cssMinifier.Decimals, "css-decimals", -1, "Number of decimals to preserve in numbers, -1 is all")
flag.BoolVar(&htmlMinifier.KeepConditionalComments, "html-keep-conditional-comments", false, "Preserve all IE conditional comments")
flag.BoolVar(&htmlMinifier.KeepDefaultAttrVals, "html-keep-default-attrvals", false, "Preserve default attribute values")
flag.BoolVar(&htmlMinifier.KeepDocumentTags, "html-keep-document-tags", false, "Preserve html, head and body tags")
flag.BoolVar(&htmlMinifier.KeepEndTags, "html-keep-end-tags", false, "Preserve all end tags")
flag.BoolVar(&htmlMinifier.KeepWhitespace, "html-keep-whitespace", false, "Preserve whitespace characters but still collapse multiple into one")
flag.IntVar(&svgMinifier.Decimals, "svg-decimals", -1, "Number of decimals to preserve in numbers, -1 is all")
flag.BoolVar(&xmlMinifier.KeepWhitespace, "xml-keep-whitespace", false, "Preserve whitespace characters but still collapse multiple into one")
flag.Parse()
rawInputs := flag.Args()
Error = log.New(os.Stderr, "ERROR: ", 0)
if verbose {
Info = log.New(os.Stderr, "INFO: ", 0)
} else {
Info = log.New(ioutil.Discard, "INFO: ", 0)
}
if version {
if Version == "devel" {
fmt.Printf("minify version devel+%.7s %s\n", Commit, Date)
} else {
fmt.Printf("minify version %s\n", Version)
}
return
}
if list {
var keys []string
for k := range filetypeMime {
keys = append(keys, k)
}
sort.Strings(keys)
for _, k := range keys {
fmt.Println(k + "\t" + filetypeMime[k])
}
return
}
useStdin := len(rawInputs) == 0
mimetype = getMimetype(mimetype, filetype, useStdin)
var err error
if match != "" {
pattern, err = regexp.Compile(match)
if err != nil {
Error.Fatalln(err)
}
}
if watch && (useStdin || output == "") {
Error.Fatalln("watch doesn't work with stdin or stdout")
}
////////////////
dirDst := false
if output != "" {
output = sanitizePath(output)
if output[len(output)-1] == '/' {
dirDst = true
if err := os.MkdirAll(output, 0777); err != nil {
Error.Fatalln(err)
}
}
}
tasks, ok := expandInputs(rawInputs, dirDst)
if !ok {
os.Exit(1)
}
if ok = expandOutputs(output, &tasks); !ok {
os.Exit(1)
}
if len(tasks) == 0 {
tasks = append(tasks, task{[]string{""}, "", output}) // stdin
}
m = min.New()
m.Add("text/css", cssMinifier)
m.Add("text/html", htmlMinifier)
m.Add("text/javascript", jsMinifier)
m.Add("image/svg+xml", svgMinifier)
m.AddRegexp(regexp.MustCompile("[/+]json$"), jsonMinifier)
m.AddRegexp(regexp.MustCompile("[/+]xml$"), xmlMinifier)
if m.URL, err = url.Parse(siteurl); err != nil {
Error.Fatalln(err)
}
start := time.Now()
var fails int32
if verbose || len(tasks) == 1 {
for _, t := range tasks {
if ok := minify(mimetype, t); !ok {
fails++
}
}
} else {
numWorkers := 4
if n := runtime.NumCPU(); n > numWorkers {
numWorkers = n
}
sem := make(chan struct{}, numWorkers)
for _, t := range tasks {
sem <- struct{}{}
go func(t task) {
defer func() {
<-sem
}()
if ok := minify(mimetype, t); !ok {
atomic.AddInt32(&fails, 1)
}
}(t)
}
// wait for all jobs to be done
for i := 0; i < cap(sem); i++ {
sem <- struct{}{}
}
}
if watch {
var watcher *RecursiveWatcher
watcher, err = NewRecursiveWatcher(recursive)
if err != nil {
Error.Fatalln(err)
}
defer watcher.Close()
var watcherTasks = make(map[string]task, len(rawInputs))
for _, task := range tasks {
for _, src := range task.srcs {
watcherTasks[src] = task
watcher.AddPath(src)
}
}
c := make(chan os.Signal, 1)
signal.Notify(c, os.Interrupt)
skip := make(map[string]int)
changes := watcher.Run()
for changes != nil {
select {
case <-c:
watcher.Close()
case file, ok := <-changes:
if !ok {
changes = nil
break
}
file = sanitizePath(file)
if skip[file] > 0 {
skip[file]--
continue
}
var t task
if t, ok = watcherTasks[file]; ok {
if !verbose {
fmt.Fprintln(os.Stderr, file, "changed")
}
for _, src := range t.srcs {
if src == t.dst {
skip[file] = 2 // minify creates both a CREATE and WRITE on the file
break
}
}
if ok := minify(mimetype, t); !ok {
fails++
}
}
}
}
}
if verbose {
Info.Println(time.Since(start), "total")
}
if fails > 0 {
os.Exit(1)
}
}
func getMimetype(mimetype, filetype string, useStdin bool) string {
if mimetype == "" && filetype != "" {
var ok bool
if mimetype, ok = filetypeMime[filetype]; !ok {
Error.Fatalln("cannot find mimetype for filetype", filetype)
}
}
if mimetype == "" && useStdin {
Error.Fatalln("must specify mimetype or filetype for stdin")
}
if verbose {
if mimetype == "" {
Info.Println("infer mimetype from file extensions")
} else {
Info.Println("use mimetype", mimetype)
}
}
return mimetype
}
func sanitizePath(p string) string {
p = filepath.ToSlash(p)
isDir := p[len(p)-1] == '/'
p = path.Clean(p)
if isDir {
p += "/"
} else if info, err := os.Stat(p); err == nil && info.Mode().IsDir() {
p += "/"
}
return p
}
func validFile(info os.FileInfo) bool {
if info.Mode().IsRegular() && len(info.Name()) > 0 && (hidden || info.Name()[0] != '.') {
if pattern != nil && !pattern.MatchString(info.Name()) {
return false
}
ext := path.Ext(info.Name())
if len(ext) > 0 {
ext = ext[1:]
}
if _, ok := filetypeMime[ext]; !ok {
return false
}
return true
}
return false
}
func validDir(info os.FileInfo) bool {
return info.Mode().IsDir() && len(info.Name()) > 0 && (hidden || info.Name()[0] != '.')
}
func expandInputs(inputs []string, dirDst bool) ([]task, bool) {
ok := true
tasks := []task{}
for _, input := range inputs {
input = sanitizePath(input)
info, err := os.Stat(input)
if err != nil {
Error.Println(err)
ok = false
continue
}
if info.Mode().IsRegular() {
tasks = append(tasks, task{[]string{filepath.ToSlash(input)}, "", ""})
} else if info.Mode().IsDir() {
expandDir(input, &tasks, &ok)
} else {
Error.Println("not a file or directory", input)
ok = false
}
}
if len(tasks) > 1 && !dirDst {
// concatenate
tasks[0].srcDir = ""
for _, task := range tasks[1:] {
tasks[0].srcs = append(tasks[0].srcs, task.srcs[0])
}
tasks = tasks[:1]
}
if verbose && ok {
if len(inputs) == 0 {
Info.Println("minify from stdin")
} else if len(tasks) == 1 {
if len(tasks[0].srcs) > 1 {
Info.Println("minify and concatenate", len(tasks[0].srcs), "input files")
} else {
Info.Println("minify input file", tasks[0].srcs[0])
}
} else {
Info.Println("minify", len(tasks), "input files")
}
}
return tasks, ok
}
func expandDir(input string, tasks *[]task, ok *bool) {
if !recursive {
if verbose {
Info.Println("expanding directory", input)
}
infos, err := ioutil.ReadDir(input)
if err != nil {
Error.Println(err)
*ok = false
}
for _, info := range infos {
if validFile(info) {
*tasks = append(*tasks, task{[]string{path.Join(input, info.Name())}, input, ""})
}
}
} else {
if verbose {
Info.Println("expanding directory", input, "recursively")
}
err := filepath.Walk(input, func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
if validFile(info) {
*tasks = append(*tasks, task{[]string{filepath.ToSlash(path)}, input, ""})
} else if info.Mode().IsDir() && !validDir(info) && info.Name() != "." && info.Name() != ".." { // check for IsDir, so we don't skip the rest of the directory when we have an invalid file
return filepath.SkipDir
}
return nil
})
if err != nil {
Error.Println(err)
*ok = false
}
}
}
func expandOutputs(output string, tasks *[]task) bool {
if verbose {
if output == "" {
Info.Println("minify to stdout")
} else if output[len(output)-1] != '/' {
Info.Println("minify to output file", output)
} else if output == "./" {
Info.Println("minify to current working directory")
} else {
Info.Println("minify to output directory", output)
}
}
if output == "" {
return true
}
ok := true
for i, t := range *tasks {
var err error
(*tasks)[i].dst, err = getOutputFilename(output, t)
if err != nil {
Error.Println(err)
ok = false
}
}
return ok
}
func getOutputFilename(output string, t task) (string, error) {
if len(output) > 0 && output[len(output)-1] == '/' {
rel, err := filepath.Rel(t.srcDir, t.srcs[0])
if err != nil {
return "", err
}
return path.Clean(filepath.ToSlash(path.Join(output, rel))), nil
}
return output, nil
}
func openInputFile(input string) (*os.File, bool) {
var r *os.File
if input == "" {
r = os.Stdin
} else {
err := try.Do(func(attempt int) (bool, error) {
var err error
r, err = os.Open(input)
return attempt < 5, err
})
if err != nil {
Error.Println(err)
return nil, false
}
}
return r, true
}
func openOutputFile(output string) (*os.File, bool) {
var w *os.File
if output == "" {
w = os.Stdout
} else {
if err := os.MkdirAll(path.Dir(output), 0777); err != nil {
Error.Println(err)
return nil, false
}
err := try.Do(func(attempt int) (bool, error) {
var err error
w, err = os.OpenFile(output, os.O_WRONLY|os.O_TRUNC|os.O_CREATE, 0666)
return attempt < 5, err
})
if err != nil {
Error.Println(err)
return nil, false
}
}
return w, true
}
func minify(mimetype string, t task) bool {
if mimetype == "" {
for _, src := range t.srcs {
if len(path.Ext(src)) > 0 {
srcMimetype, ok := filetypeMime[path.Ext(src)[1:]]
if !ok {
Error.Println("cannot infer mimetype from extension in", src)
return false
}
if mimetype == "" {
mimetype = srcMimetype
} else if srcMimetype != mimetype {
Error.Println("inferred mimetype", srcMimetype, "of", src, "for concatenation unequal to previous mimetypes", mimetype)
return false
}
}
}
}
srcName := strings.Join(t.srcs, " + ")
if len(t.srcs) > 1 {
srcName = "(" + srcName + ")"
}
if srcName == "" {
srcName = "stdin"
}
dstName := t.dst
if dstName == "" {
dstName = "stdout"
} else {
// rename original when overwriting
for i := range t.srcs {
if t.srcs[i] == t.dst {
t.srcs[i] += ".bak"
err := try.Do(func(attempt int) (bool, error) {
err := os.Rename(t.dst, t.srcs[i])
return attempt < 5, err
})
if err != nil {
Error.Println(err)
return false
}
break
}
}
}
frs := make([]io.Reader, len(t.srcs))
for i, src := range t.srcs {
fr, ok := openInputFile(src)
if !ok {
for _, fr := range frs {
fr.(io.ReadCloser).Close()
}
return false
}
if i > 0 && mimetype == filetypeMime["js"] {
// prepend newline when concatenating JS files
frs[i] = NewPrependReader(fr, []byte("\n"))
} else {
frs[i] = fr
}
}
r := &countingReader{io.MultiReader(frs...), 0}
fw, ok := openOutputFile(t.dst)
if !ok {
for _, fr := range frs {
fr.(io.ReadCloser).Close()
}
return false
}
var w *countingWriter
if fw == os.Stdout {
w = &countingWriter{fw, 0}
} else {
w = &countingWriter{bufio.NewWriter(fw), 0}
}
success := true
startTime := time.Now()
err := m.Minify(mimetype, w, r)
if err != nil {
Error.Println("cannot minify "+srcName+":", err)
success = false
}
if verbose {
dur := time.Since(startTime)
speed := "Inf MB"
if dur > 0 {
speed = humanize.Bytes(uint64(float64(r.N) / dur.Seconds()))
}
ratio := 1.0
if r.N > 0 {
ratio = float64(w.N) / float64(r.N)
}
stats := fmt.Sprintf("(%9v, %6v, %5.1f%%, %6v/s)", dur, humanize.Bytes(uint64(w.N)), ratio*100, speed)
if srcName != dstName {
Info.Println(stats, "-", srcName, "to", dstName)
} else {
Info.Println(stats, "-", srcName)
}
}
for _, fr := range frs {
fr.(io.ReadCloser).Close()
}
if bw, ok := w.Writer.(*bufio.Writer); ok {
bw.Flush()
}
fw.Close()
// remove original that was renamed, when overwriting files
for i := range t.srcs {
if t.srcs[i] == t.dst+".bak" {
if err == nil {
if err = os.Remove(t.srcs[i]); err != nil {
Error.Println(err)
return false
}
} else {
if err = os.Remove(t.dst); err != nil {
Error.Println(err)
return false
} else if err = os.Rename(t.srcs[i], t.dst); err != nil {
Error.Println(err)
return false
}
}
t.srcs[i] = t.dst
break
}
}
return success
}

46
vendor/github.com/tdewolff/minify/cmd/minify/util.go generated vendored Normal file

@@ -0,0 +1,46 @@
package main
import "io"
type countingReader struct {
io.Reader
N int
}
func (r *countingReader) Read(p []byte) (int, error) {
n, err := r.Reader.Read(p)
r.N += n
return n, err
}
type countingWriter struct {
io.Writer
N int
}
func (w *countingWriter) Write(p []byte) (int, error) {
n, err := w.Writer.Write(p)
w.N += n
return n, err
}
type prependReader struct {
io.ReadCloser
prepend []byte
}
func NewPrependReader(r io.ReadCloser, prepend []byte) *prependReader {
return &prependReader{r, prepend}
}
func (r *prependReader) Read(p []byte) (int, error) {
if r.prepend != nil {
n := copy(p, r.prepend)
if n != len(r.prepend) {
return n, io.ErrShortBuffer
}
r.prepend = nil
return n, nil
}
return r.ReadCloser.Read(p)
}

106
vendor/github.com/tdewolff/minify/cmd/minify/watch.go generated vendored Normal file

@@ -0,0 +1,106 @@
package main
import (
"os"
"path/filepath"
"github.com/fsnotify/fsnotify"
)
type RecursiveWatcher struct {
watcher *fsnotify.Watcher
paths map[string]bool
recursive bool
}
func NewRecursiveWatcher(recursive bool) (*RecursiveWatcher, error) {
watcher, err := fsnotify.NewWatcher()
if err != nil {
return nil, err
}
return &RecursiveWatcher{watcher, make(map[string]bool), recursive}, nil
}
func (rw *RecursiveWatcher) Close() error {
return rw.watcher.Close()
}
func (rw *RecursiveWatcher) AddPath(root string) error {
info, err := os.Stat(root)
if err != nil {
return err
}
if info.Mode().IsRegular() {
root = filepath.Dir(root)
if rw.paths[root] {
return nil
}
if err := rw.watcher.Add(root); err != nil {
return err
}
rw.paths[root] = true
return nil
} else if !rw.recursive {
if rw.paths[root] {
return nil
}
if err := rw.watcher.Add(root); err != nil {
return err
}
rw.paths[root] = true
return nil
} else {
return filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
if info.Mode().IsDir() {
if !validDir(info) || rw.paths[path] {
return filepath.SkipDir
}
if err := rw.watcher.Add(path); err != nil {
return err
}
rw.paths[path] = true
}
return nil
})
}
}
func (rw *RecursiveWatcher) Run() chan string {
files := make(chan string, 10)
go func() {
for rw.watcher.Events != nil && rw.watcher.Errors != nil {
select {
case event, ok := <-rw.watcher.Events:
if !ok {
rw.watcher.Events = nil
break
}
if info, err := os.Stat(event.Name); err == nil {
if validDir(info) {
if event.Op&fsnotify.Create == fsnotify.Create {
if err := rw.AddPath(event.Name); err != nil {
Error.Println(err)
}
}
} else if validFile(info) {
if event.Op&fsnotify.Create == fsnotify.Create || event.Op&fsnotify.Write == fsnotify.Write {
files <- event.Name
}
}
}
case err, ok := <-rw.watcher.Errors:
if !ok {
rw.watcher.Errors = nil
break
}
Error.Println(err)
}
}
close(files)
}()
return files
}

339
vendor/github.com/tdewolff/minify/common.go generated vendored Normal file

@@ -0,0 +1,339 @@
package minify // import "github.com/tdewolff/minify"
import (
"bytes"
"encoding/base64"
"net/url"
"github.com/tdewolff/parse"
"github.com/tdewolff/parse/strconv"
)
// Epsilon is the closest number to zero that is not considered to be zero.
var Epsilon = 0.00001
// ContentType minifies a given mediatype by removing all whitespace.
func ContentType(b []byte) []byte {
j := 0
start := 0
inString := false
for i, c := range b {
if !inString && parse.IsWhitespace(c) {
if start != 0 {
j += copy(b[j:], b[start:i])
} else {
j += i
}
start = i + 1
} else if c == '"' {
inString = !inString
}
}
if start != 0 {
j += copy(b[j:], b[start:])
return parse.ToLower(b[:j])
}
return parse.ToLower(b)
}
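`ContentType` compacts the byte slice in place to avoid allocation. The same algorithm — drop whitespace outside quoted strings, then lowercase — can be sketched as a stand-alone (allocating) function; this is a simplified illustration, not the library's exact code:

```go
package main

import (
	"fmt"
	"strings"
)

// stripMediatype removes whitespace outside quoted strings and
// lowercases the result, mirroring the in-place algorithm above.
func stripMediatype(s string) string {
	var b strings.Builder
	inString := false
	for _, c := range s {
		if c == '"' {
			inString = !inString // quotes toggle string mode
		}
		if inString || !strings.ContainsRune(" \t\n\r", c) {
			b.WriteRune(c)
		}
	}
	return strings.ToLower(b.String())
}

func main() {
	fmt.Println(stripMediatype("text/html; charset=UTF-8"))
	// text/html;charset=utf-8
}
```

Note that whitespace inside quoted parameter values survives, which is why the scanner must track `inString` rather than blindly deleting spaces.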
// DataURI minifies a data URI and calls a minifier by the specified mediatype. Specifications: https://www.ietf.org/rfc/rfc2397.txt.
func DataURI(m *M, dataURI []byte) []byte {
if mediatype, data, err := parse.DataURI(dataURI); err == nil {
dataURI, _ = m.Bytes(string(mediatype), data)
base64Len := len(";base64") + base64.StdEncoding.EncodedLen(len(dataURI))
asciiLen := len(dataURI)
for _, c := range dataURI {
if 'A' <= c && c <= 'Z' || 'a' <= c && c <= 'z' || '0' <= c && c <= '9' || c == '-' || c == '_' || c == '.' || c == '~' || c == ' ' {
asciiLen++
} else {
asciiLen += 2
}
if asciiLen > base64Len {
break
}
}
if asciiLen > base64Len {
encoded := make([]byte, base64Len-len(";base64"))
base64.StdEncoding.Encode(encoded, dataURI)
dataURI = encoded
mediatype = append(mediatype, []byte(";base64")...)
} else {
dataURI = []byte(url.QueryEscape(string(dataURI)))
dataURI = bytes.Replace(dataURI, []byte("\""), []byte("\\\""), -1)
}
if len("text/plain") <= len(mediatype) && parse.EqualFold(mediatype[:len("text/plain")], []byte("text/plain")) {
mediatype = mediatype[len("text/plain"):]
}
for i := 0; i+len(";charset=us-ascii") <= len(mediatype); i++ {
// must start with semicolon and be followed by end of mediatype or semicolon
if mediatype[i] == ';' && parse.EqualFold(mediatype[i+1:i+len(";charset=us-ascii")], []byte("charset=us-ascii")) && (i+len(";charset=us-ascii") >= len(mediatype) || mediatype[i+len(";charset=us-ascii")] == ';') {
mediatype = append(mediatype[:i], mediatype[i+len(";charset=us-ascii"):]...)
break
}
}
dataURI = append(append(append([]byte("data:"), mediatype...), ','), dataURI...)
}
return dataURI
}
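The core trade-off in `DataURI` is choosing between percent-encoded ASCII and base64 by comparing encoded lengths. A simplified stand-alone sketch of that choice — here `url.QueryEscape` is a stand-in for the library's own byte counting, and differs in details such as which bytes count as unreserved:

```go
package main

import (
	"encoding/base64"
	"fmt"
	"net/url"
)

// shorterDataURIBody returns whichever encoding of payload is shorter:
// ",<percent-encoded>" or ";base64,<base64>", mirroring the length
// comparison DataURI performs above.
func shorterDataURIBody(payload []byte) string {
	ascii := url.QueryEscape(string(payload))
	b64 := ";base64," + base64.StdEncoding.EncodeToString(payload)
	if len(ascii)+1 <= len(b64) { // +1 for the ',' separator
		return "," + ascii
	}
	return b64
}

func main() {
	fmt.Println(shorterDataURIBody([]byte("text")))          // plain ASCII wins
	fmt.Println(shorterDataURIBody([]byte{0xde, 0xad, 0xbe,
		0xef, 0xca, 0xfe})) // binary payload: base64 wins
}
```

Mostly-ASCII payloads stay percent-encoded (near 1 byte per byte), while binary payloads pick base64, whose 4/3 overhead beats percent-encoding's worst case of 3 bytes per byte.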
const MaxInt = int(^uint(0) >> 1)
const MinInt = -MaxInt - 1
// Number minifies a given byte slice containing a number (see parse.Number) and removes superfluous characters.
func Number(num []byte, prec int) []byte {
// omit first + and register mantissa start and end, whether it's negative and the exponent
neg := false
start := 0
dot := -1
end := len(num)
origExp := 0
if 0 < end && (num[0] == '+' || num[0] == '-') {
if num[0] == '-' {
neg = true
}
start++
}
for i, c := range num[start:] {
if c == '.' {
dot = start + i
} else if c == 'e' || c == 'E' {
end = start + i
i += start + 1
if i < len(num) && num[i] == '+' {
i++
}
if tmpOrigExp, n := strconv.ParseInt(num[i:]); n > 0 && tmpOrigExp >= int64(MinInt) && tmpOrigExp <= int64(MaxInt) {
// range checks for when int is 32 bit
origExp = int(tmpOrigExp)
} else {
return num
}
break
}
}
if dot == -1 {
dot = end
}
// trim leading zeros but leave at least one digit
for start < end-1 && num[start] == '0' {
start++
}
// trim trailing zeros
i := end - 1
for ; i > dot; i-- {
if num[i] != '0' {
end = i + 1
break
}
}
if i == dot {
end = dot
if start == end {
num[start] = '0'
return num[start : start+1]
}
} else if start == end-1 && num[start] == '0' {
return num[start:end]
}
// n is the number of significant digits
// normExp would be the exponent if it were normalised (0.1 <= f < 1)
n := 0
normExp := 0
if dot == start {
for i = dot + 1; i < end; i++ {
if num[i] != '0' {
n = end - i
normExp = dot - i + 1
break
}
}
} else if dot == end {
normExp = end - start
for i = end - 1; i >= start; i-- {
if num[i] != '0' {
n = i + 1 - start
end = i + 1
break
}
}
} else {
n = end - start - 1
normExp = dot - start
}
if origExp < 0 && (normExp < MinInt-origExp || normExp-n < MinInt-origExp) || origExp > 0 && (normExp > MaxInt-origExp || normExp-n > MaxInt-origExp) {
return num
}
normExp += origExp
// intExp would be the exponent if it were an integer
intExp := normExp - n
lenIntExp := 1
if intExp <= -10 || intExp >= 10 {
lenIntExp = strconv.LenInt(int64(intExp))
}
// there are three cases to consider when printing the number
// case 1: without decimals and with an exponent (large numbers)
// case 2: with decimals and without an exponent (around zero)
// case 3: without decimals and with a negative exponent (small numbers)
if normExp >= n {
// case 1
if dot < end {
if dot == start {
start = end - n
} else {
// TODO: copy the other part if shorter?
copy(num[dot:], num[dot+1:end])
end--
}
}
if normExp >= n+3 {
num[end] = 'e'
end++
for i := end + lenIntExp - 1; i >= end; i-- {
num[i] = byte(intExp%10) + '0'
intExp /= 10
}
end += lenIntExp
} else if normExp == n+2 {
num[end] = '0'
num[end+1] = '0'
end += 2
} else if normExp == n+1 {
num[end] = '0'
end++
}
} else if normExp >= -lenIntExp-1 {
// case 2
zeroes := -normExp
newDot := 0
if zeroes > 0 {
// dot placed at the front and add zeroes
newDot = end - n - zeroes - 1
if newDot != dot {
d := start - newDot
if d > 0 {
if dot < end {
// copy original digits behind the dot backwards
copy(num[dot+1+d:], num[dot+1:end])
if dot > start {
// copy original digits before the dot backwards
copy(num[start+d+1:], num[start:dot])
}
} else if dot > start {
// copy original digits before the dot backwards
copy(num[start+d:], num[start:dot])
}
newDot = start
end += d
} else {
start += -d
}
num[newDot] = '.'
for i := 0; i < zeroes; i++ {
num[newDot+1+i] = '0'
}
}
} else {
// placed in the middle
if dot == start {
// TODO: try if placing at the end reduces copying
// when there are zeroes after the dot
dot = end - n - 1
start = dot
} else if dot >= end {
// TODO: try if placing at the start reduces copying
// when input has no dot in it
dot = end
end++
}
newDot = start + normExp
if newDot > dot {
// copy digits forwards
copy(num[dot:], num[dot+1:newDot+1])
} else if newDot < dot {
// copy digits backwards
copy(num[newDot+1:], num[newDot:dot])
}
num[newDot] = '.'
}
// apply precision
dot = newDot
if prec > -1 && dot+1+prec < end {
end = dot + 1 + prec
inc := num[end] >= '5'
if inc || num[end-1] == '0' {
for i := end - 1; i > start; i-- {
if i == dot {
end--
} else if inc {
if num[i] == '9' {
if i > dot {
end--
} else {
num[i] = '0'
}
} else {
num[i]++
inc = false
break
}
} else if i > dot && num[i] == '0' {
end--
}
}
}
if dot == start && end == start+1 {
if inc {
num[start] = '1'
} else {
num[start] = '0'
}
} else {
if dot+1 == end {
end--
}
if inc {
if num[start] == '9' {
num[start] = '0'
copy(num[start+1:], num[start:end])
end++
num[start] = '1'
} else {
num[start]++
}
}
}
}
} else {
// case 3
if dot < end {
if dot == start {
copy(num[start:], num[end-n:end])
end = start + n
} else {
copy(num[dot:], num[dot+1:end])
end--
}
}
num[end] = 'e'
num[end+1] = '-'
end += 2
intExp = -intExp
for i := end + lenIntExp - 1; i >= end; i-- {
num[i] = byte(intExp%10) + '0'
intExp /= 10
}
end += lenIntExp
}
if neg {
start--
num[start] = '-'
}
return num[start:end]
}
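The three cases in the Number routine above map onto concrete outputs from the test table further down: 1000 → 1e3 (case 1), 0.1 → .1 (case 2), 0.0001 → 1e-4 (case 3). The length comparison that decides case 1 can be sketched in isolation; shortestInt below is an illustrative helper, not part of the library:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// shortestInt returns the shorter of the plain form ("1000") and the
// exponent form ("1e3") for a non-negative integer, mirroring case 1 above.
func shortestInt(n uint64) string {
	plain := strconv.FormatUint(n, 10)
	trimmed := strings.TrimRight(plain, "0")
	zeroes := len(plain) - len(trimmed)
	exp := trimmed + "e" + strconv.Itoa(zeroes)
	if zeroes > 0 && len(exp) < len(plain) {
		return exp
	}
	return plain
}

func main() {
	fmt.Println(shortestInt(100))  // 100: "1e2" is no shorter
	fmt.Println(shortestInt(1000)) // 1e3
}
```

This is also why the real code only emits an exponent when normExp >= n+3: the `normExp == n+1` and `n+2` branches append literal zeroes instead, since an "e1"/"e2" suffix would not save any bytes.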

vendor/github.com/tdewolff/minify/common_test.go generated vendored Normal file

@@ -0,0 +1,237 @@
package minify // import "github.com/tdewolff/minify"
import (
"fmt"
"io"
"io/ioutil"
"math"
"math/rand"
"strconv"
"testing"
"github.com/tdewolff/test"
)
func TestContentType(t *testing.T) {
contentTypeTests := []struct {
contentType string
expected string
}{
{"text/html", "text/html"},
{"text/html; charset=UTF-8", "text/html;charset=utf-8"},
{"text/html; charset=UTF-8 ; param = \" ; \"", "text/html;charset=utf-8;param=\" ; \""},
{"text/html, text/css", "text/html,text/css"},
}
for _, tt := range contentTypeTests {
t.Run(tt.contentType, func(t *testing.T) {
contentType := ContentType([]byte(tt.contentType))
test.Minify(t, tt.contentType, nil, string(contentType), tt.expected)
})
}
}
func TestDataURI(t *testing.T) {
dataURITests := []struct {
dataURI string
expected string
}{
{"data:,text", "data:,text"},
{"data:text/plain;charset=us-ascii,text", "data:,text"},
{"data:TEXT/PLAIN;CHARSET=US-ASCII,text", "data:,text"},
{"data:text/plain;charset=us-asciiz,text", "data:;charset=us-asciiz,text"},
{"data:;base64,dGV4dA==", "data:,text"},
{"data:text/svg+xml;base64,PT09PT09", "data:text/svg+xml;base64,PT09PT09"},
{"data:text/xml;version=2.0,content", "data:text/xml;version=2.0,content"},
{"data:text/xml; version = 2.0,content", "data:text/xml;version=2.0,content"},
{"data:,=====", "data:,%3D%3D%3D%3D%3D"},
{"data:,======", "data:;base64,PT09PT09"},
{"data:text/x,<?x?>", "data:text/x,%3C%3Fx%3F%3E"},
}
m := New()
m.AddFunc("text/x", func(_ *M, w io.Writer, r io.Reader, _ map[string]string) error {
b, _ := ioutil.ReadAll(r)
test.String(t, string(b), "<?x?>")
w.Write(b)
return nil
})
for _, tt := range dataURITests {
t.Run(tt.dataURI, func(t *testing.T) {
dataURI := DataURI(m, []byte(tt.dataURI))
test.Minify(t, tt.dataURI, nil, string(dataURI), tt.expected)
})
}
}
func TestNumber(t *testing.T) {
numberTests := []struct {
number string
expected string
}{
{"0", "0"},
{".0", "0"},
{"1.0", "1"},
{"0.1", ".1"},
{"+1", "1"},
{"-1", "-1"},
{"-0.1", "-.1"},
{"10", "10"},
{"100", "100"},
{"1000", "1e3"},
{"0.001", ".001"},
{"0.0001", "1e-4"},
{"100e1", "1e3"},
{"1.1e+1", "11"},
{"1.1e6", "11e5"},
{"0.252", ".252"},
{"1.252", "1.252"},
{"-1.252", "-1.252"},
{"0.075", ".075"},
{"789012345678901234567890123456789e9234567890123456789", "789012345678901234567890123456789e9234567890123456789"},
{".000100009", "100009e-9"},
{".0001000009", ".0001000009"},
{".0001000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000009", ".0001000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000009"},
{"E\x1f", "E\x1f"}, // fuzz
{"1e9223372036854775807", "1e9223372036854775807"},
{"11e9223372036854775807", "11e9223372036854775807"},
{".01e-9223372036854775808", ".01e-9223372036854775808"},
{".011e-9223372036854775808", ".011e-9223372036854775808"},
{".12345e8", "12345e3"},
{".12345e7", "1234500"},
{".12345e6", "123450"},
{".12345e5", "12345"},
{".012345e6", "12345"},
{".12345e4", "1234.5"},
{"-.12345e4", "-1234.5"},
{".12345e0", ".12345"},
{".12345e-1", ".012345"},
{".12345e-2", ".0012345"},
{".12345e-3", "12345e-8"},
{".12345e-4", "12345e-9"},
{".12345e-5", "12345e-10"},
{".123456e-3", "123456e-9"},
{".123456e-2", ".00123456"},
{".1234567e-4", "1234567e-11"},
{".1234567e-3", ".0001234567"},
{"12345678e-1", "1234567.8"},
{"72.e-3", ".072"},
{"7640e-2", "76.4"},
{"10.e-3", ".01"},
{".0319e3", "31.9"},
{"39.7e-2", ".397"},
{"39.7e-3", ".0397"},
{".01e1", ".1"},
{".001e1", ".01"},
{"39.7e-5", "397e-6"},
}
for _, tt := range numberTests {
t.Run(tt.number, func(t *testing.T) {
number := Number([]byte(tt.number), -1)
test.Minify(t, tt.number, nil, string(number), tt.expected)
})
}
}
func TestNumberTruncate(t *testing.T) {
numberTests := []struct {
number string
truncate int
expected string
}{
{"0.1", 1, ".1"},
{"0.0001", 1, "1e-4"},
{"0.111", 1, ".1"},
{"0.111", 0, "0"},
{"0.075", 1, ".1"},
{"0.025", 1, "0"},
{"9.99", 1, "10"},
{"8.88", 1, "8.9"},
{"8.88", 0, "9"},
{"8.00", 0, "8"},
{".88", 0, "1"},
{"1.234", 1, "1.2"},
{"33.33", 0, "33"},
{"29.666", 0, "30"},
{"1.51", 1, "1.5"},
}
for _, tt := range numberTests {
t.Run(tt.number, func(t *testing.T) {
number := Number([]byte(tt.number), tt.truncate)
test.Minify(t, tt.number, nil, string(number), tt.expected, "truncate to", tt.truncate)
})
}
}
func TestNumberRandom(t *testing.T) {
N := int(1e4)
if testing.Short() {
N = 0
}
for i := 0; i < N; i++ {
b := RandNumBytes()
f, _ := strconv.ParseFloat(string(b), 64)
b2 := make([]byte, len(b))
copy(b2, b)
b2 = Number(b2, -1)
f2, _ := strconv.ParseFloat(string(b2), 64)
if math.Abs(f-f2) > 1e-6 {
fmt.Println("Bad:", f, "!=", f2, "in", string(b), "to", string(b2))
}
}
}
////////////////
var n = 100
var numbers [][]byte
func TestMain(t *testing.T) {
numbers = make([][]byte, 0, n)
for j := 0; j < n; j++ {
numbers = append(numbers, RandNumBytes())
}
}
func RandNumBytes() []byte {
var b []byte
n := rand.Int() % 10
for i := 0; i < n; i++ {
b = append(b, byte(rand.Int()%10)+'0')
}
if rand.Int()%2 == 0 {
b = append(b, '.')
n = rand.Int() % 10
for i := 0; i < n; i++ {
b = append(b, byte(rand.Int()%10)+'0')
}
}
if rand.Int()%2 == 0 {
b = append(b, 'e')
if rand.Int()%2 == 0 {
b = append(b, '-')
}
n = 1 + rand.Int()%4
for i := 0; i < n; i++ {
b = append(b, byte(rand.Int()%10)+'0')
}
}
return b
}
func BenchmarkNumber(b *testing.B) {
for i := 0; i < b.N; i++ {
for j := 0; j < n; j++ {
Number(numbers[j], -1)
}
}
}
func BenchmarkNumber2(b *testing.B) {
num := []byte("1.2345e-6")
for i := 0; i < b.N; i++ {
Number(num, -1)
}
}

vendor/github.com/tdewolff/minify/css/css.go generated vendored Normal file

@@ -0,0 +1,559 @@
// Package css minifies CSS3 following the specifications at http://www.w3.org/TR/css-syntax-3/.
package css // import "github.com/tdewolff/minify/css"
import (
"bytes"
"encoding/hex"
"io"
"strconv"
"github.com/tdewolff/minify"
"github.com/tdewolff/parse"
"github.com/tdewolff/parse/css"
)
var (
spaceBytes = []byte(" ")
colonBytes = []byte(":")
semicolonBytes = []byte(";")
commaBytes = []byte(",")
leftBracketBytes = []byte("{")
rightBracketBytes = []byte("}")
zeroBytes = []byte("0")
msfilterBytes = []byte("-ms-filter")
backgroundNoneBytes = []byte("0 0")
)
type cssMinifier struct {
m *minify.M
w io.Writer
p *css.Parser
o *Minifier
}
////////////////////////////////////////////////////////////////
// DefaultMinifier is the default minifier.
var DefaultMinifier = &Minifier{Decimals: -1}
// Minifier is a CSS minifier.
type Minifier struct {
Decimals int
}
// Minify minifies CSS data, it reads from r and writes to w.
func Minify(m *minify.M, w io.Writer, r io.Reader, params map[string]string) error {
return DefaultMinifier.Minify(m, w, r, params)
}
// Minify minifies CSS data, it reads from r and writes to w.
func (o *Minifier) Minify(m *minify.M, w io.Writer, r io.Reader, params map[string]string) error {
isInline := params != nil && params["inline"] == "1"
c := &cssMinifier{
m: m,
w: w,
p: css.NewParser(r, isInline),
o: o,
}
defer c.p.Restore()
if err := c.minifyGrammar(); err != nil && err != io.EOF {
return err
}
return nil
}
func (c *cssMinifier) minifyGrammar() error {
semicolonQueued := false
for {
gt, _, data := c.p.Next()
if gt == css.ErrorGrammar {
if perr, ok := c.p.Err().(*parse.Error); ok && perr.Message == "unexpected token in declaration" {
if semicolonQueued {
if _, err := c.w.Write(semicolonBytes); err != nil {
return err
}
}
// write out the offending declaration
if _, err := c.w.Write(data); err != nil {
return err
}
for _, val := range c.p.Values() {
if _, err := c.w.Write(val.Data); err != nil {
return err
}
}
semicolonQueued = true
continue
} else {
return c.p.Err()
}
} else if gt == css.EndAtRuleGrammar || gt == css.EndRulesetGrammar {
if _, err := c.w.Write(rightBracketBytes); err != nil {
return err
}
semicolonQueued = false
continue
}
if semicolonQueued {
if _, err := c.w.Write(semicolonBytes); err != nil {
return err
}
semicolonQueued = false
}
if gt == css.AtRuleGrammar {
if _, err := c.w.Write(data); err != nil {
return err
}
for _, val := range c.p.Values() {
if _, err := c.w.Write(val.Data); err != nil {
return err
}
}
semicolonQueued = true
} else if gt == css.BeginAtRuleGrammar {
if _, err := c.w.Write(data); err != nil {
return err
}
for _, val := range c.p.Values() {
if _, err := c.w.Write(val.Data); err != nil {
return err
}
}
if _, err := c.w.Write(leftBracketBytes); err != nil {
return err
}
} else if gt == css.QualifiedRuleGrammar {
if err := c.minifySelectors(data, c.p.Values()); err != nil {
return err
}
if _, err := c.w.Write(commaBytes); err != nil {
return err
}
} else if gt == css.BeginRulesetGrammar {
if err := c.minifySelectors(data, c.p.Values()); err != nil {
return err
}
if _, err := c.w.Write(leftBracketBytes); err != nil {
return err
}
} else if gt == css.DeclarationGrammar {
if _, err := c.w.Write(data); err != nil {
return err
}
if _, err := c.w.Write(colonBytes); err != nil {
return err
}
if err := c.minifyDeclaration(data, c.p.Values()); err != nil {
return err
}
semicolonQueued = true
} else if gt == css.CustomPropertyGrammar {
if _, err := c.w.Write(data); err != nil {
return err
}
if _, err := c.w.Write(colonBytes); err != nil {
return err
}
if _, err := c.w.Write(c.p.Values()[0].Data); err != nil {
return err
}
semicolonQueued = true
} else if gt == css.CommentGrammar {
if len(data) > 5 && data[1] == '*' && data[2] == '!' {
if _, err := c.w.Write(data[:3]); err != nil {
return err
}
comment := parse.TrimWhitespace(parse.ReplaceMultipleWhitespace(data[3 : len(data)-2]))
if _, err := c.w.Write(comment); err != nil {
return err
}
if _, err := c.w.Write(data[len(data)-2:]); err != nil {
return err
}
}
} else if _, err := c.w.Write(data); err != nil {
return err
}
}
}
func (c *cssMinifier) minifySelectors(property []byte, values []css.Token) error {
inAttr := false
isClass := false
for _, val := range c.p.Values() {
if !inAttr {
if val.TokenType == css.IdentToken {
if !isClass {
parse.ToLower(val.Data)
}
isClass = false
} else if val.TokenType == css.DelimToken && val.Data[0] == '.' {
isClass = true
} else if val.TokenType == css.LeftBracketToken {
inAttr = true
}
} else {
if val.TokenType == css.StringToken && len(val.Data) > 2 {
s := val.Data[1 : len(val.Data)-1]
if css.IsIdent([]byte(s)) {
if _, err := c.w.Write(s); err != nil {
return err
}
continue
}
} else if val.TokenType == css.RightBracketToken {
inAttr = false
}
}
if _, err := c.w.Write(val.Data); err != nil {
return err
}
}
return nil
}
func (c *cssMinifier) minifyDeclaration(property []byte, values []css.Token) error {
if len(values) == 0 {
return nil
}
prop := css.ToHash(property)
inProgid := false
for i, value := range values {
if inProgid {
if value.TokenType == css.FunctionToken {
inProgid = false
}
continue
} else if value.TokenType == css.IdentToken && css.ToHash(value.Data) == css.Progid {
inProgid = true
continue
}
value.TokenType, value.Data = c.shortenToken(prop, value.TokenType, value.Data)
if prop == css.Font || prop == css.Font_Family || prop == css.Font_Weight {
if value.TokenType == css.IdentToken && (prop == css.Font || prop == css.Font_Weight) {
val := css.ToHash(value.Data)
if val == css.Normal && prop == css.Font_Weight {
// normal could also be specified for font-variant, not just font-weight
value.TokenType = css.NumberToken
value.Data = []byte("400")
} else if val == css.Bold {
value.TokenType = css.NumberToken
value.Data = []byte("700")
}
} else if value.TokenType == css.StringToken && (prop == css.Font || prop == css.Font_Family) && len(value.Data) > 2 {
unquote := true
parse.ToLower(value.Data)
s := value.Data[1 : len(value.Data)-1]
if len(s) > 0 {
for _, split := range bytes.Split(s, spaceBytes) {
val := css.ToHash(split)
// if len is zero, it contains two consecutive spaces
if val == css.Inherit || val == css.Serif || val == css.Sans_Serif || val == css.Monospace || val == css.Fantasy || val == css.Cursive || val == css.Initial || val == css.Default ||
len(split) == 0 || !css.IsIdent(split) {
unquote = false
break
}
}
}
if unquote {
value.Data = s
}
}
} else if prop == css.Outline || prop == css.Border || prop == css.Border_Bottom || prop == css.Border_Left || prop == css.Border_Right || prop == css.Border_Top {
if css.ToHash(value.Data) == css.None {
value.TokenType = css.NumberToken
value.Data = zeroBytes
}
}
values[i].TokenType, values[i].Data = value.TokenType, value.Data
}
important := false
if len(values) > 2 && values[len(values)-2].TokenType == css.DelimToken && values[len(values)-2].Data[0] == '!' && css.ToHash(values[len(values)-1].Data) == css.Important {
values = values[:len(values)-2]
important = true
}
if len(values) == 1 {
if prop == css.Background && css.ToHash(values[0].Data) == css.None {
values[0].Data = backgroundNoneBytes
} else if bytes.Equal(property, msfilterBytes) {
alpha := []byte("progid:DXImageTransform.Microsoft.Alpha(Opacity=")
if values[0].TokenType == css.StringToken && bytes.HasPrefix(values[0].Data[1:len(values[0].Data)-1], alpha) {
values[0].Data = append(append([]byte{values[0].Data[0]}, []byte("alpha(opacity=")...), values[0].Data[1+len(alpha):]...)
}
}
} else {
if prop == css.Margin || prop == css.Padding || prop == css.Border_Width {
if (values[0].TokenType == css.NumberToken || values[0].TokenType == css.DimensionToken || values[0].TokenType == css.PercentageToken) && (len(values)+1)%2 == 0 {
valid := true
for i := 1; i < len(values); i += 2 {
if values[i].TokenType != css.WhitespaceToken || values[i+1].TokenType != css.NumberToken && values[i+1].TokenType != css.DimensionToken && values[i+1].TokenType != css.PercentageToken {
valid = false
break
}
}
if valid {
n := (len(values) + 1) / 2
if n == 2 {
if bytes.Equal(values[0].Data, values[2].Data) {
values = values[:1]
}
} else if n == 3 {
if bytes.Equal(values[0].Data, values[2].Data) && bytes.Equal(values[0].Data, values[4].Data) {
values = values[:1]
} else if bytes.Equal(values[0].Data, values[4].Data) {
values = values[:3]
}
} else if n == 4 {
if bytes.Equal(values[0].Data, values[2].Data) && bytes.Equal(values[0].Data, values[4].Data) && bytes.Equal(values[0].Data, values[6].Data) {
values = values[:1]
} else if bytes.Equal(values[0].Data, values[4].Data) && bytes.Equal(values[2].Data, values[6].Data) {
values = values[:3]
} else if bytes.Equal(values[2].Data, values[6].Data) {
values = values[:5]
}
}
}
}
} else if prop == css.Filter && len(values) == 11 {
if bytes.Equal(values[0].Data, []byte("progid")) &&
values[1].TokenType == css.ColonToken &&
bytes.Equal(values[2].Data, []byte("DXImageTransform")) &&
values[3].Data[0] == '.' &&
bytes.Equal(values[4].Data, []byte("Microsoft")) &&
values[5].Data[0] == '.' &&
bytes.Equal(values[6].Data, []byte("Alpha(")) &&
bytes.Equal(parse.ToLower(values[7].Data), []byte("opacity")) &&
values[8].Data[0] == '=' &&
values[10].Data[0] == ')' {
values = values[6:]
values[0].Data = []byte("alpha(")
}
}
}
for i := 0; i < len(values); i++ {
if values[i].TokenType == css.FunctionToken {
n, err := c.minifyFunction(values[i:])
if err != nil {
return err
}
i += n - 1
} else if _, err := c.w.Write(values[i].Data); err != nil {
return err
}
}
if important {
if _, err := c.w.Write([]byte("!important")); err != nil {
return err
}
}
return nil
}
func (c *cssMinifier) minifyFunction(values []css.Token) (int, error) {
n := 1
simple := true
for i, value := range values[1:] {
if value.TokenType == css.RightParenthesisToken {
n++
break
}
if i%2 == 0 && (value.TokenType != css.NumberToken && value.TokenType != css.PercentageToken) || (i%2 == 1 && value.TokenType != css.CommaToken) {
simple = false
}
n++
}
values = values[:n]
if simple && (n-1)%2 == 0 {
fun := css.ToHash(values[0].Data[:len(values[0].Data)-1])
nArgs := (n - 1) / 2
if (fun == css.Rgba || fun == css.Hsla) && nArgs == 4 {
d, _ := strconv.ParseFloat(string(values[7].Data), 32) // can never fail because if simple == true then this is a NumberToken or PercentageToken
if d-1.0 > -minify.Epsilon {
if fun == css.Rgba {
values[0].Data = []byte("rgb(")
fun = css.Rgb
} else {
values[0].Data = []byte("hsl(")
fun = css.Hsl
}
values = values[:len(values)-2]
values[len(values)-1].Data = []byte(")")
nArgs = 3
} else if d < minify.Epsilon {
values[0].Data = []byte("transparent")
values = values[:1]
fun = 0
nArgs = 0
}
}
if fun == css.Rgb && nArgs == 3 {
var err [3]error
rgb := [3]byte{}
for j := 0; j < 3; j++ {
val := values[j*2+1]
if val.TokenType == css.NumberToken {
var d int64
d, err[j] = strconv.ParseInt(string(val.Data), 10, 32)
if d < 0 {
d = 0
} else if d > 255 {
d = 255
}
rgb[j] = byte(d)
} else if val.TokenType == css.PercentageToken {
var d float64
d, err[j] = strconv.ParseFloat(string(val.Data[:len(val.Data)-1]), 32)
if d < 0.0 {
d = 0.0
} else if d > 100.0 {
d = 100.0
}
rgb[j] = byte((d / 100.0 * 255.0) + 0.5)
}
}
if err[0] == nil && err[1] == nil && err[2] == nil {
val := make([]byte, 7)
val[0] = '#'
hex.Encode(val[1:], rgb[:])
parse.ToLower(val)
if s, ok := ShortenColorHex[string(val)]; ok {
if _, err := c.w.Write(s); err != nil {
return 0, err
}
} else {
if len(val) == 7 && val[1] == val[2] && val[3] == val[4] && val[5] == val[6] {
val[2] = val[3]
val[3] = val[5]
val = val[:4]
}
if _, err := c.w.Write(val); err != nil {
return 0, err
}
}
return n, nil
}
} else if fun == css.Hsl && nArgs == 3 {
if values[1].TokenType == css.NumberToken && values[3].TokenType == css.PercentageToken && values[5].TokenType == css.PercentageToken {
h, err1 := strconv.ParseFloat(string(values[1].Data), 32)
s, err2 := strconv.ParseFloat(string(values[3].Data[:len(values[3].Data)-1]), 32)
l, err3 := strconv.ParseFloat(string(values[5].Data[:len(values[5].Data)-1]), 32)
if err1 == nil && err2 == nil && err3 == nil {
r, g, b := css.HSL2RGB(h/360.0, s/100.0, l/100.0)
rgb := []byte{byte((r * 255.0) + 0.5), byte((g * 255.0) + 0.5), byte((b * 255.0) + 0.5)}
val := make([]byte, 7)
val[0] = '#'
hex.Encode(val[1:], rgb[:])
parse.ToLower(val)
if s, ok := ShortenColorHex[string(val)]; ok {
if _, err := c.w.Write(s); err != nil {
return 0, err
}
} else {
if len(val) == 7 && val[1] == val[2] && val[3] == val[4] && val[5] == val[6] {
val[2] = val[3]
val[3] = val[5]
val = val[:4]
}
if _, err := c.w.Write(val); err != nil {
return 0, err
}
}
return n, nil
}
}
}
}
for _, value := range values {
if _, err := c.w.Write(value.Data); err != nil {
return 0, err
}
}
return n, nil
}
func (c *cssMinifier) shortenToken(prop css.Hash, tt css.TokenType, data []byte) (css.TokenType, []byte) {
if tt == css.NumberToken || tt == css.PercentageToken || tt == css.DimensionToken {
if tt == css.NumberToken && (prop == css.Z_Index || prop == css.Counter_Increment || prop == css.Counter_Reset || prop == css.Orphans || prop == css.Widows) {
return tt, data // integers
}
n := len(data)
if tt == css.PercentageToken {
n--
} else if tt == css.DimensionToken {
n = parse.Number(data)
}
dim := data[n:]
parse.ToLower(dim)
data = minify.Number(data[:n], c.o.Decimals)
if tt == css.PercentageToken && (len(data) != 1 || data[0] != '0' || prop == css.Color) {
data = append(data, '%')
} else if tt == css.DimensionToken && (len(data) != 1 || data[0] != '0' || requiredDimension[string(dim)]) {
data = append(data, dim...)
}
} else if tt == css.IdentToken {
//parse.ToLower(data) // TODO: not all identifiers are case-insensitive; all <custom-ident> properties are case-sensitive
if hex, ok := ShortenColorName[css.ToHash(data)]; ok {
tt = css.HashToken
data = hex
}
} else if tt == css.HashToken {
parse.ToLower(data)
if ident, ok := ShortenColorHex[string(data)]; ok {
tt = css.IdentToken
data = ident
} else if len(data) == 7 && data[1] == data[2] && data[3] == data[4] && data[5] == data[6] {
tt = css.HashToken
data[2] = data[3]
data[3] = data[5]
data = data[:4]
}
} else if tt == css.StringToken {
// remove any \\\r\n \\\r \\\n
for i := 1; i < len(data)-2; i++ {
if data[i] == '\\' && (data[i+1] == '\n' || data[i+1] == '\r') {
// encountered the first escaped newline, now start to move bytes to the front
j := i + 2
if data[i+1] == '\r' && len(data) > i+2 && data[i+2] == '\n' {
j++
}
for ; j < len(data); j++ {
if data[j] == '\\' && len(data) > j+1 && (data[j+1] == '\n' || data[j+1] == '\r') {
if data[j+1] == '\r' && len(data) > j+2 && data[j+2] == '\n' {
j++
}
j++
} else {
data[i] = data[j]
i++
}
}
data = data[:i]
break
}
}
} else if tt == css.URLToken {
parse.ToLower(data[:3])
if len(data) > 10 {
uri := data[4 : len(data)-1]
delim := byte('"')
if uri[0] == '\'' || uri[0] == '"' {
delim = uri[0]
uri = uri[1 : len(uri)-1]
}
uri = minify.DataURI(c.m, uri)
if css.IsURLUnquoted(uri) {
data = append(append([]byte("url("), uri...), ')')
} else {
data = append(append(append([]byte("url("), delim), uri...), delim, ')')
}
}
}
return tt, data
}
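The HashToken branch above shortens six-digit hex colors to three digits when each channel repeats its digit; the same transform appears again after the rgb()/hsl() conversions in minifyFunction. In isolation (shortenHex is an illustrative helper, not the library API):

```go
package main

import "fmt"

// shortenHex collapses "#rrggbb" to "#rgb" when every channel repeats
// its hex digit, as the HashToken branch does in place on the token data.
func shortenHex(h string) string {
	if len(h) == 7 && h[1] == h[2] && h[3] == h[4] && h[5] == h[6] {
		return string([]byte{'#', h[1], h[3], h[5]})
	}
	return h
}

func main() {
	fmt.Println(shortenHex("#ff0000")) // #f00
	fmt.Println(shortenHex("#ff4040")) // #ff4040 (unchanged)
}
```

Note that the ShortenColorHex table is consulted first, which is why the test suite expects `color:#FF0000` to become `color:red` rather than `color:#f00`.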

vendor/github.com/tdewolff/minify/css/css_test.go generated vendored Normal file

@@ -0,0 +1,234 @@
package css // import "github.com/tdewolff/minify/css"
import (
"bytes"
"fmt"
"os"
"testing"
"github.com/tdewolff/minify"
"github.com/tdewolff/test"
)
func TestCSS(t *testing.T) {
cssTests := []struct {
css string
expected string
}{
{"/*comment*/", ""},
{"/*! bang comment */", "/*!bang comment*/"},
{"i{}/*! bang comment */", "i{}/*!bang comment*/"},
{"i { key: value; key2: value; }", "i{key:value;key2:value}"},
{".cla .ss > #id { x:y; }", ".cla .ss>#id{x:y}"},
{".cla[id ^= L] { x:y; }", ".cla[id^=L]{x:y}"},
{"area:focus { outline : 0;}", "area:focus{outline:0}"},
{"@import 'file';", "@import 'file'"},
{"@font-face { x:y; }", "@font-face{x:y}"},
{"input[type=\"radio\"]{x:y}", "input[type=radio]{x:y}"},
{"DIV{margin:1em}", "div{margin:1em}"},
{".CLASS{margin:1em}", ".CLASS{margin:1em}"},
{"@MEDIA all{}", "@media all{}"},
{"@media only screen and (max-width : 800px){}", "@media only screen and (max-width:800px){}"},
{"@media (-webkit-min-device-pixel-ratio:1.5),(min-resolution:1.5dppx){}", "@media(-webkit-min-device-pixel-ratio:1.5),(min-resolution:1.5dppx){}"},
{"[class^=icon-] i[class^=icon-],i[class*=\" icon-\"]{x:y}", "[class^=icon-] i[class^=icon-],i[class*=\" icon-\"]{x:y}"},
{"html{line-height:1;}html{line-height:1;}", "html{line-height:1}html{line-height:1}"},
{"a { b: 1", "a{b:1}"},
{":root { --custom-variable:0px; }", ":root{--custom-variable:0px}"},
// case sensitivity
{"@counter-style Ident{}", "@counter-style Ident{}"},
// coverage
{"a, b + c { x:y; }", "a,b+c{x:y}"},
// bad declaration
{".clearfix { *zoom: 1px; }", ".clearfix{*zoom:1px}"},
{".clearfix { *zoom: 1px }", ".clearfix{*zoom:1px}"},
{".clearfix { color:green; *zoom: 1px; color:red; }", ".clearfix{color:green;*zoom:1px;color:red}"},
// go-fuzz
{"input[type=\"\x00\"] { a: b\n}.a{}", "input[type=\"\x00\"]{a:b}.a{}"},
{"a{a:)'''", "a{a:)'''}"},
}
m := minify.New()
for _, tt := range cssTests {
t.Run(tt.css, func(t *testing.T) {
r := bytes.NewBufferString(tt.css)
w := &bytes.Buffer{}
err := Minify(m, w, r, nil)
test.Minify(t, tt.css, err, w.String(), tt.expected)
})
}
}
func TestCSSInline(t *testing.T) {
cssTests := []struct {
css string
expected string
}{
{"/*comment*/", ""},
{"/*! bang comment */", ""},
{";", ""},
{"empty:", "empty:"},
{"key: value;", "key:value"},
{"margin: 0 1; padding: 0 1;", "margin:0 1;padding:0 1"},
{"color: #FF0000;", "color:red"},
{"color: #000000;", "color:#000"},
{"color: black;", "color:#000"},
{"color: rgb(255,255,255);", "color:#fff"},
{"color: rgb(100%,100%,100%);", "color:#fff"},
{"color: rgba(255,0,0,1);", "color:red"},
{"color: rgba(255,0,0,2);", "color:red"},
{"color: rgba(255,0,0,0.5);", "color:rgba(255,0,0,.5)"},
{"color: rgba(255,0,0,-1);", "color:transparent"},
{"color: rgba(0%,15%,25%,0.2);", "color:rgba(0%,15%,25%,.2)"},
{"color: rgba(0,0,0,0.5);", "color:rgba(0,0,0,.5)"},
{"color: hsla(5,0%,10%,0.75);", "color:hsla(5,0%,10%,.75)"},
{"color: hsl(0,100%,50%);", "color:red"},
{"color: hsla(1,2%,3%,1);", "color:#080807"},
{"color: hsla(1,2%,3%,0);", "color:transparent"},
{"color: hsl(48,100%,50%);", "color:#fc0"},
{"font-weight: bold; font-weight: normal;", "font-weight:700;font-weight:400"},
{"font: bold \"Times new Roman\",\"Sans-Serif\";", "font:700 times new roman,\"sans-serif\""},
{"outline: none;", "outline:0"},
{"outline: none !important;", "outline:0!important"},
{"border-left: none;", "border-left:0"},
{"margin: 1 1 1 1;", "margin:1"},
{"margin: 1 2 1 2;", "margin:1 2"},
{"margin: 1 2 3 2;", "margin:1 2 3"},
{"margin: 1 2 3 4;", "margin:1 2 3 4"},
{"margin: 1 1 1 a;", "margin:1 1 1 a"},
{"margin: 1 1 1 1 !important;", "margin:1!important"},
{"padding:.2em .4em .2em", "padding:.2em .4em"},
{"margin: 0em;", "margin:0"},
{"font-family:'Arial', 'Times New Roman';", "font-family:arial,times new roman"},
{"background:url('http://domain.com/image.png');", "background:url(http://domain.com/image.png)"},
{"filter: progid : DXImageTransform.Microsoft.BasicImage(rotation=1);", "filter:progid:DXImageTransform.Microsoft.BasicImage(rotation=1)"},
{"filter: progid:DXImageTransform.Microsoft.Alpha(Opacity=0);", "filter:alpha(opacity=0)"},
{"content: \"a\\\nb\";", "content:\"ab\""},
{"content: \"a\\\r\nb\\\r\nc\";", "content:\"abc\""},
{"content: \"\";", "content:\"\""},
{"font:27px/13px arial,sans-serif", "font:27px/13px arial,sans-serif"},
{"text-decoration: none !important", "text-decoration:none!important"},
{"color:#fff", "color:#fff"},
{"border:2px rgb(255,255,255);", "border:2px #fff"},
{"margin:-1px", "margin:-1px"},
{"margin:+1px", "margin:1px"},
{"margin:0.5em", "margin:.5em"},
{"margin:-0.5em", "margin:-.5em"},
{"margin:05em", "margin:5em"},
{"margin:.50em", "margin:.5em"},
{"margin:5.0em", "margin:5em"},
{"margin:5000em", "margin:5e3em"},
{"color:#c0c0c0", "color:silver"},
{"-ms-filter: \"progid:DXImageTransform.Microsoft.Alpha(Opacity=80)\";", "-ms-filter:\"alpha(opacity=80)\""},
{"filter: progid:DXImageTransform.Microsoft.Alpha(Opacity = 80);", "filter:alpha(opacity=80)"},
{"MARGIN:1EM", "margin:1em"},
//{"color:CYAN", "color:cyan"}, // TODO
{"width:attr(Name em)", "width:attr(Name em)"},
{"content:CounterName", "content:CounterName"},
{"background:URL(x.PNG);", "background:url(x.PNG)"},
{"background:url(/*nocomment*/)", "background:url(/*nocomment*/)"},
{"background:url(data:,text)", "background:url(data:,text)"},
{"background:url('data:text/xml; version = 2.0,content')", "background:url(data:text/xml;version=2.0,content)"},
{"background:url('data:\\'\",text')", "background:url('data:\\'\",text')"},
{"margin:0 0 18px 0;", "margin:0 0 18px"},
{"background:none", "background:0 0"},
{"background:none 1 1", "background:none 1 1"},
{"z-index:1000", "z-index:1000"},
{"any:0deg 0s 0ms 0dpi 0dpcm 0dppx 0hz 0khz", "any:0 0s 0ms 0dpi 0dpcm 0dppx 0hz 0khz"},
{"--custom-variable:0px;", "--custom-variable:0px"},
{"--foo: if(x > 5) this.width = 10", "--foo: if(x > 5) this.width = 10"},
{"--foo: ;", "--foo: "},
// case sensitivity
{"animation:Ident", "animation:Ident"},
{"animation-name:Ident", "animation-name:Ident"},
// coverage
{"margin: 1 1;", "margin:1"},
{"margin: 1 2;", "margin:1 2"},
{"margin: 1 1 1;", "margin:1"},
{"margin: 1 2 1;", "margin:1 2"},
{"margin: 1 2 3;", "margin:1 2 3"},
{"margin: 0%;", "margin:0"},
{"color: rgb(255,64,64);", "color:#ff4040"},
{"color: rgb(256,-34,2342435);", "color:#f0f"},
{"color: rgb(120%,-45%,234234234%);", "color:#f0f"},
{"color: rgb(0, 1, ident);", "color:rgb(0,1,ident)"},
{"color: rgb(ident);", "color:rgb(ident)"},
{"margin: rgb(ident);", "margin:rgb(ident)"},
{"filter: progid:b().c.Alpha(rgba(x));", "filter:progid:b().c.Alpha(rgba(x))"},
// go-fuzz
{"FONT-FAMILY: ru\"", "font-family:ru\""},
}
m := minify.New()
params := map[string]string{"inline": "1"}
for _, tt := range cssTests {
t.Run(tt.css, func(t *testing.T) {
r := bytes.NewBufferString(tt.css)
w := &bytes.Buffer{}
err := Minify(m, w, r, params)
test.Minify(t, tt.css, err, w.String(), tt.expected)
})
}
}
func TestReaderErrors(t *testing.T) {
r := test.NewErrorReader(0)
w := &bytes.Buffer{}
m := minify.New()
err := Minify(m, w, r, nil)
test.T(t, err, test.ErrPlain, "return error at first read")
}
func TestWriterErrors(t *testing.T) {
errorTests := []struct {
css string
n []int
}{
{`@import 'file'`, []int{0, 2}},
{`@media all{}`, []int{0, 2, 3, 4}},
{`a[id^="L"]{margin:2in!important;color:red}`, []int{0, 4, 6, 7, 8, 9, 10, 11}},
{`a{color:rgb(255,0,0)}`, []int{4}},
{`a{color:rgb(255,255,255)}`, []int{4}},
{`a{color:hsl(0,100%,50%)}`, []int{4}},
{`a{color:hsl(360,100%,100%)}`, []int{4}},
{`a{color:f(arg)}`, []int{4}},
{`<!--`, []int{0}},
{`/*!comment*/`, []int{0, 1, 2}},
{`a{--var:val}`, []int{2, 3, 4}},
{`a{*color:0}`, []int{2, 3}},
{`a{color:0;baddecl 5}`, []int{5}},
}
m := minify.New()
for _, tt := range errorTests {
for _, n := range tt.n {
t.Run(fmt.Sprint(tt.css, " ", tt.n), func(t *testing.T) {
r := bytes.NewBufferString(tt.css)
w := test.NewErrorWriter(n)
err := Minify(m, w, r, nil)
test.T(t, err, test.ErrPlain)
})
}
}
}
////////////////////////////////////////////////////////////////
func ExampleMinify() {
m := minify.New()
m.AddFunc("text/css", Minify)
if err := m.Minify("text/css", os.Stdout, os.Stdin); err != nil {
panic(err)
}
}
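Several of the cases above exercise the margin/padding shorthand collapsing in minifyDeclaration (`margin: 1 2 1 2` → `margin:1 2`). The value-equality rules there reduce to a simple cascade that can be sketched on plain strings (collapseMargin is illustrative, not the library API):

```go
package main

import "fmt"

// collapseMargin applies the CSS shorthand rules used above:
// drop the 4th value when it equals the 2nd, then the 3rd when it
// equals the 1st, then the 2nd when it equals the 1st.
func collapseMargin(v []string) []string {
	if len(v) == 4 && v[1] == v[3] {
		v = v[:3]
	}
	if len(v) == 3 && v[0] == v[2] {
		v = v[:2]
	}
	if len(v) == 2 && v[0] == v[1] {
		v = v[:1]
	}
	return v
}

func main() {
	fmt.Println(collapseMargin([]string{"1", "2", "1", "2"})) // [1 2]
	fmt.Println(collapseMargin([]string{"1", "2", "3", "2"})) // [1 2 3]
	fmt.Println(collapseMargin([]string{"1", "1", "1", "1"})) // [1]
}
```

The real code interleaves whitespace tokens between the values (hence the i += 2 indexing and the n==2/3/4 comparisons), but the outcome is the same cascade.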

vendor/github.com/tdewolff/minify/css/table.go generated vendored Normal file

@@ -0,0 +1,153 @@
package css
import "github.com/tdewolff/parse/css"
var requiredDimension = map[string]bool{
"s": true,
"ms": true,
"dpi": true,
"dpcm": true,
"dppx": true,
"hz": true,
"khz": true,
}
// Uses http://www.w3.org/TR/2010/PR-css3-color-20101028/ for colors
// ShortenColorHex maps a color hexcode to its shorter name
var ShortenColorHex = map[string][]byte{
"#000080": []byte("navy"),
"#008000": []byte("green"),
"#008080": []byte("teal"),
"#4b0082": []byte("indigo"),
"#800000": []byte("maroon"),
"#800080": []byte("purple"),
"#808000": []byte("olive"),
"#808080": []byte("gray"),
"#a0522d": []byte("sienna"),
"#a52a2a": []byte("brown"),
"#c0c0c0": []byte("silver"),
"#cd853f": []byte("peru"),
"#d2b48c": []byte("tan"),
"#da70d6": []byte("orchid"),
"#dda0dd": []byte("plum"),
"#ee82ee": []byte("violet"),
"#f0e68c": []byte("khaki"),
"#f0ffff": []byte("azure"),
"#f5deb3": []byte("wheat"),
"#f5f5dc": []byte("beige"),
"#fa8072": []byte("salmon"),
"#faf0e6": []byte("linen"),
"#ff6347": []byte("tomato"),
"#ff7f50": []byte("coral"),
"#ffa500": []byte("orange"),
"#ffc0cb": []byte("pink"),
"#ffd700": []byte("gold"),
"#ffe4c4": []byte("bisque"),
"#fffafa": []byte("snow"),
"#fffff0": []byte("ivory"),
"#ff0000": []byte("red"),
"#f00": []byte("red"),
}
// ShortenColorName maps a color name to its shorter hexcode
var ShortenColorName = map[css.Hash][]byte{
css.Black: []byte("#000"),
css.Darkblue: []byte("#00008b"),
css.Mediumblue: []byte("#0000cd"),
css.Darkgreen: []byte("#006400"),
css.Darkcyan: []byte("#008b8b"),
css.Deepskyblue: []byte("#00bfff"),
css.Darkturquoise: []byte("#00ced1"),
css.Mediumspringgreen: []byte("#00fa9a"),
css.Springgreen: []byte("#00ff7f"),
css.Midnightblue: []byte("#191970"),
css.Dodgerblue: []byte("#1e90ff"),
css.Lightseagreen: []byte("#20b2aa"),
css.Forestgreen: []byte("#228b22"),
css.Seagreen: []byte("#2e8b57"),
css.Darkslategray: []byte("#2f4f4f"),
css.Limegreen: []byte("#32cd32"),
css.Mediumseagreen: []byte("#3cb371"),
css.Turquoise: []byte("#40e0d0"),
css.Royalblue: []byte("#4169e1"),
css.Steelblue: []byte("#4682b4"),
css.Darkslateblue: []byte("#483d8b"),
css.Mediumturquoise: []byte("#48d1cc"),
css.Darkolivegreen: []byte("#556b2f"),
css.Cadetblue: []byte("#5f9ea0"),
css.Cornflowerblue: []byte("#6495ed"),
css.Mediumaquamarine: []byte("#66cdaa"),
css.Slateblue: []byte("#6a5acd"),
css.Olivedrab: []byte("#6b8e23"),
css.Slategray: []byte("#708090"),
css.Lightslateblue: []byte("#789"),
css.Mediumslateblue: []byte("#7b68ee"),
css.Lawngreen: []byte("#7cfc00"),
css.Chartreuse: []byte("#7fff00"),
css.Aquamarine: []byte("#7fffd4"),
css.Lightskyblue: []byte("#87cefa"),
css.Blueviolet: []byte("#8a2be2"),
css.Darkmagenta: []byte("#8b008b"),
css.Saddlebrown: []byte("#8b4513"),
css.Darkseagreen: []byte("#8fbc8f"),
css.Lightgreen: []byte("#90ee90"),
css.Mediumpurple: []byte("#9370db"),
css.Darkviolet: []byte("#9400d3"),
css.Palegreen: []byte("#98fb98"),
css.Darkorchid: []byte("#9932cc"),
css.Yellowgreen: []byte("#9acd32"),
css.Darkgray: []byte("#a9a9a9"),
css.Lightblue: []byte("#add8e6"),
css.Greenyellow: []byte("#adff2f"),
css.Paleturquoise: []byte("#afeeee"),
css.Lightsteelblue: []byte("#b0c4de"),
css.Powderblue: []byte("#b0e0e6"),
css.Firebrick: []byte("#b22222"),
css.Darkgoldenrod: []byte("#b8860b"),
css.Mediumorchid: []byte("#ba55d3"),
css.Rosybrown: []byte("#bc8f8f"),
css.Darkkhaki: []byte("#bdb76b"),
css.Mediumvioletred: []byte("#c71585"),
css.Indianred: []byte("#cd5c5c"),
css.Chocolate: []byte("#d2691e"),
css.Lightgray: []byte("#d3d3d3"),
css.Goldenrod: []byte("#daa520"),
css.Palevioletred: []byte("#db7093"),
css.Gainsboro: []byte("#dcdcdc"),
css.Burlywood: []byte("#deb887"),
css.Lightcyan: []byte("#e0ffff"),
css.Lavender: []byte("#e6e6fa"),
css.Darksalmon: []byte("#e9967a"),
css.Palegoldenrod: []byte("#eee8aa"),
css.Lightcoral: []byte("#f08080"),
css.Aliceblue: []byte("#f0f8ff"),
css.Honeydew: []byte("#f0fff0"),
css.Sandybrown: []byte("#f4a460"),
css.Whitesmoke: []byte("#f5f5f5"),
css.Mintcream: []byte("#f5fffa"),
css.Ghostwhite: []byte("#f8f8ff"),
css.Antiquewhite: []byte("#faebd7"),
css.Lightgoldenrodyellow: []byte("#fafad2"),
css.Fuchsia: []byte("#f0f"),
css.Magenta: []byte("#f0f"),
css.Deeppink: []byte("#ff1493"),
css.Orangered: []byte("#ff4500"),
css.Darkorange: []byte("#ff8c00"),
css.Lightsalmon: []byte("#ffa07a"),
css.Lightpink: []byte("#ffb6c1"),
css.Peachpuff: []byte("#ffdab9"),
css.Navajowhite: []byte("#ffdead"),
css.Moccasin: []byte("#ffe4b5"),
css.Mistyrose: []byte("#ffe4e1"),
css.Blanchedalmond: []byte("#ffebcd"),
css.Papayawhip: []byte("#ffefd5"),
css.Lavenderblush: []byte("#fff0f5"),
css.Seashell: []byte("#fff5ee"),
css.Cornsilk: []byte("#fff8dc"),
css.Lemonchiffon: []byte("#fffacd"),
css.Floralwhite: []byte("#fffaf0"),
css.Yellow: []byte("#ff0"),
css.Lightyellow: []byte("#ffffe0"),
css.White: []byte("#fff"),
}
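The two tables above work as a pair: a color token is rewritten to whichever known form is shorter. A self-contained sketch of that lookup, using tiny stand-in maps rather than the full tables (not part of the library):

```go
package main

import "fmt"

// shorten returns the shortest known equivalent for a CSS color token,
// mirroring how ShortenColorHex/ShortenColorName-style tables are used.
func shorten(color string, hexToName, nameToHex map[string]string) string {
	if n, ok := hexToName[color]; ok && len(n) < len(color) {
		return n // e.g. #ff0000 -> red
	}
	if h, ok := nameToHex[color]; ok && len(h) < len(color) {
		return h // e.g. magenta -> #f0f
	}
	return color // unknown or already shortest
}

func main() {
	hexToName := map[string]string{"#ff0000": "red"}
	nameToHex := map[string]string{"magenta": "#f0f"}
	fmt.Println(shorten("#ff0000", hexToName, nameToHex))
	fmt.Println(shorten("magenta", hexToName, nameToHex))
	fmt.Println(shorten("#abc", hexToName, nameToHex))
}
```

The length comparison is what makes both directions safe: a replacement is only taken when it is strictly shorter than the input.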

131
vendor/github.com/tdewolff/minify/html/buffer.go generated vendored Normal file
View file

@@ -0,0 +1,131 @@
package html // import "github.com/tdewolff/minify/html"
import (
"github.com/tdewolff/parse"
"github.com/tdewolff/parse/html"
)
// Token is a single token with its attribute value (if any) and a hash of its data.
type Token struct {
html.TokenType
Hash html.Hash
Data []byte
Text []byte
AttrVal []byte
Traits traits
}
// TokenBuffer is a buffer that allows for token look-ahead.
type TokenBuffer struct {
l *html.Lexer
buf []Token
pos int
attrBuffer []*Token
}
// NewTokenBuffer returns a new TokenBuffer.
func NewTokenBuffer(l *html.Lexer) *TokenBuffer {
return &TokenBuffer{
l: l,
buf: make([]Token, 0, 8),
}
}
func (z *TokenBuffer) read(t *Token) {
t.TokenType, t.Data = z.l.Next()
t.Text = z.l.Text()
if t.TokenType == html.AttributeToken {
t.AttrVal = z.l.AttrVal()
if len(t.AttrVal) > 1 && (t.AttrVal[0] == '"' || t.AttrVal[0] == '\'') {
t.AttrVal = parse.TrimWhitespace(t.AttrVal[1 : len(t.AttrVal)-1]) // quotes will be re-added in the attribute loop if necessary
}
t.Hash = html.ToHash(t.Text)
t.Traits = attrMap[t.Hash]
} else if t.TokenType == html.StartTagToken || t.TokenType == html.EndTagToken {
t.AttrVal = nil
t.Hash = html.ToHash(t.Text)
t.Traits = tagMap[t.Hash]
} else {
t.AttrVal = nil
t.Hash = 0
t.Traits = 0
}
}
// Peek returns the ith element and possibly does an allocation.
// Peeking past an error will panic.
func (z *TokenBuffer) Peek(pos int) *Token {
pos += z.pos
if pos >= len(z.buf) {
if len(z.buf) > 0 && z.buf[len(z.buf)-1].TokenType == html.ErrorToken {
return &z.buf[len(z.buf)-1]
}
c := cap(z.buf)
d := len(z.buf) - z.pos
p := pos - z.pos + 1 // required peek length
var buf []Token
if 2*p > c {
buf = make([]Token, 0, 2*c+p)
} else {
buf = z.buf
}
copy(buf[:d], z.buf[z.pos:])
buf = buf[:p]
pos -= z.pos
for i := d; i < p; i++ {
z.read(&buf[i])
if buf[i].TokenType == html.ErrorToken {
buf = buf[:i+1]
pos = i
break
}
}
z.pos, z.buf = 0, buf
}
return &z.buf[pos]
}
// Shift returns the first element and advances position.
func (z *TokenBuffer) Shift() *Token {
if z.pos >= len(z.buf) {
t := &z.buf[:1][0]
z.read(t)
return t
}
t := &z.buf[z.pos]
z.pos++
return t
}
// Attributes extracts the given attribute hashes from a tag.
// It returns, in the same order, pointers to the requested token data, or nil when absent.
func (z *TokenBuffer) Attributes(hashes ...html.Hash) []*Token {
n := 0
for {
if t := z.Peek(n); t.TokenType != html.AttributeToken {
break
}
n++
}
if len(hashes) > cap(z.attrBuffer) {
z.attrBuffer = make([]*Token, len(hashes))
} else {
z.attrBuffer = z.attrBuffer[:len(hashes)]
for i := range z.attrBuffer {
z.attrBuffer[i] = nil
}
}
for i := z.pos; i < z.pos+n; i++ {
attr := &z.buf[i]
for j, hash := range hashes {
if hash == attr.Hash {
z.attrBuffer[j] = attr
}
}
}
return z.attrBuffer
}
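The Peek/Shift contract above can be shown in isolation. The sketch below is a simplified stand-in (plain strings instead of lexer tokens, "EOF" standing in for html.ErrorToken) and not part of the library:

```go
package main

import "fmt"

// tokenBuffer is a minimal look-ahead buffer mirroring the Peek/Shift
// contract of TokenBuffer; strings stand in for lexer tokens.
type tokenBuffer struct {
	src []string // stand-in for the lexer's token stream
	buf []string // tokens read ahead but not yet consumed
	pos int      // index of the next token to Shift
}

// read pulls the next token from the stream; past the end it keeps
// returning "EOF", like the lexer's error token.
func (z *tokenBuffer) read() string {
	if len(z.src) == 0 {
		return "EOF"
	}
	t := z.src[0]
	z.src = z.src[1:]
	return t
}

// Peek returns the i-th upcoming token without consuming it.
func (z *tokenBuffer) Peek(i int) string {
	for z.pos+i >= len(z.buf) {
		if n := len(z.buf); n > 0 && z.buf[n-1] == "EOF" {
			return z.buf[n-1] // peeking past the error token repeats it
		}
		z.buf = append(z.buf, z.read())
	}
	return z.buf[z.pos+i]
}

// Shift consumes and returns the next token.
func (z *tokenBuffer) Shift() string {
	t := z.Peek(0)
	z.pos++
	return t
}

func main() {
	z := &tokenBuffer{src: []string{"<p>", "text", "</p>"}}
	fmt.Println(z.Peek(1)) // look ahead without consuming
	fmt.Println(z.Shift()) // then consume in order
	fmt.Println(z.Shift())
}
```

As in the real TokenBuffer, Peek fills the buffer on demand and never moves the position; only Shift advances it, so arbitrary look-ahead stays cheap between shifts.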

37
vendor/github.com/tdewolff/minify/html/buffer_test.go generated vendored Normal file
View file

@@ -0,0 +1,37 @@
package html // import "github.com/tdewolff/minify/html"
import (
"bytes"
"testing"
"github.com/tdewolff/parse/html"
"github.com/tdewolff/test"
)
func TestBuffer(t *testing.T) {
// 0 12 3 45 6 7 8 9 0
s := `<p><a href="//url">text</a>text<!--comment--></p>`
z := NewTokenBuffer(html.NewLexer(bytes.NewBufferString(s)))
tok := z.Shift()
test.That(t, tok.Hash == html.P, "first token is <p>")
test.That(t, z.pos == 0, "shift first token and restore position")
test.That(t, len(z.buf) == 0, "shift first token and restore length")
test.That(t, z.Peek(2).Hash == html.Href, "third token is href")
test.That(t, z.pos == 0, "don't change position after peeking")
test.That(t, len(z.buf) == 3, "three tokens after peeking")
test.That(t, z.Peek(8).Hash == html.P, "ninth token is <p>")
test.That(t, z.pos == 0, "don't change position after peeking")
test.That(t, len(z.buf) == 9, "nine tokens after peeking")
test.That(t, z.Peek(9).TokenType == html.ErrorToken, "tenth token is an error")
test.That(t, z.Peek(9) == z.Peek(10), "tenth and eleventh tokens are EOF")
test.That(t, len(z.buf) == 10, "ten tokens after peeking")
_ = z.Shift()
tok = z.Shift()
test.That(t, tok.Hash == html.A, "third token is <a>")
test.That(t, z.pos == 2, "position advances after shifting")
}

463
vendor/github.com/tdewolff/minify/html/html.go generated vendored Normal file
View file

@@ -0,0 +1,463 @@
// Package html minifies HTML5 following the specifications at http://www.w3.org/TR/html5/syntax.html.
package html // import "github.com/tdewolff/minify/html"
import (
"bytes"
"io"
"github.com/tdewolff/minify"
"github.com/tdewolff/parse"
"github.com/tdewolff/parse/buffer"
"github.com/tdewolff/parse/html"
)
var (
gtBytes = []byte(">")
isBytes = []byte("=")
spaceBytes = []byte(" ")
doctypeBytes = []byte("<!doctype html>")
jsMimeBytes = []byte("text/javascript")
cssMimeBytes = []byte("text/css")
htmlMimeBytes = []byte("text/html")
svgMimeBytes = []byte("image/svg+xml")
mathMimeBytes = []byte("application/mathml+xml")
dataSchemeBytes = []byte("data:")
jsSchemeBytes = []byte("javascript:")
httpBytes = []byte("http")
)
////////////////////////////////////////////////////////////////
// DefaultMinifier is the default minifier.
var DefaultMinifier = &Minifier{}
// Minifier is an HTML minifier.
type Minifier struct {
KeepConditionalComments bool
KeepDefaultAttrVals bool
KeepDocumentTags bool
KeepEndTags bool
KeepWhitespace bool
}
// Minify minifies HTML data; it reads from r and writes to w.
func Minify(m *minify.M, w io.Writer, r io.Reader, params map[string]string) error {
return DefaultMinifier.Minify(m, w, r, params)
}
// Minify minifies HTML data; it reads from r and writes to w.
func (o *Minifier) Minify(m *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
var rawTagHash html.Hash
var rawTagMediatype []byte
omitSpace := true // if true the next leading space is omitted
inPre := false
defaultScriptType := jsMimeBytes
defaultScriptParams := map[string]string(nil)
defaultStyleType := cssMimeBytes
defaultStyleParams := map[string]string(nil)
defaultInlineStyleParams := map[string]string{"inline": "1"}
attrMinifyBuffer := buffer.NewWriter(make([]byte, 0, 64))
attrByteBuffer := make([]byte, 0, 64)
l := html.NewLexer(r)
defer l.Restore()
tb := NewTokenBuffer(l)
for {
t := *tb.Shift()
SWITCH:
switch t.TokenType {
case html.ErrorToken:
if l.Err() == io.EOF {
return nil
}
return l.Err()
case html.DoctypeToken:
if _, err := w.Write(doctypeBytes); err != nil {
return err
}
case html.CommentToken:
if o.KeepConditionalComments && len(t.Text) > 6 && (bytes.HasPrefix(t.Text, []byte("[if ")) || bytes.Equal(t.Text, []byte("[endif]"))) {
// [if ...] is always 7 or more characters, [endif] is only encountered for downlevel-revealed
// see https://msdn.microsoft.com/en-us/library/ms537512(v=vs.85).aspx#syntax
if bytes.HasPrefix(t.Data, []byte("<!--[if ")) { // downlevel-hidden
begin := bytes.IndexByte(t.Data, '>') + 1
end := len(t.Data) - len("<![endif]-->")
if _, err := w.Write(t.Data[:begin]); err != nil {
return err
}
if err := o.Minify(m, w, buffer.NewReader(t.Data[begin:end]), nil); err != nil {
return err
}
if _, err := w.Write(t.Data[end:]); err != nil {
return err
}
} else if _, err := w.Write(t.Data); err != nil { // downlevel-revealed
return err
}
}
case html.SvgToken:
if err := m.MinifyMimetype(svgMimeBytes, w, buffer.NewReader(t.Data), nil); err != nil {
if err != minify.ErrNotExist {
return err
} else if _, err := w.Write(t.Data); err != nil {
return err
}
}
case html.MathToken:
if err := m.MinifyMimetype(mathMimeBytes, w, buffer.NewReader(t.Data), nil); err != nil {
if err != minify.ErrNotExist {
return err
} else if _, err := w.Write(t.Data); err != nil {
return err
}
}
case html.TextToken:
// CSS and JS minifiers for inline code
if rawTagHash != 0 {
if rawTagHash == html.Style || rawTagHash == html.Script || rawTagHash == html.Iframe {
var mimetype []byte
var params map[string]string
if rawTagHash == html.Iframe {
mimetype = htmlMimeBytes
} else if len(rawTagMediatype) > 0 {
mimetype, params = parse.Mediatype(rawTagMediatype)
} else if rawTagHash == html.Script {
mimetype = defaultScriptType
params = defaultScriptParams
} else if rawTagHash == html.Style {
mimetype = defaultStyleType
params = defaultStyleParams
}
if err := m.MinifyMimetype(mimetype, w, buffer.NewReader(t.Data), params); err != nil {
if err != minify.ErrNotExist {
return err
} else if _, err := w.Write(t.Data); err != nil {
return err
}
}
} else if _, err := w.Write(t.Data); err != nil {
return err
}
} else if inPre {
if _, err := w.Write(t.Data); err != nil {
return err
}
} else {
t.Data = parse.ReplaceMultipleWhitespace(t.Data)
// whitespace removal; trim left
if omitSpace && (t.Data[0] == ' ' || t.Data[0] == '\n') {
t.Data = t.Data[1:]
}
// whitespace removal; trim right
omitSpace = false
if len(t.Data) == 0 {
omitSpace = true
} else if t.Data[len(t.Data)-1] == ' ' || t.Data[len(t.Data)-1] == '\n' {
omitSpace = true
i := 0
for {
next := tb.Peek(i)
// trim if EOF, text token with leading whitespace or block token
if next.TokenType == html.ErrorToken {
t.Data = t.Data[:len(t.Data)-1]
omitSpace = false
break
} else if next.TokenType == html.TextToken {
// this only happens when a comment, doctype or phrasing end tag (only for !o.KeepWhitespace) was in between
// remove if the text token starts with a whitespace
if len(next.Data) > 0 && parse.IsWhitespace(next.Data[0]) {
t.Data = t.Data[:len(t.Data)-1]
omitSpace = false
}
break
} else if next.TokenType == html.StartTagToken || next.TokenType == html.EndTagToken {
if o.KeepWhitespace {
break
}
// remove when followed up by a block tag
if next.Traits&nonPhrasingTag != 0 {
t.Data = t.Data[:len(t.Data)-1]
omitSpace = false
break
} else if next.TokenType == html.StartTagToken {
break
}
}
i++
}
}
if _, err := w.Write(t.Data); err != nil {
return err
}
}
case html.StartTagToken, html.EndTagToken:
rawTagHash = 0
hasAttributes := false
if t.TokenType == html.StartTagToken {
if next := tb.Peek(0); next.TokenType == html.AttributeToken {
hasAttributes = true
}
if t.Traits&rawTag != 0 {
// ignore empty script and style tags
if !hasAttributes && (t.Hash == html.Script || t.Hash == html.Style) {
if next := tb.Peek(1); next.TokenType == html.EndTagToken {
tb.Shift()
tb.Shift()
break
}
}
rawTagHash = t.Hash
rawTagMediatype = nil
}
} else if t.Hash == html.Template {
omitSpace = true // EndTagToken
}
if t.Hash == html.Pre {
inPre = t.TokenType == html.StartTagToken
}
// remove superfluous tags, except for html, head and body tags when KeepDocumentTags is set
if !hasAttributes && (!o.KeepDocumentTags && (t.Hash == html.Html || t.Hash == html.Head || t.Hash == html.Body) || t.Hash == html.Colgroup) {
break
} else if t.TokenType == html.EndTagToken {
if !o.KeepEndTags {
if t.Hash == html.Thead || t.Hash == html.Tbody || t.Hash == html.Tfoot || t.Hash == html.Tr || t.Hash == html.Th || t.Hash == html.Td ||
t.Hash == html.Optgroup || t.Hash == html.Option || t.Hash == html.Dd || t.Hash == html.Dt ||
t.Hash == html.Li || t.Hash == html.Rb || t.Hash == html.Rt || t.Hash == html.Rtc || t.Hash == html.Rp {
break
} else if t.Hash == html.P {
i := 0
for {
next := tb.Peek(i)
i++
// continue if text token is empty or whitespace
if next.TokenType == html.TextToken && parse.IsAllWhitespace(next.Data) {
continue
}
if next.TokenType == html.ErrorToken || next.TokenType == html.EndTagToken && next.Traits&keepPTag == 0 || next.TokenType == html.StartTagToken && next.Traits&omitPTag != 0 {
break SWITCH // omit p end tag
}
break
}
}
}
if o.KeepWhitespace || t.Traits&objectTag != 0 {
omitSpace = false
} else if t.Traits&nonPhrasingTag != 0 {
omitSpace = true // omit spaces after block elements
}
if len(t.Data) > 3+len(t.Text) {
t.Data[2+len(t.Text)] = '>'
t.Data = t.Data[:3+len(t.Text)]
}
if _, err := w.Write(t.Data); err != nil {
return err
}
break
}
if o.KeepWhitespace || t.Traits&objectTag != 0 {
omitSpace = false
} else if t.Traits&nonPhrasingTag != 0 {
omitSpace = true // omit spaces after block elements
}
if _, err := w.Write(t.Data); err != nil {
return err
}
if hasAttributes {
if t.Hash == html.Meta {
attrs := tb.Attributes(html.Content, html.Http_Equiv, html.Charset, html.Name)
if content := attrs[0]; content != nil {
if httpEquiv := attrs[1]; httpEquiv != nil {
content.AttrVal = minify.ContentType(content.AttrVal)
if charset := attrs[2]; charset == nil && parse.EqualFold(httpEquiv.AttrVal, []byte("content-type")) && bytes.Equal(content.AttrVal, []byte("text/html;charset=utf-8")) {
httpEquiv.Text = nil
content.Text = []byte("charset")
content.Hash = html.Charset
content.AttrVal = []byte("utf-8")
} else if parse.EqualFold(httpEquiv.AttrVal, []byte("content-style-type")) {
defaultStyleType, defaultStyleParams = parse.Mediatype(content.AttrVal)
if defaultStyleParams != nil {
defaultInlineStyleParams = defaultStyleParams
defaultInlineStyleParams["inline"] = "1"
} else {
defaultInlineStyleParams = map[string]string{"inline": "1"}
}
} else if parse.EqualFold(httpEquiv.AttrVal, []byte("content-script-type")) {
defaultScriptType, defaultScriptParams = parse.Mediatype(content.AttrVal)
}
}
if name := attrs[3]; name != nil {
if parse.EqualFold(name.AttrVal, []byte("keywords")) {
content.AttrVal = bytes.Replace(content.AttrVal, []byte(", "), []byte(","), -1)
} else if parse.EqualFold(name.AttrVal, []byte("viewport")) {
content.AttrVal = bytes.Replace(content.AttrVal, []byte(" "), []byte(""), -1)
for i := 0; i < len(content.AttrVal); i++ {
if content.AttrVal[i] == '=' && i+2 < len(content.AttrVal) {
i++
if n := parse.Number(content.AttrVal[i:]); n > 0 {
minNum := minify.Number(content.AttrVal[i:i+n], -1)
if len(minNum) < n {
copy(content.AttrVal[i:i+len(minNum)], minNum)
copy(content.AttrVal[i+len(minNum):], content.AttrVal[i+n:])
content.AttrVal = content.AttrVal[:len(content.AttrVal)+len(minNum)-n]
}
i += len(minNum)
}
i-- // compensate for the for-loop increment
}
}
}
}
}
} else if t.Hash == html.Script {
attrs := tb.Attributes(html.Src, html.Charset)
if attrs[0] != nil && attrs[1] != nil {
attrs[1].Text = nil
}
}
// write attributes
htmlEqualIdName := false
for {
attr := *tb.Shift()
if attr.TokenType != html.AttributeToken {
break
} else if attr.Text == nil {
continue // removed attribute
}
if t.Hash == html.A && (attr.Hash == html.Id || attr.Hash == html.Name) {
if attr.Hash == html.Id {
if name := tb.Attributes(html.Name)[0]; name != nil && bytes.Equal(attr.AttrVal, name.AttrVal) {
htmlEqualIdName = true
}
} else if htmlEqualIdName {
continue
} else if id := tb.Attributes(html.Id)[0]; id != nil && bytes.Equal(id.AttrVal, attr.AttrVal) {
continue
}
}
val := attr.AttrVal
if len(val) == 0 && (attr.Hash == html.Class ||
attr.Hash == html.Dir ||
attr.Hash == html.Id ||
attr.Hash == html.Lang ||
attr.Hash == html.Name ||
attr.Hash == html.Title ||
attr.Hash == html.Action && t.Hash == html.Form ||
attr.Hash == html.Value && t.Hash == html.Input) {
continue // omit empty attribute values
}
if attr.Traits&caselessAttr != 0 {
val = parse.ToLower(val)
if attr.Hash == html.Enctype || attr.Hash == html.Codetype || attr.Hash == html.Accept || attr.Hash == html.Type && (t.Hash == html.A || t.Hash == html.Link || t.Hash == html.Object || t.Hash == html.Param || t.Hash == html.Script || t.Hash == html.Style || t.Hash == html.Source) {
val = minify.ContentType(val)
}
}
if rawTagHash != 0 && attr.Hash == html.Type {
rawTagMediatype = parse.Copy(val)
}
// default attribute values can be omitted
if !o.KeepDefaultAttrVals && (attr.Hash == html.Type && (t.Hash == html.Script && bytes.Equal(val, []byte("text/javascript")) ||
t.Hash == html.Style && bytes.Equal(val, []byte("text/css")) ||
t.Hash == html.Link && bytes.Equal(val, []byte("text/css")) ||
t.Hash == html.Input && bytes.Equal(val, []byte("text")) ||
t.Hash == html.Button && bytes.Equal(val, []byte("submit"))) ||
attr.Hash == html.Language && t.Hash == html.Script ||
attr.Hash == html.Method && bytes.Equal(val, []byte("get")) ||
attr.Hash == html.Enctype && bytes.Equal(val, []byte("application/x-www-form-urlencoded")) ||
attr.Hash == html.Colspan && bytes.Equal(val, []byte("1")) ||
attr.Hash == html.Rowspan && bytes.Equal(val, []byte("1")) ||
attr.Hash == html.Shape && bytes.Equal(val, []byte("rect")) ||
attr.Hash == html.Span && bytes.Equal(val, []byte("1")) ||
attr.Hash == html.Clear && bytes.Equal(val, []byte("none")) ||
attr.Hash == html.Frameborder && bytes.Equal(val, []byte("1")) ||
attr.Hash == html.Scrolling && bytes.Equal(val, []byte("auto")) ||
attr.Hash == html.Valuetype && bytes.Equal(val, []byte("data")) ||
attr.Hash == html.Media && t.Hash == html.Style && bytes.Equal(val, []byte("all"))) {
continue
}
// CSS and JS minifiers for attribute inline code
if attr.Hash == html.Style {
attrMinifyBuffer.Reset()
if err := m.MinifyMimetype(defaultStyleType, attrMinifyBuffer, buffer.NewReader(val), defaultInlineStyleParams); err == nil {
val = attrMinifyBuffer.Bytes()
} else if err != minify.ErrNotExist {
return err
}
if len(val) == 0 {
continue
}
} else if len(attr.Text) > 2 && attr.Text[0] == 'o' && attr.Text[1] == 'n' {
if len(val) >= 11 && parse.EqualFold(val[:11], jsSchemeBytes) {
val = val[11:]
}
attrMinifyBuffer.Reset()
if err := m.MinifyMimetype(defaultScriptType, attrMinifyBuffer, buffer.NewReader(val), defaultScriptParams); err == nil {
val = attrMinifyBuffer.Bytes()
} else if err != minify.ErrNotExist {
return err
}
if len(val) == 0 {
continue
}
} else if len(val) > 5 && attr.Traits&urlAttr != 0 { // anchors are already handled
if parse.EqualFold(val[:4], httpBytes) {
if val[4] == ':' {
if m.URL != nil && m.URL.Scheme == "http" {
val = val[5:]
} else {
parse.ToLower(val[:4])
}
} else if (val[4] == 's' || val[4] == 'S') && val[5] == ':' {
if m.URL != nil && m.URL.Scheme == "https" {
val = val[6:]
} else {
parse.ToLower(val[:5])
}
}
} else if parse.EqualFold(val[:5], dataSchemeBytes) {
val = minify.DataURI(m, val)
}
}
if _, err := w.Write(spaceBytes); err != nil {
return err
}
if _, err := w.Write(attr.Text); err != nil {
return err
}
if len(val) > 0 && attr.Traits&booleanAttr == 0 {
if _, err := w.Write(isBytes); err != nil {
return err
}
// no quotes if possible, else prefer single or double depending on which occurs more often in value
val = html.EscapeAttrVal(&attrByteBuffer, attr.AttrVal, val)
if _, err := w.Write(val); err != nil {
return err
}
}
}
}
if _, err := w.Write(gtBytes); err != nil {
return err
}
}
}
}
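The TextToken branch above first collapses whitespace runs and then conditionally trims the edges next to block tags. A minimal stand-in for the collapsing step (the real implementation is parse.ReplaceMultipleWhitespace; this simplified version handles only ASCII whitespace):

```go
package main

import "fmt"

// collapseWhitespace replaces every run of ASCII whitespace with a
// single space, approximating parse.ReplaceMultipleWhitespace.
func collapseWhitespace(s []byte) []byte {
	out := s[:0] // rewrite in place; the output is never longer than the input
	inWS := false
	for _, c := range s {
		if c == ' ' || c == '\t' || c == '\n' || c == '\r' {
			if !inWS {
				out = append(out, ' ')
			}
			inWS = true
		} else {
			out = append(out, c)
			inWS = false
		}
	}
	return out
}

func main() {
	fmt.Printf("%q\n", collapseWhitespace([]byte("cats \n\t and  dogs")))
}
```

After this pass a text token has at most single spaces, which is why the trimming logic in the minifier only ever needs to drop one leading or trailing byte.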

408
vendor/github.com/tdewolff/minify/html/html_test.go generated vendored Normal file
View file

@@ -0,0 +1,408 @@
package html // import "github.com/tdewolff/minify/html"
import (
"bytes"
"fmt"
"io"
"io/ioutil"
"net/url"
"os"
"regexp"
"testing"
"github.com/tdewolff/minify"
"github.com/tdewolff/minify/css"
"github.com/tdewolff/minify/js"
"github.com/tdewolff/minify/json"
"github.com/tdewolff/minify/svg"
"github.com/tdewolff/minify/xml"
"github.com/tdewolff/test"
)
func TestHTML(t *testing.T) {
htmlTests := []struct {
html string
expected string
}{
{`html`, `html`},
{`<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML+RDFa 1.0//EN" "http://www.w3.org/MarkUp/DTD/xhtml-rdfa-1.dtd">`, `<!doctype html>`},
{`<!-- comment -->`, ``},
{`<style><!--\ncss\n--></style>`, `<style><!--\ncss\n--></style>`},
{`<style>&</style>`, `<style>&</style>`},
{`<html><head></head><body>x</body></html>`, `x`},
{`<meta http-equiv="content-type" content="text/html; charset=utf-8">`, `<meta charset=utf-8>`},
{`<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />`, `<meta charset=utf-8>`},
{`<meta name="keywords" content="a, b">`, `<meta name=keywords content=a,b>`},
{`<meta name="viewport" content="width = 996" />`, `<meta name=viewport content="width=996">`},
{`<span attr="test"></span>`, `<span attr=test></span>`},
{`<span attr='test&apos;test'></span>`, `<span attr="test'test"></span>`},
{`<span attr="test&quot;test"></span>`, `<span attr='test"test'></span>`},
{`<span attr='test""&apos;&amp;test'></span>`, `<span attr='test""&#39;&amp;test'></span>`},
{`<span attr="test/test"></span>`, `<span attr=test/test></span>`},
{`<span>&amp;</span>`, `<span>&amp;</span>`},
{`<span clear=none method=GET></span>`, `<span></span>`},
{`<span onload="javascript:x;"></span>`, `<span onload=x;></span>`},
{`<span selected="selected"></span>`, `<span selected></span>`},
{`<noscript><html><img id="x"></noscript>`, `<noscript><img id=x></noscript>`},
{`<body id="main"></body>`, `<body id=main>`},
{`<link href="data:text/plain, data">`, `<link href=data:,+data>`},
{`<svg width="100" height="100"><circle cx="50" cy="50" r="40" stroke="green" stroke-width="4" fill="yellow" /></svg>`, `<svg width="100" height="100"><circle cx="50" cy="50" r="40" stroke="green" stroke-width="4" fill="yellow" /></svg>`},
{`</span >`, `</span>`},
{`<meta name=viewport content="width=0.1, initial-scale=1.0 , maximum-scale=1000">`, `<meta name=viewport content="width=.1,initial-scale=1,maximum-scale=1e3">`},
{`<br/>`, `<br>`},
// increase coverage
{`<script style="css">js</script>`, `<script style=css>js</script>`},
{`<script type="application/javascript">js</script>`, `<script type=application/javascript>js</script>`},
{`<meta http-equiv="content-type" content="text/plain, text/html">`, `<meta http-equiv=content-type content=text/plain,text/html>`},
{`<meta http-equiv="content-style-type" content="text/less">`, `<meta http-equiv=content-style-type content=text/less>`},
{`<meta http-equiv="content-style-type" content="text/less; charset=utf-8">`, `<meta http-equiv=content-style-type content="text/less;charset=utf-8">`},
{`<meta http-equiv="content-script-type" content="application/js">`, `<meta http-equiv=content-script-type content=application/js>`},
{`<span attr=""></span>`, `<span attr></span>`},
{`<code>x</code>`, `<code>x</code>`},
{`<p></p><p></p>`, `<p><p>`},
{`<ul><li></li> <li></li></ul>`, `<ul><li><li></ul>`},
{`<p></p><a></a>`, `<p></p><a></a>`},
{`<p></p>x<a></a>`, `<p></p>x<a></a>`},
{`<span style=>`, `<span>`},
{`<button onclick=>`, `<button>`},
// whitespace
{`cats and dogs `, `cats and dogs`},
{` <div> <i> test </i> <b> test </b> </div> `, `<div><i>test</i> <b>test</b></div>`},
{`<strong>x </strong>y`, `<strong>x </strong>y`},
{`<strong>x </strong> y`, `<strong>x</strong> y`},
{"<strong>x </strong>\ny", "<strong>x</strong>\ny"},
{`<p>x </p>y`, `<p>x</p>y`},
{`x <p>y</p>`, `x<p>y`},
{` <!doctype html> <!--comment--> <html> <body><p></p></body></html> `, `<!doctype html><p>`}, // spaces before html and at the start of html are dropped
{`<p>x<br> y`, `<p>x<br>y`},
{`<p>x </b> <b> y`, `<p>x</b> <b>y`},
{`a <code></code> b`, `a <code></code>b`},
{`a <code>code</code> b`, `a <code>code</code> b`},
{`a <code> code </code> b`, `a <code>code</code> b`},
{`a <script>script</script> b`, `a <script>script</script>b`},
{"text\n<!--comment-->\ntext", "text\ntext"},
{"abc\n</body>\ndef", "abc\ndef"},
{"<x>\n<!--y-->\n</x>", "<x></x>"},
{"a <template> b </template> c", "a <template>b</template>c"},
// from HTML Minifier
{`<DIV TITLE="blah">boo</DIV>`, `<div title=blah>boo</div>`},
{"<p title\n\n\t =\n \"bar\">foo</p>", `<p title=bar>foo`},
{`<p class=" foo ">foo bar baz</p>`, `<p class=foo>foo bar baz`},
{`<input maxlength=" 5 ">`, `<input maxlength=5>`},
{`<input type="text">`, `<input>`},
{`<form method="get">`, `<form>`},
{`<script language="Javascript">alert(1)</script>`, `<script>alert(1)</script>`},
{`<script></script>`, ``},
{`<p onclick=" JavaScript: x">x</p>`, `<p onclick=" x">x`},
{`<span Selected="selected"></span>`, `<span selected></span>`},
{`<table><thead><tr><th>foo</th><th>bar</th></tr></thead><tfoot><tr><th>baz</th><th>qux</th></tr></tfoot><tbody><tr><td>boo</td><td>moo</td></tr></tbody></table>`,
`<table><thead><tr><th>foo<th>bar<tfoot><tr><th>baz<th>qux<tbody><tr><td>boo<td>moo</table>`},
{`<select><option>foo</option><option>bar</option></select>`, `<select><option>foo<option>bar</select>`},
{`<meta name="keywords" content="A, B">`, `<meta name=keywords content=A,B>`},
{`<iframe><html> <p> x </p> </html></iframe>`, `<iframe><p>x</iframe>`},
{`<math> &int;_a_^b^{f(x)<over>1+x} dx </math>`, `<math> &int;_a_^b^{f(x)<over>1+x} dx </math>`},
{`<script language="x" charset="x" src="y"></script>`, `<script src=y></script>`},
{`<style media="all">x</style>`, `<style>x</style>`},
{`<a id="abc" name="abc">y</a>`, `<a id=abc>y</a>`},
{`<a id="" value="">y</a>`, `<a value>y</a>`},
// from Kangax html-minifier
{`<span style="font-family:&quot;Helvetica Neue&quot;,&quot;Helvetica&quot;,Helvetica,Arial,sans-serif">text</span>`, `<span style='font-family:"Helvetica Neue","Helvetica",Helvetica,Arial,sans-serif'>text</span>`},
// go-fuzz
{`<meta e t n content=ful><a b`, `<meta e t n content=ful><a b>`},
{`<img alt=a'b="">`, `<img alt='a&#39;b=""'>`},
{`</b`, `</b`},
// bugs
{`<p>text</p><br>text`, `<p>text</p><br>text`}, // #122
{`text <img> text`, `text <img> text`}, // #89
{`text <progress></progress> text`, `text <progress></progress> text`}, // #89
{`<pre> <x> a b </x> </pre>`, `<pre> <x> a b </x> </pre>`}, // #82
{`<svg id="1"></svg>`, `<svg id="1"></svg>`}, // #67
}
m := minify.New()
m.AddFunc("text/html", Minify)
m.AddFunc("text/css", func(_ *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
_, err := io.Copy(w, r)
return err
})
m.AddFunc("text/javascript", func(_ *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
_, err := io.Copy(w, r)
return err
})
for _, tt := range htmlTests {
t.Run(tt.html, func(t *testing.T) {
r := bytes.NewBufferString(tt.html)
w := &bytes.Buffer{}
err := Minify(m, w, r, nil)
test.Minify(t, tt.html, err, w.String(), tt.expected)
})
}
}
func TestHTMLKeepEndTags(t *testing.T) {
htmlTests := []struct {
html string
expected string
}{
{`<p></p><p></p>`, `<p></p><p></p>`},
{`<ul><li></li><li></li></ul>`, `<ul><li></li><li></li></ul>`},
}
m := minify.New()
htmlMinifier := &Minifier{KeepEndTags: true}
for _, tt := range htmlTests {
t.Run(tt.html, func(t *testing.T) {
r := bytes.NewBufferString(tt.html)
w := &bytes.Buffer{}
err := htmlMinifier.Minify(m, w, r, nil)
test.Minify(t, tt.html, err, w.String(), tt.expected)
})
}
}
func TestHTMLKeepConditionalComments(t *testing.T) {
htmlTests := []struct {
html string
expected string
}{
{`<!--[if IE 6]> <b> </b> <![endif]-->`, `<!--[if IE 6]><b></b><![endif]-->`},
{`<![if IE 6]> <b> </b> <![endif]>`, `<![if IE 6]><b></b><![endif]>`},
}
m := minify.New()
htmlMinifier := &Minifier{KeepConditionalComments: true}
for _, tt := range htmlTests {
t.Run(tt.html, func(t *testing.T) {
r := bytes.NewBufferString(tt.html)
w := &bytes.Buffer{}
err := htmlMinifier.Minify(m, w, r, nil)
test.Minify(t, tt.html, err, w.String(), tt.expected)
})
}
}
func TestHTMLKeepWhitespace(t *testing.T) {
htmlTests := []struct {
html string
expected string
}{
{`cats and dogs `, `cats and dogs`},
{` <div> <i> test </i> <b> test </b> </div> `, `<div> <i> test </i> <b> test </b> </div>`},
{`<strong>x </strong>y`, `<strong>x </strong>y`},
{`<strong>x </strong> y`, `<strong>x </strong> y`},
{"<strong>x </strong>\ny", "<strong>x </strong>\ny"},
{`<p>x </p>y`, `<p>x </p>y`},
{`x <p>y</p>`, `x <p>y`},
{` <!doctype html> <!--comment--> <html> <body><p></p></body></html> `, `<!doctype html><p>`}, // spaces before html and at the start of html are dropped
{`<p>x<br> y`, `<p>x<br> y`},
{`<p>x </b> <b> y`, `<p>x </b> <b> y`},
{`a <code>code</code> b`, `a <code>code</code> b`},
{`a <code></code> b`, `a <code></code> b`},
{`a <script>script</script> b`, `a <script>script</script> b`},
{"text\n<!--comment-->\ntext", "text\ntext"},
{"text\n<!--comment-->text<!--comment--> text", "text\ntext text"},
{"abc\n</body>\ndef", "abc\ndef"},
{"<x>\n<!--y-->\n</x>", "<x>\n</x>"},
{"<style>lala{color:red}</style>", "<style>lala{color:red}</style>"},
}
m := minify.New()
htmlMinifier := &Minifier{KeepWhitespace: true}
for _, tt := range htmlTests {
t.Run(tt.html, func(t *testing.T) {
r := bytes.NewBufferString(tt.html)
w := &bytes.Buffer{}
err := htmlMinifier.Minify(m, w, r, nil)
test.Minify(t, tt.html, err, w.String(), tt.expected)
})
}
}
func TestHTMLURL(t *testing.T) {
htmlTests := []struct {
url string
html string
expected string
}{
{`http://example.com/`, `<a href=http://example.com/>link</a>`, `<a href=//example.com/>link</a>`},
{`https://example.com/`, `<a href=http://example.com/>link</a>`, `<a href=http://example.com/>link</a>`},
{`http://example.com/`, `<a href=https://example.com/>link</a>`, `<a href=https://example.com/>link</a>`},
{`https://example.com/`, `<a href=https://example.com/>link</a>`, `<a href=//example.com/>link</a>`},
{`http://example.com/`, `<a href=" http://example.com ">x</a>`, `<a href=//example.com>x</a>`},
{`http://example.com/`, `<link rel="stylesheet" type="text/css" href="http://example.com">`, `<link rel=stylesheet href=//example.com>`},
{`http://example.com/`, `<!doctype html> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en"> <head profile="http://dublincore.org/documents/dcq-html/"> <!-- Barlesque 2.75.0 --> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />`,
`<!doctype html><html xmlns=//www.w3.org/1999/xhtml xml:lang=en><head profile=//dublincore.org/documents/dcq-html/><meta charset=utf-8>`},
{`http://example.com/`, `<html xmlns="http://www.w3.org/1999/xhtml"></html>`, `<html xmlns=//www.w3.org/1999/xhtml>`},
{`https://example.com/`, `<html xmlns="http://www.w3.org/1999/xhtml"></html>`, `<html xmlns=http://www.w3.org/1999/xhtml>`},
{`http://example.com/`, `<html xmlns="https://www.w3.org/1999/xhtml"></html>`, `<html xmlns=https://www.w3.org/1999/xhtml>`},
{`https://example.com/`, `<html xmlns="https://www.w3.org/1999/xhtml"></html>`, `<html xmlns=//www.w3.org/1999/xhtml>`},
}
m := minify.New()
m.AddFunc("text/html", Minify)
for _, tt := range htmlTests {
t.Run(tt.url, func(t *testing.T) {
r := bytes.NewBufferString(tt.html)
w := &bytes.Buffer{}
m.URL, _ = url.Parse(tt.url)
err := Minify(m, w, r, nil)
test.Minify(t, tt.html, err, w.String(), tt.expected)
})
}
}
func TestSpecialTagClosing(t *testing.T) {
m := minify.New()
m.AddFunc("text/html", Minify)
m.AddFunc("text/css", func(_ *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
b, err := ioutil.ReadAll(r)
test.Error(t, err, nil)
test.String(t, string(b), "</script>")
_, err = w.Write(b)
return err
})
html := `<style></script></style>`
r := bytes.NewBufferString(html)
w := &bytes.Buffer{}
err := Minify(m, w, r, nil)
test.Minify(t, html, err, w.String(), html)
}
func TestReaderErrors(t *testing.T) {
r := test.NewErrorReader(0)
w := &bytes.Buffer{}
m := minify.New()
err := Minify(m, w, r, nil)
test.T(t, err, test.ErrPlain, "return error at first read")
}
func TestWriterErrors(t *testing.T) {
errorTests := []struct {
html string
n []int
}{
{`<!doctype>`, []int{0}},
{`text`, []int{0}},
{`<foo attr=val>`, []int{0, 1, 2, 3, 4, 5}},
{`</foo>`, []int{0}},
{`<style>x</style>`, []int{2}},
{`<textarea>x</textarea>`, []int{2}},
{`<code>x</code>`, []int{2}},
{`<pre>x</pre>`, []int{2}},
{`<svg>x</svg>`, []int{0}},
{`<math>x</math>`, []int{0}},
{`<!--[if IE 6]> text <![endif]-->`, []int{0, 1, 2}},
{`<![if IE 6]> text <![endif]>`, []int{0}},
}
m := minify.New()
m.Add("text/html", &Minifier{
KeepConditionalComments: true,
})
for _, tt := range errorTests {
for _, n := range tt.n {
t.Run(fmt.Sprint(tt.html, " ", tt.n), func(t *testing.T) {
r := bytes.NewBufferString(tt.html)
w := test.NewErrorWriter(n)
err := m.Minify("text/html", w, r)
test.T(t, err, test.ErrPlain)
})
}
}
}
func TestMinifyErrors(t *testing.T) {
errorTests := []struct {
html string
err error
}{
{`<style>abc</style>`, test.ErrPlain},
{`<path style="abc"/>`, test.ErrPlain},
{`<path onclick="abc"/>`, test.ErrPlain},
{`<svg></svg>`, test.ErrPlain},
{`<math></math>`, test.ErrPlain},
}
m := minify.New()
m.AddFunc("text/css", func(_ *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
return test.ErrPlain
})
m.AddFunc("text/javascript", func(_ *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
return test.ErrPlain
})
m.AddFunc("image/svg+xml", func(_ *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
return test.ErrPlain
})
m.AddFunc("application/mathml+xml", func(_ *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
return test.ErrPlain
})
for _, tt := range errorTests {
t.Run(tt.html, func(t *testing.T) {
r := bytes.NewBufferString(tt.html)
w := &bytes.Buffer{}
err := Minify(m, w, r, nil)
test.T(t, err, tt.err)
})
}
}
////////////////////////////////////////////////////////////////
func ExampleMinify() {
m := minify.New()
m.AddFunc("text/html", Minify)
m.AddFunc("text/css", css.Minify)
m.AddFunc("text/javascript", js.Minify)
m.AddFunc("image/svg+xml", svg.Minify)
m.AddFuncRegexp(regexp.MustCompile("[/+]json$"), json.Minify)
m.AddFuncRegexp(regexp.MustCompile("[/+]xml$"), xml.Minify)
// set URL to minify link locations too
m.URL, _ = url.Parse("https://www.example.com/")
if err := m.Minify("text/html", os.Stdout, os.Stdin); err != nil {
panic(err)
}
}
func ExampleMinify_options() {
m := minify.New()
m.Add("text/html", &Minifier{
KeepDefaultAttrVals: true,
KeepWhitespace: true,
})
if err := m.Minify("text/html", os.Stdout, os.Stdin); err != nil {
panic(err)
}
}
func ExampleMinify_reader() {
b := bytes.NewReader([]byte("<html><body><h1>Example</h1></body></html>"))
m := minify.New()
m.Add("text/html", &Minifier{})
r := m.Reader("text/html", b)
if _, err := io.Copy(os.Stdout, r); err != nil {
panic(err)
}
// Output: <h1>Example</h1>
}
func ExampleMinify_writer() {
m := minify.New()
m.Add("text/html", &Minifier{})
w := m.Writer("text/html", os.Stdout)
w.Write([]byte("<html><body><h1>Example</h1></body></html>"))
w.Close()
// Output: <h1>Example</h1>
}

187
vendor/github.com/tdewolff/minify/html/table.go generated vendored Normal file
View file

@ -0,0 +1,187 @@
package html // import "github.com/tdewolff/minify/html"
import "github.com/tdewolff/parse/html"
type traits uint8
const (
rawTag traits = 1 << iota
nonPhrasingTag
objectTag
booleanAttr
caselessAttr
urlAttr
omitPTag // omit p end tag if it is followed by this start tag
keepPTag // keep p end tag if it is followed by this end tag
)
var tagMap = map[html.Hash]traits{
html.A: keepPTag,
html.Address: nonPhrasingTag | omitPTag,
html.Article: nonPhrasingTag | omitPTag,
html.Aside: nonPhrasingTag | omitPTag,
html.Audio: objectTag | keepPTag,
html.Blockquote: nonPhrasingTag | omitPTag,
html.Body: nonPhrasingTag,
html.Br: nonPhrasingTag,
html.Button: objectTag,
html.Canvas: objectTag,
html.Caption: nonPhrasingTag,
html.Col: nonPhrasingTag,
html.Colgroup: nonPhrasingTag,
html.Dd: nonPhrasingTag,
html.Del: keepPTag,
html.Details: omitPTag,
html.Div: nonPhrasingTag | omitPTag,
html.Dl: nonPhrasingTag | omitPTag,
html.Dt: nonPhrasingTag,
html.Embed: nonPhrasingTag,
html.Fieldset: nonPhrasingTag | omitPTag,
html.Figcaption: nonPhrasingTag | omitPTag,
html.Figure: nonPhrasingTag | omitPTag,
html.Footer: nonPhrasingTag | omitPTag,
html.Form: nonPhrasingTag | omitPTag,
html.H1: nonPhrasingTag | omitPTag,
html.H2: nonPhrasingTag | omitPTag,
html.H3: nonPhrasingTag | omitPTag,
html.H4: nonPhrasingTag | omitPTag,
html.H5: nonPhrasingTag | omitPTag,
html.H6: nonPhrasingTag | omitPTag,
html.Head: nonPhrasingTag,
html.Header: nonPhrasingTag | omitPTag,
html.Hgroup: nonPhrasingTag,
html.Hr: nonPhrasingTag | omitPTag,
html.Html: nonPhrasingTag,
html.Iframe: rawTag | objectTag,
html.Img: objectTag,
html.Input: objectTag,
html.Ins: keepPTag,
html.Keygen: objectTag,
html.Li: nonPhrasingTag,
html.Main: nonPhrasingTag | omitPTag,
html.Map: keepPTag,
html.Math: rawTag,
html.Menu: omitPTag,
html.Meta: nonPhrasingTag,
html.Meter: objectTag,
html.Nav: nonPhrasingTag | omitPTag,
html.Noscript: nonPhrasingTag | keepPTag,
html.Object: objectTag,
html.Ol: nonPhrasingTag | omitPTag,
html.Output: nonPhrasingTag,
html.P: nonPhrasingTag | omitPTag,
html.Picture: objectTag,
html.Pre: nonPhrasingTag | omitPTag,
html.Progress: objectTag,
html.Q: objectTag,
html.Script: rawTag,
html.Section: nonPhrasingTag | omitPTag,
html.Select: objectTag,
html.Style: rawTag | nonPhrasingTag,
html.Svg: rawTag | objectTag,
html.Table: nonPhrasingTag | omitPTag,
html.Tbody: nonPhrasingTag,
html.Td: nonPhrasingTag,
html.Textarea: rawTag | objectTag,
html.Tfoot: nonPhrasingTag,
html.Th: nonPhrasingTag,
html.Thead: nonPhrasingTag,
html.Title: nonPhrasingTag,
html.Tr: nonPhrasingTag,
html.Ul: nonPhrasingTag | omitPTag,
html.Video: objectTag | keepPTag,
}
var attrMap = map[html.Hash]traits{
html.Accept: caselessAttr,
html.Accept_Charset: caselessAttr,
html.Action: urlAttr,
html.Align: caselessAttr,
html.Alink: caselessAttr,
html.Allowfullscreen: booleanAttr,
html.Async: booleanAttr,
html.Autofocus: booleanAttr,
html.Autoplay: booleanAttr,
html.Axis: caselessAttr,
html.Background: urlAttr,
html.Bgcolor: caselessAttr,
html.Charset: caselessAttr,
html.Checked: booleanAttr,
html.Cite: urlAttr,
html.Classid: urlAttr,
html.Clear: caselessAttr,
html.Codebase: urlAttr,
html.Codetype: caselessAttr,
html.Color: caselessAttr,
html.Compact: booleanAttr,
html.Controls: booleanAttr,
html.Data: urlAttr,
html.Declare: booleanAttr,
html.Default: booleanAttr,
html.DefaultChecked: booleanAttr,
html.DefaultMuted: booleanAttr,
html.DefaultSelected: booleanAttr,
html.Defer: booleanAttr,
html.Dir: caselessAttr,
html.Disabled: booleanAttr,
html.Draggable: booleanAttr,
html.Enabled: booleanAttr,
html.Enctype: caselessAttr,
html.Face: caselessAttr,
html.Formaction: urlAttr,
html.Formnovalidate: booleanAttr,
html.Frame: caselessAttr,
html.Hidden: booleanAttr,
html.Href: urlAttr,
html.Hreflang: caselessAttr,
html.Http_Equiv: caselessAttr,
html.Icon: urlAttr,
html.Inert: booleanAttr,
html.Ismap: booleanAttr,
html.Itemscope: booleanAttr,
html.Lang: caselessAttr,
html.Language: caselessAttr,
html.Link: caselessAttr,
html.Longdesc: urlAttr,
html.Manifest: urlAttr,
html.Media: caselessAttr,
html.Method: caselessAttr,
html.Multiple: booleanAttr,
html.Muted: booleanAttr,
html.Nohref: booleanAttr,
html.Noresize: booleanAttr,
html.Noshade: booleanAttr,
html.Novalidate: booleanAttr,
html.Nowrap: booleanAttr,
html.Open: booleanAttr,
html.Pauseonexit: booleanAttr,
html.Poster: urlAttr,
html.Profile: urlAttr,
html.Readonly: booleanAttr,
html.Rel: caselessAttr,
html.Required: booleanAttr,
html.Rev: caselessAttr,
html.Reversed: booleanAttr,
html.Rules: caselessAttr,
html.Scope: caselessAttr,
html.Scoped: booleanAttr,
html.Scrolling: caselessAttr,
html.Seamless: booleanAttr,
html.Selected: booleanAttr,
html.Shape: caselessAttr,
html.Sortable: booleanAttr,
html.Src: urlAttr,
html.Target: caselessAttr,
html.Text: caselessAttr,
html.Translate: booleanAttr,
html.Truespeed: booleanAttr,
html.Type: caselessAttr,
html.Typemustmatch: booleanAttr,
html.Undeterminate: booleanAttr,
html.Usemap: urlAttr,
html.Valign: caselessAttr,
html.Valuetype: caselessAttr,
html.Vlink: caselessAttr,
html.Visible: booleanAttr,
html.Xmlns: urlAttr,
}

88
vendor/github.com/tdewolff/minify/js/js.go generated vendored Normal file
View file

@ -0,0 +1,88 @@
// Package js minifies ECMAScript 5.1 following the specifications at http://www.ecma-international.org/ecma-262/5.1/.
package js // import "github.com/tdewolff/minify/js"
import (
"io"
"github.com/tdewolff/minify"
"github.com/tdewolff/parse"
"github.com/tdewolff/parse/js"
)
var (
spaceBytes = []byte(" ")
newlineBytes = []byte("\n")
)
////////////////////////////////////////////////////////////////
// DefaultMinifier is the default minifier.
var DefaultMinifier = &Minifier{}
// Minifier is a JS minifier.
type Minifier struct{}
// Minify minifies JS data; it reads from r and writes to w.
func Minify(m *minify.M, w io.Writer, r io.Reader, params map[string]string) error {
return DefaultMinifier.Minify(m, w, r, params)
}
// Minify minifies JS data; it reads from r and writes to w.
func (o *Minifier) Minify(_ *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
prev := js.LineTerminatorToken
prevLast := byte(' ')
lineTerminatorQueued := false
whitespaceQueued := false
l := js.NewLexer(r)
defer l.Restore()
for {
tt, data := l.Next()
if tt == js.ErrorToken {
if l.Err() != io.EOF {
return l.Err()
}
return nil
} else if tt == js.LineTerminatorToken {
lineTerminatorQueued = true
} else if tt == js.WhitespaceToken {
whitespaceQueued = true
} else if tt == js.CommentToken {
if len(data) > 5 && data[1] == '*' && data[2] == '!' {
if _, err := w.Write(data[:3]); err != nil {
return err
}
comment := parse.TrimWhitespace(parse.ReplaceMultipleWhitespace(data[3 : len(data)-2]))
if _, err := w.Write(comment); err != nil {
return err
}
if _, err := w.Write(data[len(data)-2:]); err != nil {
return err
}
}
} else {
first := data[0]
if (prev == js.IdentifierToken || prev == js.NumericToken || prev == js.PunctuatorToken || prev == js.StringToken || prev == js.RegexpToken) &&
(tt == js.IdentifierToken || tt == js.NumericToken || tt == js.StringToken || tt == js.PunctuatorToken || tt == js.RegexpToken) {
if lineTerminatorQueued && (prev != js.PunctuatorToken || prevLast == '}' || prevLast == ']' || prevLast == ')' || prevLast == '+' || prevLast == '-' || prevLast == '"' || prevLast == '\'') &&
(tt != js.PunctuatorToken || first == '{' || first == '[' || first == '(' || first == '+' || first == '-' || first == '!' || first == '~') {
if _, err := w.Write(newlineBytes); err != nil {
return err
}
} else if whitespaceQueued && (prev != js.StringToken && prev != js.PunctuatorToken && tt != js.PunctuatorToken || (prevLast == '+' || prevLast == '-') && first == prevLast) {
if _, err := w.Write(spaceBytes); err != nil {
return err
}
}
}
if _, err := w.Write(data); err != nil {
return err
}
prev = tt
prevLast = data[len(data)-1]
lineTerminatorQueued = false
whitespaceQueued = false
}
}
}

96
vendor/github.com/tdewolff/minify/js/js_test.go generated vendored Normal file
View file

@ -0,0 +1,96 @@
package js // import "github.com/tdewolff/minify/js"
import (
"bytes"
"fmt"
"os"
"testing"
"github.com/tdewolff/minify"
"github.com/tdewolff/test"
)
func TestJS(t *testing.T) {
jsTests := []struct {
js string
expected string
}{
{"/*comment*/", ""},
{"// comment\na", "a"},
{"/*! bang comment */", "/*!bang comment*/"},
{"function x(){}", "function x(){}"},
{"function x(a, b){}", "function x(a,b){}"},
{"a b", "a b"},
{"a\n\nb", "a\nb"},
{"a// comment\nb", "a\nb"},
{"''\na", "''\na"},
{"''\n''", "''\n''"},
{"]\n0", "]\n0"},
{"a\n{", "a\n{"},
{";\na", ";a"},
{",\na", ",a"},
{"}\na", "}\na"},
{"+\na", "+\na"},
{"+\n(", "+\n("},
{"+\n\"\"", "+\n\"\""},
{"a + ++b", "a+ ++b"}, // JSMin caution
{"var a=/\\s?auto?\\s?/i\nvar", "var a=/\\s?auto?\\s?/i\nvar"}, // #14
{"var a=0\n!function(){}", "var a=0\n!function(){}"}, // #107
{"function(){}\n\"string\"", "function(){}\n\"string\""}, // #109
{"false\n\"string\"", "false\n\"string\""}, // #109
{"`\n", "`"}, // go fuzz
{"a\n~b", "a\n~b"}, // #132
}
m := minify.New()
for _, tt := range jsTests {
t.Run(tt.js, func(t *testing.T) {
r := bytes.NewBufferString(tt.js)
w := &bytes.Buffer{}
err := Minify(m, w, r, nil)
test.Minify(t, tt.js, err, w.String(), tt.expected)
})
}
}
func TestReaderErrors(t *testing.T) {
r := test.NewErrorReader(0)
w := &bytes.Buffer{}
m := minify.New()
err := Minify(m, w, r, nil)
test.T(t, err, test.ErrPlain, "return error at first read")
}
func TestWriterErrors(t *testing.T) {
errorTests := []struct {
js string
n []int
}{
{"a\n{5 5", []int{0, 1, 4}},
{`/*!comment*/`, []int{0, 1, 2}},
{"false\n\"string\"", []int{1}}, // #109
}
m := minify.New()
for _, tt := range errorTests {
for _, n := range tt.n {
t.Run(fmt.Sprint(tt.js, " ", tt.n), func(t *testing.T) {
r := bytes.NewBufferString(tt.js)
w := test.NewErrorWriter(n)
err := Minify(m, w, r, nil)
test.T(t, err, test.ErrPlain)
})
}
}
}
////////////////////////////////////////////////////////////////
func ExampleMinify() {
m := minify.New()
m.AddFunc("text/javascript", Minify)
if err := m.Minify("text/javascript", os.Stdout, os.Stdin); err != nil {
panic(err)
}
}

63
vendor/github.com/tdewolff/minify/json/json.go generated vendored Normal file
View file

@ -0,0 +1,63 @@
// Package json minifies JSON following the specifications at http://json.org/.
package json // import "github.com/tdewolff/minify/json"
import (
"io"
"github.com/tdewolff/minify"
"github.com/tdewolff/parse/json"
)
var (
commaBytes = []byte(",")
colonBytes = []byte(":")
)
////////////////////////////////////////////////////////////////
// DefaultMinifier is the default minifier.
var DefaultMinifier = &Minifier{}
// Minifier is a JSON minifier.
type Minifier struct{}
// Minify minifies JSON data; it reads from r and writes to w.
func Minify(m *minify.M, w io.Writer, r io.Reader, params map[string]string) error {
return DefaultMinifier.Minify(m, w, r, params)
}
// Minify minifies JSON data; it reads from r and writes to w.
func (o *Minifier) Minify(_ *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
skipComma := true
p := json.NewParser(r)
defer p.Restore()
for {
state := p.State()
gt, text := p.Next()
if gt == json.ErrorGrammar {
if p.Err() != io.EOF {
return p.Err()
}
return nil
}
if !skipComma && gt != json.EndObjectGrammar && gt != json.EndArrayGrammar {
if state == json.ObjectKeyState || state == json.ArrayState {
if _, err := w.Write(commaBytes); err != nil {
return err
}
} else if state == json.ObjectValueState {
if _, err := w.Write(colonBytes); err != nil {
return err
}
}
}
skipComma = gt == json.StartObjectGrammar || gt == json.StartArrayGrammar
if _, err := w.Write(text); err != nil {
return err
}
}
}

74
vendor/github.com/tdewolff/minify/json/json_test.go generated vendored Normal file
View file

@ -0,0 +1,74 @@
package json // import "github.com/tdewolff/minify/json"
import (
"bytes"
"fmt"
"os"
"regexp"
"testing"
"github.com/tdewolff/minify"
"github.com/tdewolff/test"
)
func TestJSON(t *testing.T) {
jsonTests := []struct {
json string
expected string
}{
{"{ \"a\": [1, 2] }", "{\"a\":[1,2]}"},
{"[{ \"a\": [{\"x\": null}, true] }]", "[{\"a\":[{\"x\":null},true]}]"},
{"{ \"a\": 1, \"b\": 2 }", "{\"a\":1,\"b\":2}"},
}
m := minify.New()
for _, tt := range jsonTests {
t.Run(tt.json, func(t *testing.T) {
r := bytes.NewBufferString(tt.json)
w := &bytes.Buffer{}
err := Minify(m, w, r, nil)
test.Minify(t, tt.json, err, w.String(), tt.expected)
})
}
}
func TestReaderErrors(t *testing.T) {
r := test.NewErrorReader(0)
w := &bytes.Buffer{}
m := minify.New()
err := Minify(m, w, r, nil)
test.T(t, err, test.ErrPlain, "return error at first read")
}
func TestWriterErrors(t *testing.T) {
errorTests := []struct {
json string
n []int
}{
//01 234 56 78
{`{"key":[100,200]}`, []int{0, 1, 2, 3, 4, 5, 7, 8}},
}
m := minify.New()
for _, tt := range errorTests {
for _, n := range tt.n {
t.Run(fmt.Sprint(tt.json, " ", tt.n), func(t *testing.T) {
r := bytes.NewBufferString(tt.json)
w := test.NewErrorWriter(n)
err := Minify(m, w, r, nil)
test.T(t, err, test.ErrPlain)
})
}
}
}
////////////////////////////////////////////////////////////////
func ExampleMinify() {
m := minify.New()
m.AddFuncRegexp(regexp.MustCompile("[/+]json$"), Minify)
if err := m.Minify("application/json", os.Stdout, os.Stdin); err != nil {
panic(err)
}
}

279
vendor/github.com/tdewolff/minify/minify.go generated vendored Normal file
View file

@ -0,0 +1,279 @@
// Package minify relates MIME types to minifiers. Several minifiers are provided in the subpackages.
package minify // import "github.com/tdewolff/minify"
import (
"errors"
"io"
"mime"
"net/http"
"net/url"
"os/exec"
"path"
"regexp"
"sync"
"github.com/tdewolff/parse"
"github.com/tdewolff/parse/buffer"
)
// ErrNotExist is returned when no minifier exists for a given mimetype.
var ErrNotExist = errors.New("minifier does not exist for mimetype")
////////////////////////////////////////////////////////////////
// MinifierFunc is a function that implements Minifier.
type MinifierFunc func(*M, io.Writer, io.Reader, map[string]string) error
// Minify calls f(m, w, r, params).
func (f MinifierFunc) Minify(m *M, w io.Writer, r io.Reader, params map[string]string) error {
return f(m, w, r, params)
}
// Minifier is the interface for minifiers.
// The *M parameter is used for minifying embedded resources, such as JS within HTML.
type Minifier interface {
Minify(*M, io.Writer, io.Reader, map[string]string) error
}
////////////////////////////////////////////////////////////////
type patternMinifier struct {
pattern *regexp.Regexp
Minifier
}
type cmdMinifier struct {
cmd *exec.Cmd
}
func (c *cmdMinifier) Minify(_ *M, w io.Writer, r io.Reader, _ map[string]string) error {
cmd := &exec.Cmd{}
*cmd = *c.cmd // concurrency safety
cmd.Stdout = w
cmd.Stdin = r
return cmd.Run()
}
////////////////////////////////////////////////////////////////
// M holds a map of mimetype => function to allow recursive minifier calls of the minifier functions.
type M struct {
literal map[string]Minifier
pattern []patternMinifier
URL *url.URL
}
// New returns a new M.
func New() *M {
return &M{
map[string]Minifier{},
[]patternMinifier{},
nil,
}
}
// Add adds a minifier to the mimetype => function map (unsafe for concurrent use).
func (m *M) Add(mimetype string, minifier Minifier) {
m.literal[mimetype] = minifier
}
// AddFunc adds a minify function to the mimetype => function map (unsafe for concurrent use).
func (m *M) AddFunc(mimetype string, minifier MinifierFunc) {
m.literal[mimetype] = minifier
}
// AddRegexp adds a minifier to the mimetype => function map (unsafe for concurrent use).
func (m *M) AddRegexp(pattern *regexp.Regexp, minifier Minifier) {
m.pattern = append(m.pattern, patternMinifier{pattern, minifier})
}
// AddFuncRegexp adds a minify function to the mimetype => function map (unsafe for concurrent use).
func (m *M) AddFuncRegexp(pattern *regexp.Regexp, minifier MinifierFunc) {
m.pattern = append(m.pattern, patternMinifier{pattern, minifier})
}
// AddCmd adds a minify function to the mimetype => function map (unsafe for concurrent use) that executes a command to process the minification.
// It allows the use of external tools like ClosureCompiler, UglifyCSS, etc. for a specific mimetype.
func (m *M) AddCmd(mimetype string, cmd *exec.Cmd) {
m.literal[mimetype] = &cmdMinifier{cmd}
}
// AddCmdRegexp adds a minify function to the mimetype => function map (unsafe for concurrent use) that executes a command to process the minification.
// It allows the use of external tools like ClosureCompiler, UglifyCSS, etc. for a specific mimetype regular expression.
func (m *M) AddCmdRegexp(pattern *regexp.Regexp, cmd *exec.Cmd) {
m.pattern = append(m.pattern, patternMinifier{pattern, &cmdMinifier{cmd}})
}
// Match returns the pattern and minifier that match the mediatype.
// It returns nil when no matching minifier exists.
// It has the same matching algorithm as Minify.
func (m *M) Match(mediatype string) (string, map[string]string, MinifierFunc) {
mimetype, params := parse.Mediatype([]byte(mediatype))
if minifier, ok := m.literal[string(mimetype)]; ok { // string conversion is optimized away
return string(mimetype), params, minifier.Minify
}
for _, minifier := range m.pattern {
if minifier.pattern.Match(mimetype) {
return minifier.pattern.String(), params, minifier.Minify
}
}
return string(mimetype), params, nil
}
// Minify minifies the content of a Reader and writes it to a Writer (safe for concurrent use).
// An error is returned when no such mimetype exists (ErrNotExist) or when an error occurred in the minifier function.
// Mediatype may take the form of 'text/plain', 'text/*', '*/*' or 'text/plain; charset=UTF-8; version=2.0'.
func (m *M) Minify(mediatype string, w io.Writer, r io.Reader) error {
mimetype, params := parse.Mediatype([]byte(mediatype))
return m.MinifyMimetype(mimetype, w, r, params)
}
// MinifyMimetype minifies the content of a Reader and writes it to a Writer (safe for concurrent use).
// It is a lower level version of Minify and requires the mediatype to be split up into mimetype and parameters.
// It is mostly used internally by minifiers because it is faster (no need to convert a byte-slice to string and vice versa).
func (m *M) MinifyMimetype(mimetype []byte, w io.Writer, r io.Reader, params map[string]string) error {
err := ErrNotExist
if minifier, ok := m.literal[string(mimetype)]; ok { // string conversion is optimized away
err = minifier.Minify(m, w, r, params)
} else {
for _, minifier := range m.pattern {
if minifier.pattern.Match(mimetype) {
err = minifier.Minify(m, w, r, params)
break
}
}
}
return err
}
// Bytes minifies a byte slice (safe for concurrent use). When an error occurs it returns the original slice and the error.
// It returns an error when no such mimetype exists (ErrNotExist) or any error occurred in the minifier function.
func (m *M) Bytes(mediatype string, v []byte) ([]byte, error) {
out := buffer.NewWriter(make([]byte, 0, len(v)))
if err := m.Minify(mediatype, out, buffer.NewReader(v)); err != nil {
return v, err
}
return out.Bytes(), nil
}
// String minifies a string (safe for concurrent use). When an error occurs it returns the original string and the error.
// It returns an error when no such mimetype exists (ErrNotExist) or any error occurred in the minifier function.
func (m *M) String(mediatype string, v string) (string, error) {
out := buffer.NewWriter(make([]byte, 0, len(v)))
if err := m.Minify(mediatype, out, buffer.NewReader([]byte(v))); err != nil {
return v, err
}
return string(out.Bytes()), nil
}
// Reader wraps a Reader interface and minifies the stream.
// Errors from the minifier are returned by the reader.
func (m *M) Reader(mediatype string, r io.Reader) io.Reader {
pr, pw := io.Pipe()
go func() {
if err := m.Minify(mediatype, pw, r); err != nil {
pw.CloseWithError(err)
} else {
pw.Close()
}
}()
return pr
}
// minifyWriter makes sure that errors from the minifier are passed down through Close (can be blocking).
type minifyWriter struct {
pw *io.PipeWriter
wg sync.WaitGroup
err error
}
// Write intercepts any writes to the writer.
func (w *minifyWriter) Write(b []byte) (int, error) {
return w.pw.Write(b)
}
// Close must be called when writing has finished. It returns the error from the minifier.
func (w *minifyWriter) Close() error {
w.pw.Close()
w.wg.Wait()
return w.err
}
// Writer wraps a Writer interface and minifies the stream.
// Errors from the minifier are returned by Close on the writer.
// The writer must be closed explicitly.
func (m *M) Writer(mediatype string, w io.Writer) *minifyWriter {
pr, pw := io.Pipe()
mw := &minifyWriter{pw, sync.WaitGroup{}, nil}
mw.wg.Add(1)
go func() {
defer mw.wg.Done()
if err := m.Minify(mediatype, w, pr); err != nil {
io.Copy(w, pr)
mw.err = err
}
pr.Close()
}()
return mw
}
// minifyResponseWriter wraps an http.ResponseWriter and makes sure that errors from the minifier are passed down through Close (can be blocking).
// All writes to the response writer are intercepted and minified on the fly.
// http.ResponseWriter loses all functionality such as Pusher, Hijacker, Flusher, ...
type minifyResponseWriter struct {
http.ResponseWriter
writer *minifyWriter
m *M
mediatype string
}
// WriteHeader intercepts any header writes and removes the Content-Length header.
func (w *minifyResponseWriter) WriteHeader(status int) {
w.ResponseWriter.Header().Del("Content-Length")
w.ResponseWriter.WriteHeader(status)
}
// Write intercepts any writes to the response writer.
// The first write will extract the Content-Type as the mediatype. Otherwise it falls back to the RequestURI extension.
func (w *minifyResponseWriter) Write(b []byte) (int, error) {
if w.writer == nil {
// first write
if mediatype := w.ResponseWriter.Header().Get("Content-Type"); mediatype != "" {
w.mediatype = mediatype
}
w.writer = w.m.Writer(w.mediatype, w.ResponseWriter)
}
return w.writer.Write(b)
}
// Close must be called when writing has finished. It returns the error from the minifier.
func (w *minifyResponseWriter) Close() error {
if w.writer != nil {
return w.writer.Close()
}
return nil
}
// ResponseWriter minifies any writes to the http.ResponseWriter.
// http.ResponseWriter loses all functionality such as Pusher, Hijacker, Flusher, ...
// Minification might be slower than just sending the original file! Caching is advised.
func (m *M) ResponseWriter(w http.ResponseWriter, r *http.Request) *minifyResponseWriter {
mediatype := mime.TypeByExtension(path.Ext(r.RequestURI))
return &minifyResponseWriter{w, nil, m, mediatype}
}
// Middleware provides a middleware function that minifies content on the fly by intercepting writes to http.ResponseWriter.
// http.ResponseWriter loses all functionality such as Pusher, Hijacker, Flusher, ...
// Minification might be slower than just sending the original file! Caching is advised.
func (m *M) Middleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
mw := m.ResponseWriter(w, r)
defer mw.Close()
next.ServeHTTP(mw, r)
})
}

358
vendor/github.com/tdewolff/minify/minify_test.go generated vendored Normal file
View file

@ -0,0 +1,358 @@
package minify // import "github.com/tdewolff/minify"
import (
"bufio"
"bytes"
"errors"
"fmt"
"io"
"io/ioutil"
"net/http"
"os"
"os/exec"
"regexp"
"strings"
"testing"
"github.com/tdewolff/test"
)
var errDummy = errors.New("dummy error")
// from os/exec/exec_test.go
func helperCommand(t *testing.T, s ...string) *exec.Cmd {
cs := []string{"-test.run=TestHelperProcess", "--"}
cs = append(cs, s...)
cmd := exec.Command(os.Args[0], cs...)
cmd.Env = []string{"GO_WANT_HELPER_PROCESS=1"}
return cmd
}
////////////////////////////////////////////////////////////////
var m *M
func init() {
m = New()
m.AddFunc("dummy/copy", func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
io.Copy(w, r)
return nil
})
m.AddFunc("dummy/nil", func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
return nil
})
m.AddFunc("dummy/err", func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
return errDummy
})
m.AddFunc("dummy/charset", func(m *M, w io.Writer, r io.Reader, params map[string]string) error {
w.Write([]byte(params["charset"]))
return nil
})
m.AddFunc("dummy/params", func(m *M, w io.Writer, r io.Reader, params map[string]string) error {
return m.Minify(params["type"]+"/"+params["sub"], w, r)
})
m.AddFunc("type/sub", func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
w.Write([]byte("type/sub"))
return nil
})
m.AddFuncRegexp(regexp.MustCompile("^type/.+$"), func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
w.Write([]byte("type/*"))
return nil
})
m.AddFuncRegexp(regexp.MustCompile("^.+/.+$"), func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
w.Write([]byte("*/*"))
return nil
})
}
func TestMinify(t *testing.T) {
test.T(t, m.Minify("?", nil, nil), ErrNotExist, "minifier doesn't exist")
test.T(t, m.Minify("dummy/nil", nil, nil), nil)
test.T(t, m.Minify("dummy/err", nil, nil), errDummy)
b := []byte("test")
out, err := m.Bytes("dummy/nil", b)
test.T(t, err, nil)
test.Bytes(t, out, []byte{}, "dummy/nil returns empty byte slice")
out, err = m.Bytes("?", b)
test.T(t, err, ErrNotExist, "minifier doesn't exist")
test.Bytes(t, out, b, "return input when minifier doesn't exist")
s := "test"
out2, err := m.String("dummy/nil", s)
test.T(t, err, nil)
test.String(t, out2, "", "dummy/nil returns empty string")
out2, err = m.String("?", s)
test.T(t, err, ErrNotExist, "minifier doesn't exist")
test.String(t, out2, s, "return input when minifier doesn't exist")
}
type DummyMinifier struct{}
func (d *DummyMinifier) Minify(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
return errDummy
}
func TestAdd(t *testing.T) {
mAdd := New()
r := bytes.NewBufferString("test")
w := &bytes.Buffer{}
mAdd.Add("dummy/err", &DummyMinifier{})
test.T(t, mAdd.Minify("dummy/err", nil, nil), errDummy)
mAdd.AddRegexp(regexp.MustCompile("err1$"), &DummyMinifier{})
test.T(t, mAdd.Minify("dummy/err1", nil, nil), errDummy)
mAdd.AddFunc("dummy/err", func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
return errDummy
})
test.T(t, mAdd.Minify("dummy/err", nil, nil), errDummy)
mAdd.AddFuncRegexp(regexp.MustCompile("err2$"), func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
return errDummy
})
test.T(t, mAdd.Minify("dummy/err2", nil, nil), errDummy)
mAdd.AddCmd("dummy/copy", helperCommand(t, "dummy/copy"))
mAdd.AddCmd("dummy/err", helperCommand(t, "dummy/err"))
mAdd.AddCmdRegexp(regexp.MustCompile("err6$"), helperCommand(t, "werr6"))
test.T(t, mAdd.Minify("dummy/copy", w, r), nil)
test.String(t, w.String(), "test", "dummy/copy command returns input")
test.String(t, mAdd.Minify("dummy/err", w, r).Error(), "exit status 1", "command returns status 1 for dummy/err")
test.String(t, mAdd.Minify("werr6", w, r).Error(), "exit status 2", "command returns status 2 when minifier doesn't exist")
test.String(t, mAdd.Minify("stderr6", w, r).Error(), "exit status 2", "command returns status 2 when minifier doesn't exist")
}
func TestMatch(t *testing.T) {
pattern, params, _ := m.Match("dummy/copy; a=b")
test.String(t, pattern, "dummy/copy")
test.String(t, params["a"], "b")
pattern, _, _ = m.Match("type/foobar")
test.String(t, pattern, "^type/.+$")
_, _, minifier := m.Match("dummy/")
test.That(t, minifier == nil)
}
func TestWildcard(t *testing.T) {
mimetypeTests := []struct {
mimetype string
expected string
}{
{"type/sub", "type/sub"},
{"type/*", "type/*"},
{"*/*", "*/*"},
{"type/sub2", "type/*"},
{"type2/sub", "*/*"},
{"dummy/charset;charset=UTF-8", "UTF-8"},
{"dummy/charset; charset = UTF-8 ", "UTF-8"},
{"dummy/params;type=type;sub=two2", "type/*"},
}
for _, tt := range mimetypeTests {
r := bytes.NewBufferString("")
w := &bytes.Buffer{}
err := m.Minify(tt.mimetype, w, r)
test.Error(t, err)
test.Minify(t, tt.mimetype, nil, w.String(), tt.expected)
}
}
func TestReader(t *testing.T) {
m := New()
m.AddFunc("dummy/dummy", func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
_, err := io.Copy(w, r)
return err
})
m.AddFunc("dummy/err", func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
return errDummy
})
w := &bytes.Buffer{}
r := bytes.NewBufferString("test")
mr := m.Reader("dummy/dummy", r)
_, err := io.Copy(w, mr)
test.Error(t, err)
test.String(t, w.String(), "test", "equal input after dummy minify reader")
mr = m.Reader("dummy/err", r)
_, err = io.Copy(w, mr)
test.T(t, err, errDummy)
}
func TestWriter(t *testing.T) {
m := New()
m.AddFunc("dummy/dummy", func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
_, err := io.Copy(w, r)
return err
})
m.AddFunc("dummy/err", func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
return errDummy
})
m.AddFunc("dummy/late-err", func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
_, _ = ioutil.ReadAll(r)
return errDummy
})
w := &bytes.Buffer{}
mw := m.Writer("dummy/dummy", w)
_, _ = mw.Write([]byte("test"))
test.Error(t, mw.Close())
test.String(t, w.String(), "test", "equal input after dummy minify writer")
w = &bytes.Buffer{}
mw = m.Writer("dummy/err", w)
_, _ = mw.Write([]byte("test"))
test.T(t, mw.Close(), errDummy)
test.String(t, w.String(), "test", "equal input after dummy minify writer")
w = &bytes.Buffer{}
mw = m.Writer("dummy/late-err", w)
_, _ = mw.Write([]byte("test"))
test.T(t, mw.Close(), errDummy)
test.String(t, w.String(), "")
}
type responseWriter struct {
writer io.Writer
header http.Header
}
func (w *responseWriter) Header() http.Header {
return w.header
}
func (w *responseWriter) WriteHeader(_ int) {}
func (w *responseWriter) Write(b []byte) (int, error) {
return w.writer.Write(b)
}
func TestResponseWriter(t *testing.T) {
m := New()
m.AddFunc("text/html", func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
_, err := io.Copy(w, r)
return err
})
b := &bytes.Buffer{}
w := &responseWriter{b, http.Header{}}
r := &http.Request{RequestURI: "/index.html"}
mw := m.ResponseWriter(w, r)
test.Error(t, mw.Close())
_, _ = mw.Write([]byte("test"))
test.Error(t, mw.Close())
test.String(t, b.String(), "test", "equal input after dummy minify response writer")
b = &bytes.Buffer{}
w = &responseWriter{b, http.Header{}}
r = &http.Request{RequestURI: "/index"}
mw = m.ResponseWriter(w, r)
mw.Header().Add("Content-Type", "text/html")
_, _ = mw.Write([]byte("test"))
test.Error(t, mw.Close())
test.String(t, b.String(), "test", "equal input after dummy minify response writer")
}
func TestMiddleware(t *testing.T) {
m := New()
m.AddFunc("text/html", func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
_, err := io.Copy(w, r)
return err
})
b := &bytes.Buffer{}
w := &responseWriter{b, http.Header{}}
r := &http.Request{RequestURI: "/index.html"}
m.Middleware(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_, _ = w.Write([]byte("test"))
})).ServeHTTP(w, r)
test.String(t, b.String(), "test", "equal input after dummy minify middleware")
}
func TestHelperProcess(*testing.T) {
if os.Getenv("GO_WANT_HELPER_PROCESS") != "1" {
return
}
args := os.Args
for len(args) > 0 {
if args[0] == "--" {
args = args[1:]
break
}
args = args[1:]
}
if len(args) == 0 {
fmt.Fprintf(os.Stderr, "No command\n")
os.Exit(2)
}
switch args[0] {
case "dummy/copy":
io.Copy(os.Stdout, os.Stdin)
case "dummy/err":
os.Exit(1)
default:
os.Exit(2)
}
os.Exit(0)
}
////////////////////////////////////////////////////////////////
func ExampleM_Minify_custom() {
m := New()
m.AddFunc("text/plain", func(m *M, w io.Writer, r io.Reader, _ map[string]string) error {
// remove all newlines and spaces
rb := bufio.NewReader(r)
for {
line, err := rb.ReadString('\n')
if err != nil && err != io.EOF {
return err
}
if _, errws := io.WriteString(w, strings.Replace(line, " ", "", -1)); errws != nil {
return errws
}
if err == io.EOF {
break
}
}
return nil
})
in := "Because my coffee was too cold, I heated it in the microwave."
out, err := m.String("text/plain", in)
if err != nil {
panic(err)
}
fmt.Println(out)
// Output: Becausemycoffeewastoocold,Iheateditinthemicrowave.
}
func ExampleM_Reader() {
b := bytes.NewReader([]byte("input"))
m := New()
	// add minifiers
r := m.Reader("mime/type", b)
if _, err := io.Copy(os.Stdout, r); err != nil {
if _, err := io.Copy(os.Stdout, b); err != nil {
panic(err)
}
}
}
func ExampleM_Writer() {
m := New()
	// add minifiers
w := m.Writer("mime/type", os.Stdout)
if _, err := w.Write([]byte("input")); err != nil {
panic(err)
}
if err := w.Close(); err != nil {
panic(err)
}
}
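The wildcard fallback exercised by TestWildcard above (exact mediatype, then `type/*`, then `*/*`, with parameters such as `;charset=UTF-8` stripped first) can be sketched with a plain map lookup. This is a hand-rolled, stdlib-only illustration of the matching order, not the library's actual `Match` implementation (which also supports regexp patterns):

```go
package main

import (
	"fmt"
	"strings"
)

// resolve finds the most specific registered pattern for a mediatype,
// falling back from "type/sub" to "type/*" to "*/*".
func resolve(registered map[string]bool, mediatype string) string {
	// strip parameters such as ";charset=UTF-8"
	if i := strings.IndexByte(mediatype, ';'); i != -1 {
		mediatype = strings.TrimSpace(mediatype[:i])
	}
	if registered[mediatype] {
		return mediatype
	}
	if i := strings.IndexByte(mediatype, '/'); i != -1 {
		if wildcard := mediatype[:i] + "/*"; registered[wildcard] {
			return wildcard
		}
	}
	if registered["*/*"] {
		return "*/*"
	}
	return ""
}

func main() {
	registered := map[string]bool{"type/sub": true, "type/*": true, "*/*": true}
	fmt.Println(resolve(registered, "type/sub"))  // type/sub
	fmt.Println(resolve(registered, "type/sub2")) // type/*
	fmt.Println(resolve(registered, "type2/sub")) // */*
}
```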

vendor/github.com/tdewolff/minify/svg/buffer.go generated vendored Normal file
@@ -0,0 +1,130 @@
package svg // import "github.com/tdewolff/minify/svg"
import (
"github.com/tdewolff/parse"
"github.com/tdewolff/parse/svg"
"github.com/tdewolff/parse/xml"
)
// Token is a single token unit with an attribute value (if given) and hash of the data.
type Token struct {
xml.TokenType
Hash svg.Hash
Data []byte
Text []byte
AttrVal []byte
}
// TokenBuffer is a buffer that allows for token look-ahead.
type TokenBuffer struct {
l *xml.Lexer
buf []Token
pos int
attrBuffer []*Token
}
// NewTokenBuffer returns a new TokenBuffer.
func NewTokenBuffer(l *xml.Lexer) *TokenBuffer {
return &TokenBuffer{
l: l,
buf: make([]Token, 0, 8),
}
}
func (z *TokenBuffer) read(t *Token) {
t.TokenType, t.Data = z.l.Next()
t.Text = z.l.Text()
if t.TokenType == xml.AttributeToken {
t.AttrVal = z.l.AttrVal()
if len(t.AttrVal) > 1 && (t.AttrVal[0] == '"' || t.AttrVal[0] == '\'') {
			t.AttrVal = parse.ReplaceMultipleWhitespace(parse.TrimWhitespace(t.AttrVal[1 : len(t.AttrVal)-1])) // quotes will be re-added in the attribute loop if necessary
}
t.Hash = svg.ToHash(t.Text)
} else if t.TokenType == xml.StartTagToken || t.TokenType == xml.EndTagToken {
t.AttrVal = nil
t.Hash = svg.ToHash(t.Text)
} else {
t.AttrVal = nil
t.Hash = 0
}
}
// Peek returns the ith element and possibly does an allocation.
// Peeking past an error will panic.
func (z *TokenBuffer) Peek(pos int) *Token {
pos += z.pos
if pos >= len(z.buf) {
if len(z.buf) > 0 && z.buf[len(z.buf)-1].TokenType == xml.ErrorToken {
return &z.buf[len(z.buf)-1]
}
c := cap(z.buf)
d := len(z.buf) - z.pos
p := pos - z.pos + 1 // required peek length
var buf []Token
if 2*p > c {
buf = make([]Token, 0, 2*c+p)
} else {
buf = z.buf
}
copy(buf[:d], z.buf[z.pos:])
buf = buf[:p]
pos -= z.pos
for i := d; i < p; i++ {
z.read(&buf[i])
if buf[i].TokenType == xml.ErrorToken {
buf = buf[:i+1]
pos = i
break
}
}
z.pos, z.buf = 0, buf
}
return &z.buf[pos]
}
// Shift returns the first element and advances position.
func (z *TokenBuffer) Shift() *Token {
if z.pos >= len(z.buf) {
t := &z.buf[:1][0]
z.read(t)
return t
}
t := &z.buf[z.pos]
z.pos++
return t
}
// Attributes extracts the given attribute hashes from a tag.
// It returns pointers to the requested attribute tokens, in the same order as the hashes, or nil where an attribute is absent.
func (z *TokenBuffer) Attributes(hashes ...svg.Hash) ([]*Token, *Token) {
n := 0
for {
if t := z.Peek(n); t.TokenType != xml.AttributeToken {
break
}
n++
}
if len(hashes) > cap(z.attrBuffer) {
z.attrBuffer = make([]*Token, len(hashes))
} else {
z.attrBuffer = z.attrBuffer[:len(hashes)]
for i := range z.attrBuffer {
z.attrBuffer[i] = nil
}
}
var replacee *Token
for i := z.pos; i < z.pos+n; i++ {
attr := &z.buf[i]
for j, hash := range hashes {
if hash == attr.Hash {
z.attrBuffer[j] = attr
replacee = attr
}
}
}
return z.attrBuffer, replacee
}
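The TokenBuffer contract above boils down to: Peek(i) returns the ith upcoming token without consuming it, Shift consumes and returns the first. A minimal sketch of that contract, using a slice of strings in place of the lazy xml.Lexer-backed buffer (the real buffer also grows on demand and makes the ErrorToken sticky):

```go
package main

import "fmt"

// lookahead mimics TokenBuffer's Peek/Shift semantics over a fixed slice.
type lookahead struct {
	src []string
	pos int
}

// Peek returns the ith upcoming item without advancing the position.
func (b *lookahead) Peek(i int) string {
	if b.pos+i >= len(b.src) {
		return "" // the real TokenBuffer returns a sticky ErrorToken here
	}
	return b.src[b.pos+i]
}

// Shift consumes and returns the first upcoming item.
func (b *lookahead) Shift() string {
	t := b.Peek(0)
	b.pos++
	return t
}

func main() {
	b := &lookahead{src: []string{"<svg", "<path", "d"}}
	fmt.Println(b.Peek(2)) // d (position unchanged)
	fmt.Println(b.Shift()) // <svg
	fmt.Println(b.Shift()) // <path
}
```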

vendor/github.com/tdewolff/minify/svg/buffer_test.go generated vendored Normal file
@@ -0,0 +1,68 @@
package svg // import "github.com/tdewolff/minify/svg"
import (
"bytes"
"strconv"
"testing"
"github.com/tdewolff/parse/svg"
"github.com/tdewolff/parse/xml"
"github.com/tdewolff/test"
)
func TestBuffer(t *testing.T) {
// 0 12 3 4 5 6 7 8 9 01
s := `<svg><path d="M0 0L1 1z"/>text<tag/>text</svg>`
z := NewTokenBuffer(xml.NewLexer(bytes.NewBufferString(s)))
tok := z.Shift()
test.That(t, tok.Hash == svg.Svg, "first token is <svg>")
test.That(t, z.pos == 0, "shift first token and restore position")
test.That(t, len(z.buf) == 0, "shift first token and restore length")
test.That(t, z.Peek(2).Hash == svg.D, "third token is d")
test.That(t, z.pos == 0, "don't change position after peeking")
	test.That(t, len(z.buf) == 3, "three tokens after peeking")
test.That(t, z.Peek(8).Hash == svg.Svg, "ninth token is <svg>")
test.That(t, z.pos == 0, "don't change position after peeking")
test.That(t, len(z.buf) == 9, "nine tokens after peeking")
test.That(t, z.Peek(9).TokenType == xml.ErrorToken, "tenth token is an error")
test.That(t, z.Peek(9) == z.Peek(10), "tenth and eleventh token are EOF")
test.That(t, len(z.buf) == 10, "ten tokens after peeking")
_ = z.Shift()
tok = z.Shift()
test.That(t, tok.Hash == svg.Path, "third token is <path>")
	test.That(t, z.pos == 2, "position advanced after shifting")
}
func TestAttributes(t *testing.T) {
r := bytes.NewBufferString(`<rect x="0" y="1" width="2" height="3" rx="4" ry="5"/>`)
l := xml.NewLexer(r)
tb := NewTokenBuffer(l)
tb.Shift()
for k := 0; k < 2; k++ { // run twice to ensure similar results
attrs, _ := tb.Attributes(svg.X, svg.Y, svg.Width, svg.Height, svg.Rx, svg.Ry)
for i := 0; i < 6; i++ {
test.That(t, attrs[i] != nil, "attr must not be nil")
val := string(attrs[i].AttrVal)
j, _ := strconv.ParseInt(val, 10, 32)
test.That(t, int(j) == i, "attr data is bad at position", i)
}
}
}
////////////////////////////////////////////////////////////////
func BenchmarkAttributes(b *testing.B) {
r := bytes.NewBufferString(`<rect x="0" y="1" width="2" height="3" rx="4" ry="5"/>`)
l := xml.NewLexer(r)
tb := NewTokenBuffer(l)
tb.Shift()
tb.Peek(6)
for i := 0; i < b.N; i++ {
tb.Attributes(svg.X, svg.Y, svg.Width, svg.Height, svg.Rx, svg.Ry)
}
}

vendor/github.com/tdewolff/minify/svg/pathdata.go generated vendored Normal file
@@ -0,0 +1,282 @@
package svg
import (
strconvStdlib "strconv"
"github.com/tdewolff/minify"
"github.com/tdewolff/parse"
"github.com/tdewolff/parse/strconv"
)
type PathData struct {
o *Minifier
x, y float64
coords [][]byte
coordFloats []float64
state PathDataState
curBuffer []byte
altBuffer []byte
coordBuffer []byte
}
type PathDataState struct {
cmd byte
prevDigit bool
prevDigitIsInt bool
}
func NewPathData(o *Minifier) *PathData {
return &PathData{
o: o,
}
}
func (p *PathData) ShortenPathData(b []byte) []byte {
var x0, y0 float64
var cmd byte
p.x, p.y = 0.0, 0.0
p.coords = p.coords[:0]
p.coordFloats = p.coordFloats[:0]
p.state = PathDataState{}
j := 0
for i := 0; i < len(b); i++ {
c := b[i]
if c == ' ' || c == ',' || c == '\n' || c == '\r' || c == '\t' {
continue
} else if c >= 'A' && (cmd == 0 || cmd != c || c == 'M' || c == 'm') { // any command
if cmd != 0 {
j += p.copyInstruction(b[j:], cmd)
if cmd == 'M' || cmd == 'm' {
x0 = p.x
y0 = p.y
} else if cmd == 'Z' || cmd == 'z' {
p.x = x0
p.y = y0
}
}
cmd = c
p.coords = p.coords[:0]
p.coordFloats = p.coordFloats[:0]
} else if n := parse.Number(b[i:]); n > 0 {
f, _ := strconv.ParseFloat(b[i : i+n])
p.coords = append(p.coords, b[i:i+n])
p.coordFloats = append(p.coordFloats, f)
i += n - 1
}
}
if cmd != 0 {
j += p.copyInstruction(b[j:], cmd)
}
return b[:j]
}
func (p *PathData) copyInstruction(b []byte, cmd byte) int {
n := len(p.coords)
if n == 0 {
if cmd == 'Z' || cmd == 'z' {
b[0] = 'z'
return 1
}
return 0
}
isRelCmd := cmd >= 'a'
// get new cursor coordinates
di := 0
if (cmd == 'M' || cmd == 'm' || cmd == 'L' || cmd == 'l' || cmd == 'T' || cmd == 't') && n%2 == 0 {
di = 2
// reprint M always, as the first pair is a move but subsequent pairs are L
if cmd == 'M' || cmd == 'm' {
p.state.cmd = byte(0)
}
} else if cmd == 'H' || cmd == 'h' || cmd == 'V' || cmd == 'v' {
di = 1
} else if (cmd == 'S' || cmd == 's' || cmd == 'Q' || cmd == 'q') && n%4 == 0 {
di = 4
} else if (cmd == 'C' || cmd == 'c') && n%6 == 0 {
di = 6
} else if (cmd == 'A' || cmd == 'a') && n%7 == 0 {
di = 7
} else {
return 0
}
j := 0
origCmd := cmd
ax, ay := 0.0, 0.0
for i := 0; i < n; i += di {
// subsequent coordinate pairs for M are really L
if i > 0 && (origCmd == 'M' || origCmd == 'm') {
origCmd = 'L' + (origCmd - 'M')
}
cmd = origCmd
coords := p.coords[i : i+di]
coordFloats := p.coordFloats[i : i+di]
if cmd == 'H' || cmd == 'h' {
ax = coordFloats[di-1]
if isRelCmd {
ay = 0
} else {
ay = p.y
}
} else if cmd == 'V' || cmd == 'v' {
if isRelCmd {
ax = 0
} else {
ax = p.x
}
ay = coordFloats[di-1]
} else {
ax = coordFloats[di-2]
ay = coordFloats[di-1]
}
// switch from L to H or V whenever possible
if cmd == 'L' || cmd == 'l' {
if isRelCmd {
if coordFloats[0] == 0 {
cmd = 'v'
coords = coords[1:]
coordFloats = coordFloats[1:]
} else if coordFloats[1] == 0 {
cmd = 'h'
coords = coords[:1]
coordFloats = coordFloats[:1]
}
} else {
if coordFloats[0] == p.x {
cmd = 'V'
coords = coords[1:]
coordFloats = coordFloats[1:]
} else if coordFloats[1] == p.y {
cmd = 'H'
coords = coords[:1]
coordFloats = coordFloats[:1]
}
}
}
// make a current and alternated path with absolute/relative altered
var curState, altState PathDataState
curState = p.shortenCurPosInstruction(cmd, coords)
if isRelCmd {
altState = p.shortenAltPosInstruction(cmd-'a'+'A', coordFloats, p.x, p.y)
} else {
altState = p.shortenAltPosInstruction(cmd-'A'+'a', coordFloats, -p.x, -p.y)
}
// choose shortest, relative or absolute path?
if len(p.altBuffer) < len(p.curBuffer) {
j += copy(b[j:], p.altBuffer)
p.state = altState
} else {
j += copy(b[j:], p.curBuffer)
p.state = curState
}
if isRelCmd {
p.x += ax
p.y += ay
} else {
p.x = ax
p.y = ay
}
}
return j
}
func (p *PathData) shortenCurPosInstruction(cmd byte, coords [][]byte) PathDataState {
state := p.state
p.curBuffer = p.curBuffer[:0]
if cmd != state.cmd && !(state.cmd == 'M' && cmd == 'L' || state.cmd == 'm' && cmd == 'l') {
p.curBuffer = append(p.curBuffer, cmd)
state.cmd = cmd
state.prevDigit = false
state.prevDigitIsInt = false
}
for i, coord := range coords {
isFlag := false
if (cmd == 'A' || cmd == 'a') && (i%7 == 3 || i%7 == 4) {
isFlag = true
}
coord = minify.Number(coord, p.o.Decimals)
state.copyNumber(&p.curBuffer, coord, isFlag)
}
return state
}
func (p *PathData) shortenAltPosInstruction(cmd byte, coordFloats []float64, x, y float64) PathDataState {
state := p.state
p.altBuffer = p.altBuffer[:0]
if cmd != state.cmd && !(state.cmd == 'M' && cmd == 'L' || state.cmd == 'm' && cmd == 'l') {
p.altBuffer = append(p.altBuffer, cmd)
state.cmd = cmd
state.prevDigit = false
state.prevDigitIsInt = false
}
for i, f := range coordFloats {
isFlag := false
if cmd == 'L' || cmd == 'l' || cmd == 'C' || cmd == 'c' || cmd == 'S' || cmd == 's' || cmd == 'Q' || cmd == 'q' || cmd == 'T' || cmd == 't' || cmd == 'M' || cmd == 'm' {
if i%2 == 0 {
f += x
} else {
f += y
}
} else if cmd == 'H' || cmd == 'h' {
f += x
} else if cmd == 'V' || cmd == 'v' {
f += y
} else if cmd == 'A' || cmd == 'a' {
if i%7 == 5 {
f += x
} else if i%7 == 6 {
f += y
} else if i%7 == 3 || i%7 == 4 {
isFlag = true
}
}
p.coordBuffer = strconvStdlib.AppendFloat(p.coordBuffer[:0], f, 'g', -1, 64)
coord := minify.Number(p.coordBuffer, p.o.Decimals)
state.copyNumber(&p.altBuffer, coord, isFlag)
}
return state
}
func (state *PathDataState) copyNumber(buffer *[]byte, coord []byte, isFlag bool) {
if state.prevDigit && (coord[0] >= '0' && coord[0] <= '9' || coord[0] == '.' && state.prevDigitIsInt) {
if coord[0] == '0' && !state.prevDigitIsInt {
if isFlag {
*buffer = append(*buffer, ' ', '0')
state.prevDigitIsInt = true
} else {
			*buffer = append(*buffer, '.', '0') // aggressively add dot so subsequent numbers could drop leading space
// prevDigit stays true and prevDigitIsInt stays false
}
return
}
*buffer = append(*buffer, ' ')
}
state.prevDigit = true
state.prevDigitIsInt = true
if len(coord) > 2 && coord[len(coord)-2] == '0' && coord[len(coord)-1] == '0' {
coord[len(coord)-2] = 'e'
coord[len(coord)-1] = '2'
state.prevDigitIsInt = false
} else {
for _, c := range coord {
if c == '.' || c == 'e' || c == 'E' {
state.prevDigitIsInt = false
break
}
}
}
*buffer = append(*buffer, coord...)
}
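copyNumber's separator logic rests on a lexical property of SVG path data: a space between two numbers can be dropped when the second number starts with `-`, or when it starts with `.` and the previous number already contains a dot or exponent (a second dot terminates the first number). A stdlib-only sketch of that rule, simplified from the prevDigit/prevDigitIsInt state tracking above:

```go
package main

import (
	"fmt"
	"strings"
)

// needsSeparator reports whether a space must be written between two
// successive path-data numbers for them to remain distinct tokens.
func needsSeparator(prev, next string) bool {
	if next[0] == '-' {
		return false // a minus sign always starts a new number
	}
	if next[0] == '.' && strings.ContainsAny(prev, ".eE") {
		return false // a second dot terminates the previous number
	}
	return true
}

// join concatenates numbers with the minimal separators.
func join(nums []string) string {
	out := nums[0]
	for i := 1; i < len(nums); i++ {
		if needsSeparator(nums[i-1], nums[i]) {
			out += " "
		}
		out += nums[i]
	}
	return out
}

func main() {
	fmt.Println(join([]string{".5", ".6"}))   // .5.6
	fmt.Println(join([]string{"10", "-100"})) // 10-100
	fmt.Println(join([]string{"1", "2"}))     // 1 2
}
```

This matches the minified output seen in the tests below, e.g. `M.5.6M-1e2.5z`.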

vendor/github.com/tdewolff/minify/svg/pathdata_test.go generated vendored Normal file
@@ -0,0 +1,60 @@
package svg // import "github.com/tdewolff/minify/svg"
import (
"testing"
"github.com/tdewolff/test"
)
func TestPathData(t *testing.T) {
var pathDataTests = []struct {
pathData string
expected string
}{
{"M10 10 20 10", "M10 10H20"},
{"M10 10 10 20", "M10 10V20"},
{"M50 50 100 100", "M50 50l50 50"},
{"m50 50 40 40m50 50", "m50 50 40 40m50 50"},
{"M10 10zM15 15", "M10 10zm5 5"},
{"M50 50H55V55", "M50 50h5v5"},
{"M10 10L11 10 11 11", "M10 10h1v1"},
{"M10 10l1 0 0 1", "M10 10h1v1"},
{"M10 10L11 11 0 0", "M10 10l1 1L0 0"},
{"M246.614 51.028L246.614-5.665 189.922-5.665", "M246.614 51.028V-5.665H189.922"},
{"M100,200 C100,100 250,100 250,200 S400,300 400,200", "M1e2 2e2c0-1e2 150-1e2 150 0s150 1e2 150 0"},
{"M200,300 Q400,50 600,300 T1000,300", "M2e2 3e2q2e2-250 4e2.0t4e2.0"},
{"M300,200 h-150 a150,150 0 1,0 150,-150 z", "M3e2 2e2H150A150 150 0 1 0 3e2 50z"},
{"x5 5L10 10", "L10 10"},
{"M.0.1", "M0 .1"},
{"M200.0.1", "M2e2.1"},
{"M0 0a3.28 3.28.0.0.0 3.279 3.28", "M0 0a3.28 3.28.0 0 0 3.279 3.28"}, // #114
{"A1.1.0.0.0.0.2.3", "A1.1.0.0 0 0 .2."}, // bad input (sweep and large-arc are not booleans) gives bad output
// fuzz
{"", ""},
{"ML", ""},
{".8.00c0", ""},
{".1.04h0e6.0e6.0e0.0", "h0 0 0 0"},
{"M.1.0.0.2Z", "M.1.0.0.2z"},
{"A.0.0.0.0.3.2e3.7.0.0.0.0.0.1.3.0.0.0.0.2.3.2.0.0.0.0.20.2e-10.0.0.0.0.0.0.0.0", "A0 0 0 0 .3 2e2.7.0.0.0 0 0 .1.3 30 0 0 0 .2.3.2 3 20 0 0 .2 2e-1100 11 0 0 0 "}, // bad input (sweep and large-arc are not booleans) gives bad output
}
p := NewPathData(&Minifier{Decimals: -1})
for _, tt := range pathDataTests {
t.Run(tt.pathData, func(t *testing.T) {
path := p.ShortenPathData([]byte(tt.pathData))
test.Minify(t, tt.pathData, nil, string(path), tt.expected)
})
}
}
////////////////////////////////////////////////////////////////
func BenchmarkShortenPathData(b *testing.B) {
p := NewPathData(&Minifier{})
r := []byte("M8.64,223.948c0,0,143.468,3.431,185.777-181.808c2.673-11.702-1.23-20.154,1.316-33.146h16.287c0,0-3.14,17.248,1.095,30.848c21.392,68.692-4.179,242.343-204.227,196.59L8.64,223.948z")
for i := 0; i < b.N; i++ {
p.ShortenPathData(r)
}
}

vendor/github.com/tdewolff/minify/svg/svg.go generated vendored Normal file
@@ -0,0 +1,434 @@
// Package svg minifies SVG 1.1 following the specifications at http://www.w3.org/TR/SVG11/.
package svg // import "github.com/tdewolff/minify/svg"
import (
"bytes"
"io"
"github.com/tdewolff/minify"
minifyCSS "github.com/tdewolff/minify/css"
"github.com/tdewolff/parse"
"github.com/tdewolff/parse/buffer"
"github.com/tdewolff/parse/css"
"github.com/tdewolff/parse/svg"
"github.com/tdewolff/parse/xml"
)
var (
voidBytes = []byte("/>")
isBytes = []byte("=")
spaceBytes = []byte(" ")
cdataEndBytes = []byte("]]>")
pathBytes = []byte("<path")
dBytes = []byte("d")
zeroBytes = []byte("0")
cssMimeBytes = []byte("text/css")
urlBytes = []byte("url(")
)
////////////////////////////////////////////////////////////////
// DefaultMinifier is the default minifier.
var DefaultMinifier = &Minifier{Decimals: -1}
// Minifier is an SVG minifier.
type Minifier struct {
Decimals int
}
// Minify minifies SVG data, it reads from r and writes to w.
func Minify(m *minify.M, w io.Writer, r io.Reader, params map[string]string) error {
return DefaultMinifier.Minify(m, w, r, params)
}
// Minify minifies SVG data, it reads from r and writes to w.
func (o *Minifier) Minify(m *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
var tag svg.Hash
defaultStyleType := cssMimeBytes
defaultStyleParams := map[string]string(nil)
defaultInlineStyleParams := map[string]string{"inline": "1"}
p := NewPathData(o)
minifyBuffer := buffer.NewWriter(make([]byte, 0, 64))
attrByteBuffer := make([]byte, 0, 64)
gStack := make([]bool, 0)
l := xml.NewLexer(r)
defer l.Restore()
tb := NewTokenBuffer(l)
for {
t := *tb.Shift()
SWITCH:
switch t.TokenType {
case xml.ErrorToken:
if l.Err() == io.EOF {
return nil
}
return l.Err()
case xml.DOCTYPEToken:
if len(t.Text) > 0 && t.Text[len(t.Text)-1] == ']' {
if _, err := w.Write(t.Data); err != nil {
return err
}
}
case xml.TextToken:
t.Data = parse.ReplaceMultipleWhitespace(parse.TrimWhitespace(t.Data))
if tag == svg.Style && len(t.Data) > 0 {
if err := m.MinifyMimetype(defaultStyleType, w, buffer.NewReader(t.Data), defaultStyleParams); err != nil {
if err != minify.ErrNotExist {
return err
} else if _, err := w.Write(t.Data); err != nil {
return err
}
}
} else if _, err := w.Write(t.Data); err != nil {
return err
}
case xml.CDATAToken:
if tag == svg.Style {
minifyBuffer.Reset()
if err := m.MinifyMimetype(defaultStyleType, minifyBuffer, buffer.NewReader(t.Text), defaultStyleParams); err == nil {
t.Data = append(t.Data[:9], minifyBuffer.Bytes()...)
t.Text = t.Data[9:]
t.Data = append(t.Data, cdataEndBytes...)
} else if err != minify.ErrNotExist {
return err
}
}
var useText bool
if t.Text, useText = xml.EscapeCDATAVal(&attrByteBuffer, t.Text); useText {
t.Text = parse.ReplaceMultipleWhitespace(parse.TrimWhitespace(t.Text))
if _, err := w.Write(t.Text); err != nil {
return err
}
} else if _, err := w.Write(t.Data); err != nil {
return err
}
case xml.StartTagPIToken:
for {
if t := *tb.Shift(); t.TokenType == xml.StartTagClosePIToken || t.TokenType == xml.ErrorToken {
break
}
}
case xml.StartTagToken:
tag = t.Hash
if containerTagMap[tag] { // skip empty containers
i := 0
for {
next := tb.Peek(i)
i++
if next.TokenType == xml.EndTagToken && next.Hash == tag || next.TokenType == xml.StartTagCloseVoidToken || next.TokenType == xml.ErrorToken {
for j := 0; j < i; j++ {
tb.Shift()
}
break SWITCH
} else if next.TokenType != xml.AttributeToken && next.TokenType != xml.StartTagCloseToken {
break
}
}
if tag == svg.G {
if tb.Peek(0).TokenType == xml.StartTagCloseToken {
gStack = append(gStack, false)
tb.Shift()
break
}
gStack = append(gStack, true)
}
} else if tag == svg.Metadata {
skipTag(tb, tag)
break
} else if tag == svg.Line {
o.shortenLine(tb, &t, p)
} else if tag == svg.Rect && !o.shortenRect(tb, &t, p) {
skipTag(tb, tag)
break
} else if tag == svg.Polygon || tag == svg.Polyline {
o.shortenPoly(tb, &t, p)
}
if _, err := w.Write(t.Data); err != nil {
return err
}
case xml.AttributeToken:
if len(t.AttrVal) == 0 || t.Text == nil { // data is nil when attribute has been removed
continue
}
attr := t.Hash
val := t.AttrVal
if n, m := parse.Dimension(val); n+m == len(val) && attr != svg.Version { // TODO: inefficient, temporary measure
val, _ = o.shortenDimension(val)
}
if attr == svg.Xml_Space && bytes.Equal(val, []byte("preserve")) ||
tag == svg.Svg && (attr == svg.Version && bytes.Equal(val, []byte("1.1")) ||
attr == svg.X && bytes.Equal(val, []byte("0")) ||
attr == svg.Y && bytes.Equal(val, []byte("0")) ||
attr == svg.Width && bytes.Equal(val, []byte("100%")) ||
attr == svg.Height && bytes.Equal(val, []byte("100%")) ||
attr == svg.PreserveAspectRatio && bytes.Equal(val, []byte("xMidYMid meet")) ||
attr == svg.BaseProfile && bytes.Equal(val, []byte("none")) ||
attr == svg.ContentScriptType && bytes.Equal(val, []byte("application/ecmascript")) ||
attr == svg.ContentStyleType && bytes.Equal(val, []byte("text/css"))) ||
tag == svg.Style && attr == svg.Type && bytes.Equal(val, []byte("text/css")) {
continue
}
if _, err := w.Write(spaceBytes); err != nil {
return err
}
if _, err := w.Write(t.Text); err != nil {
return err
}
if _, err := w.Write(isBytes); err != nil {
return err
}
if tag == svg.Svg && attr == svg.ContentStyleType {
val = minify.ContentType(val)
defaultStyleType = val
} else if attr == svg.Style {
minifyBuffer.Reset()
if err := m.MinifyMimetype(defaultStyleType, minifyBuffer, buffer.NewReader(val), defaultInlineStyleParams); err == nil {
val = minifyBuffer.Bytes()
} else if err != minify.ErrNotExist {
return err
}
} else if attr == svg.D {
val = p.ShortenPathData(val)
} else if attr == svg.ViewBox {
j := 0
newVal := val[:0]
for i := 0; i < 4; i++ {
if i != 0 {
if j >= len(val) || val[j] != ' ' && val[j] != ',' {
newVal = append(newVal, val[j:]...)
break
}
newVal = append(newVal, ' ')
j++
}
if dim, n := o.shortenDimension(val[j:]); n > 0 {
newVal = append(newVal, dim...)
j += n
} else {
newVal = append(newVal, val[j:]...)
break
}
}
val = newVal
} else if colorAttrMap[attr] && len(val) > 0 && (len(val) < 5 || !parse.EqualFold(val[:4], urlBytes)) {
parse.ToLower(val)
if val[0] == '#' {
if name, ok := minifyCSS.ShortenColorHex[string(val)]; ok {
val = name
} else if len(val) == 7 && val[1] == val[2] && val[3] == val[4] && val[5] == val[6] {
val[2] = val[3]
val[3] = val[5]
val = val[:4]
}
} else if hex, ok := minifyCSS.ShortenColorName[css.ToHash(val)]; ok {
val = hex
// } else if len(val) > 5 && bytes.Equal(val[:4], []byte("rgb(")) && val[len(val)-1] == ')' {
// TODO: handle rgb(x, y, z) and hsl(x, y, z)
}
}
// prefer single or double quotes depending on what occurs more often in value
val = xml.EscapeAttrVal(&attrByteBuffer, val)
if _, err := w.Write(val); err != nil {
return err
}
case xml.StartTagCloseToken:
next := tb.Peek(0)
skipExtra := false
if next.TokenType == xml.TextToken && parse.IsAllWhitespace(next.Data) {
next = tb.Peek(1)
skipExtra = true
}
if next.TokenType == xml.EndTagToken {
// collapse empty tags to single void tag
tb.Shift()
if skipExtra {
tb.Shift()
}
if _, err := w.Write(voidBytes); err != nil {
return err
}
} else {
if _, err := w.Write(t.Data); err != nil {
return err
}
}
case xml.StartTagCloseVoidToken:
tag = 0
if _, err := w.Write(t.Data); err != nil {
return err
}
case xml.EndTagToken:
tag = 0
if t.Hash == svg.G && len(gStack) > 0 {
if !gStack[len(gStack)-1] {
gStack = gStack[:len(gStack)-1]
break
}
gStack = gStack[:len(gStack)-1]
}
if len(t.Data) > 3+len(t.Text) {
t.Data[2+len(t.Text)] = '>'
t.Data = t.Data[:3+len(t.Text)]
}
if _, err := w.Write(t.Data); err != nil {
return err
}
}
}
}
func (o *Minifier) shortenDimension(b []byte) ([]byte, int) {
if n, m := parse.Dimension(b); n > 0 {
unit := b[n : n+m]
b = minify.Number(b[:n], o.Decimals)
if len(b) != 1 || b[0] != '0' {
if m == 2 && unit[0] == 'p' && unit[1] == 'x' {
unit = nil
} else if m > 1 { // only percentage is length 1
parse.ToLower(unit)
}
b = append(b, unit...)
}
return b, n + m
}
return b, 0
}
func (o *Minifier) shortenLine(tb *TokenBuffer, t *Token, p *PathData) {
x1, y1, x2, y2 := zeroBytes, zeroBytes, zeroBytes, zeroBytes
if attrs, replacee := tb.Attributes(svg.X1, svg.Y1, svg.X2, svg.Y2); replacee != nil {
if attrs[0] != nil {
x1 = minify.Number(attrs[0].AttrVal, o.Decimals)
attrs[0].Text = nil
}
if attrs[1] != nil {
y1 = minify.Number(attrs[1].AttrVal, o.Decimals)
attrs[1].Text = nil
}
if attrs[2] != nil {
x2 = minify.Number(attrs[2].AttrVal, o.Decimals)
attrs[2].Text = nil
}
if attrs[3] != nil {
y2 = minify.Number(attrs[3].AttrVal, o.Decimals)
attrs[3].Text = nil
}
d := make([]byte, 0, 5+len(x1)+len(y1)+len(x2)+len(y2))
d = append(d, 'M')
d = append(d, x1...)
d = append(d, ' ')
d = append(d, y1...)
d = append(d, 'L')
d = append(d, x2...)
d = append(d, ' ')
d = append(d, y2...)
d = append(d, 'z')
d = p.ShortenPathData(d)
t.Data = pathBytes
replacee.Text = dBytes
replacee.AttrVal = d
}
}
func (o *Minifier) shortenRect(tb *TokenBuffer, t *Token, p *PathData) bool {
if attrs, replacee := tb.Attributes(svg.X, svg.Y, svg.Width, svg.Height, svg.Rx, svg.Ry); replacee != nil && attrs[4] == nil && attrs[5] == nil {
x, y, w, h := zeroBytes, zeroBytes, zeroBytes, zeroBytes
if attrs[0] != nil {
x = minify.Number(attrs[0].AttrVal, o.Decimals)
attrs[0].Text = nil
}
if attrs[1] != nil {
y = minify.Number(attrs[1].AttrVal, o.Decimals)
attrs[1].Text = nil
}
if attrs[2] != nil {
w = minify.Number(attrs[2].AttrVal, o.Decimals)
attrs[2].Text = nil
}
if attrs[3] != nil {
h = minify.Number(attrs[3].AttrVal, o.Decimals)
attrs[3].Text = nil
}
if len(w) == 0 || w[0] == '0' || len(h) == 0 || h[0] == '0' {
return false
}
d := make([]byte, 0, 6+2*len(x)+len(y)+len(w)+len(h))
d = append(d, 'M')
d = append(d, x...)
d = append(d, ' ')
d = append(d, y...)
d = append(d, 'h')
d = append(d, w...)
d = append(d, 'v')
d = append(d, h...)
d = append(d, 'H')
d = append(d, x...)
d = append(d, 'z')
d = p.ShortenPathData(d)
t.Data = pathBytes
replacee.Text = dBytes
replacee.AttrVal = d
}
return true
}
func (o *Minifier) shortenPoly(tb *TokenBuffer, t *Token, p *PathData) {
if attrs, replacee := tb.Attributes(svg.Points); replacee != nil && attrs[0] != nil {
points := attrs[0].AttrVal
i := 0
for i < len(points) && !(points[i] == ' ' || points[i] == ',' || points[i] == '\n' || points[i] == '\r' || points[i] == '\t') {
i++
}
for i < len(points) && (points[i] == ' ' || points[i] == ',' || points[i] == '\n' || points[i] == '\r' || points[i] == '\t') {
i++
}
for i < len(points) && !(points[i] == ' ' || points[i] == ',' || points[i] == '\n' || points[i] == '\r' || points[i] == '\t') {
i++
}
endMoveTo := i
for i < len(points) && (points[i] == ' ' || points[i] == ',' || points[i] == '\n' || points[i] == '\r' || points[i] == '\t') {
i++
}
startLineTo := i
if i == len(points) {
return
}
d := make([]byte, 0, len(points)+3)
d = append(d, 'M')
d = append(d, points[:endMoveTo]...)
d = append(d, 'L')
d = append(d, points[startLineTo:]...)
if t.Hash == svg.Polygon {
d = append(d, 'z')
}
d = p.ShortenPathData(d)
t.Data = pathBytes
replacee.Text = dBytes
replacee.AttrVal = d
}
}
////////////////////////////////////////////////////////////////
func skipTag(tb *TokenBuffer, tag svg.Hash) {
for {
if t := *tb.Shift(); (t.TokenType == xml.EndTagToken || t.TokenType == xml.StartTagCloseVoidToken) && t.Hash == tag || t.TokenType == xml.ErrorToken {
break
}
}
}
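shortenDimension's effect on attribute values (drop a `px` unit, lowercase longer units, drop the unit entirely on zero) can be sketched without the parse package. This is an illustration of the observable behavior, assuming the numeric part has already been minified; the real code also shortens the number itself via minify.Number:

```go
package main

import (
	"fmt"
	"strings"
)

// shortenDim applies the unit rules seen in Minifier.shortenDimension:
// zero values lose their unit, "px" is redundant and dropped, and other
// multi-letter units are lowercased ("%" is left alone).
func shortenDim(num, unit string) string {
	if num == "0" {
		return "0"
	}
	if strings.EqualFold(unit, "px") {
		return num
	}
	if len(unit) > 1 {
		unit = strings.ToLower(unit)
	}
	return num + unit
}

func main() {
	fmt.Println(shortenDim("5", "px"))   // 5
	fmt.Println(shortenDim("240", "IN")) // 240in
	fmt.Println(shortenDim("0", "%"))    // 0
}
```

This matches test cases below such as `x="5.0px"` becoming `x="5"` and `240IN` becoming `240in`.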

vendor/github.com/tdewolff/minify/svg/svg_test.go generated vendored Normal file
@@ -0,0 +1,199 @@
package svg // import "github.com/tdewolff/minify/svg"
import (
"bytes"
"fmt"
"io"
"os"
"testing"
"github.com/tdewolff/minify"
"github.com/tdewolff/minify/css"
"github.com/tdewolff/test"
)
func TestSVG(t *testing.T) {
svgTests := []struct {
svg string
expected string
}{
{`<!-- comment -->`, ``},
{`<!DOCTYPE svg SYSTEM "foo.dtd">`, ``},
{`<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "foo.dtd" [ <!ENTITY x "bar"> ]>`, `<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "foo.dtd" [ <!ENTITY x "bar"> ]>`},
{`<!DOCTYPE svg SYSTEM "foo.dtd">`, ``},
{`<?xml version="1.0" ?>`, ``},
{`<style> <![CDATA[ x ]]> </style>`, `<style>x</style>`},
{`<style> <![CDATA[ <<<< ]]> </style>`, `<style>&lt;&lt;&lt;&lt;</style>`},
{`<style> <![CDATA[ <<<<< ]]> </style>`, `<style><![CDATA[ <<<<< ]]></style>`},
{`<style/><![CDATA[ <<<<< ]]>`, `<style/><![CDATA[ <<<<< ]]>`},
{`<svg version="1.0"></svg>`, `<svg version="1.0"/>`},
{`<svg version="1.1" x="0" y="0px" width="100%" height="100%"><path/></svg>`, `<svg><path/></svg>`},
{`<path x="a"> </path>`, `<path x="a"/>`},
{`<path x=" a "/>`, `<path x="a"/>`},
{"<path x=\" a \n b \"/>", `<path x="a b"/>`},
{`<path x="5.0px" y="0%"/>`, `<path x="5" y="0"/>`},
{`<svg viewBox="5.0px 5px 240IN px"><path/></svg>`, `<svg viewBox="5 5 240in px"><path/></svg>`},
{`<svg viewBox="5.0!5px"><path/></svg>`, `<svg viewBox="5!5px"><path/></svg>`},
{`<path d="M 100 100 L 300 100 L 200 100 z"/>`, `<path d="M1e2 1e2H3e2 2e2z"/>`},
{`<path d="M100 -100M200 300z"/>`, `<path d="M1e2-1e2M2e2 3e2z"/>`},
{`<path d="M0.5 0.6 M -100 0.5z"/>`, `<path d="M.5.6M-1e2.5z"/>`},
{`<path d="M01.0 0.6 z"/>`, `<path d="M1 .6z"/>`},
{`<path d="M20 20l-10-10z"/>`, `<path d="M20 20 10 10z"/>`},
{`<?xml version="1.0" encoding="utf-8"?>`, ``},
{`<svg viewbox="0 0 16 16"><path/></svg>`, `<svg viewbox="0 0 16 16"><path/></svg>`},
{`<g></g>`, ``},
{`<g><path/></g>`, `<path/>`},
{`<g id="a"><g><path/></g></g>`, `<g id="a"><path/></g>`},
{`<path fill="#ffffff"/>`, `<path fill="#fff"/>`},
{`<path fill="#fff"/>`, `<path fill="#fff"/>`},
{`<path fill="white"/>`, `<path fill="#fff"/>`},
{`<path fill="#ff0000"/>`, `<path fill="red"/>`},
{`<line x1="5" y1="10" x2="20" y2="40"/>`, `<path d="M5 10 20 40z"/>`},
{`<rect x="5" y="10" width="20" height="40"/>`, `<path d="M5 10h20v40H5z"/>`},
{`<rect x="-5.669" y="147.402" fill="#843733" width="252.279" height="14.177"/>`, `<path fill="#843733" d="M-5.669 147.402h252.279v14.177H-5.669z"/>`},
{`<rect x="5" y="10" rx="2" ry="3"/>`, `<rect x="5" y="10" rx="2" ry="3"/>`},
{`<rect x="5" y="10" height="40"/>`, ``},
{`<rect x="5" y="10" width="30" height="0"/>`, ``},
{`<polygon points="1,2 3,4"/>`, `<path d="M1 2 3 4z"/>`},
{`<polyline points="1,2 3,4"/>`, `<path d="M1 2 3 4"/>`},
{`<svg contentStyleType="text/json ; charset=iso-8859-1"><style>{a : true}</style></svg>`, `<svg contentStyleType="text/json;charset=iso-8859-1"><style>{a : true}</style></svg>`},
{`<metadata><dc:title /></metadata>`, ``},
// from SVGO
{`<!DOCTYPE bla><?xml?><!-- comment --><metadata/>`, ``},
{`<polygon fill="none" stroke="#000" points="-0.1,"/>`, `<polygon fill="none" stroke="#000" points="-0.1,"/>`}, // #45
{`<path stroke="url(#UPPERCASE)"/>`, `<path stroke="url(#UPPERCASE)"/>`}, // #117
// go fuzz
{`<0 d=09e9.6e-9e0`, `<0 d=""`},
{`<line`, `<line`},
}
m := minify.New()
for _, tt := range svgTests {
t.Run(tt.svg, func(t *testing.T) {
r := bytes.NewBufferString(tt.svg)
w := &bytes.Buffer{}
err := Minify(m, w, r, nil)
test.Minify(t, tt.svg, err, w.String(), tt.expected)
})
}
}
func TestSVGStyle(t *testing.T) {
svgTests := []struct {
svg string
expected string
}{
{`<style> a > b {} </style>`, `<style>a>b{}</style>`},
{`<style> <![CDATA[ @media x < y {} ]]> </style>`, `<style>@media x &lt; y{}</style>`},
{`<style> <![CDATA[ * { content: '<<<<<'; } ]]> </style>`, `<style><![CDATA[*{content:'<<<<<'}]]></style>`},
{`<style/><![CDATA[ * { content: '<<<<<'; ]]>`, `<style/><![CDATA[ * { content: '<<<<<'; ]]>`},
{`<path style="fill: black; stroke: #ff0000;"/>`, `<path style="fill:#000;stroke:red"/>`},
}
m := minify.New()
m.AddFunc("text/css", css.Minify)
for _, tt := range svgTests {
t.Run(tt.svg, func(t *testing.T) {
r := bytes.NewBufferString(tt.svg)
w := &bytes.Buffer{}
err := Minify(m, w, r, nil)
test.Minify(t, tt.svg, err, w.String(), tt.expected)
})
}
}
func TestSVGDecimals(t *testing.T) {
var svgTests = []struct {
svg string
expected string
}{
{`<svg x="1.234" y="0.001" width="1.001"><path/></svg>`, `<svg x="1.2" width="1"><path/></svg>`},
}
m := minify.New()
o := &Minifier{Decimals: 1}
for _, tt := range svgTests {
t.Run(tt.svg, func(t *testing.T) {
r := bytes.NewBufferString(tt.svg)
w := &bytes.Buffer{}
err := o.Minify(m, w, r, nil)
test.Minify(t, tt.svg, err, w.String(), tt.expected)
})
}
}
func TestReaderErrors(t *testing.T) {
r := test.NewErrorReader(0)
w := &bytes.Buffer{}
m := minify.New()
err := Minify(m, w, r, nil)
test.T(t, err, test.ErrPlain, "return error at first read")
}
func TestWriterErrors(t *testing.T) {
errorTests := []struct {
svg string
n []int
}{
{`<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "foo.dtd" [ <!ENTITY x "bar"> ]>`, []int{0}},
{`abc`, []int{0}},
{`<style>abc</style>`, []int{2}},
{`<![CDATA[ <<<< ]]>`, []int{0}},
{`<![CDATA[ <<<<< ]]>`, []int{0}},
{`<path d="x"/>`, []int{0, 1, 2, 3, 4, 5}},
{`<path></path>`, []int{1}},
{`<svg>x</svg>`, []int{1, 3}},
{`<svg>x</svg >`, []int{3}},
}
m := minify.New()
for _, tt := range errorTests {
for _, n := range tt.n {
t.Run(fmt.Sprint(tt.svg, " ", tt.n), func(t *testing.T) {
r := bytes.NewBufferString(tt.svg)
w := test.NewErrorWriter(n)
err := Minify(m, w, r, nil)
test.T(t, err, test.ErrPlain)
})
}
}
}
func TestMinifyErrors(t *testing.T) {
errorTests := []struct {
svg string
err error
}{
{`<style>abc</style>`, test.ErrPlain},
{`<style><![CDATA[abc]]></style>`, test.ErrPlain},
{`<path style="abc"/>`, test.ErrPlain},
}
m := minify.New()
m.AddFunc("text/css", func(_ *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
return test.ErrPlain
})
for _, tt := range errorTests {
t.Run(tt.svg, func(t *testing.T) {
r := bytes.NewBufferString(tt.svg)
w := &bytes.Buffer{}
err := Minify(m, w, r, nil)
test.T(t, err, tt.err)
})
}
}
////////////////////////////////////////////////////////////////
func ExampleMinify() {
m := minify.New()
m.AddFunc("image/svg+xml", Minify)
m.AddFunc("text/css", css.Minify)
if err := m.Minify("image/svg+xml", os.Stdout, os.Stdin); err != nil {
panic(err)
}
}

vendor/github.com/tdewolff/minify/svg/table.go generated vendored Normal file

@@ -0,0 +1,96 @@
package svg // import "github.com/tdewolff/minify/svg"
import "github.com/tdewolff/parse/svg"
var containerTagMap = map[svg.Hash]bool{
svg.A: true,
svg.Defs: true,
svg.G: true,
svg.Marker: true,
svg.Mask: true,
svg.Missing_Glyph: true,
svg.Pattern: true,
svg.Switch: true,
svg.Symbol: true,
}
var colorAttrMap = map[svg.Hash]bool{
svg.Color: true,
svg.Fill: true,
svg.Stroke: true,
svg.Stop_Color: true,
svg.Flood_Color: true,
svg.Lighting_Color: true,
}
// var styleAttrMap = map[svg.Hash]bool{
// svg.Font: true,
// svg.Font_Family: true,
// svg.Font_Size: true,
// svg.Font_Size_Adjust: true,
// svg.Font_Stretch: true,
// svg.Font_Style: true,
// svg.Font_Variant: true,
// svg.Font_Weight: true,
// svg.Direction: true,
// svg.Letter_Spacing: true,
// svg.Text_Decoration: true,
// svg.Unicode_Bidi: true,
// svg.White_Space: true,
// svg.Word_Spacing: true,
// svg.Clip: true,
// svg.Color: true,
// svg.Cursor: true,
// svg.Display: true,
// svg.Overflow: true,
// svg.Visibility: true,
// svg.Clip_Path: true,
// svg.Clip_Rule: true,
// svg.Mask: true,
// svg.Opacity: true,
// svg.Enable_Background: true,
// svg.Filter: true,
// svg.Flood_Color: true,
// svg.Flood_Opacity: true,
// svg.Lighting_Color: true,
// svg.Solid_Color: true,
// svg.Solid_Opacity: true,
// svg.Stop_Color: true,
// svg.Stop_Opacity: true,
// svg.Pointer_Events: true,
// svg.Buffered_Rendering: true,
// svg.Color_Interpolation: true,
// svg.Color_Interpolation_Filters: true,
// svg.Color_Profile: true,
// svg.Color_Rendering: true,
// svg.Fill: true,
// svg.Fill_Opacity: true,
// svg.Fill_Rule: true,
// svg.Image_Rendering: true,
// svg.Marker: true,
// svg.Marker_End: true,
// svg.Marker_Mid: true,
// svg.Marker_Start: true,
// svg.Shape_Rendering: true,
// svg.Stroke: true,
// svg.Stroke_Dasharray: true,
// svg.Stroke_Dashoffset: true,
// svg.Stroke_Linecap: true,
// svg.Stroke_Linejoin: true,
// svg.Stroke_Miterlimit: true,
// svg.Stroke_Opacity: true,
// svg.Stroke_Width: true,
// svg.Paint_Order: true,
// svg.Vector_Effect: true,
// svg.Viewport_Fill: true,
// svg.Viewport_Fill_Opacity: true,
// svg.Text_Rendering: true,
// svg.Alignment_Baseline: true,
// svg.Baseline_Shift: true,
// svg.Dominant_Baseline: true,
// svg.Glyph_Orientation_Horizontal: true,
// svg.Glyph_Orientation_Vertical: true,
// svg.Kerning: true,
// svg.Text_Anchor: true,
// svg.Writing_Mode: true,
// }

vendor/github.com/tdewolff/minify/xml/buffer.go generated vendored Normal file

@@ -0,0 +1,84 @@
package xml // import "github.com/tdewolff/minify/xml"
import "github.com/tdewolff/parse/xml"
// Token is a single token unit with its text and attribute value (if given).
type Token struct {
xml.TokenType
Data []byte
Text []byte
AttrVal []byte
}
// TokenBuffer is a buffer that allows for token look-ahead.
type TokenBuffer struct {
l *xml.Lexer
buf []Token
pos int
}
// NewTokenBuffer returns a new TokenBuffer.
func NewTokenBuffer(l *xml.Lexer) *TokenBuffer {
return &TokenBuffer{
l: l,
buf: make([]Token, 0, 8),
}
}
func (z *TokenBuffer) read(t *Token) {
t.TokenType, t.Data = z.l.Next()
t.Text = z.l.Text()
if t.TokenType == xml.AttributeToken {
t.AttrVal = z.l.AttrVal()
} else {
t.AttrVal = nil
}
}
// Peek returns the ith element and possibly does an allocation.
// Peeking past an error returns the error token.
func (z *TokenBuffer) Peek(pos int) *Token {
pos += z.pos
if pos >= len(z.buf) {
if len(z.buf) > 0 && z.buf[len(z.buf)-1].TokenType == xml.ErrorToken {
return &z.buf[len(z.buf)-1]
}
c := cap(z.buf)
d := len(z.buf) - z.pos
p := pos - z.pos + 1 // required peek length
var buf []Token
if 2*p > c {
buf = make([]Token, 0, 2*c+p)
} else {
buf = z.buf
}
copy(buf[:d], z.buf[z.pos:])
buf = buf[:p]
pos -= z.pos
for i := d; i < p; i++ {
z.read(&buf[i])
if buf[i].TokenType == xml.ErrorToken {
buf = buf[:i+1]
pos = i
break
}
}
z.pos, z.buf = 0, buf
}
return &z.buf[pos]
}
// Shift returns the first element and advances position.
func (z *TokenBuffer) Shift() *Token {
if z.pos >= len(z.buf) {
t := &z.buf[:1][0]
z.read(t)
return t
}
t := &z.buf[z.pos]
z.pos++
return t
}

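The `TokenBuffer` above gives the minifier bounded look-ahead over the lexer: `Peek(i)` reads tokens into a buffer without consuming them, and `Shift` consumes the front token. As a rough, stdlib-only sketch of the same pattern (`stubLexer` and the plain string tokens are hypothetical stand-ins for `xml.Lexer` and its token structs, not the real API):

```go
package main

import "fmt"

// stubLexer is a hypothetical stand-in for xml.Lexer: it yields one
// token per call until the input is exhausted, then reports ok=false
// (playing the role of xml.ErrorToken).
type stubLexer struct {
	toks []string
	pos  int
}

func (l *stubLexer) Next() (string, bool) {
	if l.pos >= len(l.toks) {
		return "", false
	}
	t := l.toks[l.pos]
	l.pos++
	return t, true
}

// tokenBuffer mirrors the TokenBuffer above in miniature:
// Peek(i) reads ahead without consuming, Shift consumes the front token.
type tokenBuffer struct {
	l   *stubLexer
	buf []string
	pos int
}

func (b *tokenBuffer) Peek(i int) string {
	for b.pos+i >= len(b.buf) {
		t, ok := b.l.Next()
		if !ok {
			return "" // past EOF: nothing more to buffer
		}
		b.buf = append(b.buf, t)
	}
	return b.buf[b.pos+i]
}

func (b *tokenBuffer) Shift() string {
	t := b.Peek(0)
	if b.pos < len(b.buf) {
		b.pos++
	}
	return t
}

func main() {
	b := &tokenBuffer{l: &stubLexer{toks: []string{"<p>", "text", "</p>"}}}
	fmt.Println(b.Peek(1)) // look ahead one token without consuming
	fmt.Println(b.Shift()) // consume the first token
	fmt.Println(b.Shift()) // consume the second token
}
```

The real implementation additionally recycles its backing slice and short-circuits once an error token is buffered; this sketch only shows the Peek/Shift contract that the minifier relies on.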
vendor/github.com/tdewolff/minify/xml/buffer_test.go generated vendored Normal file

@@ -0,0 +1,37 @@
package xml // import "github.com/tdewolff/minify/xml"
import (
"bytes"
"testing"
"github.com/tdewolff/parse/xml"
"github.com/tdewolff/test"
)
func TestBuffer(t *testing.T) {
// 0 12 3 45 6 7 8 9 0
s := `<p><a href="//url">text</a>text<!--comment--></p>`
z := NewTokenBuffer(xml.NewLexer(bytes.NewBufferString(s)))
tok := z.Shift()
test.That(t, string(tok.Text) == "p", "first token is <p>")
test.That(t, z.pos == 0, "shift first token and restore position")
test.That(t, len(z.buf) == 0, "shift first token and restore length")
test.That(t, string(z.Peek(2).Text) == "href", "third token is href")
test.That(t, z.pos == 0, "don't change position after peeking")
test.That(t, len(z.buf) == 3, "three tokens after peeking")
test.That(t, string(z.Peek(8).Text) == "p", "ninth token is <p>")
test.That(t, z.pos == 0, "don't change position after peeking")
test.That(t, len(z.buf) == 9, "nine tokens after peeking")
test.That(t, z.Peek(9).TokenType == xml.ErrorToken, "tenth token is an error")
test.That(t, z.Peek(9) == z.Peek(10), "tenth and eleventh tokens are EOF")
test.That(t, len(z.buf) == 10, "ten tokens after peeking")
_ = z.Shift()
tok = z.Shift()
test.That(t, string(tok.Text) == "a", "third token is <a>")
test.That(t, z.pos == 2, "position advanced by two after two shifts")
}

vendor/github.com/tdewolff/minify/xml/xml.go generated vendored Normal file

@@ -0,0 +1,193 @@
// Package xml minifies XML 1.0 following the specifications at http://www.w3.org/TR/xml/.
package xml // import "github.com/tdewolff/minify/xml"
import (
"io"
"github.com/tdewolff/minify"
"github.com/tdewolff/parse"
"github.com/tdewolff/parse/xml"
)
var (
isBytes = []byte("=")
spaceBytes = []byte(" ")
voidBytes = []byte("/>")
)
////////////////////////////////////////////////////////////////
// DefaultMinifier is the default minifier.
var DefaultMinifier = &Minifier{}
// Minifier is an XML minifier.
type Minifier struct {
KeepWhitespace bool
}
// Minify minifies XML data, it reads from r and writes to w.
func Minify(m *minify.M, w io.Writer, r io.Reader, params map[string]string) error {
return DefaultMinifier.Minify(m, w, r, params)
}
// Minify minifies XML data, it reads from r and writes to w.
func (o *Minifier) Minify(m *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
omitSpace := true // if true, the next text token must not start with a space
attrByteBuffer := make([]byte, 0, 64)
l := xml.NewLexer(r)
defer l.Restore()
tb := NewTokenBuffer(l)
for {
t := *tb.Shift()
if t.TokenType == xml.CDATAToken {
if text, useText := xml.EscapeCDATAVal(&attrByteBuffer, t.Text); useText {
t.TokenType = xml.TextToken
t.Data = text
}
}
switch t.TokenType {
case xml.ErrorToken:
if l.Err() == io.EOF {
return nil
}
return l.Err()
case xml.DOCTYPEToken:
if _, err := w.Write(t.Data); err != nil {
return err
}
case xml.CDATAToken:
if _, err := w.Write(t.Data); err != nil {
return err
}
if len(t.Text) > 0 && parse.IsWhitespace(t.Text[len(t.Text)-1]) {
omitSpace = true
}
case xml.TextToken:
t.Data = parse.ReplaceMultipleWhitespace(t.Data)
// whitespace removal; trim left
if omitSpace && (t.Data[0] == ' ' || t.Data[0] == '\n') {
t.Data = t.Data[1:]
}
// whitespace removal; trim right
omitSpace = false
if len(t.Data) == 0 {
omitSpace = true
} else if t.Data[len(t.Data)-1] == ' ' || t.Data[len(t.Data)-1] == '\n' {
omitSpace = true
i := 0
for {
next := tb.Peek(i)
// trim if next is EOF, a text token starting with whitespace, or a tag token
if next.TokenType == xml.ErrorToken {
t.Data = t.Data[:len(t.Data)-1]
omitSpace = false
break
} else if next.TokenType == xml.TextToken {
// this only happens when a comment, DOCTYPE, CDATA or PI section was in between;
// remove if the following text token starts with whitespace
if len(next.Data) > 0 && parse.IsWhitespace(next.Data[0]) {
t.Data = t.Data[:len(t.Data)-1]
omitSpace = false
}
break
} else if next.TokenType == xml.CDATAToken {
if len(next.Text) > 0 && parse.IsWhitespace(next.Text[0]) {
t.Data = t.Data[:len(t.Data)-1]
omitSpace = false
}
break
} else if next.TokenType == xml.StartTagToken || next.TokenType == xml.EndTagToken {
if !o.KeepWhitespace {
t.Data = t.Data[:len(t.Data)-1]
omitSpace = false
}
break
}
i++
}
}
if _, err := w.Write(t.Data); err != nil {
return err
}
case xml.StartTagToken:
if o.KeepWhitespace {
omitSpace = false
}
if _, err := w.Write(t.Data); err != nil {
return err
}
case xml.StartTagPIToken:
if _, err := w.Write(t.Data); err != nil {
return err
}
case xml.AttributeToken:
if _, err := w.Write(spaceBytes); err != nil {
return err
}
if _, err := w.Write(t.Text); err != nil {
return err
}
if _, err := w.Write(isBytes); err != nil {
return err
}
if len(t.AttrVal) < 2 {
if _, err := w.Write(t.AttrVal); err != nil {
return err
}
} else {
// prefer single or double quotes depending on what occurs more often in value
val := xml.EscapeAttrVal(&attrByteBuffer, t.AttrVal[1:len(t.AttrVal)-1])
if _, err := w.Write(val); err != nil {
return err
}
}
case xml.StartTagCloseToken:
next := tb.Peek(0)
skipExtra := false
if next.TokenType == xml.TextToken && parse.IsAllWhitespace(next.Data) {
next = tb.Peek(1)
skipExtra = true
}
if next.TokenType == xml.EndTagToken {
// collapse empty tags to single void tag
tb.Shift()
if skipExtra {
tb.Shift()
}
if _, err := w.Write(voidBytes); err != nil {
return err
}
} else {
if _, err := w.Write(t.Text); err != nil {
return err
}
}
case xml.StartTagCloseVoidToken:
if _, err := w.Write(t.Text); err != nil {
return err
}
case xml.StartTagClosePIToken:
if _, err := w.Write(t.Text); err != nil {
return err
}
case xml.EndTagToken:
if o.KeepWhitespace {
omitSpace = false
}
if len(t.Data) > 3+len(t.Text) {
t.Data[2+len(t.Text)] = '>'
t.Data = t.Data[:3+len(t.Text)]
}
if _, err := w.Write(t.Data); err != nil {
return err
}
}
}
}

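The `StartTagCloseToken` case above is what collapses an empty element — `<x></x>`, or `<x> </x>` with only whitespace between the tags — into the void form `<x/>`. A loose, regexp-based sketch of that observable rewrite for simple flat inputs (the actual minifier does this on the token stream, which also handles attributes with angle brackets, nesting, and CDATA correctly; this is only an illustration, not the library's implementation):

```go
package main

import (
	"fmt"
	"regexp"
)

// emptyTag matches an open tag immediately followed (modulo whitespace)
// by a close tag; submatches capture the two tag names and any attributes.
var emptyTag = regexp.MustCompile(`<([A-Za-z][\w:-]*)([^<>]*)>\s*</([A-Za-z][\w:-]*)\s*>`)

// collapseEmpty rewrites <x ...></x> to <x .../> when the tag names match,
// mirroring the void-tag collapse in the StartTagCloseToken case.
func collapseEmpty(s string) string {
	return emptyTag.ReplaceAllStringFunc(s, func(m string) string {
		parts := emptyTag.FindStringSubmatch(m)
		if parts[1] != parts[3] {
			return m // names differ: not an empty element, leave it alone
		}
		return "<" + parts[1] + parts[2] + "/>"
	})
}

func main() {
	fmt.Println(collapseEmpty(`<x a="b"></x>`)) // attributes are preserved
	fmt.Println(collapseEmpty(`<x> </x>`))      // whitespace-only body collapses too
}
```

This matches the `<x></x>` → `<x/>` and `<x> </x>` → `<x/>` cases exercised in xml_test.go below, but unlike the token-based code it would misfire on content such as CDATA, which is why the minifier works on lexed tokens instead.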
vendor/github.com/tdewolff/minify/xml/xml_test.go generated vendored Normal file

@@ -0,0 +1,129 @@
package xml // import "github.com/tdewolff/minify/xml"
import (
"bytes"
"fmt"
"os"
"regexp"
"testing"
"github.com/tdewolff/minify"
"github.com/tdewolff/test"
)
func TestXML(t *testing.T) {
xmlTests := []struct {
xml string
expected string
}{
{"<!-- comment -->", ""},
{"<A>x</A>", "<A>x</A>"},
{"<a><b>x</b></a>", "<a><b>x</b></a>"},
{"<a><b>x\ny</b></a>", "<a><b>x\ny</b></a>"},
{"<a> <![CDATA[ a ]]> </a>", "<a>a</a>"},
{"<a >a</a >", "<a>a</a>"},
{"<?xml version=\"1.0\" ?>", "<?xml version=\"1.0\"?>"},
{"<x></x>", "<x/>"},
{"<x> </x>", "<x/>"},
{"<x a=\"b\"></x>", "<x a=\"b\"/>"},
{"<x a=\"\"></x>", "<x a=\"\"/>"},
{"<x a=a></x>", "<x a=a/>"},
{"<x a=\" a \n\r\t b \"/>", "<x a=\" a b \"/>"},
{"<x a=\"&apos;b&quot;\"></x>", "<x a=\"'b&#34;\"/>"},
{"<x a=\"&quot;&quot;'\"></x>", "<x a='\"\"&#39;'/>"},
{"<!DOCTYPE foo SYSTEM \"Foo.dtd\">", "<!DOCTYPE foo SYSTEM \"Foo.dtd\">"},
{"text <!--comment--> text", "text text"},
{"text\n<!--comment-->\ntext", "text\ntext"},
{"<!doctype html>", "<!doctype html=>"}, // bad formatted, doctype must be uppercase and html must have attribute value
{"<x>\n<!--y-->\n</x>", "<x></x>"},
{"<style>lala{color:red}</style>", "<style>lala{color:red}</style>"},
{`cats and dogs `, `cats and dogs`},
{`</0`, `</0`}, // go fuzz
}
m := minify.New()
for _, tt := range xmlTests {
t.Run(tt.xml, func(t *testing.T) {
r := bytes.NewBufferString(tt.xml)
w := &bytes.Buffer{}
err := Minify(m, w, r, nil)
test.Minify(t, tt.xml, err, w.String(), tt.expected)
})
}
}
func TestXMLKeepWhitespace(t *testing.T) {
xmlTests := []struct {
xml string
expected string
}{
{`cats and dogs `, `cats and dogs`},
{` <div> <i> test </i> <b> test </b> </div> `, `<div> <i> test </i> <b> test </b> </div>`},
{"text\n<!--comment-->\ntext", "text\ntext"},
{"text\n<!--comment-->text<!--comment--> text", "text\ntext text"},
{"<x>\n<!--y-->\n</x>", "<x>\n</x>"},
{"<style>lala{color:red}</style>", "<style>lala{color:red}</style>"},
{"<x> <?xml?> </x>", "<x><?xml?> </x>"},
{"<x> <![CDATA[ x ]]> </x>", "<x> x </x>"},
{"<x> <![CDATA[ <<<<< ]]> </x>", "<x><![CDATA[ <<<<< ]]></x>"},
}
m := minify.New()
xmlMinifier := &Minifier{KeepWhitespace: true}
for _, tt := range xmlTests {
t.Run(tt.xml, func(t *testing.T) {
r := bytes.NewBufferString(tt.xml)
w := &bytes.Buffer{}
err := xmlMinifier.Minify(m, w, r, nil)
test.Minify(t, tt.xml, err, w.String(), tt.expected)
})
}
}
func TestReaderErrors(t *testing.T) {
r := test.NewErrorReader(0)
w := &bytes.Buffer{}
m := minify.New()
err := Minify(m, w, r, nil)
test.T(t, err, test.ErrPlain, "return error at first read")
}
func TestWriterErrors(t *testing.T) {
errorTests := []struct {
xml string
n []int
}{
{`<!DOCTYPE foo>`, []int{0}},
{`<?xml?>`, []int{0, 1}},
{`<a x=y z="val">`, []int{0, 1, 2, 3, 4, 8, 9}},
{`<foo/>`, []int{1}},
{`</foo>`, []int{0}},
{`<foo></foo>`, []int{1}},
{`<![CDATA[data<<<<<]]>`, []int{0}},
{`text`, []int{0}},
}
m := minify.New()
for _, tt := range errorTests {
for _, n := range tt.n {
t.Run(fmt.Sprint(tt.xml, " ", tt.n), func(t *testing.T) {
r := bytes.NewBufferString(tt.xml)
w := test.NewErrorWriter(n)
err := Minify(m, w, r, nil)
test.T(t, err, test.ErrPlain)
})
}
}
}
////////////////////////////////////////////////////////////////
func ExampleMinify() {
m := minify.New()
m.AddFuncRegexp(regexp.MustCompile("[/+]xml$"), Minify)
if err := m.Minify("text/xml", os.Stdout, os.Stdin); err != nil {
panic(err)
}
}