- Use proper variable names for the parts of `key=value` strings
- Explicitly assign false to the `match` boolean
- Use an explicit `len(parts) == 2` check to help the compiler remove
`isSliceInBounds` calls (see the sketch after this list).
- Refactor identical code into a containsRegexPattern function.
- Exit early when parsing the first date fails with the `Between` operator,
instead of trying to parse the second one.
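
A minimal sketch of the bounds-check hint from the list above, assuming the parts come from strings.SplitN (the real parsing code may differ):

```go
package rule // illustrative package name

import "strings"

// parseKeyValue is a hypothetical helper showing the explicit length check.
func parseKeyValue(rule string) (key, value string, ok bool) {
	parts := strings.SplitN(rule, "=", 2)
	if len(parts) != 2 {
		return "", "", false
	}
	// After this check the compiler can prove parts[0] and parts[1] are in
	// bounds and drop the isSliceInBounds calls.
	return parts[0], parts[1], true
}
```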
As youtubeVideoID already holds the result of getVideoIDFromYouTubeURL(entry.URL),
there is no need to call the latter again when we can simply reuse youtubeVideoID
instead.
There is no need to use SHA256 everywhere, especially on small inputs where we
don't care about its cryptographic properties. We're using FNV as it's the
fastest hash available in Go's standard library, and we're picking its "a"
variant as it has slightly better avalanche characteristics, which are
relevant for small inputs.
This commit has the side-effect of invalidating all favicons saved in the
database, which is desirable to benefit from the resize process implemented in
777d0dd2, as it didn't apply retroactively.
We're also making use of hex.EncodeToString instead of fmt.Sprintf, as it's
marginally faster.
Note that we can't change the usage of sha256 for feed.Hash as it's used to
deduplicate entries in the database.
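
A hedged sketch of the new hashing, with an illustrative function name rather than the real icon code:

```go
package icon // illustrative

import (
	"encoding/hex"
	"hash/fnv"
)

// contentHash shows FNV-1a combined with hex.EncodeToString on small inputs.
func contentHash(data []byte) string {
	h := fnv.New64a() // fast, non-cryptographic, decent avalanche on small inputs
	h.Write(data)     // hash.Hash.Write never returns an error
	return hex.EncodeToString(h.Sum(nil))
}
```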
- Factor out some conditions
- Remove the useless `default` case and move the return to the end of the functions
- Use strings.CutPrefix instead of strings.HasPrefix + strings.TrimPrefix
- Use switch-case constructs instead of slices.Contains, as this reduces the
complexity of the functions and allows them to be inlined, and it helps the
compiler optimize them, as it sucks at interprocedural optimizations (see the
sketch below).
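
An illustrative before/after with made-up function names and values; the real call sites differ:

```go
package sanitizer // illustrative

import (
	"slices"
	"strings"
)

// Before: the slices.Contains call keeps the function from being inlined easily.
func isBlockedSchemeBefore(scheme string) bool {
	return slices.Contains([]string{"data", "javascript"}, scheme)
}

// After: a plain switch the compiler can inline and lower to simple comparisons.
func isBlockedSchemeAfter(scheme string) bool {
	switch scheme {
	case "data", "javascript":
		return true
	}
	return false
}

// strings.CutPrefix replaces a HasPrefix + TrimPrefix pair in one call;
// "tag:" is a made-up prefix.
func stripTagPrefix(value string) string {
	if rest, ok := strings.CutPrefix(value, "tag:"); ok {
		return rest
	}
	return value
}
```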
The previous regex used the [ABC..D]*[ABC] pattern, resulting in a lot of
backtracking. The new regex stops matching at the first space or at the end of
the text (and removes the trailing `.` should one be present).
The backtracking was taking around 50% of the CPU time spent in atom.Parse
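
A hypothetical illustration of the two pattern shapes; the actual expressions in the Atom parser differ:

```go
package atom // illustrative

import (
	"regexp"
	"strings"
)

var (
	// Old shape: a [ABC..D]*[ABC]-style pattern, the form described above as
	// backtracking-heavy.
	oldShape = regexp.MustCompile(`[a-zA-Z0-9.]*[a-zA-Z0-9]`)
	// New shape: stop at the first space or at the end of the text.
	newShape = regexp.MustCompile(`[^ ]+`)
)

// firstToken matches up to the first space and drops a trailing dot.
func firstToken(s string) string {
	return strings.TrimSuffix(newShape.FindString(s), ".")
}
```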
Previously, url.Parse(baseUrl) was called on every self-closing tag, and on
most opening tags, accounting for around 15% of the CPU time spent in
processor.ProcessFeedEntries
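
A sketch of hoisting the base-URL parse out of the per-tag work; names and control flow are assumptions, not the actual rewriter code:

```go
package processor // illustrative

import "net/url"

// resolveAll parses baseURL once instead of once per tag.
func resolveAll(baseURL string, refs []string) []string {
	base, err := url.Parse(baseURL)
	if err != nil {
		return refs // assumption: leave the references untouched on error
	}
	resolved := make([]string, 0, len(refs))
	for _, ref := range refs {
		u, err := url.Parse(ref)
		if err != nil {
			resolved = append(resolved, ref)
			continue
		}
		resolved = append(resolved, base.ResolveReference(u).String())
	}
	return resolved
}
```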
The `sanitizer.StripTags` function calls `html.NewTokenizer`, which allocates a
4096-byte buffer on the heap and runs a complex state machine to tokenize HTML.
There is no need to do any of this for empty strings.
This commit also fixes a TrimSpace/StripTags call inversion.
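
A minimal sketch of the early exit; the elided part stands in for the real tokenizer-based stripping:

```go
package sanitizer // illustrative

// StripTags returns immediately on empty input so html.NewTokenizer (and its
// 4096-byte heap buffer) is never set up for nothing.
func StripTags(input string) string {
	if input == "" {
		return ""
	}
	// ... tokenize with html.NewTokenizer and drop the tags ...
	return input
}
```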
A bit more than 10% of processor.ProcessFeedEntries' CPU time is spent in
urlcleaner.RemoveTrackingParameters, specifically in calls to url.Parse, so let's
hoist this operation out of it and do it once before calling
urlcleaner.RemoveTrackingParameters multiple times.
Co-authored-by: Frédéric Guillot <f@miniflux.net>
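
A hedged sketch of the hoisting for the tracking-parameter change above; the helper name, its signature, and the utm_ check are placeholders, not the real urlcleaner API:

```go
package processor // illustrative

import (
	"net/url"
	"strings"
)

// removeTrackingParams receives an already parsed URL instead of parsing it itself.
func removeTrackingParams(u *url.URL) string {
	query := u.Query()
	for name := range query {
		if strings.HasPrefix(name, "utm_") { // illustrative tracking-parameter check
			query.Del(name)
		}
	}
	u.RawQuery = query.Encode()
	return u.String()
}

// The caller now parses once and reuses the parsed value.
func cleanEntryURL(rawURL string) string {
	parsed, err := url.Parse(rawURL)
	if err != nil {
		return rawURL
	}
	return removeTrackingParams(parsed)
}
```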
Calls to urllib.AbsoluteURL take a bit less than 10% of the time spent in
parser.ParseFeed, completely parsing a URL only to check whether it's absolute,
and if not, to make it so.
Checking whether it starts with `https://` or `http://` is usually enough to tell
that a URL is absolute, and if it doesn't, it's always possible to fall back to
urllib.AbsoluteURL.
This also comes with the advantage of reducing heap allocations, as most of the
time spent in urllib.AbsoluteURL goes to heap-related (de)allocations.
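
A sketch of the fast path, assuming urllib.AbsoluteURL keeps a (baseURL, input string) (string, error) signature and that the import path below is correct; the wrapper name is made up:

```go
package parser // illustrative

import (
	"strings"

	"miniflux.app/v2/internal/urllib"
)

// absoluteEntryURL skips the full parse when the input already looks absolute.
func absoluteEntryURL(baseURL, input string) (string, error) {
	if strings.HasPrefix(input, "https://") || strings.HasPrefix(input, "http://") {
		return input, nil // already absolute: no parsing, no heap allocations
	}
	return urllib.AbsoluteURL(baseURL, input) // fall back to the full resolution
}
```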
Instead of using bytes.Map, which returns a copy of the provided []byte, use a
custom in-place implementation, as the bytes.Map call takes around 25% of the
CPU time spent in rss.Parse
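
A hedged sketch of an in-place rewrite using the write-index idiom; the actual byte transformation in rss.Parse differs:

```go
package rss // illustrative

// filterInPlace rewrites b in place instead of allocating a copy the way
// bytes.Map does; dropping NUL bytes is only an illustrative transformation.
func filterInPlace(b []byte) []byte {
	w := 0
	for _, c := range b {
		if c != 0x00 {
			b[w] = c
			w++
		}
	}
	return b[:w]
}
```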
io.ReadAll grows the underlying buffer progressively, while io.Copy is able to
allocate it in one go, which is significantly faster.
io.ReadAll currently accounts for around 10% of the CPU time of rss.Parse
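
A minimal sketch of the io.Copy variant; the size hint and how it is obtained are assumptions:

```go
package rss // illustrative

import (
	"bytes"
	"io"
)

// readAllWithCopy drains r into a bytes.Buffer via io.Copy instead of io.ReadAll.
func readAllWithCopy(r io.Reader, sizeHint int) ([]byte, error) {
	var buf bytes.Buffer
	if sizeHint > 0 {
		buf.Grow(sizeHint) // allocate the backing array once when the size is known
	}
	if _, err := io.Copy(&buf, r); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}
```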
Rationale: Opening links in the current tab is the default browser behavior.
Using `target="_blank"` on external links can lead to accessibility issues and override user preferences. It may also interfere with assistive technologies and expected browser behavior.
To maintain backward compatibility, this option is enabled by default (`true`), which adds `target="_blank"` to links.