When a server returns a 304 response with a strong validator, any other
stored fields must be updated if they are also present in the response.
This behaviour is described in RFC 9111, sections 3.2 and 4.3.4.
As per the MDN article on HTTP caching:
During cache revalidation, if both If-Modified-Since and If-None-Match
are present, then If-None-Match takes precedence for the validator.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching
Previously Miniflux would consider a resource unmodified if the
Last-Modified header had not changed, even if the ETag had changed.
With this commit, Miniflux will consider a resource modified if the ETag
header has changed, even if Last-Modified has not.
This fixes Bug 1 in https://rachelbythebay.com/w/2024/06/11/fsr/
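A minimal sketch of the new precedence, with hypothetical names (not the actual Miniflux code): the ETag comparison wins whenever both sides provide one, and Last-Modified is only consulted as a fallback.

```go
package feedfetcher // hypothetical package name

// resourceModified is an illustrative helper: a changed ETag marks the
// resource as modified even when Last-Modified is unchanged, mirroring the
// If-None-Match precedence described above.
func resourceModified(storedETag, storedLastModified, respETag, respLastModified string) bool {
	if storedETag != "" && respETag != "" {
		return storedETag != respETag
	}
	if storedLastModified != "" && respLastModified != "" {
		return storedLastModified != respLastModified
	}
	// Without any validator, assume the resource has changed.
	return true
}
```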
Some servers in the wild are badly configured, either by mistake or through
lack of maintenance, and which ciphers are considered safe or unsafe changes
over time as new weaknesses are discovered.
This proposal also accepts ciphers considered unsafe when
`Allow self-signed or invalid certificates` is enabled.
It could be put behind a separate option, but I felt it fits here.
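A rough sketch of the idea using Go's crypto/tls (names and wiring are illustrative, not the actual change):

```go
package client // hypothetical package name

import "crypto/tls"

// tlsConfig is an illustrative helper: when the user has opted into
// "Allow self-signed or invalid certificates", also offer the cipher suites
// that Go classifies as insecure, since badly maintained servers sometimes
// support nothing else.
func tlsConfig(allowInsecure bool) *tls.Config {
	cfg := &tls.Config{InsecureSkipVerify: allowInsecure}
	if allowInsecure {
		var suites []uint16
		for _, s := range tls.CipherSuites() {
			suites = append(suites, s.ID)
		}
		for _, s := range tls.InsecureCipherSuites() {
			suites = append(suites, s.ID)
		}
		cfg.CipherSuites = suites
	}
	return cfg
}
```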
fix #2671
In order to be more resilient to YouTube URL variations and
to address this feature request: https://github.com/miniflux/v2/issues/2628,
I've reworked the way the YouTube feed extraction is done.
I've kept all the `FindSubscriptionsFromYouTube*` functions in order
to keep all the existing unit tests as-is, ensuring little to no
regression. Doing so means `youtubeURLIDExtractor` is called twice:
a small performance penalty for peace of mind, in my opinion.
`youtubeURLIDExtractor` is written so that only one kind
of page can be detected at a time. This means I can
solve the "video in a playlist" feature request
by prioritizing the playlist ID over the video ID.
Also, extracting the IDs with `url.Parse()` makes the detection
more resilient to URL mangling and variations. The most common variation
is the `t=42` parameter, which starts the playback
at a given position. Previously, this kind of URL
would not be detected as a "YouTube URL".
I deliberately ignore the URL parsing error
to keep the previous behavior (skip the YouTube analysis and continue with the other analyses).
I also tried to keep the debug logs as close to the previous ones as I could.
I manually tested all the YouTube cases (video, channel, playlist)
and they all work as expected, except for the video case. But that one
does not work on main either: the `meta` HTML tag that was searched for
does not seem to exist anymore.
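A condensed sketch of the extraction logic described above (function and package names are illustrative, not the real `youtubeURLIDExtractor`):

```go
package subscription // hypothetical package name

import (
	"net/url"
	"strings"
)

// youtubeKindAndID detects exactly one kind of YouTube page per URL,
// preferring the playlist ID over the video ID so that "video in a playlist"
// URLs resolve to the playlist feed. Extra parameters such as t=42 are simply
// ignored, and parse errors fall through to the non-YouTube analysis.
func youtubeKindAndID(rawURL string) (kind, id string) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return "", "" // keep the previous behavior: skip the YouTube analysis
	}
	query := u.Query()
	switch {
	case strings.HasPrefix(u.Path, "/channel/"):
		return "channel", strings.TrimPrefix(u.Path, "/channel/")
	case query.Get("list") != "":
		return "playlist", query.Get("list")
	case query.Get("v") != "":
		return "video", query.Get("v")
	}
	return "", ""
}
```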
fix: #2628
It's possible to specify a rewrite regex that passes validation but doesn't compile, such
as:
rewrite("(((unmatched-capture-group"|"rewrite)))")
In case we encounter one, exit early instead of letting the server panic.
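A minimal sketch of the guard (names are illustrative): use `regexp.Compile` and bail out with an error rather than `regexp.MustCompile`, which panics on an invalid pattern.

```go
package rewrite // hypothetical package name

import (
	"fmt"
	"regexp"
)

// compileRewritePattern returns an error for invalid user-supplied patterns
// instead of panicking, so a bad rule can't take the server down.
func compileRewritePattern(pattern string) (*regexp.Regexp, error) {
	re, err := regexp.Compile(pattern)
	if err != nil {
		return nil, fmt.Errorf("rewrite: invalid regex %q: %w", pattern, err)
	}
	return re, nil
}
```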
This adds a new "description" field to the feed settings, which allows
saving a custom description for a feed. It is also exported and
imported as "description" in OPML.
Minify the HTML of feed entries before storing it. This should reduce the
size of the database a bit but, more importantly, reduce the amount of data
sent to clients.
Since minify is [stupidly fast](https://github.com/tdewolff/minify/?tab=readme-ov-file#performance), the performance impact should be lost in the noise.
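A sketch of the storage-side hook, assuming github.com/tdewolff/minify/v2 (the surrounding wiring is illustrative):

```go
package storage // hypothetical package name

import (
	"github.com/tdewolff/minify/v2"
	"github.com/tdewolff/minify/v2/html"
)

// minifyEntryContent minifies entry HTML before it is written to the
// database, falling back to the original content if minification fails.
func minifyEntryContent(content string) string {
	m := minify.New()
	m.AddFunc("text/html", html.Minify)
	if minified, err := m.String("text/html", content); err == nil {
		return minified
	}
	return content
}
```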
By default, Oglaf shows a disclaimer/warning about its content, and this
doesn't play well with RSS readers, so let's rewrite it to show the actual
comic instead of a placeholder.
This commit adds a bunch of checks to prevent reader/rss from adding empty tags
to RSS items, as well as some minor refactoring like nesting conditions and
unrolling loops.
- Move the population of the feed's entries into a new function, to make
`BuildFeed` easier to understand and to keep concerns separate from implementation details
- Use `sort+compact` instead of `compact+sort` to remove duplicates, as `Compact`
only removes adjacent duplicates (see the sketch after this list)
- Change `if !a { a = x }; if !a { a = y }` constructs into `if !a { a = x; if !a { a = y } }`.
This reduces the number of comparisons, and also improves the
control-flow readability a tad.
- Simplify a switch-case by moving a common condition above it.
- Remove a superfluous error-check: `strconv.ParseInt` returns `0` when passed
an empty string.
- Inline some one-line functions
- Transform a free-standing function into a method
- Massively simplify `removeClickbait`
- Use a proper constant instead of a magic number in `applyFuncOnTextContent`
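A small sketch of the sort+compact point above (illustrative, not the actual code): `slices.Compact` only drops adjacent duplicates, so sorting has to come first.

```go
package rss // hypothetical package name

import "slices"

// dedupe removes every duplicate: Sort groups equal values together,
// then Compact collapses the adjacent repeats.
func dedupe(values []string) []string {
	slices.Sort(values)
	return slices.Compact(values)
}
```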
No need to compile them once for matching on the URL,
once per tag, once per title, once per author, … one compilation is enough.
It also simplifies error handling: while regexp compilation can fail,
matching can't.
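A sketch of the single-compilation shape (names are illustrative): compile once, handle the only possible error in one place, then reuse the compiled regexp against every field.

```go
package filter // hypothetical package name

import (
	"fmt"
	"regexp"
)

// entryMatches compiles the pattern once and reuses it for the URL, title,
// author and tags; matching itself cannot fail.
func entryMatches(pattern, entryURL, title, author string, tags []string) (bool, error) {
	re, err := regexp.Compile(pattern)
	if err != nil {
		return false, fmt.Errorf("filter: invalid regex %q: %w", pattern, err)
	}
	if re.MatchString(entryURL) || re.MatchString(title) || re.MatchString(author) {
		return true, nil
	}
	for _, tag := range tags {
		if re.MatchString(tag) {
			return true, nil
		}
	}
	return false, nil
}
```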