<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Process on Roland Huß</title><link>https://ro14nd.de/tags/process/</link><description>Recent content in Process on Roland Huß</description><generator>Hugo -- gohugo.io</generator><language>en</language><copyright>© 2026 Roland Huß</copyright><lastBuildDate>Wed, 13 May 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://ro14nd.de/tags/process/index.xml" rel="self" type="application/rss+xml"/><item><title>Know Your Limits: Quiz Yourself Before You Trust AI</title><link>https://ro14nd.de/know-your-limits/</link><pubDate>Wed, 13 May 2026 00:00:00 +0000</pubDate><guid>https://ro14nd.de/know-your-limits/</guid><description>&lt;p&gt;The conversation was going well. We were working out how to integrate &lt;a href="https://github.com/NVIDIA/OpenShell" target="_blank" rel="noreferrer"&gt;OpenShell&lt;/a&gt;&amp;rsquo;s network isolation into our agent platform. The AI had produced an overlap analysis, identified shared capabilities, proposed a feature breakdown for the integration. Everything sounded reasonable, the trade-offs clearly articulated, the architecture diagrams sensible. I was nodding along, ready to take the recommendation to the team.&lt;/p&gt;
&lt;p&gt;Then a small voice in the back of my head asked: do you actually know enough about veth pairs and TLS MITM proxying to tell whether any of this is correct?&lt;/p&gt;

&lt;figure&gt;
 &lt;img class="my-0 rounded-md" src="https://ro14nd.de/images/know-your-limits/og.png" alt="Watercolor illustration of a sheep sitting at a small wooden desk in a meadow, taking a quiz with a pencil in its hoof. A chalkboard behind it shows a complex network diagram. Other sheep graze in the background, one wearing tiny reading glasses." /&gt;
 
 
 &lt;/figure&gt;

&lt;h2 class="relative group"&gt;The comfort zone trap
 &lt;div id="the-comfort-zone-trap" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#the-comfort-zone-trap" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;When you use AI for technical architecture or feature evaluation, the conversation flows naturally. The AI explains trade-offs, proposes designs, writes implementation plans. Everything sounds plausible, because LLMs are exceptionally good at producing plausible-sounding output. That&amp;rsquo;s what they&amp;rsquo;re optimized for.&lt;/p&gt;
&lt;p&gt;But plausible isn&amp;rsquo;t correct, and you can only tell the difference within your own domain expertise. Shaping context helps the AI produce better output (that&amp;rsquo;s what the &lt;a href="https://ro14nd.de/context-engineering-101/" &gt;101 post&lt;/a&gt; covered). This post is about a different problem: what happens when the output is good, maybe even correct, but you lack the expertise to verify it?&lt;/p&gt;
&lt;p&gt;The risk isn&amp;rsquo;t that AI gets it wrong. The risk is that you can&amp;rsquo;t tell when it does. And &lt;a href="https://www.psypost.org/users-of-generative-ai-struggle-to-accurately-assess-their-own-competence/" target="_blank" rel="noreferrer"&gt;research from Aalto University&lt;/a&gt; suggests the problem is worse than we think: when interacting with AI tools, everyone overestimates their performance, regardless of skill level. Higher AI literacy correlates with &lt;em&gt;more&lt;/em&gt; overconfidence, not less, because users trust the system&amp;rsquo;s output without checking whether they could have reached the same conclusion on their own.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://stackoverflow.blog/2026/02/18/closing-the-developer-ai-trust-gap/" target="_blank" rel="noreferrer"&gt;Stack Overflow&amp;rsquo;s 2025 developer survey&lt;/a&gt; found a matching paradox: 84% of developers use or plan to use AI tools, but only 29% trust them, down 11 percentage points from 2024. Developers know they should verify. The problem is they don&amp;rsquo;t have a fast way to assess whether they&amp;rsquo;re &lt;em&gt;capable&lt;/em&gt; of verifying in a given domain.&lt;/p&gt;

&lt;h2 class="relative group"&gt;The five-minute honesty check
 &lt;div id="the-five-minute-honesty-check" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#the-five-minute-honesty-check" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;Here&amp;rsquo;s a technique that takes about five minutes and reliably reveals where your expertise ends. When AI drives you into unfamiliar territory, ask it to generate a quiz: four or five questions, four answers each, targeting the specific skills you&amp;rsquo;d need to implement or review the thing being proposed.&lt;/p&gt;
&lt;p&gt;The prompt is straightforward. Here&amp;rsquo;s the shape of what I used:&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;We&amp;rsquo;ve just analyzed [project]&amp;rsquo;s [capability] and how it overlaps with our platform. My concern is that integrating this goes deeper into [domain] than our team&amp;rsquo;s expertise. Create a quiz with 4-5 tough multiple-choice questions targeting the underlying concepts we&amp;rsquo;d need to understand. Don&amp;rsquo;t test [project]-specific internals, test whether we understand the fundamentals it builds on. Include a scoring rubric that maps scores to actionable team decisions.&lt;/p&gt;
&lt;/blockquote&gt;&lt;p&gt;The quiz isn&amp;rsquo;t a certification. It&amp;rsquo;s a mirror. If you score 2 out of 5, you now know something important: you&amp;rsquo;re past the boundary where you can meaningfully review AI output in this domain. And knowing that is the whole point.&lt;/p&gt;

&lt;h2 class="relative group"&gt;What the quiz actually reveals
 &lt;div id="what-the-quiz-actually-reveals" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#what-the-quiz-actually-reveals" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;We ran this during the &lt;a href="https://github.com/NVIDIA/OpenShell" target="_blank" rel="noreferrer"&gt;OpenShell&lt;/a&gt; evaluation. The AI generated four questions targeting the exact features the team would need to implement: veth pairs for container networking, &lt;code&gt;/proc&lt;/code&gt;-based process inspection, TLS MITM proxying for traffic analysis, and SSRF hardening for outbound request filtering. (The full quiz is in the &lt;a href="https://ro14nd.de/know-your-limits/#appendix-the-actual-quiz" &gt;appendix&lt;/a&gt; if you want to try it yourself.)&lt;/p&gt;
&lt;p&gt;The questions themselves were informative even before we answered them. One walked through setting up a network namespace with a veth pair, configuring IP addresses and routes, and then asked why the agent still can&amp;rsquo;t reach the proxy. I didn&amp;rsquo;t just get the answer wrong; I wasn&amp;rsquo;t confident about what a veth pair even does in this context, and that gap in understanding is itself useful information. If the question doesn&amp;rsquo;t make sense to you, you&amp;rsquo;ve found a boundary.&lt;/p&gt;
&lt;p&gt;Wrong answers reveal something different: misconceptions. Good quiz questions have plausible distractors. If you pick one confidently and it&amp;rsquo;s wrong, you&amp;rsquo;ve found a blind spot, a place where you think you know more than you do. Those blind spots are more dangerous than the things you know you don&amp;rsquo;t know, because they&amp;rsquo;re where you&amp;rsquo;ll accept bad AI output without questioning it.&lt;/p&gt;
&lt;p&gt;The quiz didn&amp;rsquo;t tell us whether our integration approach was sound. It told us whether we could evaluate that question ourselves. The quiz doesn&amp;rsquo;t test whether the AI is right. It tests whether &lt;em&gt;you&lt;/em&gt; can tell if the AI is right.&lt;/p&gt;

&lt;h2 class="relative group"&gt;It&amp;rsquo;s OK to hit your limit
 &lt;div id="its-ok-to-hit-your-limit" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#its-ok-to-hit-your-limit" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;This is the cultural part, and it matters as much as the technique itself. Hitting your limit on the quiz is the desired outcome. The purpose isn&amp;rsquo;t to prove you&amp;rsquo;re smart enough to review every AI recommendation. It&amp;rsquo;s to find the boundary so you can make honest decisions about what happens next.&lt;/p&gt;
&lt;p&gt;&amp;ldquo;We need a networking expert for this part&amp;rdquo; is a better outcome than shipping code nobody on the team can debug. &amp;ldquo;This evaluation looks reasonable but I can&amp;rsquo;t verify the TLS interception claims, so let&amp;rsquo;s get a second opinion before committing&amp;rdquo; is a better decision than nodding along because the trade-off analysis sounded convincing.&lt;/p&gt;
&lt;p&gt;The quiz gives you a concrete, defensible reason to slow down. Instead of a vague feeling that you might be out of your depth (which is easy to dismiss), you have a score. You answered 1 out of 4 on the networking quiz. That&amp;rsquo;s not a feeling. That&amp;rsquo;s a measurement.&lt;/p&gt;
&lt;p&gt;Sometimes the quiz tells you the opposite: you &lt;em&gt;do&lt;/em&gt; know enough. You answer 4 out of 5, and the one you missed was a genuine edge case, not a fundamental gap. That confirmation is just as valuable, because it means your review of the AI&amp;rsquo;s output is informed, not performative. Either way, you&amp;rsquo;re making decisions based on measurement rather than assumption.&lt;/p&gt;

&lt;h2 class="relative group"&gt;When to reach for this
 &lt;div id="when-to-reach-for-this" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#when-to-reach-for-this" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;Not every AI conversation needs a quiz. The technique is most useful at specific moments:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;AI proposes adopting a library or pattern you haven&amp;rsquo;t worked with.&lt;/strong&gt; The analysis sounds good, but you&amp;rsquo;ve never actually used the thing being recommended. Quiz yourself on the concepts you&amp;rsquo;d need to maintain it.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;A technical evaluation pushes into unfamiliar territory.&lt;/strong&gt; You started evaluating a feature and ended up discussing kernel capabilities or cryptographic protocols. If the conversation has moved beyond your comfort zone, the quiz will confirm it.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;You catch yourself skimming AI-generated code.&lt;/strong&gt; This one is subtle. If you&amp;rsquo;re reviewing code and you notice you&amp;rsquo;re reading it like prose rather than tracing the logic, you might not have the domain knowledge to review it properly. A quick quiz on the underlying concepts will tell you.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;A decision depends on expertise you&amp;rsquo;re not sure you have.&lt;/strong&gt; The AI produced a recommendation. You&amp;rsquo;re about to share it with the team. Before you do, spend five minutes checking whether you can defend it under questioning.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 class="relative group"&gt;Appendix: the actual quiz
 &lt;div id="appendix-the-actual-quiz" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#appendix-the-actual-quiz" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;This is the quiz we generated using &lt;a href="https://ro14nd.de/know-your-limits/#the-five-minute-honesty-check" &gt;the prompt above&lt;/a&gt; during the OpenShell integration evaluation. It was produced entirely by AI, targeting the underlying Linux networking concepts the team would need to understand. Try it yourself before reading the answers.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;Question 1: Network namespace isolation (veth pairs)&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;You need to create an isolated network namespace for an agent process and connect it to the supervisor via a veth pair. The agent should only be able to reach the proxy at 10.200.0.1:3128. After creating the netns and veth pair, you configure the agent-side interface with &lt;code&gt;ip addr add 10.200.0.2/24 dev veth-s&lt;/code&gt; and add &lt;code&gt;ip route add default via 10.200.0.1&lt;/code&gt;. The agent still cannot reach the proxy. What did you forget?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A) You need to run &lt;code&gt;ip link set veth-s up&lt;/code&gt; and &lt;code&gt;ip link set veth-h up&lt;/code&gt; to bring both ends of the veth pair to an operational state&lt;/li&gt;
&lt;li&gt;B) You need to add an iptables MASQUERADE rule on the supervisor side so the proxy can NAT the agent&amp;rsquo;s traffic&lt;/li&gt;
&lt;li&gt;C) You need to set &lt;code&gt;net.ipv4.ip_forward=1&lt;/code&gt; on the supervisor&amp;rsquo;s namespace and add a FORWARD chain rule allowing traffic from the veth-h interface&lt;/li&gt;
&lt;li&gt;D) You need to create a bridge device and attach both the veth-h end and the supervisor&amp;rsquo;s eth0 to it&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;Question 2: Identifying which process initiated an outbound connection&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Your egress proxy intercepts a CONNECT request on source port 48372. You need to determine which binary on the system initiated this connection, so you can apply per-binary network policies. Using only the Linux &lt;code&gt;/proc&lt;/code&gt; filesystem (no eBPF, no LD_PRELOAD), what is the correct sequence of lookups?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A) Read &lt;code&gt;/proc/net/tcp&lt;/code&gt; to find the socket inode for source port 48372, scan &lt;code&gt;/proc/*/fd/&lt;/code&gt; symlinks to find which PID owns that inode, then &lt;code&gt;readlink /proc/&amp;lt;pid&amp;gt;/exe&lt;/code&gt; to identify the binary&lt;/li&gt;
&lt;li&gt;B) Read &lt;code&gt;/proc/net/tcp&lt;/code&gt; to find the destination IP, look up the IP in &lt;code&gt;/proc/*/net/route&lt;/code&gt; to find the owning PID, then read &lt;code&gt;/proc/&amp;lt;pid&amp;gt;/cmdline&lt;/code&gt; for the binary path&lt;/li&gt;
&lt;li&gt;C) Scan &lt;code&gt;/proc/*/net/tcp&lt;/code&gt; per process to find which process has source port 48372 in its network namespace, then read &lt;code&gt;/proc/&amp;lt;pid&amp;gt;/exe&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;D) Read &lt;code&gt;/proc/net/tcp6&lt;/code&gt; to get the socket&amp;rsquo;s UID field, map the UID to a username via &lt;code&gt;/etc/passwd&lt;/code&gt;, then find all processes owned by that user in &lt;code&gt;/proc/*/status&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;Question 3: TLS MITM for L7 inspection&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;You want to add HTTP method/path level inspection to the proxy (e.g., allow GET but block POST to api.github.com). Since GitHub uses HTTPS, you need to TLS-terminate the connection. The agent sends &lt;code&gt;CONNECT api.github.com:443&lt;/code&gt;. Your proxy needs to inspect the HTTP request inside the TLS tunnel. What is the correct sequence of operations?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A) Accept the CONNECT, return 200, generate an ephemeral certificate for api.github.com signed by a sandbox-local CA, TLS-accept from the agent using that cert, parse the plaintext HTTP request, evaluate the OPA policy, then open a new TLS connection to the real api.github.com and relay&lt;/li&gt;
&lt;li&gt;B) Accept the CONNECT, open a TLS connection to the real api.github.com, perform the TLS handshake with GitHub, extract GitHub&amp;rsquo;s certificate, re-sign it with the sandbox CA, then present the re-signed cert to the agent and relay all traffic bidirectionally&lt;/li&gt;
&lt;li&gt;C) Accept the CONNECT, return 200, then passively read the TLS ClientHello from the agent to extract the SNI field, use the SNI to make the policy decision without decrypting the traffic&lt;/li&gt;
&lt;li&gt;D) Reject the CONNECT, redirect the agent to an HTTP (non-TLS) version of the proxy endpoint where the request can be inspected in plaintext, then the proxy initiates the real HTTPS connection to GitHub&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;Question 4: SSRF protection and DNS rebinding&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Your proxy blocks connections to internal IPs (RFC 1918, loopback, link-local, cloud metadata 169.254.169.254). The agent sends &lt;code&gt;CONNECT attacker.com:443&lt;/code&gt;. Your proxy resolves attacker.com to 203.0.113.50 (a public IP), passes the SSRF check, and opens a TCP connection. However, the attacker&amp;rsquo;s DNS server is configured with a 0-second TTL. By the time the TLS handshake completes, DNS resolves attacker.com to 169.254.169.254 (the cloud metadata service). The attacker&amp;rsquo;s server never completes the TLS handshake, and the agent&amp;rsquo;s HTTP library retries, this time resolving to the metadata endpoint. What is the correct mitigation?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A) Pin the DNS resolution at the proxy level: resolve once, connect to that IP, and never re-resolve for the lifetime of the connection&lt;/li&gt;
&lt;li&gt;B) Add the resolved IP address to the TLS SNI extension so the upstream server must prove it controls that IP&lt;/li&gt;
&lt;li&gt;C) Set a minimum TTL floor of 60 seconds in the proxy&amp;rsquo;s DNS resolver cache, ignoring the server&amp;rsquo;s 0-second TTL&lt;/li&gt;
&lt;li&gt;D) Perform the SSRF check twice: once after DNS resolution and once after the TCP connection is established by checking &lt;code&gt;getpeername()&lt;/code&gt; on the connected socket&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;Scoring&lt;/strong&gt;&lt;/p&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Score&lt;/th&gt;
 &lt;th&gt;Assessment&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;4/4&lt;/td&gt;
 &lt;td&gt;Team has deep networking chops. OpenShell-style features are realistic.&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;3/4&lt;/td&gt;
 &lt;td&gt;Solid foundation, but edge cases will slow you down.&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;2/4&lt;/td&gt;
 &lt;td&gt;Significant ramp-up needed. Budget for research spikes before committing.&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;0-1/4&lt;/td&gt;
 &lt;td&gt;The team needs networking expertise (hire or partner) before attempting this.&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;details&gt;
&lt;summary&gt;&lt;strong&gt;Show answers&lt;/strong&gt;&lt;/summary&gt;
&lt;p&gt;&lt;strong&gt;Q1: A.&lt;/strong&gt; Newly created veth interfaces default to DOWN state. Both ends must be explicitly brought up with &lt;code&gt;ip link set ... up&lt;/code&gt;. B and C would be needed for kernel-level IP forwarding, but OpenShell routes all traffic through an HTTP CONNECT proxy (application-level), not through kernel routing.&lt;/p&gt;
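&lt;p&gt;For reference, here&amp;rsquo;s a minimal sketch of the complete setup, including the step the question targets. It only assembles the &lt;code&gt;ip&lt;/code&gt; commands (applying them requires root); the namespace and interface names are the hypothetical ones from the question, not anything OpenShell prescribes.&lt;/p&gt;

```python
# Full netns + veth setup. The two "ip link set ... up" lines are the
# step the question targets: freshly created veth ends default to DOWN.
SETUP = [
    "ip netns add agent",
    "ip link add veth-h type veth peer name veth-s",
    "ip link set veth-s netns agent",
    "ip addr add 10.200.0.1/24 dev veth-h",
    "ip link set veth-h up",                                    # host end up
    "ip netns exec agent ip addr add 10.200.0.2/24 dev veth-s",
    "ip netns exec agent ip link set veth-s up",                # agent end up
    "ip netns exec agent ip route add default via 10.200.0.1",
]

if __name__ == "__main__":
    for cmd in SETUP:
        print(cmd)  # pipe into `sudo sh` to apply
```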
&lt;p&gt;&lt;strong&gt;Q2: A.&lt;/strong&gt; The standard &lt;code&gt;/proc&lt;/code&gt; lookup chain is: parse &lt;code&gt;/proc/net/tcp&lt;/code&gt; to find the inode for the source port, scan &lt;code&gt;/proc/*/fd/&lt;/code&gt; symlinks to find which PID holds that inode, then &lt;code&gt;readlink /proc/&amp;lt;pid&amp;gt;/exe&lt;/code&gt; to get the binary path. B is wrong because &lt;code&gt;/proc/*/net/route&lt;/code&gt; doesn't map to individual connections. C would work in theory but is expensive (scanning every process's network namespace). D only gives you the user, not the specific binary.&lt;/p&gt;
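&lt;p&gt;The lookup chain in answer A can be sketched with nothing but the Python standard library. Source port 48372 appears in &lt;code&gt;/proc/net/tcp&lt;/code&gt; as the hex suffix &lt;code&gt;BCF4&lt;/code&gt; of the local-address column; the helper names below are ours, not part of any tool.&lt;/p&gt;

```python
import os

def inode_for_src_port(tcp_table: str, port: int):
    """Step 1: find the socket inode for the given source port in the
    text of /proc/net/tcp (ports are stored there as uppercase hex)."""
    want = format(port, "04X")
    for line in tcp_table.splitlines()[1:]:      # skip the header row
        fields = line.split()
        local_port = fields[1].split(":")[1]     # e.g. "0100007F:BCF4"
        if local_port == want:
            return fields[9]                     # inode column
    return None

def pid_holding_inode(inode: str):
    """Step 2: scan /proc/*/fd/ symlinks for socket:[inode]."""
    target = f"socket:[{inode}]"
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            fd_dir = f"/proc/{pid}/fd"
            for fd in os.listdir(fd_dir):
                if os.readlink(f"{fd_dir}/{fd}") == target:
                    return pid
        except OSError:
            continue  # process exited, or no permission to inspect it
    return None

# Step 3: binary = os.readlink(f"/proc/{pid}/exe")
```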
&lt;p&gt;&lt;strong&gt;Q3: A.&lt;/strong&gt; This is the standard TLS MITM approach. The proxy generates a per-hostname ephemeral certificate signed by a CA that the agent's trust store accepts. Between the two independent TLS endpoints, the proxy sees plaintext HTTP and can inspect method, path, headers, and body. C only gives you domain-level filtering (which is what a Squid proxy already does via SNI).&lt;/p&gt;
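&lt;p&gt;As a structural sketch only: the flow from answer A, with the certificate minting deliberately elided (it needs a helper outside the standard library, e.g. the &lt;code&gt;cryptography&lt;/code&gt; package, which is our assumption, not something the quiz prescribes). Only the CONNECT parsing is concrete.&lt;/p&gt;

```python
def parse_connect(request_line: str):
    """Extract (host, port) from an HTTP CONNECT request line,
    e.g. 'CONNECT api.github.com:443 HTTP/1.1'."""
    method, target, _version = request_line.split()
    if method != "CONNECT":
        raise ValueError("not a CONNECT request")
    host, _, port = target.rpartition(":")
    return host, int(port)

def mitm_tunnel(agent_sock):
    """Answer A as steps (skeleton only):
    1. read the CONNECT line, reply '200 Connection Established'
    2. mint an ephemeral cert for the host, signed by the sandbox CA
    3. ssl-wrap agent_sock server-side with that cert: plaintext HTTP
       is now visible, so evaluate the policy on method and path
    4. open a separate, real TLS connection upstream and relay both ways
    """
    ...
```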
&lt;p&gt;&lt;strong&gt;Q4: D.&lt;/strong&gt; The most reliable mitigation is checking the actual connected IP via &lt;code&gt;getpeername()&lt;/code&gt; after the TCP handshake succeeds, not just the resolved IP before connecting. This catches DNS rebinding regardless of caching behavior. A (DNS pinning) helps but doesn't cover all rebinding variants (e.g., dual-stack responses). Note: OpenShell currently does NOT implement DNS rebinding protection. The security audit flagged this as a gap.&lt;/p&gt;
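&lt;p&gt;Answer D&amp;rsquo;s post-connect check can be sketched with the standard &lt;code&gt;socket&lt;/code&gt; and &lt;code&gt;ipaddress&lt;/code&gt; modules. The blocklist below covers only the ranges named in the question; a production proxy would extend it (e.g. with IPv6 ranges).&lt;/p&gt;

```python
import ipaddress
import socket

BLOCKED = [ipaddress.ip_network(n) for n in (
    "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16",  # RFC 1918
    "127.0.0.0/8",                                    # loopback
    "169.254.0.0/16",            # link-local, incl. 169.254.169.254
)]

def ip_is_blocked(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLOCKED)

def connect_checked(host: str, port: int) -> socket.socket:
    """Connect, then re-check the ACTUAL peer via getpeername().
    Even if the pre-connect DNS check passed, a rebinding attack may
    have steered the TCP connection to a blocked address."""
    sock = socket.create_connection((host, port), timeout=5)
    peer_ip = sock.getpeername()[0]
    if ip_is_blocked(peer_ip):
        sock.close()
        raise PermissionError(f"blocked peer address: {peer_ip}")
    return sock
```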
&lt;/details&gt;
&lt;div class="ai-attribution"&gt;
&lt;p&gt;Author: Roland Huß &lt;a href="https://aiattribution.github.io/statements/AIA-HAb-CeNc-Hin-R-?model=Claude%20Opus%204.6-v1.0" target="_blank" rel="noreferrer"&gt;AIA HAb CeNc Hin R Claude Opus 4.6 v1.0&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;</description></item></channel></rss>