1p3a Question · Apr 2026

Eightfold AI Senior Engineer AI Assisted Coding Round Experience

SWE Technical Senior

Full Details

Recently went through this round. Generic FYI for future candidates: the interviewers there are mostly inexperienced (3-4 years). They don't have the ability to judge you by your code quality, thought process, or anything else you might expect to be acknowledged. Their focus is just making it run; they won't even cross-question you about anything (no "why" for any decision). Just finish it, however you can. Give minimum time to any intro talk at the start of the interview, otherwise you lose time on the main problem.

Result - Passed, moved to the next round.

Expectation - Have a local IDE ready with an AI coding assistant. I used Claude in VS Code. I also pre-created an instructions file for Claude (again, the interviewer didn't care; just get them the solution).

Problem Statement

Build a system for a cloud service provider that processes incoming API traffic from millions of clients. The traffic consists of API request logs streamed in real time from various sources (e.g., microservices, edge servers) in different formats. Your goal is to design and implement a software solution that analyzes these logs, computes usage metrics, enforces dynamic rate limits, and outputs actionable insights to a designated directory, all while adapting to evolving log formats and scaling to handle high throughput.

Problem Details

Input Structure:
- API request logs are written as files (e.g., JSON, CSV) into an input directory.
- Logs arrive from multiple sources, each with its own format, and are organized into subfolders by source ID (e.g., input/sourceA/log1.json).
- New log files are continuously appended or updated.

Dynamic Nature:
- New log files are added in real time (e.g., every few seconds).
- Existing log files may be appended with new entries.
- Sources can disappear, and new sources with entirely new log formats can appear without notice.

Requirements:
- Design a generic output format for API usage insights (e.g., per-client request counts, latency stats, rate limit status).
- Continuously monitor the input directory for new or updated log files and process them within a 5-second window.
- Compute metrics such as:
  - Total requests per client (identified by an API key or client ID).
  - Average latency per client.
  - Top 5 most frequent endpoints per source.
- Enforce dynamic rate limits:
  - Each client has a configurable limit (e.g., 1000 requests/hour), stored in a config file (limits.json).
  - Flag clients exceeding their limits in the output.
- Handle multiple input formats (e.g., JSON, CSV) and allow new formats to be processed without code changes.
- Optimize for high throughput (millions of requests/hour) and low processing lag.
- Output insights as JSON files in an output directory (e.g., output/sourceA/insights.json), updated in real time.

Sample input (note that CSV and JSON may use different key names; we need a configuration to map them to the right internal names/fields):

Format 1 (JSON - Source: "EdgeServer"):

{
  "timestamp": "2025-04-04T10:00:01Z",
  "requests": [
    { "api_key": "abc123", "endpoint": "/v1/users", "method": "GET", "latency_ms": 120, "status": 200 },
    { "api_key": "xyz789", "endpoint": "/v1/orders", "method": "POST", "latency_ms": 250, "status": 201 }
  ]
}

Format 2 (CSV - Source: "Microservice"):

timestamp,client_id,endpoint,method,response_time_ms,status_code
2025-04-04T10:00:02Z,def456,/v1/auth,POST,180,200
2025-04-04T10:00:03Z,ghi789,/v1/users,GET,90,404
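Since the two sample formats use different key names for the same fields (api_key vs client_id, latency_ms vs response_time_ms), one way to satisfy "new formats without code changes" is a per-source field map loaded from config. A minimal sketch; the map structure and all names here are my own assumptions, not part of the problem statement:

```python
# Hypothetical per-source config mapping raw log keys to internal field
# names. In a real solution this would live in a config file next to
# limits.json so new sources need only a config entry, not code changes.
FIELD_MAPS = {
    "EdgeServer":   {"api_key": "client_id", "endpoint": "endpoint",
                     "latency_ms": "latency_ms", "status": "status"},
    "Microservice": {"client_id": "client_id", "endpoint": "endpoint",
                     "response_time_ms": "latency_ms", "status_code": "status"},
}

def normalize(source, record):
    """Rename a raw record's keys to internal names; drop unmapped keys."""
    mapping = FIELD_MAPS[source]
    return {internal: record[raw] for raw, internal in mapping.items() if raw in record}
```

Downstream metric code then only ever sees the internal names, regardless of which source produced the record.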
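For the 5-second processing window, a simple approach is to poll the input tree on a timer and track a per-file byte offset, so both brand-new files and appended entries are picked up incrementally. A rough sketch, assuming logs are append-only (the function and parameter names are illustrative, not from the statement):

```python
import os

def poll_input_dir(root, offsets, handle_new_bytes):
    """One polling pass over input/<source>/<logfile>.

    offsets maps file path -> bytes already consumed. Any file that is
    new or has grown gets its unread tail passed to
    handle_new_bytes(source, path, data). Call this in a loop every few
    seconds to stay inside the 5-second window.
    """
    for source in os.listdir(root):
        src_dir = os.path.join(root, source)
        if not os.path.isdir(src_dir):
            continue
        for name in os.listdir(src_dir):
            path = os.path.join(src_dir, name)
            if not os.path.isfile(path):
                continue
            size = os.path.getsize(path)
            seen = offsets.get(path, 0)
            if size > seen:  # new file or appended entries
                with open(path, "rb") as f:
                    f.seek(seen)
                    handle_new_bytes(source, path, f.read())
                offsets[path] = size
```

At higher scale you would swap polling for filesystem notifications, but offset tracking is what makes appends cheap either way.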
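The required metrics (requests per client, average latency per client, top 5 endpoints) reduce to counter aggregations over the records. A sketch, assuming records already carry internal field names like client_id and latency_ms (my naming, not the statement's):

```python
from collections import Counter, defaultdict

def compute_metrics(records):
    """Aggregate one source's records into the required usage metrics."""
    counts = Counter()            # requests per client
    latency_sum = defaultdict(int)
    endpoints = Counter()         # endpoint frequency for top-5
    for r in records:
        cid = r["client_id"]
        counts[cid] += 1
        latency_sum[cid] += r["latency_ms"]
        endpoints[r["endpoint"]] += 1
    return {
        "requests_per_client": dict(counts),
        "avg_latency_per_client": {c: latency_sum[c] / counts[c] for c in counts},
        "top_endpoints": [e for e, _ in endpoints.most_common(5)],
    }
```

In a real run these counters would be kept per source and merged incrementally as the poller delivers new entries, rather than recomputed from scratch.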
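Rate-limit flagging then becomes a lookup of each client's request count against the limits loaded from limits.json, with a default for clients not listed. A hedged sketch (the 1000/hour default follows the example in the statement; the function name is mine):

```python
def flag_over_limit(request_counts, limits, default_limit=1000):
    """Return the client IDs whose request count exceeds their limit.

    request_counts: {client_id: requests seen in the current window}
    limits: per-client limits, e.g. json.load(open("limits.json"))
    """
    return sorted(c for c, n in request_counts.items() if n > limits.get(c, default_limit))
```

The flagged IDs would go into each source's insights.json alongside the usage metrics, and because limits are re-read from config, limits can change without a redeploy.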


Topics

System Design, Networking