Bloom AI is a modern intelligence firm that accelerates decision-making through AI-driven synthesized intelligence. We empower enterprises to unlock the value of their data with human-like synthesis and decision intelligence at scale. Our proprietary tools and solutions are trusted by investment managers, insurers, private equity firms, and Fortune 1000 companies for more informed, efficient, and productive business practices. Bloom AI has offices in Raleigh (U.S.) and New Delhi (India).

Responsibilities:
- Design, develop, and maintain scalable Python-based applications focused on data scraping, data ingestion, and API integration.
- Build and manage web scrapers that are robust, fault-tolerant, and adaptable to changing website structures.
- Develop and integrate RESTful APIs to facilitate data exchange between internal systems and external services.
- Work with AWS services to deploy and scale scraping and data processing pipelines.
- Monitor scraper performance, implement logging and alerting, and ensure compliance with relevant data handling policies.
- Provide code documentation and other inputs to technical documents.
- Collaborate with cross-functional teams to define project requirements and scope.

Requirements:
- 1-2 years of relevant experience.
- Strong experience with Python and web scraping libraries such as requests, BeautifulSoup, Scrapy, Selenium, or Puppeteer.
- Proven experience designing and consuming RESTful APIs.
- Familiarity with Docker and CI/CD pipelines for automated testing and deployment.
- Understanding of version control systems, preferably Git.
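The "robust, fault-tolerant" scraping pattern the responsibilities call for can be sketched with the Python standard library alone. This is a minimal illustration, not Bloom AI's actual stack (production scrapers would more likely use the requests/BeautifulSoup/Scrapy libraries named in the requirements); the retry counts, backoff factor, and function names are illustrative assumptions:

```python
import time
import urllib.error
import urllib.request
from html.parser import HTMLParser


class LinkExtractor(HTMLParser):
    """Collects href attribute values from anchor tags in an HTML page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def fetch_with_retries(url, retries=3, backoff=1.0):
    """Fetch a URL, retrying with exponential backoff on transient errors.

    Raises the last error if all attempts fail, so callers can log/alert.
    """
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read().decode("utf-8", errors="replace")
        except (urllib.error.URLError, TimeoutError):
            if attempt == retries - 1:
                raise
            time.sleep(backoff * 2 ** attempt)  # 1s, 2s, 4s, ...


def extract_links(html):
    """Parse HTML and return all anchor hrefs, in document order."""
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links
```

Separating fetching (with retries) from parsing keeps the parser testable against saved HTML fixtures, which is how a scraper stays adaptable when a target site's markup changes: only the extraction layer needs updating, e.g. `extract_links('<p><a href="/a">A</a></p>')` returns `['/a']`.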