Responsibilities
- Design and develop high-scale data architectures, considering factors such as performance, scalability, reliability, and maintainability.
- Architect and implement database designs from scratch, optimizing for query performance and data integrity.
- Leverage a deep understanding of web concepts, web services, asynchronous and parallel processing, and database connectivity to create efficient data pipelines.
- Apply data science principles to solve complex business problems and drive data-driven decision-making.
- Work with Large Language Models (LLMs) to develop intelligent data applications.
- Write clean, well-tested code and contribute to the development of automated testing frameworks.
- Manage large databases and indexes in production environments.
- Develop and execute data crawling strategies to extract relevant data from various sources.
- Collaborate with cross-functional teams to ensure alignment with business objectives and technical requirements.

We need you to have:

Must-Have
- 8+ years of experience in a data-related field.
- Strong foundation in computer science, with expertise in data structures, algorithms, and software design.
- Hands-on experience with Python for data analysis and development.
- Proven ability to design and implement high-scale data architectures.
- Deep understanding of web concepts, web services, asynchronous and parallel processing, and database connectivity.
- Knowledge of data science principles and their application in data engineering.
- Experience working with Large Language Models (LLMs).
- Proficiency in unit testing and automated test case development.
- Experience managing large databases or indexes in production environments.
- Data crawling experience.

Good-to-Have
- Experience with MongoDB, Elasticsearch, or Neo4j.
- Familiarity with Flask, Django, Express, or microservices architectures.
- Exposure to Hadoop, Spark, or Kafka.
- Experience with SCM tools (branching, CI, and deployment using Jenkins).
- Cloud experience with GCP, AWS, or Azure.
You will be based out of: Pune, India.