Python Full Stack Developer (React & Angular Expertise)

Job Description:
We are seeking a highly skilled and motivated Python Full Stack Developer with a strong understanding of modern front-end frameworks, particularly React and Angular. You will be responsible for designing, developing, and maintaining robust, scalable web applications and contributing to all layers of the development lifecycle. This role requires a proactive individual with excellent problem-solving abilities and a passion for delivering high-quality software.

Responsibilities:
- Design, develop, and maintain efficient and reliable back-end applications using Python and relevant frameworks (e.g., Django, Flask).
- Develop and integrate RESTful APIs and other backend services (see the sketch after this listing).
- Build responsive and user-friendly front-end interfaces using React and Angular.
- Collaborate closely with UI/UX designers to translate designs into functional web applications.
- Write clean, well-documented, and testable code.
- Participate in code reviews to ensure code quality and adherence to best practices.
- Troubleshoot and debug applications across the full stack.
- Optimize applications for performance and scalability.
- Work with databases (both relational and NoSQL) and ensure data integrity.
- Contribute to the entire software development lifecycle, including requirements gathering, planning, development, testing, deployment, and maintenance.
- Stay up to date with the latest technology trends and best practices in Python, React, Angular, and full-stack development.
- Collaborate effectively with cross-functional teams, including product managers, designers, and other developers.
- Contribute to the continuous improvement of our development processes.

Qualifications:
- Proven experience as a Full Stack Developer with a strong focus on Python for the backend.
- Solid understanding and practical experience with Python frameworks such as Django or Flask.
- Extensive knowledge and hands-on experience in developing user interfaces with React.
- Significant experience in developing user interfaces with Angular (version [Specify preferred version or range]).
- Proficiency in core front-end technologies including HTML, CSS, and JavaScript (ES6+).
- Experience with state management libraries (e.g., Redux or Context API for React; NgRx or RxJS for Angular).
- Familiarity with RESTful API design and development.
- Experience working with databases such as PostgreSQL, MySQL, and MongoDB.
- Understanding of version control systems, particularly Git.
- Experience with testing frameworks (e.g., pytest or unittest for Python; Jest, Enzyme, or Cypress for React; Jasmine, Karma, or Cypress for Angular).
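To make the back-end API work concrete, here is a minimal sketch of a RESTful endpoint in Flask, one of the frameworks named above. The /api/users resource, its fields, and the in-memory store are hypothetical placeholders, not a prescribed design:

    # Minimal REST resource sketch in Flask; the users store is a
    # hypothetical stand-in for a real database.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    users = {1: {"id": 1, "name": "Ada"}}  # illustrative data only

    @app.route("/api/users/<int:user_id>", methods=["GET"])
    def get_user(user_id):
        user = users.get(user_id)
        if user is None:
            return jsonify({"error": "not found"}), 404
        return jsonify(user)

    @app.route("/api/users", methods=["POST"])
    def create_user():
        payload = request.get_json()
        new_id = max(users) + 1
        users[new_id] = {"id": new_id, "name": payload["name"]}
        return jsonify(users[new_id]), 201

    if __name__ == "__main__":
        app.run(debug=True)

A production service would add a real database, input validation, and authentication; the sketch only shows the request/response shape of a REST resource.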
Senior Solutions Engineer

About the job:
Mid-to-senior level position based out of Pune (5-7 years of experience). We are looking for a developer with strong skills in Python and Spark, along with ETL, complex SQL, and cloud experience (a minimal Spark ETL sketch follows this listing).

Primary Skills: Python, databases, Spark
Secondary Skills: Azure/AWS, APIs

In This Role, You Will:
- Develop and maintain scalable and efficient backend systems, ensuring high performance and responsiveness to requests from the front-end.
- Design and implement cloud-based solutions, primarily on Microsoft Azure.
- Manage and optimize CI/CD pipelines for rapid and reliable deployment of software updates.
- Collaborate with frontend developers and other team members to establish objectives and design functional, cohesive code that enhances the user experience.
- Develop and maintain databases and server-side applications.
- Ensure the security of the backend infrastructure.

Preferred Qualifications:
- 5-7 years of experience developing with Python.
- Experience in automation using Python.
- Experience building REST APIs.
- Experience with mobile application development is advantageous.
- Experience working within a cloud environment.
- Bachelor's degree in computer science or a related field, or equivalent experience.
- Experience with CI/CD tools like Jenkins, GitLab CI, or Azure DevOps.
- In-depth understanding of database technologies (SQL and NoSQL) and web server technologies.
- Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).
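As a rough illustration of the Python/Spark ETL work this role involves, the following PySpark sketch reads raw data, applies a transformation, and writes curated output. The paths, column names, and aggregation are assumptions for illustration only:

    # Hypothetical extract-transform-load job in PySpark; all paths
    # and columns are illustrative assumptions.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders_etl").getOrCreate()

    # Extract: read raw CSV data (hypothetical source path).
    orders = spark.read.csv("/data/raw/orders.csv", header=True,
                            inferSchema=True)

    # Transform: drop invalid rows, aggregate revenue per customer.
    revenue = (
        orders
        .filter(F.col("amount") > 0)
        .groupBy("customer_id")
        .agg(F.sum("amount").alias("total_revenue"))
    )

    # Load: write the curated result as Parquet (hypothetical target).
    revenue.write.mode("overwrite").parquet("/data/curated/customer_revenue")

    spark.stop()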
Job Summary:
We are seeking a highly skilled and motivated Data Engineer to design, build, and maintain robust, scalable, and efficient data pipelines and infrastructure. You will be instrumental in transforming raw data into actionable insights, enabling our data scientists, analysts, and business stakeholders to drive strategic initiatives.

Key Responsibilities:
- Data Pipeline Development: Design, develop, and maintain efficient and reliable ETL/ELT pipelines for ingesting, processing, and transforming large volumes of structured and unstructured data from diverse sources (e.g., databases, APIs, streaming sources, cloud storage). A minimal orchestration sketch follows this listing.
- Data Architecture & Modeling: Design and maintain optimal data models, schemas, and database structures for relational (SQL) and NoSQL databases, data warehouses, and data lakes to support analytical and operational use cases.
- Big Data & Cloud Solutions: Work with big data technologies such as Apache Spark, Hadoop, Kafka, or Flink for large-scale data processing (batch and streaming).
- Collaboration & Support: Collaborate closely with data scientists to productionize machine learning models and ensure data availability for model training and inference.
- Data Governance & Security: Implement and enforce data security controls, access management policies, and best practices to protect sensitive information.
- Monitoring & Optimization: Monitor data pipeline performance, troubleshoot issues, and implement optimizations to enhance reliability, efficiency, and cost-effectiveness.
- Documentation: Document technical designs, data flows, workflows, and best practices to facilitate knowledge sharing and maintain comprehensive system documentation.

Required Qualifications:
- [8+] years of proven experience as a Data Engineer, Software Engineer with a data focus, or a similar role.
- Strong proficiency in at least one programming language commonly used in data engineering (Python is highly preferred; Java or Scala also considered).
- Expertise in SQL and strong experience with relational databases (e.g., PostgreSQL, MySQL, SQL Server) and database design.
- Hands-on experience building and optimizing ETL/ELT pipelines.
- Familiarity with big data technologies (e.g., Apache Spark, Hadoop, Kafka).
- Experience with at least one major cloud platform (e.g., AWS, GCP, or Azure) and its data services.
- Solid understanding of data warehousing concepts and data modeling techniques (dimensional modeling, Kimball, Inmon).
- Experience with workflow orchestration and data transformation tools (e.g., Apache Airflow, dbt).
- Proficiency with version control systems (e.g., Git).
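For a concrete picture of the pipeline orchestration mentioned above, here is a minimal Apache Airflow DAG sketch (assuming Airflow 2.4+ for the schedule argument). The DAG name, schedule, and task bodies are hypothetical; real extract/transform/load logic would replace the print statements:

    # Minimal three-step ELT DAG sketch; task logic is a placeholder.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull data from source systems")   # placeholder for ingestion

    def transform():
        print("clean and model the data")        # placeholder for transforms

    def load():
        print("write to the warehouse")          # placeholder for loading

    with DAG(
        dag_id="example_elt_pipeline",           # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        t1 = PythonOperator(task_id="extract", python_callable=extract)
        t2 = PythonOperator(task_id="transform", python_callable=transform)
        t3 = PythonOperator(task_id="load", python_callable=load)

        t1 >> t2 >> t3

The `>>` operators declare task dependencies, so Airflow runs extract, transform, and load in order on the daily schedule.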
Key Responsibilities:
- Build, train, and validate machine learning models for prediction, classification, and clustering to support Next Best Action (NBA) use cases.
- Conduct exploratory data analysis (EDA) on both structured and unstructured data to extract actionable insights and identify behavioral drivers.
- Design and deploy A/B testing frameworks and build pipelines for model evaluation and continuous monitoring.
- Develop vectorization and embedding pipelines using models like Word2Vec and BERT to enable semantic understanding and similarity search (see the sketch after this listing).
- Implement Retrieval-Augmented Generation (RAG) workflows to enrich recommendations by integrating internal and external knowledge bases.
- Collaborate with cross-functional teams (engineering, product, marketing) to deliver data-driven Next Best Action strategies.
- Present findings and recommendations clearly to technical and non-technical stakeholders.

Required Skills & Experience:
- Strong programming skills in Python, including libraries like pandas, NumPy, and scikit-learn.
- Proficiency in prompt engineering and hands-on experience building RAG pipelines using LangChain, Haystack, or custom frameworks.
- Familiarity with vector databases (e.g., PostgreSQL with pgvector, FAISS, Pinecone, Weaviate).
- Expertise in Natural Language Processing (NLP) tasks such as NER, text classification, and topic modeling.
- Sound understanding of supervised learning, recommendation systems, and classification algorithms.
- Exposure to cloud platforms (AWS, GCP, Azure) and containerization tools (Docker, Kubernetes) is a plus.
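As a sketch of the embedding-and-similarity-search pipeline mentioned above, the following uses sentence-transformers with a FAISS index. The model name, example documents, and query are illustrative assumptions:

    # Hypothetical embedding + semantic-search pipeline.
    import faiss
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model

    documents = [
        "Customer asked about upgrading their plan",
        "User reported a billing discrepancy",
        "Prospect requested a product demo",
    ]

    # Embed the documents and build an in-memory FAISS index.
    embeddings = model.encode(documents, normalize_embeddings=True)
    index = faiss.IndexFlatIP(embeddings.shape[1])
    index.add(np.asarray(embeddings, dtype="float32"))

    # Semantic search: find the document most similar to a query.
    query = model.encode(["question about an invoice"],
                         normalize_embeddings=True)
    scores, ids = index.search(np.asarray(query, dtype="float32"), k=1)
    print(documents[ids[0][0]], scores[0][0])

Normalizing the embeddings lets the inner-product index behave as cosine similarity, a common choice for semantic search; a production system would swap the in-memory index for one of the vector databases listed above.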
Job Description: AI Engineer

We are seeking an experienced AI Engineer with expertise in Python and prompt engineering. The ideal candidate will have 3+ years of relevant experience and a good understanding of LLMs, LangGraph, LangChain, and AutoGen: a professional who designs, develops, and deploys intelligent systems utilizing large language models (LLMs) and advanced AI frameworks.

- Python Proficiency: Strong programming skills in Python are fundamental for developing and implementing AI solutions.
- Prompt Engineering: Expertise in crafting effective prompts to guide LLMs toward generating desired and accurate responses, often involving techniques like prompt chaining and optimization.
- LLM Application Development: Hands-on experience building applications powered by various LLMs (e.g., GPT, LLaMA, Mistral), including understanding LLM architecture, memory management, and function/tool calling.
- Agentic AI Frameworks: Proficiency with frameworks designed for building AI agents and multi-agent systems, such as:
  - LangChain: A framework for developing applications powered by language models, enabling chaining of components and integration with various tools and data sources.
  - LangGraph: An extension of LangChain specifically designed for building stateful, multi-actor applications using LLMs, often visualized as a graph of interconnected nodes representing agents or logical steps.
  - AutoGen: A Microsoft framework that facilitates multi-agent collaboration, allowing specialized agents to work together to solve complex problems through task decomposition and recursive feedback loops.
- Retrieval-Augmented Generation (RAG): Experience implementing and optimizing RAG pipelines, which combine LLMs with external knowledge bases (e.g., vector databases) to enhance generation with retrieved information (a minimal sketch follows this listing).
- Deployment and MLOps: Practical knowledge of deploying AI models and agents into production environments, including containerization (Docker), orchestration (Kubernetes), cloud platforms (AWS, Azure, GCP), and CI/CD pipelines.
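Here is a minimal, framework-agnostic sketch of the RAG pattern described above: retrieve relevant context, then ask the LLM to answer grounded in that context. The OpenAI client, model name, and toy knowledge base are illustrative assumptions; the same retrieve-then-generate flow maps onto LangChain or LangGraph components:

    # Minimal RAG loop sketch; knowledge base and model are hypothetical.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Tiny stand-in knowledge base; production systems would use a
    # vector database with embedding-based retrieval.
    knowledge_base = {
        "refunds": "Refunds are processed within 5 business days.",
        "shipping": "Standard shipping takes 3-7 business days.",
    }

    def retrieve(query: str) -> str:
        # Naive keyword retrieval as a placeholder for vector search.
        for topic, passage in knowledge_base.items():
            if topic in query.lower():
                return passage
        return ""

    def answer(query: str) -> str:
        context = retrieve(query)
        prompt = (
            "Answer using only the context below.\n"
            f"Context: {context}\n"
            f"Question: {query}"
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    print(answer("How long do refunds take?"))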