Role: Python Architect
Experience: 10+ years
Location: Chennai
Notice Period: 30-60 days

Job Description:

Responsibilities:
- At least 8+ years of software development experience in Java or Python, plus ReactJS, Docker, and Kubernetes, preferably in the telecom domain.
- Able to translate customer requirements into high-level and low-level specification documents.
- Able to guide and mentor a team of 6-8 people on the technical side and adhere strictly to process guidelines during the implementation phase of the project.
- Develop and design Python programs and Flask-based microservices based on customer requirements (see the sketch after this posting).
- Strong knowledge of and hands-on experience with advanced Python programming is a must.
- Experience developing high-performance Python projects is a must.
- Knowledge of ELK/Grafana is good to have.
- Good knowledge of Kubernetes, Docker, and cloud-native principles.
- Able to communicate directly with customers and participate in technical discussions with them.
- Bachelor's or Master's degree in CSE/ECE/EEE/EI/IT.
- Experience working on Agile projects is a plus.
- Knowledge of and experience with the professional-services project lifecycle (scoping, requirements, construction, QA/test).
- Design, implementation, testing, integration, and debugging of Python-based microservices and applications.
- Convert requirements into high-quality code while working closely with a team of highly skilled professionals to deliver top-quality software to the Eden NET customer base.
- Proven commercial Python development experience (more than just scripting); excellent understanding of object-oriented methodology, with strong design, implementation, and debugging skills.
- Understanding of, or hands-on experience with, machine learning and key algorithms.
- Proven experience working with relational database systems such as MySQL.
- Experience writing automated unit and integration tests in Python.
- Experience using and creating RESTful APIs.
- Good experience with continuous refactoring.
- Provide first-level effort estimation for features; interface with system architects to understand the impact of system-level features on the modules.
- Review specifications, architecture, design, code, test strategy, and test cases for features.
- Work toward continuously improving code quality and test-case automation.
- Act as a mentor for team members; good communication and teamwork skills.
- Ability to take initiative and lead complex activities with a team.
- Experience working in a multicultural environment.
- Highly motivated; constantly seeks avenues for continuous improvement.

Job Type: Permanent
Pay: ₹3,500,000.00 - ₹5,000,000.00 per year
Benefits: Paid sick time
Schedule: Day shift, Monday to Friday
Supplemental Pay: Performance bonus
Experience:
- Java & Python: 8 years (Required)
- ReactJS: 7 years (Required)
- Docker and Kubernetes: 5 years (Required)
- Telecom domain: 5 years (Required)
- Advanced Python: 5 years (Required)
- Machine learning and key algorithms: 4 years (Required)
- MySQL: 1 year (Required)
- RESTful APIs: 5 years (Required)
Work Location: In person
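As a rough illustration of the "Flask-based microservices" and "automated unit tests" this posting asks for, here is a minimal sketch of a REST endpoint with a pytest-style test. It is a generic example, not the employer's codebase; the route names, payload shape, and in-memory store are invented for illustration.

```python
# Minimal Flask microservice sketch (illustrative only; endpoint and
# payload names are hypothetical, not from the actual posting).
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store standing in for a real database such as MySQL.
_cells = {}

@app.route("/health")
def health():
    return jsonify(status="ok")

@app.route("/cells/<cell_id>", methods=["GET", "PUT"])
def cell(cell_id):
    if request.method == "PUT":
        _cells[cell_id] = request.get_json(force=True)
        return jsonify(_cells[cell_id]), 201
    if cell_id not in _cells:
        return jsonify(error="not found"), 404
    return jsonify(_cells[cell_id])

# Unit test using Flask's built-in test client (runnable under pytest).
def test_health():
    client = app.test_client()
    resp = client.get("/health")
    assert resp.status_code == 200
    assert resp.get_json()["status"] == "ok"

if __name__ == "__main__":
    app.run(port=8080)
```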
Role: AWS Data Engineer
Location: Gurugram
Mode: Hybrid
Type: Permanent

Job Description:
We are seeking a talented and motivated Data Engineer with the requisite years of hands-on experience to join our growing data team. The ideal candidate will have experience working with large datasets, building data pipelines, and using AWS public cloud services to support the design, development, and maintenance of scalable data architectures. This is an excellent opportunity for individuals who are passionate about data engineering and cloud technologies and want to make an impact in a dynamic and innovative environment.

Key Responsibilities:
- Data Pipeline Development: Design, develop, and optimize end-to-end pipelines for extracting, transforming, and loading (ETL) large volumes of data from diverse sources into data warehouses or lakes (see the PySpark sketch after this posting).
- Cloud Infrastructure Management: Implement and manage data processing and storage solutions in AWS using services such as S3, Redshift, Lambda, Glue, and Kinesis.
- Data Modeling: Collaborate with data scientists, analysts, and business stakeholders to define data requirements and design optimal data models for reporting and analysis.
- Performance Tuning & Optimization: Identify bottlenecks and optimize query performance, pipeline processes, and cloud resources to ensure cost-effective, scalable data workflows.
- Automation & Scripting: Develop automated data workflows and scripts to improve operational efficiency using Python, SQL, or other scripting languages.
- Collaboration & Documentation: Work closely with data analysts, data scientists, and other engineering teams to ensure data availability, integrity, and quality. Document processes, architectures, and solutions clearly.
- Data Quality & Governance: Ensure the accuracy, consistency, and completeness of data. Implement and maintain data governance policies to meet compliance and security standards.
- Troubleshooting & Support: Provide ongoing support for data pipelines and troubleshoot issues related to data integration, performance, and system reliability.

Qualifications:

Essential Skills:
- Experience: 8+ years of professional experience as a Data Engineer, with a strong background in building and optimizing data pipelines and working with large-scale datasets.
- AWS Experience: Hands-on experience with AWS cloud services, particularly S3, Lambda, Glue, Redshift, RDS, and EC2.
- ETL Processes: Strong understanding of ETL concepts, tools, and frameworks; experience with data integration, cleansing, and transformation.
- Programming Languages: Proficiency in Python, SQL, and other scripting languages (e.g., Bash, Scala, Java).
- Data Warehousing: Experience with relational and non-relational databases, including data warehousing solutions such as AWS Redshift, Snowflake, or similar platforms.
- Data Modeling: Experience designing data models, schemas, and data architectures for analytical systems.
- Version Control & CI/CD: Familiarity with version control tools (e.g., Git) and CI/CD pipelines.
- Problem-Solving: Strong troubleshooting skills, with the ability to optimize performance and resolve technical issues across the data pipeline.

Desirable Skills:
- Big Data Technologies: Experience with Hadoop, Spark, or other big data technologies.
- Containerization & Orchestration: Knowledge of Docker, Kubernetes, or similar containerization/orchestration technologies.
- Data Security: Experience implementing cloud security best practices and managing data privacy requirements.
- Data Streaming: Familiarity with data streaming technologies such as AWS Kinesis or Apache Kafka.
- Business Intelligence Tools: Experience with BI tools (Tableau, QuickSight) for visualization and reporting.
- Agile Methodology: Familiarity with Agile development practices and tools (Jira, Trello, etc.).

Job Type: Permanent
Pay: ₹3,000,000.00 - ₹3,800,000.00 per year
Benefits: Work from home
Schedule: Day shift, Monday to Friday
Experience:
- Data Engineering: 6 years (Required)
- AWS: 4 years (Required)
- Python: 4 years (Required)
Work Location: Hybrid remote in Gurugram, Haryana
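To make the ETL responsibility above concrete, here is a minimal PySpark sketch of the read-clean-write pattern such pipelines typically follow: raw CSV in an S3 landing zone, typed and deduplicated, written out as partitioned Parquet. The bucket paths, columns, and partition key are invented for illustration, not from the posting.

```python
# Minimal PySpark ETL sketch: CSV in S3 -> cleaned, partitioned Parquet.
# All paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: raw CSV landed in an S3 "raw" zone.
raw = spark.read.option("header", True).csv("s3://example-raw/orders/")

# Transform: type the columns, drop bad rows, derive a partition key.
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
       .dropna(subset=["order_id", "order_date"])
       .withColumn("ingest_year", F.year("order_date"))
)

# Load: columnar, partitioned output for the warehouse/lake query layer.
(clean.write.mode("overwrite")
      .partitionBy("ingest_year")
      .parquet("s3://example-curated/orders/"))
```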
Role: Gen AI Engineer
Location: Delhi
Mode of Work: Hybrid
Notice Period: 0-25 days

Job Description:

Key Responsibilities:
- Designing and developing AI models: creating architectures, algorithms, and frameworks for generative AI.
- Implementing AI models: building and integrating AI models into existing systems and applications.
- Working with LLMs and other AI technologies: using tools and techniques such as LangChain, Haystack, and prompt engineering.
- Data preprocessing and analysis: preparing data for use in AI models.
- Collaborating with other teams: working with data scientists, product managers, and other stakeholders.
- Testing and deploying AI models: evaluating model performance and deploying models to production environments.
- Monitoring and optimizing AI models: tracking model performance, identifying issues, and optimizing models for better results.
- Staying up to date with the latest advancements in Gen AI: learning about new techniques, models, and frameworks.

Required Skills:
- Strong programming skills in Python, the preferred language for AI development.
- Knowledge of generative AI, NLP, and LLMs: the principles behind these technologies and how to use them effectively.
- Experience with RAG pipelines and vector databases: how to build and use retrieval-augmented generation pipelines (see the sketch after this posting).
- Familiarity with AI frameworks and libraries such as LangChain, Haystack, and other open-source libraries.
- Understanding of prompt engineering and tokenization: how to optimize prompts and manage tokenization.
- Experience integrating and fine-tuning AI models, including deploying and maintaining them in production environments.
- Excellent communication and problem-solving skills, including the ability to explain complex technical concepts to non-technical stakeholders.

Optional Skills:
- Experience with cloud computing platforms (GCP, AWS, Azure), helpful for deploying and managing AI models.
- Familiarity with MLOps practices, for building and deploying AI models in a scalable and reliable manner.
- Experience with DevOps practices, for automating the development and deployment of AI models.

Job Type: Permanent
Pay: ₹2,000,000.00 - ₹3,000,000.00 per year
Schedule: Day shift
Experience:
- Total: 6 years (Required)
- GenAI: 5 years (Required)
- Python: 3 years (Required)
- LLM: 4 years (Required)
- OpenAI, Claude, Gemini: 3 years (Preferred)
- Azure: 3 years (Required)
Work Location: In person
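As a sketch of the retrieval step in the RAG pipelines mentioned above: embed the documents and the query, rank by cosine similarity, and pass the top hits to a generator as context. The embed() and generate() stubs are placeholders for whatever model or provider a team actually uses (OpenAI, Azure OpenAI, a local model, etc.); only the ranking logic is meant literally.

```python
# Skeleton of RAG retrieval: rank documents by cosine similarity to the
# query embedding, then feed the best ones to an LLM as context.
# embed() and generate() are hypothetical stubs for a real provider.
import numpy as np

def embed(text: str) -> np.ndarray:
    raise NotImplementedError("call your embedding model here")

def generate(prompt: str) -> str:
    raise NotImplementedError("call your LLM here")

def retrieve(query: str, docs: list[str], doc_vecs: np.ndarray, k: int = 3):
    q = embed(query)
    # Cosine similarity of the query against every stored document vector.
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    top = np.argsort(sims)[::-1][:k]
    return [docs[i] for i in top]

def answer(query: str, docs: list[str], doc_vecs: np.ndarray) -> str:
    context = "\n---\n".join(retrieve(query, docs, doc_vecs))
    return generate(f"Answer using only this context:\n{context}\n\nQ: {query}")
```

In production the numpy scan would be replaced by a vector database lookup, but the interface (query in, top-k passages out) stays the same.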
Role: AWS Data Engineer
Location: Delhi
Mode: Hybrid
Type: Permanent

Job Description:
The description, key responsibilities, and essential/desirable skills for this opening are identical to the Gurugram AWS Data Engineer posting above; only the location and experience requirements differ. Given the emphasis on AWS and PySpark/Spark below, an event-driven orchestration sketch follows this posting.

Job Type: Permanent
Pay: ₹3,000,000.00 - ₹3,800,000.00 per year
Benefits: Work from home
Experience:
- Data Engineering: 6 years (Required)
- Python: 3 years (Required)
- PySpark/Spark: 3 years (Required)
- AWS: 5 years (Required)
Work Location: In person
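This posting weighs AWS orchestration (Lambda, Glue) heavily, so here is a minimal sketch of a common trigger pattern: an S3 upload event fires a Lambda function that starts a Glue ETL job. The job and bucket names are hypothetical; the boto3 call (client("glue").start_job_run) and the S3 event shape are real.

```python
# Lambda handler sketch: start a Glue ETL job whenever a new object
# lands in the raw bucket. Job and bucket names are hypothetical.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    runs = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Pass the new object's location to the Glue job as arguments.
        resp = glue.start_job_run(
            JobName="orders-etl",  # hypothetical Glue job
            Arguments={"--source_path": f"s3://{bucket}/{key}"},
        )
        runs.append(resp["JobRunId"])
    return {"started_runs": runs}
```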
Role: Principal Data Engineer
Location: Delhi
Mode: Hybrid
Type: Contract

Job Description:
We are seeking a Principal Data Engineer with strong enterprise-level data architecture and engineering experience, specifically within the logistics and manufacturing domains. This role is heavily focused on architecting and developing scalable data solutions using the Azure ecosystem, including Databricks, ADF, Synapse, and PySpark.

Key Responsibilities:
- 90% focus on data architecture and hands-on development; 10% on stakeholder collaboration.
- Design and build scalable data pipelines and models in Azure (see the Databricks sketch after this posting).
- Implement best practices in data engineering and architecture.

Required Skills:
- Strong experience in Azure data engineering (ADF, Databricks, Synapse).
- Proficient in PySpark, Python, and SQL.
- Solid understanding of data modeling (conceptual/logical/physical).
- Experience with tools such as Erwin, Toad, and Snowflake is a plus.
- Domain expertise in logistics/supply chain or HR.

Job Type: Permanent
Pay: ₹3,000,000.00 - ₹4,000,000.00 per year
Experience:
- Databricks: 4 years (Required)
- Total: 10 years (Required)
- ADF: 3 years (Required)
- PySpark: 2 years (Required)
- Python: 2 years (Required)
- Data Modelling: 3 years (Required)
- Snowflake tools: 3 years (Required)
- Logistics or HR domain: 2 years (Required)
Work Location: In person
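A minimal sketch of the Azure-side pipeline pattern this role describes, assuming a Databricks workspace: read raw files from ADLS Gen2 over abfss://, transform with PySpark, and write a Delta table. The storage account, container, columns, and table name are invented for illustration.

```python
# Databricks-style sketch: ADLS Gen2 -> Delta table. Paths and names
# are hypothetical; on Databricks, getOrCreate() returns the cluster's
# existing session.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

raw = (spark.read.option("header", True)
            .csv("abfss://raw@examplelake.dfs.core.windows.net/shipments/"))

shipments = (
    raw.withColumn("weight_kg", F.col("weight_kg").cast("double"))
       .withColumn("shipped_on", F.to_date("shipped_on"))
       .dropDuplicates(["shipment_id"])
)

# Delta provides ACID writes and time travel, the usual lakehouse
# table format on Databricks.
(shipments.write.format("delta")
          .mode("overwrite")
          .saveAsTable("logistics.silver_shipments"))
```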
Role: Lead BI Engineer/BI Architect
Location: Delhi NCR
Mode: Hybrid
Type: Permanent

Job Description:

Duties and Responsibilities:
- Lead Business Analytics Development: Design and deliver end-to-end data solutions, including data models, reports, and dashboards tailored to the needs of business partners across functions.
- Strategic Collaboration: Serve as a trusted analytics partner to business leadership and COEs, understanding their business needs and translating them into data strategies and products.
- Data Stewardship & Governance: Ensure data quality, consistency, and security in line with privacy laws (e.g., GDPR, CCPA) and internal governance policies.
- Automation & Efficiency: Create scalable pipelines and automation that streamline the delivery of recurring business metrics and eliminate manual reporting.
- Insights & Storytelling: Translate complex data into clear, actionable insights using visual storytelling techniques that inform strategy and engage stakeholders at all levels.
- Advanced Analytics: Support workforce planning, compensation analysis, and predictive modeling initiatives by collaborating with the Data Science, Data Engineering, and Talent Analytics teams.
- Tool Ownership: Lead the deployment and optimization of BI tools such as Power BI or Tableau for HR data visualization, and partner with IT to manage backend infrastructure (Azure, Databricks).
- People-Centered Design: Ensure insights are accessible, equitable, and designed to empower leaders with intuitive, story-driven visuals.
- Agile Delivery: Manage analytics projects using agile methodologies, facilitate sprint planning, and ensure timely delivery of high-impact solutions.

Skills and Experience:
- Years of Experience: 6+ years in BI, data engineering, or analytics roles, including hands-on development and architecture of enterprise-level BI platforms.
- Advanced Power BI Expertise: Deep knowledge of Power BI, including report creation, data visualization, DAX calculations, and publishing dashboards to deliver actionable insights.
- Data Modeling and SQL Proficiency: Expertise in designing scalable data models and advanced SQL skills for querying, transforming, and analyzing data.
- Azure Ecosystem Knowledge: Hands-on experience with Azure services for managing cloud-based data platforms, and familiarity with Databricks for collaborative data workflows.
- DevOps and Collaboration Tools: Experience with DevOps practices (e.g., Azure DevOps) and team collaboration tools such as Microsoft Teams to streamline workflows and communication.
- Innovation in BI Processes: A track record of driving innovation through automation, optimization, or emerging technologies such as AI/ML (nice to have).
- Communication and Stakeholder Management: Strong ability to engage and influence stakeholders at all levels of the organization; proven experience collaborating cross-functionally and aligning analytics work with evolving business priorities.
- Problem-Solving and Strategic Thinking: Strong critical-thinking skills to troubleshoot issues, optimize systems, and deliver solutions that meet business needs efficiently.
- Adaptability to New Tools: Familiarity with other BI tools (e.g., Tableau or Looker) and a willingness to explore new technologies to enhance BI capabilities.

Job Type: Permanent
Pay: ₹3,000,000.00 - ₹4,000,000.00 per year
Experience:
- Power BI: 6 years (Required)
- Data Modeling: 5 years (Required)
- SQL Proficiency: 6 years (Required)
- Total: 10 years (Required)
- Azure Ecosystem: 3 years (Required)
Work Location: In person
Role: Marketing Coordinator/Consultant/Executive
Location: Delhi
Mode: Hybrid
Type: Permanent

Job Description:
We are seeking a Marketing Coordinator/Consultant/Executive with strong exposure to SCS (Strategy, Consulting & Solutions) and Demand Management (DM) to support our growth in the B2B space. The ideal candidate will have a versatile background in marketing coordination and founder's office/associate roles, with the ability to manage cross-functional priorities, support leadership, and drive operational efficiency.

Key Responsibilities:
- SCS Exposure: Support strategic initiatives, market analysis, and client engagement to strengthen our B2B presence across the Pharma, SaaS, PaaS, Fintech, and Insurance domains.
- Demand Management (DM): Monitor, track, and optimize demand pipelines while ensuring alignment with the sales and product teams.
- Marketing Coordination: Execute marketing campaigns, coordinate with internal teams, and manage communication strategies to increase brand visibility.
- Founder's Associate: Act as a trusted partner to the leadership team, assisting with research, presentations, reporting, and execution of special projects.
- Collaborate with cross-functional teams to deliver measurable outcomes in client acquisition and retention.
- Prepare and present periodic performance reports, highlighting growth opportunities and challenges.

Requirements:
- Proven experience/exposure in SCS (Strategy, Consulting, Solutions) within B2B or related industries.
- Strong understanding of demand management processes and sales/marketing alignment.
- Hands-on experience as a Marketing Coordinator or Founder's Associate.
- Familiarity with B2B markets, preferably in Pharma, SaaS, PaaS, Fintech, or Insurance.
- Excellent communication, presentation, and stakeholder management skills.
- Ability to multitask, prioritize, and adapt to a fast-paced environment.

Preferred Skills:
- Analytical and problem-solving mindset with data-driven decision-making skills.
- Knowledge of digital marketing tools, CRM platforms, and reporting dashboards.
- Strong project management and organizational abilities.

Why Join Us?
- Opportunity to work across diverse B2B domains (Pharma, SaaS, PaaS, Fintech, Insurance).
- Exposure to strategy, demand management, and leadership-level initiatives.
- Collaborative and growth-driven work culture.

Job Type: Permanent
Pay: ₹1,500,000.00 - ₹2,000,000.00 per year
Experience:
- Event management: 3 years (Required)
- Booking venues/tickets/travel: 3 years (Required)
- Exposure and SCS: 5 years (Required)
- Marketing Coordinator: 4 years (Required)
- Pharma, SaaS, PaaS, Fintech, or Insurance domain: 3 years (Required)
Work Location: In person
Role: Data Architect
Location: Delhi
Mode: Hybrid
Type: Contract

Job Description:
We are looking for an experienced Data Architect with strong expertise in data modeling, Azure data services, and data pipeline development. The ideal candidate will have hands-on experience with ADF, Databricks, and PySpark (mandatory), and be proficient in SQL. Strong stakeholder management skills are essential for collaborating with cross-functional teams and translating business needs into scalable data solutions.

Key Skills Required:
- Proven experience as a Data Architect.
- Strong data modeling skills (conceptual, logical, physical); a small schema sketch follows this posting.
- Hands-on experience with the Azure tech stack (e.g., Azure Data Factory, Azure Databricks).
- Proficiency in PySpark (MANDATORY).
- Solid knowledge of SQL and relational databases.
- Experience building and optimizing data pipelines.
- Excellent stakeholder management and communication skills.

Job Type: Permanent
Pay: ₹2,500,000.00 - ₹4,000,000.00 per year
Experience:
- Data Modelling: 6 years (Required)
- Azure Tech Stack: 6 years (Required)
- ADF: 5 years (Required)
- Databricks: 5 years (Required)
- Total: 10 years (Required)
- PySpark: 4 years (Required)
- SQL: 3 years (Required)
- Data Pipelines: 3 years (Required)
Work Location: In person
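Since the posting makes PySpark mandatory and stresses physical data modeling, here is a small sketch of pinning a physical schema explicitly rather than relying on inference. The entity and columns are invented; the point is the technique of declaring and enforcing the schema at read time.

```python
# Declaring a physical schema explicitly in PySpark rather than relying
# on schema inference; table and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, DateType)

spark = SparkSession.builder.appName("schema-demo").getOrCreate()

order_schema = StructType([
    StructField("order_id", StringType(), nullable=False),
    StructField("customer_id", StringType(), nullable=False),
    StructField("amount", DoubleType(), nullable=True),
    StructField("order_date", DateType(), nullable=True),
])

# Enforcing the schema at read time surfaces bad records early instead
# of letting inference guess types per file.
orders = (spark.read.schema(order_schema)
               .option("header", True)
               .csv("/data/raw/orders/"))
orders.printSchema()
```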
Role: Engineering Manager
Location: Delhi
Mode: Hybrid
Type: Contract

Job Description:

Responsibilities:
1. Own and improve the end-to-end architecture of the platform.
   - Ensure the talent-mining engine and chatbot features are fully integrated into the product over the year.
2. Provide technical leadership to the Data Science, UX, Frontend, Infrastructure, Analytics, and Customer Support teams.
   - Motivate the team to roll out at least one major innovation every month in each functional area.
   - Do code reviews and provide hands-on support and guidance to all functional areas.
3. Promote automation and a SaaS architecture (features must roll out across all clients, not be customized to one client).
   - Automate all client onboarding activities over the year.
   - Automate reporting/analytics activities over the year.
4. Own and deliver the platform roadmap across all areas.
   - Brainstorm at least one major idea every month across all functional areas (data science, UX, frontend, infrastructure, analytics, and customer support).
5. Optimize infrastructure and collaboration-tool costs.
   - Decrease infrastructure costs by 50% over the year.
   - Improve infrastructure performance by 30% over the year.
6. Hire and build a team of A players for the entire product team.
   - Double the team size across all functional areas over the year by actively owning the hiring function.
7. Ensure customers are happy with a fully functioning product.
   - Improve platform accuracy to 90% (measured as the correlation between user feedback and algorithm scoring).
   - Implement test automation to improve platform robustness, speed up deployment, and reduce recurring defects.
8. Provide technical/architecture support for all pre-sales bids for new client opportunities.
   - Create reusable collateral and a knowledge bank of technical documentation to support pre-sale RFPs.
9. Hire engineering team leads, manage team morale, and keep attrition under control.

Required Skills:
1. 8 to 10 years of technical experience in Python, full-stack development, microservices architecture, and PHP.
2. Bachelor of Engineering/B.Tech from IIT/NIT or a top-10 engineering college.
3. Work ethic: strong willingness to work hard, and sometimes long hours, to get the job done; a track record of working hard.
4. Communication: speaks and writes clearly and articulately without being overly verbose; maintains this standard in all written communication, including email.
5. Honesty/Integrity: does not cut ethical corners; earns and maintains trust.
6. Proactivity: acts without being told what to do; brings new ideas to the company.

Job Type: Permanent
Pay: ₹2,500,000.00 - ₹3,500,000.00 per year
Experience:
- Full-stack development: 8 years (Required)
- Python: 5 years (Required)
- Microservices architecture: 4 years (Required)
Work Location: In person
Role: Gen AI Engineer
Location: Delhi
Mode of Work: Hybrid
Notice Period: 0-25 days

Job Description:
The responsibilities and required/optional skills for this opening are the same as the Gen AI Engineer posting above (including prompt engineering and tokenization; a small tokenization sketch follows this posting). The experience requirements differ as listed below.

Job Type: Permanent
Pay: ₹2,000,000.00 - ₹3,000,000.00 per year
Experience:
- Total: 6 years (Required)
- GenAI: 3 years (Required)
- Python: 4 years (Required)
- LLM: 3 years (Required)
- OpenAI, Claude, Gemini: 4 years (Required)
- Azure: 3 years (Required)
- LangChain and LangGraph: 1 year (Preferred)
Work Location: In person
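Both Gen AI postings call out prompt engineering and tokenization; here is a small sketch of budgeting a prompt against a model's context window using the tiktoken library. The window size, reserve, and prompt template are illustrative assumptions, not anything specified by the posting.

```python
# Token-budgeting sketch for prompt engineering, using tiktoken.
# The 8k context window and the prompt template are assumptions.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
CONTEXT_WINDOW = 8000          # hypothetical model limit
RESERVED_FOR_ANSWER = 1000     # leave room for the completion

def build_prompt(question: str, passages: list[str]) -> str:
    prompt = f"Q: {question}\nContext:\n"
    for p in passages:
        candidate = prompt + p + "\n"
        # Stop adding context once the token budget would be exceeded.
        if len(enc.encode(candidate)) > CONTEXT_WINDOW - RESERVED_FOR_ANSWER:
            break
        prompt = candidate
    return prompt + "A:"

print(len(enc.encode(build_prompt("What is RAG?", ["Retrieval-augmented generation pairs a retriever with an LLM."]))))
```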