6.0 - 10.0 years
0 Lacs
Hyderabad, Telangana
On-site
You will be responsible for working as an AWS Data Engineer at YASH Technologies. Your role will involve tasks related to data collection, processing, storage, and integration. Proficiency in data Extract-Transform-Load (ETL) processes and data pipeline setup is essential, along with knowledge of database and data warehouse technologies on the AWS cloud platform. Prior experience handling time-series and unstructured data types, such as image data, is a necessary requirement for this position. Additionally, you should have experience developing data analytics software on the AWS cloud, either as a full-stack or back-end developer. Skills in software quality assessment, testing, and API integration are also crucial for this role. Working at YASH, you will have the opportunity to build a career in a supportive and inclusive team environment. The company focuses on continuous learning and growth by providing career-oriented skilling models and utilizing technology for upskilling and reskilling. You will be part of a Hyperlearning workplace grounded in flexible work arrangements, emotional positivity, self-determination, trust, transparency, open collaboration, and support for achieving business goals. YASH Technologies offers stable employment with a great atmosphere and an ethical corporate culture.
Posted 1 day ago
4.0 - 9.0 years
12 - 16 Lacs
Bengaluru
Work from Office
Job Description Summary: As an experienced Technical Product Manager (TPM), you'll lead the strategy, development, and evolution of our Control Tower for the Subscriptions Program. This role is pivotal in building a centralized intelligence and monitoring platform that provides real-time visibility, diagnostics, and decision support across the subscription lifecycle. You will work closely with engineering, data science, operations, and business stakeholders to define and deliver a robust control tower that enhances operational efficiency, customer experience, and business agility. GE HealthCare is a leading global medical technology and digital solutions innovator. Our purpose is to create a world where healthcare has no limits. Unlock your ambition, turn ideas into world-changing realities, and join an organization where every voice makes a difference, and every difference builds a healthier world.
Key Responsibilities: In this role you will: Define and own the product vision, roadmap, and backlog for the Control Tower platform. Collaborate with data teams to design dashboards, alerts, and analytics that provide actionable insights into subscription performance, anomalies, and trends. Partner with engineering, operations, customer support, and business teams to gather requirements, prioritize features, and ensure timely delivery. Drive the development of monitoring tools, automated workflows, and exception-handling mechanisms to proactively manage subscription operations. Ensure the Control Tower is intuitive, scalable, and user-friendly for both technical and non-technical users. Define and track key performance indicators (KPIs) to measure the effectiveness and impact of the Control Tower. Identify operational risks and work with stakeholders to implement mitigation strategies through the Control Tower.
Skill Requirements: Holds a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, with a strong grasp of APIs, data pipelines, and system architecture, enabling effective collaboration with engineering teams and informed technical decision-making. Brings over 5 years of experience in product management within SaaS, e-commerce, or subscription-based environments. Demonstrated success in leading cross-functional teams, managing product roadmaps, and delivering complex, data-driven solutions using agile methodologies. Skilled in leveraging data visualization tools like Power BI and Tableau to drive insights and product strategy. Excels in stakeholder communication, problem-solving, and aligning technical solutions with business goals.
Additional Skills: Experience building or managing control towers, command centers, or operational dashboards. Knowledge of subscription lifecycle management, supply chain, services, billing systems, and customer retention strategies. Exposure to AI/ML-driven alerting or anomaly detection systems.
Inclusion and Diversity: GE HealthCare is an Equal Opportunity Employer where inclusion matters. Employment decisions are made without regard to race, colour, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status, or other characteristics protected by law. We expect all employees to live and breathe our behaviours: to act with humility and build trust; lead with transparency; deliver with focus; and drive ownership, always with unyielding integrity. Our total rewards are designed to unlock your ambition by giving you the boost and flexibility you need to turn your ideas into world-changing realities. Our salary and benefits are everything you'd expect from an organization with global strength and scale, and you'll be surrounded by career opportunities in a culture that fosters care, collaboration, and support. #Hybrid
Additional Information: Relocation Assistance Provided: No
Posted 2 days ago
1.0 - 4.0 years
5 - 8 Lacs
Mumbai
Work from Office
We are hiring a Data Engineer to design and manage data pipelines from factory floors to the Azure cloud, supporting our central data lakehouse architecture. You'll work closely with OT engineers, architects, and AI teams to move data from edge devices into curated layers (Bronze → Silver → Gold), ensuring high data quality, security, and performance. Your work will directly enable advanced analytics and AI in production and operations.
Key job functions: 1) Build data ingestion and transformation pipelines using Azure Data Factory, IoT Hub, and Databricks. 2) Integrate OT sensor data using protocols like OPC-UA and MQTT. 3) Design Medallion architecture flows with Delta Lake and Synapse. 4) Monitor and optimize data performance and reliability. 5) Implement data quality, observability, and lineage practices (e.g., with Purview or Unity Catalog). 6) Collaborate with OT and IT teams to ensure contextualized, usable data. A sketch of a typical Bronze-to-Silver step follows this posting.
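For context on the Bronze → Silver step this posting describes, here is a minimal PySpark sketch, assuming Databricks with Delta Lake; the paths, column names, and cleaning rules are illustrative assumptions, not the employer's actual pipeline.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw sensor readings landed as-is from IoT Hub / Data Factory.
bronze = spark.read.format("delta").load("/mnt/lake/bronze/sensor_readings")

# Silver: cleaned, deduplicated, typed records ready for analytics.
silver = (
    bronze
    .filter(F.col("value").isNotNull())                  # drop empty readings
    .withColumn("event_ts", F.to_timestamp("event_ts"))  # enforce timestamp type
    .dropDuplicates(["device_id", "event_ts"])           # idempotent re-ingestion
)

(silver.write.format("delta")
    .mode("overwrite")
    .partitionBy("device_id")
    .save("/mnt/lake/silver/sensor_readings"))
```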
Posted 2 days ago
1.0 - 4.0 years
3 - 7 Lacs
Hyderabad
Work from Office
Overview of the Role: The Sisal AI & Automation group is looking for a motivated and versatile expert to develop and operationalize next-generation Agentic AI systems across our business. As a Senior Generative AI Engineer, you will play a critical role in shaping the software development lifecycle (SDLC) of AI products, guiding them from concept to production deployment, while also mentoring junior team members and managing multiple concurrent initiatives. You'll be joining an industry-leading team and building transformative AI solutions that drive real-world impact across Sisal's portfolio, including Sports Betting, Gaming, and other domains. This hybrid role combines remote work flexibility with collaboration in our modern offices in Hyderabad, India.
What You Will Do: Across areas such as Sports Betting, Gaming, and other Sisal products, you will lead the development of advanced Agentic AI systems. You will have the opportunity to architect and implement Generative AI to streamline Sisal AI solutions, own the end-to-end SDLC for AI initiatives, and set Generative AI standards together with the Data Science group. You will manage and prioritize multiple AI/ML projects simultaneously, driving the company's strategic edge by transforming raw data into actionable insights in this dynamic and thrilling environment.
What You Will Need: A Master's degree in a STEM field with 4 to 6 years of experience as an AI/ML Engineer. Strong Python programming skills and a solid understanding of databases and data pipelines. Demonstrated success in delivering production-grade AI/ML solutions, with hands-on involvement throughout the full SDLC. Proven experience working with Generative AI tools and frameworks (e.g., OpenAI, Hugging Face, Anthropic). Hands-on experience building Retrieval-Augmented Generation (RAG) systems using vector databases (a minimal retrieval sketch follows this posting). Exposure to cloud-based machine learning environments and MLOps tooling. Experience mentoring junior team members and leading small technical groups is highly valued. Ability to juggle multiple parallel projects, prioritize effectively, and manage time across shifting business demands. Excellent communication skills, with the ability to explain technical concepts clearly and persuasively to non-technical stakeholders.
Highly Desirable Skills: Experience developing LLM-based applications and LLMOps, leveraging them to tackle business problems. Exposure to Azure tools, including the Azure OpenAI ecosystem. Publications in international journals are a big plus. It is OK if you do not tick every box on this list! We love people who want to challenge themselves and are passionate about what they do. If you believe you can contribute in some areas and are eager to learn, we encourage you to apply!
Why Choose Us: Aside from a generous base salary, we have a phenomenal benefits and rewards program designed to encourage personal and career development. This package includes: a discretionary annual performance bonus; 30 days paid leave; health insurance for you and your partner, children, and parents or parents-in-law (up to 5 dependents); a personal interest allowance to let you learn something new or pursue a hobby; a cash gift of 34,000 INR for a new addition to your family whilst working for us; 26 weeks primary carer leave and 4 weeks secondary carer leave; external learning support of up to ?1,000 or equivalent in local currency; four dedicated learning "Power Hours" every month during office time; full access to the Udemy and Mindtools platforms; an in-house leadership program; and many other training opportunities for developing your skills and progressing your career.
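As context for the RAG requirement above, here is a minimal retrieval sketch using the Chroma vector database; the collection name, documents, and prompt template are hypothetical, and the final LLM call (e.g., via the OpenAI or Anthropic APIs) is left out.

```python
import chromadb

client = chromadb.Client()  # in-memory instance for illustration
docs = client.create_collection("support_articles")

# Index a few documents; Chroma embeds them with its default embedding model.
docs.add(
    ids=["d1", "d2"],
    documents=[
        "Bets can be cancelled within 10 minutes of placement.",
        "Withdrawals are processed within 2 business days.",
    ],
)

# Retrieve context for a user question and assemble an augmented prompt.
question = "How long do withdrawals take?"
hits = docs.query(query_texts=[question], n_results=2)
context = "\n".join(hits["documents"][0])
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this string would be sent to the LLM
```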
Posted 2 days ago
5.0 - 10.0 years
25 - 40 Lacs
Gurugram
Work from Office
Job Title: Data Engineer. Job Type: Full-time. Department: Data Engineering / Data Science. Reports To: Data Engineering Manager / Chief Data Officer.
About the Role: We are looking for a talented Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, building, and maintaining robust data pipelines and systems that process and store large volumes of data. You will collaborate closely with data scientists, analysts, and business stakeholders to deliver high-quality, actionable data solutions. This role requires a strong background in data engineering, database technologies, and cloud platforms, along with the ability to work in an Agile environment to drive data initiatives forward.
Responsibilities: Design, build, and maintain scalable and efficient data pipelines that move, transform, and store large datasets. Develop and optimize ETL processes using tools such as Apache Spark, Apache Kafka, or AWS Glue. Work with SQL and NoSQL databases to ensure the availability, consistency, and reliability of data. Collaborate with data scientists and analysts to ensure data requirements and quality standards are met. Design and implement data models, schemas, and architectures for data lakes and data warehouses. Automate manual data processes to improve efficiency and data processing speed. Ensure data security, privacy, and compliance with industry standards and regulations. Continuously evaluate and integrate new tools and technologies to enhance data engineering processes. Troubleshoot and resolve data quality and performance issues. Participate in code reviews and contribute to a culture of best practices in data engineering.
Requirements: 3-10 years of experience as a Data Engineer or in a similar role. Strong proficiency in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra). Experience with big data technologies such as Apache Hadoop, Spark, Hive, and Kafka. Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud. Proficiency in Python, Java, or Scala for data processing and scripting. Familiarity with data warehousing concepts, tools, and technologies (e.g., Snowflake, Redshift, BigQuery). Experience working with data modeling, data lakes, and data pipelines. Solid understanding of data governance, data privacy, and security best practices. Strong problem-solving and debugging skills. Ability to work in an Agile development environment. Excellent communication skills and the ability to work cross-functionally.
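To make the Spark ETL responsibilities above concrete, here is a hedged batch sketch; the S3 paths, schema, and aggregation logic are invented for illustration and are not this employer's pipeline.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: raw JSON events from a landing bucket.
orders = spark.read.json("s3://example-landing/orders/2024/07/")

# Transform: normalize types and aggregate daily revenue per region.
daily_revenue = (
    orders
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)

# Load: write partitioned Parquet for the warehouse to pick up.
(daily_revenue.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-warehouse/daily_revenue/"))
```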
Posted 2 days ago
6.0 - 10.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
You should have 6-10 years of experience in development, specifically in Java/J2EE, with strong knowledge of core Java. Additionally, you must be proficient in Spring frameworks, particularly Spring MVC, Spring Boot, and JPA + Hibernate. Hands-on experience with microservice technology, including development of RESTful and SOAP web services, is essential, as is a good understanding of Oracle DB. Your communication skills, especially when interacting with clients, should be excellent. Experience with build tools like Maven, deployment, and troubleshooting is necessary. Knowledge of CI/CD tools such as Jenkins and experience with Git or similar source control tools is expected. You should also be familiar with Agile/Scrum software development methodologies using tools like Jira, Confluence, and BitBucket, and have performed requirement analysis. It would be beneficial to have knowledge of frontend stacks like React or Angular, as well as frontend and backend API integration. Experience with AWS, CI/CD best practices, and designing security reference architectures for AWS infrastructure applications is advantageous. You should possess good verbal and written communication skills, the ability to multitask in a fast-paced environment, and be highly organized and detail-oriented. Awareness of common information security principles and practices is required. TELUS International is committed to creating a diverse and inclusive workplace and is an equal opportunity employer. All employment decisions are based on qualifications, merits, competence, and performance without regard to any characteristic related to diversity.
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
Guwahati, Assam
On-site
Understanding the business requirements from the discovered information is crucial for the successful implementation of the project. Collaborating with the AI Architect to select the appropriate LLM model and its variants for optimal performance and cost efficiency is a key responsibility. It is essential to engage in high-level and low-level design of the solution in alignment with the architected solution to ensure a robust outcome. Optimizing the design for performance, security, scalability, portability, and maintainability is necessary for overall success. Close collaboration with the customer's Program Owner is required to foster a strong partnership between HCLTech and the customer team. Providing mentoring and comprehensive guidance to the technical team, which comprises a mixed squad from HCLTech, is essential for knowledge transfer and skill development. Assisting the technical team in configuration, customization, and development of target services will contribute to the smooth execution of the project. Additionally, familiarity with Azure/AWS cloud architecture, Azure OpenAI/AWS Bedrock, Azure/AWS Cognitive Services, Azure Cognitive Search, Azure data pipelines on ADF/ADB, Azure Functions/AWS serverless, Azure Cosmos/vector DBs, and Python is advantageous for this role.
Posted 2 days ago
4.0 - 8.0 years
0 Lacs
Karnataka
On-site
In today's world of limited attention spans, the only way to get someone's attention is by making your message relevant, timely, and, most importantly, visually appealing. Businesses are now utilizing banners and videos at every customer touchpoint, going beyond traditional ads. The proliferation of multiple products, offers, languages, and platforms has made content creation a massive challenge for businesses. Rocketium recognizes this need and is dedicated to providing rapid creation, massive scale, intuitive collaborative features, creative analytics, and more to support fast-moving, high-growth businesses. As a Fullstack Engineer at Rocketium, you will play a pivotal role in steering various initiatives within a vibrant team. You will have the opportunity to deepen your expertise and contribute significantly to the company's growth. Working closely with Product Managers and other key stakeholders, you will be instrumental in shaping feature outlines and crafting detailed engineering design documents. Your insights will be valued in fostering a collaborative environment where collective strategies are integrated. Your responsibilities will extend beyond development as you will be involved in orchestrating rollout plans and adoption blueprints to shape the company's forward-thinking roadmap. You will lead and contribute to groundbreaking projects such as creating a robust infrastructure for managing notifications, building auto-scaled systems, engaging in discussions on architectural approaches, overseeing high-traffic systems, transitioning databases, designing multi-cloud frameworks, leveraging AI for content analysis, and developing efficient data pipelines. The ideal candidate should have 4-6 years of experience in handling large-scale projects using TypeScript/JavaScript and Python, a comprehensive understanding of database systems, API development, and integration best practices, and a proven track record in developing sustainable and scalable solutions, along with adaptability in the evolving SaaS landscape. Strong leadership skills in fostering collaboration, building cross-functional relationships, and inspiring team members are highly valued. Rocketium offers a supportive working environment with benefits such as flexible working hours, a hybrid working model, bi-annual performance reviews, an unlimited vacation policy, physical and mental wellness support, health cover with OPD benefits for the family, a learning budget, and meals & munchies to promote a healthy work-life balance. Join us in revolutionizing visual content creation for businesses and be a part of our dynamic team led by visionary leaders and talented professionals.
Posted 2 days ago
1.0 - 3.0 years
3 - 6 Lacs
Bengaluru
Work from Office
Atomicwork is on a mission to transform the digital workplace experience by uniting people, processes, and platforms through AI automation. Our team is building a modern service management platform that enables growing businesses to reduce operational complexity and drive business success. We are seeking a skilled and motivated Data Pipeline Engineer to join our team. In this role, you will be responsible for designing, building, and maintaining scalable data pipelines that support our enterprise search capabilities. Your work will ensure that data from various sources is efficiently ingested, processed, and indexed, enabling seamless and secure search experiences across the organisation. This position is based out of our Bengaluru office. We offer competitive pay to employees and practical benefits for their whole family. If this sounds interesting to you, read on.
What We're Looking For (Qualifications): We value hands-on skills and a proactive mindset; formal qualifications are less important than your ability to deliver results and collaborate effectively. Proficiency in programming languages such as Python, Java, or Scala. Strong experience with data pipeline frameworks and tools (e.g., Apache Airflow, Apache NiFi). Experience with search platforms like Elasticsearch or OpenSearch. Familiarity with data ingestion, transformation, and indexing processes. Understanding of enterprise search concepts, including crawling, indexing, and query processing. Knowledge of data security and access control best practices. Experience with cloud platforms (AWS, GCP, or Azure) and related services. Familiarity with Model Context Protocol (MCP) is a strong plus. Strong problem-solving and analytical skills. Excellent communication and collaboration.
What You'll Do (Responsibilities): Design, develop, and maintain data pipelines for enterprise search applications. Implement data ingestion processes from various sources, including databases, file systems, and APIs. Develop data transformation and enrichment processes to prepare data for indexing. Integrate with search platforms to index and update data efficiently (a minimal indexing sketch follows this posting). Ensure data quality, consistency, and integrity throughout the pipeline. Monitor pipeline performance and troubleshoot issues as they arise. Collaborate with cross-functional teams, including data scientists, engineers, and product managers. Implement security measures to protect sensitive data during processing and storage. Document pipeline architecture, processes, and best practices. Stay updated with industry trends and advancements in data engineering and enterprise search.
Why We Are Different (Culture): As a part of Atomicwork, you can shape our company and business from idea to production. Our cultural values also set the bar high, helping us create a better workplace for everyone. Agency: Be self-directed. Take initiative and solve problems creatively. Taste: Hold a high bar. Sweat the details. Build with care and discernment. Ownership: We demonstrate unwavering commitment to our mission and goals, taking full responsibility for triumphs and setbacks. Mastery: We relentlessly pursue continuous self-improvement as individuals and teams, dedicating ourselves to constant learning and growth. Impatience: We recognize that our world moves swiftly and are driven by an unyielding desire to progress with every endeavor. Customer Obsession: We place our customers at the heart of everything we do, relentlessly seeking to understand their needs and exceed their expectations.
What We Offer (Compensation and Benefits): We are big on benefits that make sense to you and your family: a fantastic team (the #1 reason why everybody joins us); convenient, well-located offices spread over five different cities; paid time off, with unlimited sick leaves and 15 days off every year; comprehensive health insurance with up to 75% of the premium covered; flexible allowances with hassle-free reimbursements across spends; and annual outings for everyone to have fun together.
What Next (Applying for This Role): Click on the apply button to get started with your application. Answer a few questions about yourself and your work. Wait to hear from us about the next steps. Do you have anything else to tell us? Email careers@atomicwork and let us know what's on your mind.
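A minimal sketch of the indexing stage referenced above, using the opensearch-py client; the host, index name, and document shape (including the access-control field) are illustrative assumptions rather than Atomicwork's actual schema.

```python
from opensearchpy import OpenSearch, helpers

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

def to_actions(records):
    """Turn transformed records into bulk-index actions."""
    for rec in records:
        yield {
            "_index": "enterprise-docs",
            "_id": rec["id"],
            "_source": {
                "title": rec["title"],
                "body": rec["body"],
                "acl": rec["allowed_groups"],  # access-control metadata for secure search
            },
        }

records = [{"id": "1", "title": "VPN setup", "body": "Steps to connect...",
            "allowed_groups": ["it-staff"]}]
helpers.bulk(client, to_actions(records))  # efficient bulk indexing
```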
Posted 3 days ago
5.0 - 10.0 years
25 - 40 Lacs
Bengaluru
Work from Office
Job Title: Data Engineer. Job Type: Full-time. Department: Data Engineering / Data Science. Reports To: Data Engineering Manager / Chief Data Officer.
About the Role: We are looking for a talented Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, building, and maintaining robust data pipelines and systems that process and store large volumes of data. You will collaborate closely with data scientists, analysts, and business stakeholders to deliver high-quality, actionable data solutions. This role requires a strong background in data engineering, database technologies, and cloud platforms, along with the ability to work in an Agile environment to drive data initiatives forward.
Responsibilities: Design, build, and maintain scalable and efficient data pipelines that move, transform, and store large datasets. Develop and optimize ETL processes using tools such as Apache Spark, Apache Kafka, or AWS Glue. Work with SQL and NoSQL databases to ensure the availability, consistency, and reliability of data. Collaborate with data scientists and analysts to ensure data requirements and quality standards are met. Design and implement data models, schemas, and architectures for data lakes and data warehouses. Automate manual data processes to improve efficiency and data processing speed. Ensure data security, privacy, and compliance with industry standards and regulations. Continuously evaluate and integrate new tools and technologies to enhance data engineering processes. Troubleshoot and resolve data quality and performance issues. Participate in code reviews and contribute to a culture of best practices in data engineering.
Requirements: 3-10 years of experience as a Data Engineer or in a similar role. Strong proficiency in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra). Experience with big data technologies such as Apache Hadoop, Spark, Hive, and Kafka. Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud. Proficiency in Python, Java, or Scala for data processing and scripting. Familiarity with data warehousing concepts, tools, and technologies (e.g., Snowflake, Redshift, BigQuery). Experience working with data modeling, data lakes, and data pipelines. Solid understanding of data governance, data privacy, and security best practices. Strong problem-solving and debugging skills. Ability to work in an Agile development environment. Excellent communication skills and the ability to work cross-functionally.
Posted 3 days ago
5.0 - 10.0 years
25 - 40 Lacs
Pune
Work from Office
Job Title: Data Engineer. Job Type: Full-time. Department: Data Engineering / Data Science. Reports To: Data Engineering Manager / Chief Data Officer.
About the Role: We are looking for a talented Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, building, and maintaining robust data pipelines and systems that process and store large volumes of data. You will collaborate closely with data scientists, analysts, and business stakeholders to deliver high-quality, actionable data solutions. This role requires a strong background in data engineering, database technologies, and cloud platforms, along with the ability to work in an Agile environment to drive data initiatives forward.
Responsibilities: Design, build, and maintain scalable and efficient data pipelines that move, transform, and store large datasets. Develop and optimize ETL processes using tools such as Apache Spark, Apache Kafka, or AWS Glue. Work with SQL and NoSQL databases to ensure the availability, consistency, and reliability of data. Collaborate with data scientists and analysts to ensure data requirements and quality standards are met. Design and implement data models, schemas, and architectures for data lakes and data warehouses. Automate manual data processes to improve efficiency and data processing speed. Ensure data security, privacy, and compliance with industry standards and regulations. Continuously evaluate and integrate new tools and technologies to enhance data engineering processes. Troubleshoot and resolve data quality and performance issues. Participate in code reviews and contribute to a culture of best practices in data engineering.
Requirements: 3-10 years of experience as a Data Engineer or in a similar role. Strong proficiency in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra). Experience with big data technologies such as Apache Hadoop, Spark, Hive, and Kafka. Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud. Proficiency in Python, Java, or Scala for data processing and scripting. Familiarity with data warehousing concepts, tools, and technologies (e.g., Snowflake, Redshift, BigQuery). Experience working with data modeling, data lakes, and data pipelines. Solid understanding of data governance, data privacy, and security best practices. Strong problem-solving and debugging skills. Ability to work in an Agile development environment. Excellent communication skills and the ability to work cross-functionally.
Posted 3 days ago
5.0 - 10.0 years
25 - 40 Lacs
Noida
Work from Office
Job Title: Data Engineer. Job Type: Full-time. Department: Data Engineering / Data Science. Reports To: Data Engineering Manager / Chief Data Officer.
About the Role: We are looking for a talented Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, building, and maintaining robust data pipelines and systems that process and store large volumes of data. You will collaborate closely with data scientists, analysts, and business stakeholders to deliver high-quality, actionable data solutions. This role requires a strong background in data engineering, database technologies, and cloud platforms, along with the ability to work in an Agile environment to drive data initiatives forward.
Responsibilities: Design, build, and maintain scalable and efficient data pipelines that move, transform, and store large datasets. Develop and optimize ETL processes using tools such as Apache Spark, Apache Kafka, or AWS Glue. Work with SQL and NoSQL databases to ensure the availability, consistency, and reliability of data. Collaborate with data scientists and analysts to ensure data requirements and quality standards are met. Design and implement data models, schemas, and architectures for data lakes and data warehouses. Automate manual data processes to improve efficiency and data processing speed. Ensure data security, privacy, and compliance with industry standards and regulations. Continuously evaluate and integrate new tools and technologies to enhance data engineering processes. Troubleshoot and resolve data quality and performance issues. Participate in code reviews and contribute to a culture of best practices in data engineering.
Requirements: 3-10 years of experience as a Data Engineer or in a similar role. Strong proficiency in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra). Experience with big data technologies such as Apache Hadoop, Spark, Hive, and Kafka. Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud. Proficiency in Python, Java, or Scala for data processing and scripting. Familiarity with data warehousing concepts, tools, and technologies (e.g., Snowflake, Redshift, BigQuery). Experience working with data modeling, data lakes, and data pipelines. Solid understanding of data governance, data privacy, and security best practices. Strong problem-solving and debugging skills. Ability to work in an Agile development environment. Excellent communication skills and the ability to work cross-functionally.
Posted 3 days ago
2.0 - 7.0 years
4 - 8 Lacs
Noida
Work from Office
About the Role: We are seeking a highly skilled and motivated Backend Developer with 2 to 5 years of experience to design and implement a high-performance, secure, and scalable server-side architecture for our trading terminal. In this role, you will develop systems capable of processing large volumes of real-time financial data, ensuring low latency and exceptional reliability for mission-critical applications. Your expertise will be central to empowering data-driven trading experiences for our users. Key Responsibilities: Service Architecture & Development: Design, develop, and maintain high-performance backend services, RESTful APIs, and microservices. Architect systems that efficiently process and analyze large-scale real-time market data. Develop robust, modular, and scalable server-side logic to support complex trading transactions. Data Management & Integration: Build and optimize data pipelines connecting external data providers, databases, and client applications. Integrate real-time data feeds using protocols such as WebSockets to enable seamless, live data updates (a consumer sketch follows this posting). Collaborate with frontend teams to ensure data consistency, reliability, and performance across the platform. Performance & Security: Optimize system performance with a focus on low latency, high throughput, and resource efficiency. Implement strong security measures including authentication, encryption, and secure API practices to protect sensitive financial data. Monitor system performance, troubleshoot, and resolve issues to ensure uninterrupted service during peak market conditions. Collaboration & Agile Development: Work closely with multi-disciplinary teams (frontend developers, product managers, and QA engineers) in an Agile setting. Participate actively in code reviews, design discussions, and strategy meetings to drive continuous improvement. Leverage CI/CD practices to implement automated testing, integration, and deployment pipelines for frequent yet stable releases. Innovation & Continuous Improvement: Stay updated on backend technologies, cloud services, container orchestration, and microservices architecture. Propose and experiment with new tools and techniques to improve system efficiency and scalability. Document best practices and contribute to a knowledge-sharing culture within the team. Required Qualifications: Experience: A minimum of 2 to 5 years in backend development with a demonstrable record of building robust web applications, APIs, or microservices. Technical Expertise: Proficiency in server-side programming with Node.js or Python (e.g., Django). Solid experience with both SQL (e.g., PostgreSQL, MySQL) and NoSQL (e.g., MongoDB, Redis) databases. Hands-on experience with cloud platforms (AWS, Azure, or Google Cloud Platform) and containerization tools (Docker, Kubernetes). Familiarity with real-time communication protocols (WebSockets, MQTT) and API design. Development Practices: Strong background in RESTful API development, microservices design, and automated testing methodologies. Experience with version control systems (Git) and CI/CD pipelines. A deep commitment to writing clean, maintainable, and well-documented code. Preferred Qualifications: Prior experience building backend solutions for financial or trading platforms. Familiarity with transaction processing systems and high-frequency trading requirements. Excellent problem-solving skills and strong collaboration capabilities in a fast-paced environment.
What We Offer: An engaging, innovative work environment focused on cutting-edge financial technology. A competitive compensation package and comprehensive benefits. Opportunities for professional growth, continuous learning, and career advancement. A chance to make a significant impact by shaping next-generation trading infrastructure. Application Process: Interested candidates should submit: a detailed resume outlining your relevant experience and technical expertise; links to GitHub repositories or portfolios that highlight your backend projects; and a cover letter describing your approach to scalable system design, your passion for financial technology, and how your skills align with our vision. You can share your CV on WhatsApp at 8115677271 (no calls).
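As a sketch of the real-time feed integration mentioned above, here is a WebSocket consumer using the Python websockets library; the URL, subscription message, and tick format are invented for illustration.

```python
import asyncio
import json

import websockets

async def consume_ticks(url: str) -> None:
    # Reconnect loop keeps the feed alive through transient disconnects.
    while True:
        try:
            async with websockets.connect(url) as ws:
                await ws.send(json.dumps({"subscribe": ["NIFTY", "BANKNIFTY"]}))
                async for raw in ws:
                    tick = json.loads(raw)
                    # Hand off to downstream processing (queue, cache, fan-out).
                    print(tick["symbol"], tick["ltp"])
        except websockets.ConnectionClosed:
            await asyncio.sleep(1)  # brief backoff before reconnecting

asyncio.run(consume_ticks("wss://feed.example.com/ticks"))
```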
Posted 3 days ago
3.0 - 6.0 years
11 - 20 Lacs
Bengaluru
Work from Office
Role & responsibilities: We are seeking a skilled Data Engineer to maintain robust data infrastructure and pipelines that support our operational analytics and business intelligence needs. The candidate will bridge the gap between data engineering and operations, ensuring reliable, scalable, and efficient data systems that enable data-driven decision making across the organization. Strong proficiency in Spark SQL and hands-on experience with real-time streaming using Kafka and Flink (a Structured Streaming sketch of this stack follows this posting). Databases: strong knowledge of relational databases (Oracle, MySQL) and NoSQL systems. Proficiency with version control (Git), CI/CD practices, and collaborative development workflows. Strong operations management and stakeholder communication skills. Flexibility to work across time zones. A cross-cultural communication mindset. Experience working in cross-functional teams. A continuous-learning mindset and adaptability to new technologies.
Preferred candidate profile: Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field. 3+ years of experience in data engineering, software engineering, or a related role. Proven experience building and maintaining production data pipelines. Expertise in the Hadoop ecosystem: Spark SQL, Iceberg, Hive, etc. Extensive experience with Apache Kafka, Apache Flink, and other relevant streaming technologies. Orchestration tools: Apache Airflow and UC4. Proficiency in Python, Unix shell, or similar. Good understanding of SQL across Oracle, SQL Server, NoSQL, or similar systems. Proficiency with version control (Git), CI/CD practices, and collaborative development workflows. Immediate joiners or candidates with a notice period of less than 30 days preferred.
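A minimal sketch of the Spark SQL + Kafka streaming stack this role names, reading a topic with Structured Streaming; the broker address, topic, schema, and output path are illustrative, and the spark-sql-kafka connector must be on the classpath.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DoubleType, StringType, StructType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

schema = (StructType()
          .add("event_id", StringType())
          .add("amount", DoubleType()))

# Read the Kafka topic and parse each message's JSON payload.
events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Write the parsed stream to a table path with checkpointing for exactly-once sinks.
query = (events.writeStream.format("parquet")
         .option("path", "/data/silver/events")
         .option("checkpointLocation", "/chk/events")
         .start())
query.awaitTermination()
```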
Posted 3 days ago
5.0 - 7.0 years
18 - 20 Lacs
Pune
Work from Office
Critical Skills to Possess: 5+ years of experience in data engineering or ETL development. 5+ years of hands-on experience with Informatica. Experience in production support, handling tickets, and monitoring ETL systems. Strong SQL skills with experience in querying large datasets. Familiarity with data warehousing concepts and design (e.g., star schema, snowflake schema). Experience with relational databases such as Oracle, SQL Server, or PostgreSQL. Knowledge of cloud platforms such as AWS, Azure, or GCP is a plus.
Preferred Qualifications: BS degree in Computer Science or Engineering, or equivalent experience.
Roles and Responsibilities: Design, develop, and maintain robust ETL pipelines using Informatica. Work with data architects and business stakeholders to understand data requirements and translate them into technical solutions. Integrate data from various sources including relational databases, flat files, APIs, and cloud-based systems. Optimize and troubleshoot existing Informatica workflows for performance and reliability. Monitor ETL workflows and proactively address failures, performance issues, and data anomalies. Respond to and resolve support tickets related to data loads, ETL job failures, and data discrepancies. Provide support for production data pipelines and jobs. Ensure data quality and consistency across different systems and pipelines. Implement data validation, error handling, and auditing mechanisms within ETL processes. Collaborate with data analysts, data scientists, and other engineers to ensure a consistent and accurate data platform. Maintain documentation of ETL processes, data flows, and technical designs. Monitor daily data loads and resolve any ETL failures or data quality issues.
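Informatica mappings are built in Informatica's own tooling, so as a generic illustration of the post-load validation this role performs, here is a row-count reconciliation sketch over two DB-API connections (e.g., oracledb for the source, psycopg2 for the target); the table names and alerting behavior are assumptions.

```python
def row_count(conn, table: str) -> int:
    # table should come from a trusted whitelist, not user input.
    with conn.cursor() as cur:
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        return cur.fetchone()[0]

def reconcile(src_conn, tgt_conn, table: str) -> None:
    src, tgt = row_count(src_conn, table), row_count(tgt_conn, table)
    if src != tgt:
        # In production this would raise a support ticket / alert, per the posting.
        raise RuntimeError(f"{table}: source={src} target={tgt} mismatch")
    print(f"{table}: {src} rows reconciled")
```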
Posted 3 days ago
3.0 - 5.0 years
15 - 20 Lacs
Pune
Work from Office
3+ years in BI operations, strong in BI monitoring and cross-functional collaboration. Monitor and support BI systems, resolve incidents, optimize workflows, and ensure data reliability across tools (e.g., Qlik, Power BI, Tableau, and SAP Analytics Cloud).
Required Candidate Profile: 3+ years in BI operations with hands-on experience in Power BI, Qlik, and Tableau. Skilled in troubleshooting, data modeling, ETL, and incident resolution. Strong in BI monitoring and cross-functional collaboration.
Posted 3 days ago
6.0 - 10.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
YipitData is a leading market research and analytics firm specializing in the disruptive economy, having recently secured a significant investment from The Carlyle Group valued at over $1B. Recognized for three consecutive years as one of Inc's Best Workplaces, we are a rapidly expanding technology company with offices across various locations globally, fostering a culture centered on mastery, ownership, and transparency. As a potential candidate, you will have the opportunity to collaborate with strategic engineering leaders and report directly to the Director of Data Engineering. This role involves contributing to the establishment of our Data Engineering team presence in India and working within a global team framework, tackling challenging big data problems. We are currently in search of a highly skilled Senior Data Engineer with 6-8 years of relevant experience to join our dynamic Data Engineering team. The ideal candidate should possess a solid grasp of Spark and SQL, along with experience in data pipeline development. Successful candidates will play a vital role in expanding our data engineering team, focusing on enhancing reliability, efficiency, and performance within our strategic pipelines. The Data Engineering team at YipitData sets the standard for all other analyst teams, maintaining and developing the core pipelines and tools that drive our products. This team plays a crucial role in supporting the rapid growth of our business and presents a unique opportunity for the first hire to potentially lead and shape the team as responsibilities evolve. This hybrid role will be based in India, with training and onboarding requiring overlap with US working hours initially. Subsequently, standard IST working hours are permissible, with occasional meetings with the US team. As a Senior Data Engineer at YipitData, you will work directly under the Senior Manager of Data Engineering, receiving hands-on training on cutting-edge data tools and techniques. Responsibilities include building and maintaining end-to-end data pipelines, establishing best practices for data modeling and pipeline construction, generating documentation and training materials, and proficiently resolving complex data pipeline issues using PySpark and SQL. Collaboration with stakeholders to integrate business logic into central pipelines and mastering tools like Databricks, Spark, and other ETL technologies is also a key aspect of the role. Successful candidates are likely to have a Bachelor's or Master's degree in Computer Science, STEM, or a related field, with at least 6 years of experience in Data Engineering or similar technical roles. An enthusiasm for problem-solving, continuous learning, and a strong understanding of data manipulation and pipeline development are essential. Proficiency in working with large datasets using PySpark, Delta, and Databricks, aligning data transformations with business needs, and a willingness to acquire new skills are crucial for success. Effective communication skills, a proactive approach, and the ability to work collaboratively with stakeholders are highly valued. In addition to a competitive salary, YipitData offers a comprehensive compensation package that includes various benefits, perks, and opportunities for personal and professional growth. Employees are encouraged to focus on their impact, self-improvement, and skill mastery in an environment that promotes ownership, respect, and trust.
Posted 4 days ago
1.0 - 4.0 years
25 - 30 Lacs
Thane
Work from Office
Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field. EsyCommerce is seeking a highly experienced Data Engineer to join our growing team in either Mumbai or Pune. This role requires a strong foundation in data engineering principles, coupled with experience in application development and data science techniques. The ideal candidate will be responsible for designing, developing, and maintaining robust data pipelines and applications, as well as leveraging analytical skills to transform data into valuable insights. This position calls for a blend of technical expertise, problem-solving abilities, and effective communication skills to drive data-driven solutions that meet business objectives.
Posted 5 days ago
8.0 - 13.0 years
18 - 22 Lacs
Hyderabad, Bengaluru
Work from Office
To Apply - Mandatory to submit details via Google Form - https://forms.gle/cCa1WfCcidgiSTgh8
Position: Senior Data Engineer - Total 8+ years required, relevant 6+ years in Databricks, AWS, Apache Spark & Informatica (required skills). As a Senior Data Engineer in our team, you'll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations. We are seeking an experienced Data Engineer to design, implement, and maintain robust data pipelines and analytics solutions using Databricks and AWS services. The ideal candidate will have a strong background in data services, big data technologies, and programming languages.
Role & responsibilities: Technical Leadership: Guide and mentor teams in designing and implementing Databricks solutions. Architecture & Design: Develop scalable data pipelines and architectures using Databricks Lakehouse. Data Engineering: Lead the ingestion and transformation of batch and streaming data. Performance Optimization: Ensure efficient resource utilization and troubleshoot performance bottlenecks. Security & Compliance: Implement best practices for data governance, access control, and compliance. Collaboration: Work closely with data engineers, analysts, and business stakeholders. Cloud Integration: Manage Databricks environments on Azure, AWS, or GCP. Monitoring & Automation: Set up monitoring tools and automate workflows for efficiency.
Qualifications: 6+ years of experience in Databricks and AWS, and 4+ years in Apache Spark and Informatica. Excellent problem-solving and leadership skills.
Good to have these skills: 1. Design and implement scalable, high-performance data pipelines using AWS services. 2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda (a Glue job skeleton follows this posting). 3. Build and maintain data lakes using S3 and Delta Lake. 4. Create and manage analytics solutions using Amazon Athena and Redshift. 5. Design and implement database solutions using Aurora, RDS, and DynamoDB. 6. Develop serverless workflows using AWS Step Functions. 7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL. 8. Ensure data quality, security, and compliance with industry standards. 9. Collaborate with data scientists and analysts to support their data needs. 10. Optimize data architecture for performance and cost-efficiency. 11. Troubleshoot and resolve data pipeline and infrastructure issues.
Preferred candidate profile (good to have): 1. Bachelor's degree in Computer Science, Information Technology, or a related field. 2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS. 3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3. 4. Experience with data lake technologies, particularly Delta Lake. 5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL. 6. Proficiency in Python and PySpark programming. 7. Strong SQL skills and experience with PostgreSQL. 8. Experience with AWS Step Functions for workflow orchestration.
Technical Skills (good to have): AWS services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions. Big data: Hadoop, Spark, Delta Lake. Programming: Python, PySpark. Databases: SQL, PostgreSQL, NoSQL. Data warehousing and analytics. ETL/ELT processes. Data lake architectures. Version control: Git. Agile methodologies.
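A hedged skeleton of the kind of AWS Glue job listed under "good to have" skills, reading from the Glue Data Catalog and writing Parquet to S3; the database, table, and bucket names are assumptions for illustration.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve arguments and initialize the job context.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the catalog, drop malformed rows, and write Parquet to S3.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales", table_name="raw_orders")
df = dyf.toDF().dropna(subset=["order_id"])

df.write.mode("overwrite").parquet("s3://example-curated/orders/")
job.commit()
```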
Posted 5 days ago
2.0 - 6.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
IENERGY is a leading provider of EHS and ESG software platforms that are specifically crafted to equip organizations with cutting-edge digital systems for Environment, Health & Safety, Sustainability, and ESG reporting. With a strong focus on leveraging advanced technology and extensive domain expertise, our platform seamlessly integrates real-time data to facilitate intuitive workflows and robust system integration. Trusted by a diverse user base of over 15,000 clients, which includes Fortune 500 companies, IENERGY is dedicated to delivering tangible business impact through swift deployment and scalable innovation. Our suite of offerings comprises IENERGY AURORA for IoT-driven remote monitoring, IENERGY AVALANCHE for enterprise risk management, IENERGY VISVA for AI-driven insights, and IENERGY ICEBERG for seamless data integration. As an IoT Solution Architect at IENERGY, you will hold a full-time on-site position based in Bhubaneswar. In this role, you will be entrusted with the pivotal responsibility of designing and implementing state-of-the-art IoT solutions, providing expert consulting services, and ensuring seamless integration with existing systems. Your daily tasks will revolve around collaborating with cross-functional teams to craft innovative software solutions, managing business processes effectively, and optimizing IoT architecture to drive superior performance and efficiency. Qualifications required for this role include proficiency in Solution Architecture and Integration, along with a solid background in Consulting, Business Process, and Software Development. Additionally, a minimum of 2-3 years of hands-on experience in setting up MQTT-protocol-enabled IoT devices (such as GPS and sensors) is essential. You should also possess 2-3 years of experience working with Kafka in a live data handling environment, as well as a similar duration of experience setting up data pipelines. Join us at IENERGY and be a part of a dynamic team that is at the forefront of digital solutions for EHS and ESG domains.
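To illustrate the MQTT-and-Kafka combination this role calls for, here is a sketch that bridges device telemetry from an MQTT broker into a Kafka topic using paho-mqtt (2.x callback API) and kafka-python; the hosts and topic names are invented.

```python
import paho.mqtt.client as mqtt
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="broker:9092")

def on_message(client, userdata, msg):
    # Forward each device reading to Kafka, keyed by its MQTT topic.
    producer.send("iot.readings", key=msg.topic.encode(), value=msg.payload)

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_message = on_message
client.connect("mqtt.example.com", 1883)
client.subscribe("devices/+/telemetry")  # '+' matches any device id
client.loop_forever()  # blocking network loop; run as a long-lived service
```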
Posted 5 days ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
You will be working as a Monitoring Team Lead for a Data Pipeline L1 team, overseeing the daily operations to ensure the health and stability of data pipelines, and managing incident response. Your role will involve leading the team, monitoring performance, and escalating issues as needed. As a Team Leader, you will guide and mentor the L1 monitoring team to ensure proficiency in data pipeline monitoring, troubleshooting, and escalation procedures. You will manage team performance, distribute tasks effectively, and resolve conflicts. Acting as a point of contact for the team, you will represent them to stakeholders and advocate for their needs. Your responsibilities will also include developing team strengths and promoting a positive work environment. In terms of Data Pipeline Monitoring, you will continuously monitor data pipelines for performance, availability, and data quality issues. Utilizing monitoring tools, you will detect and analyze alerts related to data pipelines to ensure data freshness, completeness, accuracy, consistency, and validity. For Incident Management, you are required to detect, log, categorize, and track incidents within the ticketing system. Any unresolved issues should be escalated to L2/L3 teams based on predefined SLAs and severity. You will also coordinate with other teams to resolve incidents quickly and efficiently while ensuring proper communication and updates to relevant stakeholders throughout the incident lifecycle. Managing Service Level Agreements (SLAs) related to data pipeline monitoring and incident response will be essential. You will monitor and ensure that the team meets or exceeds established SLAs. Process Improvement is another key aspect where you will identify opportunities to enhance monitoring processes, automation, and efficiency. Implementing best practices for data pipeline monitoring and incident management and conducting regular reviews of service performance are part of your responsibilities. Your role will also involve providing technical expertise to the team, staying updated on industry best practices and new technologies related to data pipelines and monitoring. Maintaining and updating documentation related to data pipeline monitoring processes, procedures, and escalation paths is crucial. Accurate shift handovers to the next shift, with updates on ongoing issues, will also be expected.
Qualifications: Proven experience in data pipeline monitoring and incident management. Strong understanding of data pipeline concepts, including ingestion, transformation, and storage. Experience with monitoring tools and technologies. Excellent communication, interpersonal, and leadership skills. Ability to work independently and as part of a team in a fast-paced environment. Experience with cloud services (AWS, Azure, or GCP) is a plus. Knowledge of data governance principles and practices is beneficial.
Skills to be evaluated on: Data Operation/Operations Team Lead. Mandatory Skills: Data Operation, Operations Team Lead. Desirable Skills: Lead Operations, data operations, operations management, team management.
Posted 6 days ago
8.0 - 12.0 years
12 - 22 Lacs
Hyderabad
Remote
Tech stack - Databases: MongoDB, S3, Postgres. Strong experience with data pipelines and mapping. React, Node, Python. AWS, Lambda.
About the job. Summary: We are seeking a detail-oriented and proactive Data Analyst to lead our file and data operations, with a primary focus on managing data intake from our clients and ensuring data integrity throughout the pipeline. This role is vital to our operational success and will work cross-functionally to support data ingestion, transformation, validation, and secure delivery. The ideal candidate must have hands-on experience with healthcare datasets, especially medical claims data, and be proficient in managing ETL processes and data operations at scale.
Responsibilities: File Intake & Management: Serve as the primary point of contact for receiving files from clients, ensuring all incoming data is tracked, validated, and securely stored. Monitor and automate data file ingestion using tools such as AWS S3, AWS Glue, or equivalent technologies (a validation sketch follows this posting). Troubleshoot and resolve issues related to missing or malformed files and ensure timely communication with internal and external stakeholders.
Data Operations & ETL: Develop, manage, and optimize ETL pipelines for processing large volumes of structured and unstructured healthcare data. Perform data quality checks, validation routines, and anomaly detection across datasets. Ensure consistency and integrity of healthcare data (e.g., EHR, medical claims, ICD/CPT/LOINC codes) during transformations and downstream consumption.
Data Analysis & Reporting: Collaborate with data science and analytics teams to deliver operational insights and performance metrics. Build dashboards and visualizations using Power BI or Tableau to monitor data flow, error rates, and SLA compliance. Generate summary reports and audit trails to ensure HIPAA-compliant data handling practices.
Process Optimization: Identify opportunities for automation and efficiency in file handling and ETL processes. Document procedures, workflows, and data dictionaries to standardize operations.
Required Qualifications: Bachelor's or Master's degree in Health Informatics, Data Analytics, Computer Science, or a related field. 5+ years of experience in a data operations or analyst role with a strong focus on healthcare data. Demonstrated expertise in working with medical claims data, EHR systems, and healthcare coding standards (e.g., ICD, CPT, LOINC, SNOMED, RxNorm). Strong programming and scripting skills in Python and SQL for data manipulation and automation. Hands-on experience with AWS, Redshift, RDS, S3, and data visualization tools such as Power BI or Tableau. Familiarity with HIPAA compliance and best practices in handling protected health information (PHI). Excellent problem-solving skills, attention to detail, and communication abilities.
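A sketch of the intake-and-validate step described above, assuming client claim files land in S3 as CSV and are checked with pandas; the bucket, required columns, and quality rules are illustrative assumptions.

```python
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")

REQUIRED = ["claim_id", "member_id", "icd_code", "amount"]

def validate_intake(bucket: str, key: str) -> pd.DataFrame:
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    df = pd.read_csv(io.BytesIO(body))

    # Structural checks: required columns and unique claim identifiers.
    missing = set(REQUIRED) - set(df.columns)
    if missing:
        raise ValueError(f"{key}: missing columns {sorted(missing)}")
    if df["claim_id"].duplicated().any():
        raise ValueError(f"{key}: duplicate claim_ids")

    # Row-level checks: flag negative amounts or null member ids for review.
    bad = df[(df["amount"] < 0) | df["member_id"].isna()]
    print(f"{key}: {len(df)} rows, {len(bad)} flagged for review")
    return df

validate_intake("example-client-intake", "claims/2024-07-01.csv")
```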
Posted 6 days ago
5.0 - 10.0 years
19 - 20 Lacs
Bengaluru
Remote
Hi candidates, we have job openings in one of our MNC clients. Interested candidates can apply here and share details with chandrakala.c@i-q.co. Note: notice period of 0-15 days, or currently serving notice, only.
Role & responsibilities: We are looking for Data Managers. Work experience: minimum 5 years (mandatory). Location: Remote (India).
JD: The data modeler designs, implements, and documents data architecture and data modeling solutions, which include the use of relational, dimensional, and NoSQL databases. These solutions support enterprise information management, business intelligence, machine learning, data science, and other business interests. The successful candidate will: be responsible for the development of the conceptual, logical, and physical data models, and the implementation of RDBMS, operational data stores (ODS), data marts, and data lakes on target platforms (SQL/NoSQL); oversee and govern the expansion of existing data architecture and the optimization of data query performance via best practices. The candidate must be able to work independently and collaboratively.
Responsibilities: Implement business and IT data requirements through new data strategies and designs across all data platforms (relational, dimensional, and NoSQL) and data tools (reporting, visualization, analytics, and machine learning). Work with business and application/solution teams to implement data strategies, build data flows, and develop conceptual/logical/physical data models. Define and govern data modeling and design standards, tools, best practices, and related development for enterprise data models. Identify the architecture, infrastructure, and interfaces to data sources, tools supporting automated data loads, security concerns, analytic models, and data visualization. Hands-on modeling, design, configuration, installation, performance tuning, and sandbox POC. Work proactively and independently to address project requirements and articulate issues/challenges to reduce project delivery risks.
Skills: Bachelor's or Master's degree in computer/data science, or related technical experience. 5+ years of hands-on relational, dimensional, and/or analytic experience (using RDBMS, dimensional and NoSQL data platform technologies, and ETL and data ingestion protocols). Experience with data warehouses, data lakes, and enterprise big data platforms in multi-datacenter contexts required. Good knowledge of metadata management, data modeling, and related tools (Erwin, ER/Studio, or others) required. Experience in team management, communication, and presentation.
Preferred candidate profile
Posted 6 days ago
0.0 - 2.0 years
8 - 12 Lacs
Bengaluru
Work from Office
We are looking for an experienced data engineer to join our team. You will use various methods to transform raw data into useful data systems. For example, you'll create algorithms and conduct statistical analysis. Overall, you'll strive for efficiency by aligning data systems with business goals. To succeed in this data engineering position, you should have strong analytical skills and the ability to combine data from different sources. Data engineer skills also include familiarity with several programming languages and knowledge of machine learning methods.
Responsibilities: Analyze and organize raw data. Build data systems and pipelines. Interpret trends and patterns. Conduct complex data analysis and report on results. Prepare data for prescriptive and predictive modeling. Build algorithms and prototypes. Combine raw information from different sources (a small sketch follows this posting). Explore ways to enhance data quality and reliability. Identify opportunities for data acquisition. Develop analytical tools and programs.
Requirements: Previous experience as a data engineer or in a similar role. Technical expertise with data models, data mining, and segmentation techniques. Knowledge of programming languages (e.g., Java and Python). Hands-on experience with SQL database design. Degree in Computer Science, IT, or a similar field; a Master's is a plus. Focus will be on building out our Python ETL processes and writing superb SQL. Use agile software development processes to make iterative improvements to our back-end systems. Model front-end and back-end data sources to help draw a more comprehensive picture of user flows throughout the system and to enable powerful data analysis. Build data pipelines that clean, transform, and aggregate data from disparate sources.
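As a small illustration of "combining raw information from different sources," here is a pandas sketch that merges a CSV export with a REST API payload; the file name, URL, and fields are hypothetical.

```python
import pandas as pd
import requests

# Source 1: a nightly CSV export from an internal system.
users = pd.read_csv("users_export.csv")  # columns: user_id, signup_date

# Source 2: engagement metrics from a REST API.
resp = requests.get("https://api.example.com/engagement", timeout=30)
engagement = pd.DataFrame(resp.json())   # columns: user_id, sessions

# Clean, join, and aggregate into an analysis-ready table.
merged = users.merge(engagement, on="user_id", how="left")
merged["sessions"] = merged["sessions"].fillna(0).astype(int)
summary = merged.groupby("signup_date", as_index=False)["sessions"].sum()
print(summary.head())
```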
Posted 6 days ago
2.0 - 5.0 years
0 - 0 Lacs
Kochi, Coimbatore
Work from Office
Role Summary: We are looking for a Data Engineer who will be responsible for designing and developing scalable data pipelines, managing data staging layers, and integrating multiple data sources through APIs and SQL-based systems. You'll work closely with analytics and development teams to ensure high data quality and availability. Key Responsibilities: Design, build, and maintain robust data pipelines and staging tables. Develop and optimize SQL queries for ETL processes and reporting. Integrate data from diverse APIs and external sources. Ensure data integrity, validation, and version control across systems. Collaborate with data analysts and software engineers to support analytics use cases. Automate data workflows and improve processing efficiency
Posted 6 days ago
The data pipeline job market in India is currently thriving, with a high demand for professionals who can design, build, and maintain data pipelines to support data-driven decision-making in various industries. Data pipeline roles require a strong understanding of data processing, ETL (extract, transform, load) processes, and data warehousing concepts.
The average salary range for data pipeline professionals in India varies based on experience levels. Entry-level positions typically start at INR 6-8 lakhs per annum, while experienced professionals can earn upwards of INR 15-20 lakhs per annum.
In the data pipeline field, a typical career progression may include roles such as:
- Junior Data Engineer
- Data Engineer
- Senior Data Engineer
- Data Engineering Manager
- Chief Data Officer
In addition to expertise in data pipelines, professionals in this field often benefit from having skills in:
- SQL
- Python or Java
- ETL tools (e.g., Apache NiFi, Talend)
- Data modeling
- Cloud platforms (AWS, GCP, Azure)
As you explore data pipeline jobs in India, remember to continuously enhance your skills, stay updated with industry trends, and practice mock interviews to prepare confidently. Good luck with your job search!