6.0 - 10.0 years
0 Lacs
chandigarh
On-site
As a Data Architect with over 6 years of experience, you will be responsible for designing and implementing modern data lakehouse architectures on cloud platforms such as AWS, Azure, or GCP. Your primary focus will be on defining data modeling, schema evolution, partitioning, and governance strategies to ensure high-performance and secure data access.

In this role, you will own the technical roadmap for scalable data platform solutions, ensuring they are aligned with enterprise needs and future growth. You will also provide architectural guidance and conduct code/design reviews across data engineering teams to maintain high standards of quality.

Your responsibilities will include building and maintaining reliable, high-throughput data pipelines for the ingestion, transformation, and integration of structured, semi-structured, and unstructured data. You should have a solid understanding of data warehousing concepts, ETL/ELT pipelines, and data modeling. Experience with tools like Apache Spark (PySpark/Scala), Hive, DBT, and SQL for large-scale data transformation is essential for this role. You will be required to design ETL/ELT workflows using orchestration tools such as Apache Airflow, Temporal, or Apache NiFi.

In addition, you will lead and mentor a team of data engineers, providing guidance on code quality, design principles, and best practices. As a subject matter expert in data architecture, you will collaborate with DevOps, Data Scientists, Product Owners, and Business Analysts to understand data requirements and deliver solutions that meet their needs.
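For illustration, a minimal sketch of the kind of Airflow-orchestrated ETL/ELT workflow this role describes (assuming Apache Airflow 2.x; the DAG name, tasks, and target table are hypothetical, not taken from the posting):

```python
# Minimal extract -> transform -> load DAG, assuming Apache Airflow 2.x.
# Task names, the sample rows, and the target table are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Pull raw records from a source system (placeholder data).
    return [{"id": 1, "amount": 250.0}]


def transform(ti, **context):
    # Apply a simple derived column to the extracted records.
    rows = ti.xcom_pull(task_ids="extract")
    return [{**row, "amount_usd": row["amount"] * 1.0} for row in rows]


def load(ti, **context):
    # Persist transformed rows to the warehouse (placeholder logic).
    rows = ti.xcom_pull(task_ids="transform")
    print(f"loading {len(rows)} rows into analytics.daily_sales")


with DAG(
    dag_id="daily_sales_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load
```

In practice the extract, transform, and load callables would hand off to Spark, dbt, or warehouse SQL rather than plain Python functions; the sketch only shows the orchestration shape.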
Posted 1 day ago
2.0 - 6.0 years
0 Lacs
gujarat
On-site
As a Data Engineer at Tata Electronics Private Limited (TEPL), you will play a crucial role in architecting and implementing scalable offline data pipelines for manufacturing systems, ensuring the smooth functioning of various components such as AMHS, MES, SCADA, PLCs, vision systems, and sensor data. Your expertise in designing and optimizing ETL/ELT workflows using tools like Python, Spark, SQL, and Airflow will be instrumental in transforming raw data into actionable insights.

You will lead database design and performance tuning activities across SQL and NoSQL systems, optimizing schema design, queries, and indexing strategies to enhance the efficiency of manufacturing data processing. Your responsibilities will also include enforcing robust data governance through the implementation of data quality checks, lineage tracking, access controls, security measures, and retention policies.

In addition, you will be involved in optimizing storage and processing efficiency by strategically utilizing formats like Parquet, ORC, compression techniques, partitioning, and indexing for high-performance analytics. Implementing streaming data solutions using Kafka or RabbitMQ to handle real-time data flows and synchronization across control systems will be a key aspect of your role.

Collaborating cross-functionally with Platform Engineers, Data Scientists, Automation teams, IT Operations, Manufacturing, and Quality departments will be essential to ensure consistency across manufacturing systems and enable data consumption by downstream applications. You will also mentor junior engineers, establish best practices and documentation standards, and foster a data-driven culture within the organization.

Your essential attributes should include expertise in Python programming for building robust ETL/ELT pipelines, proficiency with the Hadoop ecosystem, hands-on experience with Apache Spark (PySpark), strong SQL skills, and proficiency in using Apache Airflow for orchestrating complex data workflows. Experience with real-time data streaming using Kafka or RabbitMQ, both SQL and NoSQL databases, data lake architectures, containerization, CI/CD practices, and data governance principles will be highly valued.

Qualifications:
- BE/ME degree in Computer Science, Electronics, Electrical

Desired Experience Level:
- Master's with a minimum of 2 years of relevant experience
- Bachelor's with a minimum of 4 years of relevant experience

Experience in the semiconductor industry would be considered a plus for this role.
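As an illustration of the streaming ingestion described above, here is a minimal Structured Streaming sketch that reads sensor events from Kafka and lands them as partitioned Parquet (assuming PySpark with the spark-sql-kafka package available; the broker, topic, schema, and paths are hypothetical):

```python
# Illustrative only: Kafka -> partitioned Parquet with Spark Structured Streaming.
# Broker address, topic, event schema, and storage paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, to_date
from pyspark.sql.types import DoubleType, StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("sensor-ingest").getOrCreate()

event_schema = StructType([
    StructField("machine_id", StringType()),
    StructField("reading", DoubleType()),
    StructField("event_time", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "sensor-events")
    .load()
)

events = (
    raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
    .withColumn("event_date", to_date(col("event_time")))
)

query = (
    events.writeStream.format("parquet")
    .option("path", "/data/lake/sensor_events")
    .option("checkpointLocation", "/data/checkpoints/sensor_events")
    .partitionBy("event_date")          # partitioning keeps downstream scans cheap
    .outputMode("append")
    .start()
)
```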
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
As a Data Platform Engineer at our company, you will be at the forefront of a significant transformation in our data capabilities. You will collaborate closely with a Staff Data Architect and Engineer to design and establish the next generation lakehouse and analytics platform. This platform will serve as the foundation driving our company's growth and innovation in the upcoming phase. In this role, you will have the unique opportunity to construct an entire modern data stack from scratch by integrating cutting-edge open-source tools and cloud-native infrastructure. Your responsibilities will include defining, designing, prototyping, and delivering a robust lakehouse and analytics platform utilizing best-in-class technologies. You will work with the Staff Data Architect to build the core infrastructure, covering ingestion, warehouse, transformation, orchestration, and BI/analytics layers. You will operate within an environment that involves GCP-based compute instances and containers, bringing advanced engineering principles to life through tools such as Airbyte, ClickHouse, dbt, metriql, Prefect, Metabase, and Next.js frontend. Additionally, you will optimize the platform for scalability and security, implement warehouse best practices, and enforce access controls to ensure performance for large-scale analytics. Furthermore, you will design flexible and scalable data models to support real-time analytics, complex business queries, and long-term historical analysis. Automation will be a key focus in data ingestion, validation, transformation, and reporting processes, driving operational efficiency. Close collaboration with Product, Engineering, and Leadership teams is essential to align platform objectives with broader company strategies. Mentorship to junior engineers and maintaining high standards of operational excellence across the data organization will also be part of your role. You will drive success metrics by aligning milestone progress with leadership goals focusing on platform adoption, scalability, reliability, and impact on business KPIs. To excel in this role, we are looking for individuals with 5+ years of experience in designing, building, and scaling modern data platforms. Strong expertise in ETL/ELT pipelines, cloud-based warehouses, database optimization, SQL, Python, and containerized data services is required. Deep understanding of cloud platforms, analytics workflows, BI environments, and experience with DevOps tools is highly valued. Ideal candidates will have direct experience setting up lakehouse architecture, familiarity with tools like Airbyte, dbt, Prefect, Metabase, metriql, or equivalents, and exposure to time-series data management and anomaly detection. The ability to thrive in a high-ownership, autonomous role with a startup energy is essential. Joining our team will allow you to architect a core platform that will drive the company's data products and operational excellence for years to come. As an early member of the team, you will have a significant impact on technology choices, design decisions, and operational frameworks. You will take on substantial ownership, work at a fast pace, and witness your contributions directly impact business outcomes. Our team is characterized by a culture of excellence, focusing on integrity, innovation, and rapid personal and professional growth. We encourage all passionate individuals interested in building transformative data platforms, even if they do not meet every qualification, to apply. 
Women and underrepresented groups are particularly encouraged to apply as research shows they tend to self-select out of opportunities where they don't meet all the listed criteria. Your skills and enthusiasm might make you the perfect addition to our team!
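For illustration, a minimal sketch of the kind of flow such a platform might orchestrate with Prefect, one of the tools named above (assuming Prefect 2.x; the tasks, sample data, and table name are hypothetical placeholders):

```python
# A minimal ingestion-and-transform flow sketch, assuming Prefect 2.x.
# The rows, tax rate, and target table name are placeholders.
from prefect import flow, task


@task(retries=2)
def ingest() -> list[dict]:
    # Pull raw rows from an upstream source (placeholder data).
    return [{"order_id": 1, "amount": 120.0}, {"order_id": 2, "amount": 80.0}]


@task
def transform(rows: list[dict]) -> list[dict]:
    # Add a derived column; real logic would typically live in dbt/warehouse SQL.
    return [{**r, "amount_with_tax": round(r["amount"] * 1.18, 2)} for r in rows]


@task
def load(rows: list[dict]) -> None:
    # Persist to the warehouse; here we only report the row count.
    print(f"loaded {len(rows)} rows into analytics.orders")


@flow(name="orders-elt")
def orders_elt():
    rows = ingest()
    load(transform(rows))


if __name__ == "__main__":
    orders_elt()
```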
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
A career in the Advisory Acceleration Centre at PwC is an opportunity to leverage PwC's global delivery capabilities and provide premium, cost-effective, and high-quality services to support client engagements. As an Azure Senior Developer at PwC - AC, you will collaborate with the Offshore Manager and Onsite Business Analyst to grasp the project requirements and take charge of the complete implementation of Cloud data engineering solutions.

Your role will involve utilizing your expertise in Azure cloud services such as Storage services (Blob, ADLS, PaaS SQL), Azure Data Factory, and Azure Synapse. You should excel in planning and organization, and have the ability to lead as a cloud developer in an agile team, delivering automated cloud solutions. With 4-8 years of hands-on experience, you must be proficient in Azure cloud computing, including big data technologies. Your deep understanding of traditional and modern data architecture and processing concepts will be critical, covering relational databases, data warehousing, big data, NoSQL, and business analytics.

Your experience with Azure ADLS, Databricks, Data Flows, HDInsight, and Azure Analysis Services will be essential, along with building stream-processing systems using solutions like Storm or Spark Streaming. Designing and implementing scalable ETL/ELT pipelines using Databricks and Apache Spark, optimizing data workflows, and understanding big data use cases and design patterns are key requirements. Your role will also involve architecture, design, implementation, and support of complex application architectures, as well as hands-on experience in implementing Big Data solutions using the Microsoft Data Platform and Azure Data Services.

Knowledge of Azure SQL DB, Azure Synapse Analytics, Azure HDInsight, Azure Data Lake Storage, Azure Data Lake Analytics, Azure Machine Learning, Stream Analytics, Azure Data Factory, Azure Cosmos DB, and Power BI is essential. Exposure to open-source technologies like Apache Spark, Hadoop, NoSQL, Kafka, and Solr/Elasticsearch, as well as expertise in quality processes, design strategies leveraging Azure and Databricks, and Application DevOps tools, will be advantageous.

Desired skills include experience in stream-processing systems, Big Data ML toolkits, Python, and AWS Architecture certification. Having worked in Offshore/Onsite engagements, experience with AWS services like STEP & Lambda, good project management skills, and knowledge of Cloud technologies like AWS, GCP, Informatica Cloud, Oracle Cloud, and Cloud DW - Snowflake & DBT are also beneficial. The ideal candidate should have a professional background in BE/B.Tech/MCA/M.Sc/M.E/M.Tech/MBA, possess good analytical and problem-solving skills, and excel in communication and presentation.

In conclusion, as an Azure Senior Developer at PwC - AC, you will play a pivotal role in delivering high-quality Cloud data engineering solutions, leveraging your expertise in Azure cloud services, big data technologies, and modern data architecture concepts to support client engagements effectively.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As a Senior Data Architect, you will play a crucial role in designing and implementing scalable, secure, and high-performance Big Data architectures using Databricks, Apache Spark, and cloud-native services. Your expertise will be essential in leading the end-to-end data architecture lifecycle, from requirements gathering to deployment and optimization. Additionally, you will design repeatable and reusable data ingestion pipelines for various ERP source systems like SAP, Salesforce, HR, Factory, and Marketing systems. Collaboration with cross-functional teams to integrate SAP data sources into modern data platforms will be a key aspect of your role, along with driving cloud cost optimization strategies and ensuring efficient resource utilization. Furthermore, you will provide technical leadership and mentorship to a team of data engineers and developers, while also developing and enforcing data governance, data quality, and security standards. Your ability to translate complex business requirements into technical solutions and data models will be crucial, as well as staying current with emerging technologies and industry trends in data architecture and analytics. Your skill set should include proven expertise in Databricks, Apache Spark, Delta Lake, and MLflow, along with strong programming skills in Python, SQL, and PySpark. Experience with SAP data extraction and integration, as well as hands-on experience with cloud platforms like Azure, AWS, or GCP, will be highly beneficial. Required qualifications include a minimum of 6 years of experience in Big Data architecture, Data Engineering, and AI-assisted BI solutions within Databricks and AWS technologies. A bachelor's degree in computer science, information technology, data science, data analytics, or related field is necessary. Additionally, you should have a solid understanding of data modeling, ETL/ELT pipelines, and data warehousing, along with demonstrated team leadership and project management capabilities. Strong communication, problem-solving, and stakeholder management skills are essential for success in this role. Preferred qualifications include experience in the manufacturing domain, certifications in Databricks, cloud platforms, or data architecture, and familiarity with CI/CD pipelines, DevOps practices, and infrastructure as code (e.g., Terraform).,
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
A career in our Advisory Acceleration Centre is the natural extension of PwC's leading-class global delivery capabilities. We provide premium, cost-effective, high-quality services that support process,
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
Capgemini is seeking a skilled and motivated Business Analyst to become part of the team dedicated to General Insurance Data Platform Modernization. The client is a prominent global insurance company recognized for its innovation, scale, and dedication to excellence. This position is integral to a strategic data modernization initiative aimed at revolutionizing the enterprise data landscape across various regions.

We are in search of an experienced Business Analyst with a strong background in data-driven projects within the financial services sector, particularly in general insurance. The ideal candidate will have a crucial role in connecting business requirements with technical implementation, facilitating the delivery of a centralized, scalable, and contemporary data architecture.

As a Business Analyst, your primary responsibility will involve enhancing our client's data analytics capabilities to better cater to their customers in the general insurance sector. You will collaborate extensively with stakeholders from different departments, translating business needs into actionable insights while ensuring that data-centric solutions align with overarching business objectives. Your expertise will drive projects to modernize and elevate data platforms, leading to a substantial impact on the organization.

Key Responsibilities:
- Gather and analyze business requirements from stakeholders to guide the modernization of the insurance data platform.
- Develop and maintain data models, ensuring compliance with regulatory requirements and industry norms.
- Formulate detailed specifications for data integration and transformation processes.
- Engage with cross-functional teams, including IT, data engineers, and data scientists, to implement solutions.
- Conduct workshops and meetings to involve stakeholders and effectively communicate findings.
- Monitor project progress, identify risks and issues, and propose mitigating actions to ensure timely delivery.
- Support user acceptance testing and validate solutions against business requirements.

Location: India

Requirements:
- 7-10 years of overall experience, with 5-6 years in data engineering/data-driven projects.
- Extensive domain expertise in financial services, with a preference for general insurance.
- Mandatory hands-on experience with Azure, Data Lake, and Databricks.
- Thorough understanding of ETL/ELT pipelines, data warehousing, and data lakes.
- Proven track record of working on intricate data architectures and global data integration projects.
- Excellent verbal and written communication skills, confident in client-facing roles.
- Strong interpersonal and collaboration skills to effectively work with cross-functional teams.
- Ability to work independently and manage multiple priorities in a fast-paced environment.

Other Desired Skills:
- Knowledge of general insurance products and processes.
- Experience in data governance and compliance standards.

Benefits: Competitive compensation and benefits package, including a competitive salary, performance-based bonuses, comprehensive benefits, career development opportunities, flexible work arrangements, a dynamic work culture, private health insurance, retirement benefits, paid time off, and training and development. Please note that benefits may vary depending on the employee level.

About Capgemini: Capgemini is a global leader in collaborating with companies to transform and oversee their businesses by leveraging technology.
The organization is guided by its purpose of unleashing human energy through technology for an inclusive and sustainable future. With over 55 years of industry expertise, Capgemini is entrusted by clients worldwide to address their diverse business needs. The group comprises over 350,000 team members in more than 50 countries and reported revenues of €22 billion in 2024.
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
hyderabad, telangana
On-site
As a member of the data engineering team at PepsiCo, you will play a crucial role in developing and overseeing data product build & operations. Your primary responsibility will be to drive a strong vision for how data engineering can proactively create a positive impact on the business. Working alongside a team of data engineers, you will build data pipelines, rest data on the PepsiCo Data Lake, and facilitate exploration and access for analytics, visualization, machine learning, and product development efforts across the company.

Your contributions will directly impact the design, architecture, and implementation of PepsiCo's flagship data products in areas such as revenue management, supply chain, manufacturing, and logistics. You will collaborate closely with process owners, product owners, and business users, operating in a hybrid environment that includes in-house, on-premise data sources as well as cloud and remote systems.

Your responsibilities will include active contribution to code development, managing and scaling data pipelines, building automation and monitoring frameworks for data pipeline quality and performance, implementing best practices around systems integration, security, performance, and data management, and empowering the business through increased adoption of data, data science, and business intelligence. Additionally, you will collaborate with internal clients, drive solutioning and POC discussions, and evolve the architectural capabilities of the data platform by engaging with enterprise architects and strategic partners.

To excel in this role, you should have 6+ years of overall technology experience, including 4+ years of hands-on software development, data engineering, and systems architecture. You should also possess 4+ years of experience with Data Lake Infrastructure, Data Warehousing, and Data Analytics tools, along with expertise in SQL optimization, performance tuning, and programming languages like Python, PySpark, and Scala. Experience in cloud data engineering, specifically in Azure, is essential, and familiarity with Azure cloud services is a plus.

You should have experience in data modeling, data warehousing, building ETL pipelines, and working with data quality tools. Proficiency in MPP database technologies, cloud infrastructure, containerized services, version control systems, deployment & CI tools, and Azure services like Data Factory, Databricks, and Azure Machine Learning tools is desired. Additionally, experience with Statistical/ML techniques, retail or supply chain solutions, metadata management, data lineage, data glossaries, agile development, DevOps and DataOps concepts, and business intelligence tools will be advantageous. A degree in Computer Science, Math, Physics, or related technical fields is preferred for this role.
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
hyderabad, telangana
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

We are looking for a BI Senior Consultant to join the EY GDS Team. As part of our EY GDS TechOps team, you will be responsible for providing expert-level business intelligence support with a strong focus on Power BI and Databricks. You will work across various regions for our global clients, helping to design, develop, and maintain insightful and scalable data solutions. You will collaborate closely with cross-functional teams to understand business needs, transform data into actionable insights, and continuously improve reporting solutions to meet dynamic business requirements.

This is a fantastic opportunity to be part of a leading firm while playing a key role in its growth. You will work with a high-quality team to support global clients, ensuring data-driven decision-making through best-in-class analytics, automation, and innovation, all within an international and collaborative environment.

To qualify for the role, you must have a Bachelor's degree in a related technology field (Computer Science, Engineering, Data Analytics, etc.) or equivalent work experience. You should have 3-7 years of hands-on experience in Business Intelligence, with strong proficiency in Power BI and Databricks; experience working with global clients is preferred. Proven experience in building and supporting BI dashboards, data models, and reports, ensuring minimal disruption to business operations, is required. You should have the ability to analyze user-reported issues, identify root causes, and deliver effective solutions in a timely manner. Experience collaborating with stakeholders to understand reporting needs, gather requirements, and develop scalable data solutions is essential. A strong grasp of ETL processes, data modeling, and data visualization best practices is necessary. The ability to interpret business needs and translate them into technical solutions that enhance efficiency and data-driven decision-making is crucial. Excellent cross-functional communication skills and experience working in offshore/onshore delivery models are a must. You should have the ability to troubleshoot and resolve data discrepancies, report errors, and performance issues related to BI tools. Being a self-starter with the ability to work independently in fast-paced, time-critical environments is important. Flexibility in managing work hours due to the volatile nature of Application Management work, including the ability to do shifts and be on call for critical business requirements, is required.

Ideally, you'll also have experience working with cloud-based data platforms such as Azure, especially in data engineering and analytics contexts. Strong knowledge of data integration from various sources (e.g., CRM, ERP, POS systems, web analytics), with experience in building robust ETL/ELT pipelines, is beneficial. Proficiency in Databricks, including the use of Delta Lake, SQL, and PySpark for data transformation and processing, is a plus. Familiarity with integrating Power BI dashboards with diverse data sources, including cloud storage, data warehouses, and APIs, is an advantage. Experience working in or supporting clients in retail or consumer goods industries is a plus.
Certifications such as Microsoft Certified: Data Analyst Associate, Databricks Certified Data Engineer Associate, or similar credentials are a strong advantage.

As a BI Senior Consultant, your responsibilities will include providing day-to-day Application Management support for Business Intelligence and Data Analytics solutions, including handling service requests, incident resolution, enhancements, change management, and problem management. You will lead and coordinate root cause analysis for data/reporting issues, bugs, and performance bottlenecks, implementing corrective actions and improvements as needed. Collaborating with business users and technical stakeholders to gather requirements, understand data needs, and provide advice on Power BI dashboards, data models, and Databricks pipelines will be part of your role. You will develop and maintain comprehensive documentation, including data flow diagrams, dashboard usage guides, and test cases/scripts for quality assurance. Flexibility in managing work hours due to the volatile nature of Application Management work, including the ability to do shifts and be on call for critical business requirements, is essential. We are looking for individuals with client orientation, experience, and enthusiasm to learn new things in this fast-moving environment.

This is an opportunity to be part of a market-leading, multi-disciplinary team of hundreds of professionals, with opportunities to work with EY BI application maintenance practices globally and with leading businesses across a range of industries. At EY, we're dedicated to helping our clients, from startups to Fortune 500 companies - and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer support, coaching, and feedback from engaging colleagues, opportunities to develop new skills and progress your career, and the freedom and flexibility to handle your role in a way that's right for you.

About EY: As a global leader in assurance, tax, transaction, and advisory services, we're using the finance products, expertise, and systems we've developed to build a better working world. That starts with a culture that believes in giving you the training, opportunities, and creative freedom to make things better. Whenever you join, however long you stay, the exceptional EY experience lasts a lifetime. And with a commitment to hiring and developing the most passionate people, we'll make our ambition to be the best employer by 2025 a reality. If you can confidently demonstrate that you meet the criteria above, please contact us as soon as possible. Join us in building a better working world. Apply now!

EY | Building a better working world. EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate.
Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 3 weeks ago
6.0 - 10.0 years
0 Lacs
haryana
On-site
As an API & Data Integration Engineer at our Advanced Intelligence Work Group, you will be responsible for designing, building, and maintaining backend integrations across internal systems, third-party APIs, and AI copilots. Your role will involve API development, data pipeline engineering, no-code automation, and cloud-based architecture with a primary focus on scalability, security, and compliance.

Key Responsibilities:
- Building and maintaining RESTful APIs using FastAPI
- Integrating third-party services such as CRMs and SaaS tools
- Developing and managing ETL/ELT pipelines for real-time and batch data flows
- Automating workflows using tools like n8n, Zapier, Make.com
- Collaborating with AI/ML teams to ensure clean and accessible data
- Ensuring API performance, monitoring, and security
- Maintaining integration documentation and ensuring GDPR/CCPA compliance

To excel in this role, you should possess:
- 5+ years of experience in API development, backend, or data integration
- Strong proficiency in Python, with scripting knowledge in JavaScript or Go
- Experience with databases like PostgreSQL, MongoDB, and DynamoDB
- Hands-on experience with AWS/Azure/GCP and serverless tools such as Lambda and API Gateway
- Familiarity with OAuth2, JWT, and SAML
- Proven track record in building and managing data pipelines
- Comfort with no-code/low-code platforms like n8n and Zapier

Nice to Have:
- Experience with technologies like Kafka, RabbitMQ, Kong, and Apigee
- Familiarity with monitoring tools like Datadog and Postman
- Cloud or integration certifications

Tech Stack (Mandate): FastAPI, OpenAPI/Swagger, Python, JavaScript or Go, PostgreSQL, MongoDB, DynamoDB, AWS/Azure/GCP, Lambda, API Gateway, ETL/ELT pipelines, n8n, Zapier, Make.com, OAuth2, JWT, SAML, GDPR, CCPA

What We Value:
- Strong problem-solving and debugging skills
- Cross-functional collaboration with Data, Product, and AI teams
- A proactive "Get Stuff Done" attitude

Join us in this exciting opportunity to work on cutting-edge technology and make a significant impact in the field of API & Data Integration Engineering.
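For illustration, a minimal FastAPI sketch of the kind of integration endpoint this role describes (the route, data model, and token check are hypothetical placeholders, not the team's actual API or auth design):

```python
# Minimal sketch of a bearer-protected ingestion endpoint with FastAPI.
# The Contact model, route, and token check are illustrative placeholders.
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer
from pydantic import BaseModel

app = FastAPI(title="crm-integration")
bearer = HTTPBearer()


class Contact(BaseModel):
    email: str
    full_name: str
    source_system: str = "crm"


def verify_token(creds: HTTPAuthorizationCredentials = Depends(bearer)) -> str:
    # Placeholder check; a real service would validate a signed JWT here.
    if creds.credentials != "expected-token":
        raise HTTPException(status_code=401, detail="invalid token")
    return creds.credentials


@app.post("/contacts", status_code=201)
def ingest_contact(contact: Contact, _: str = Depends(verify_token)) -> dict:
    # In practice this would enqueue the record for a downstream pipeline.
    return {"status": "accepted", "email": contact.email}
```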
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
The job is based in Hyderabad/Bangalore/Chennai and at Gramener, you will find a welcoming work environment with diverse colleagues, a clear career path, and ample opportunities for growth and innovation. The company aims to develop a range of easily configurable data applications focused on storytelling for both public and private use.

As part of your role, you will be involved in various impactful customer technical data platform projects, taking the lead on strategic initiatives that cover the design, development, and deployment of cutting-edge solutions. Collaboration with platform engineering teams will be key in implementing Databricks services effectively within the company's infrastructure. Your expertise with Databricks Unity Catalog will be crucial in establishing strong data governance and lineage capabilities. Implementing CI/CD practices to streamline the deployment of Databricks solutions will be part of your responsibilities. Additionally, you will contribute to the development of a data mesh architecture that promotes decentralized data ownership and accessibility across the organization, enabling teams to derive actionable insights from their data.

In terms of skills and qualifications, you should have expertise in Azure Architecture and Platform, including Azure Data Lake, AI/ML model hosting, Key Vault, Event Hub, Logic Apps, and other Azure cloud services. Strong integration with Azure, workflow orchestration, and governance in Databricks development is required. Hands-on experience with scalable ETL/ELT pipelines, Delta Lake, and enterprise data management is essential in Data Engineering & Architecture. Your coding and implementation skills should encompass modular design, CI/CD, version control, and best coding and design practices in Software Engineering. Proficiency in Python and PySpark is necessary for building reusable packages and components catering to both technical and business users. Experience with enterprise processes, structured environments, and compliance frameworks is valuable, along with knowledge of data lineage, GxP, HIPAA, GDP compliance, and regulatory requirements in Governance and Security. Familiarity with Pharma, MedTech, and Life Sciences domains is a plus, including an understanding of industry-specific data, regulatory constraints, and security considerations.

Gramener specializes in providing data-driven decision-making solutions to organizations, helping them leverage data as a strategic asset. The company offers strategic data consulting services to guide organizations in making data-driven decisions and transforming data into a competitive advantage. Through a range of products, solutions, and services, Gramener analyzes and visualizes large volumes of data to drive insights and decision-making processes. To learn more about Gramener, visit the company's website and blog.
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As an AWS Consultant specializing in Infrastructure, Data & AI, and Databricks, you will play a crucial role in designing, implementing, and optimizing AWS Infrastructure solutions. Your expertise will be utilized to deliver secure and scalable data solutions using various AWS services and platforms. Your responsibilities will also include architecting and implementing ETL/ELT pipelines, data lakes, and distributed compute frameworks. You will be expected to work on automation and infrastructure as code using tools like CloudFormation or Terraform, and manage deployments through AWS CodePipeline, GitHub Actions, or Jenkins. Collaboration with internal teams and clients to gather requirements, assess current-state environments, and define cloud transformation strategies will be a key aspect of your role. Your support during pre-sales and delivery cycles will involve contributing to RFPs, SOWs, LOEs, solution blueprints, and technical documentation. Ensuring best practices in cloud security, cost governance, and compliance will be a priority. The ideal candidate for this position will possess 3 to 5 years of hands-on experience with AWS services, a Bachelor's degree or equivalent experience, and a strong understanding of cloud networking, IAM, security best practices, and hybrid connectivity. Proficiency in Databricks on AWS, experience with data modeling, ETL frameworks, and working with structured/unstructured data are required skills. Additionally, you should have working knowledge of DevOps tools and processes in the AWS ecosystem, strong documentation skills, and excellent communication abilities to translate business needs into technical solutions. Preferred certifications for this role include AWS Certified Solutions Architect - Associate or Professional, AWS Certified Data Analytics - Specialty (preferred), and Databricks Certified Data Engineer Associate/Professional (a plus).,
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
guwahati, assam
On-site
We are looking for a highly skilled Software Engineer with strong expertise in Python and a solid understanding of data engineering principles to join our team. As a Software Engineer, you will be responsible for developing and optimizing scalable applications and data workflows, integrating diverse data sources, and supporting the development of data-driven products. This role requires hands-on experience in software development, data modeling, ETL/ELT pipelines, APIs, and cloud-based data systems. You will collaborate closely with product, data, and engineering teams to build high-quality, maintainable, and efficient solutions that support analytics, machine learning, and business intelligence initiatives.

In this role, your responsibilities will include software development tasks such as designing, developing, and maintaining Python-based applications, APIs, and microservices with a strong focus on performance, scalability, and reliability. You will write clean, modular, and testable code following best software engineering practices, participate in code reviews, debugging, and optimization of existing applications, and integrate third-party APIs and services as required for application features or data ingestion.

On the data engineering side, you will build and optimize data pipelines (ETL/ELT) for ingesting, transforming, and storing structured and unstructured data. You will work with relational and non-relational databases to ensure efficient query performance and data integrity, collaborate with the analytics and ML teams to ensure data availability, quality, and accessibility for downstream use cases, and implement data modeling, schema design, and version control for data pipelines.

Additionally, you will be involved in deploying and managing solutions on cloud platforms (AWS/Azure/GCP) using services such as S3, Lambda, Glue, BigQuery, or Snowflake, implementing CI/CD pipelines, and participating in DevOps practices for automated testing and deployment. You will also monitor and optimize application and data pipeline performance using observability tools.

Furthermore, you will work cross-functionally with software engineers, data scientists, analysts, and product managers to understand requirements and translate them into technical solutions. You will provide technical guidance and mentorship to junior developers and data engineers as needed and document architecture, code, and processes to ensure maintainability and knowledge sharing.

The ideal candidate should have a Bachelor's/Master's degree in Computer Science, Engineering, or a related field, along with 3+ years of experience in Python software development. Strong knowledge of data structures, algorithms, and object-oriented programming is required, as well as hands-on experience in building data pipelines. Proficiency with SQL and database systems, experience with cloud services and containerization, familiarity with message queues/streaming platforms, a strong understanding of APIs, RESTful services, and microservice architectures, and knowledge of CI/CD pipelines, Git, and testing frameworks are also desirable.
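For illustration, a small ingestion sketch of the kind of work described above: pulling records from a third-party REST API and landing them in a relational store (the endpoint, fields, and SQLite target are hypothetical placeholders; a production pipeline would use the team's actual warehouse and handle pagination and retries):

```python
# Illustrative only: fetch rows from a hypothetical REST API and upsert them
# into a local SQLite table. Endpoint, fields, and schema are placeholders.
import sqlite3

import requests


def fetch_orders(base_url: str) -> list[dict]:
    # Single-page GET against the source API; real code would paginate.
    resp = requests.get(f"{base_url}/orders", params={"page": 1}, timeout=30)
    resp.raise_for_status()
    return resp.json().get("results", [])


def load_orders(rows: list[dict], db_path: str = "orders.db") -> int:
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, amount REAL)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO orders (id, amount) VALUES (:id, :amount)", rows
    )
    conn.commit()
    conn.close()
    return len(rows)


if __name__ == "__main__":
    orders = fetch_orders("https://api.example.com/v1")
    print(f"loaded {load_orders(orders)} orders")
```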
Posted 3 weeks ago
15.0 - 19.0 years
0 Lacs
hyderabad, telangana
On-site
As a Technical Lead / Data Architect, you will play a crucial role in our organization by leveraging your expertise in modern data architectures, cloud platforms, and analytics technologies. In this leadership position, you will be responsible for designing robust data solutions, guiding engineering teams, and ensuring successful project execution in collaboration with the project manager. Your key responsibilities will include architecting and designing end-to-end data solutions across multi-cloud environments such as AWS, Azure, and GCP. You will lead and mentor a team of data engineers, BI developers, and analysts to deliver on complex project deliverables. Additionally, you will define and enforce best practices in data engineering, data warehousing, and business intelligence. You will design scalable data pipelines using tools like Snowflake, dbt, Apache Spark, and Airflow, and act as a technical liaison with clients, providing strategic recommendations and maintaining strong relationships. To be successful in this role, you should have at least 15 years of experience in IT with a focus on data architecture, engineering, and cloud-based analytics. You must have expertise in multi-cloud environments and cloud-native technologies, along with deep knowledge of Snowflake, Data Warehousing, ETL/ELT pipelines, and BI platforms. Strong leadership and mentoring skills are essential, as well as excellent communication and interpersonal abilities to engage with both technical and non-technical stakeholders. In addition to the required qualifications, certifications in major cloud platforms and experience in enterprise data governance, security, and compliance are preferred. Familiarity with AI/ML pipeline integration would be a plus. We offer a collaborative work environment, opportunities to work with cutting-edge technologies and global clients, competitive salary and benefits, and continuous learning and professional development opportunities. Join us in driving innovation and excellence in data architecture and analytics.,
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
noida, uttar pradesh
On-site
As a Data Engineer at GlobalLogic, you will be responsible for architecting, building, and maintaining complex ETL/ELT pipelines for batch and real-time data processing using various tools and programming languages. Your key duties will include optimizing existing data pipelines for performance, cost-effectiveness, and reliability, as well as implementing data quality checks, monitoring, and alerting mechanisms to ensure data integrity. Additionally, you will play a crucial role in ensuring data security, privacy, and compliance with relevant regulations such as GDPR and local data laws.

To excel in this role, you should possess a Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field. Excellent analytical, problem-solving, and critical thinking skills with meticulous attention to detail are essential. Strong communication (written and verbal) and interpersonal skills are also required, along with the ability to collaborate effectively with cross-functional teams. Experience with Agile/Scrum development methodologies is considered a plus.

Your responsibilities will involve providing technical leadership and architecture by designing and implementing robust, scalable, and efficient data architectures that align with organizational strategy and future growth. You will define and enforce data engineering best practices, evaluate and recommend new technologies, and oversee the end-to-end data development lifecycle. As a leader, you will mentor and guide a team of data engineers, conduct code reviews, provide feedback, and promote a culture of engineering excellence. You will collaborate closely with data scientists, data analysts, software engineers, and business stakeholders to understand data requirements and translate them into technical solutions. Your role will also involve communicating complex technical concepts and data strategies effectively to both technical and non-technical audiences.

At GlobalLogic, we offer a culture of caring, continuous learning and development opportunities, interesting and meaningful work, balance and flexibility, and a high-trust environment. By joining our team, you will have the chance to work on impactful projects, engage your curiosity and problem-solving skills, and contribute to shaping cutting-edge solutions that redefine industries. With a commitment to integrity and trust, GlobalLogic provides a safe, reliable, and ethical global environment where you can thrive both personally and professionally.
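For illustration, a minimal sketch of the kind of batch-level data quality checks this role describes, using pandas (the column names and rules are hypothetical placeholders, not taken from the posting):

```python
# A minimal batch data-quality check sketch using pandas.
# Column names and rules are illustrative placeholders.
import pandas as pd


def run_quality_checks(df: pd.DataFrame) -> list[str]:
    """Return human-readable failures; an empty list means the batch passes."""
    failures = []
    if df["order_id"].isna().any():
        failures.append("null order_id values found")
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values found")
    if (df["amount"] < 0).any():
        failures.append("negative amounts found")
    return failures


if __name__ == "__main__":
    batch = pd.DataFrame(
        {"order_id": [1, 2, 2, None], "amount": [10.0, -5.0, 7.5, 3.0]}
    )
    problems = run_quality_checks(batch)
    # In a real pipeline these failures would raise an alert or fail the
    # orchestrator task instead of simply printing.
    print(problems or "all checks passed")
```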
Posted 4 weeks ago
7.0 - 11.0 years
0 Lacs
maharashtra
On-site
As a skilled Snowflake Developer with over 7 years of experience, you will be responsible for designing, developing, and optimizing Snowflake data solutions. Your expertise in Snowflake SQL, ETL/ELT pipelines, and cloud data integration will be crucial in building scalable data warehouses, implementing efficient data models, and ensuring high-performance data processing in Snowflake.

Your key responsibilities will include:
- Designing and developing Snowflake databases, schemas, tables, and views following best practices.
- Writing complex SQL queries, stored procedures, and UDFs for data transformation.
- Optimizing query performance using clustering, partitioning, and materialized views.
- Implementing Snowflake features such as Time Travel, Zero-Copy Cloning, Streams & Tasks.
- Building and maintaining ETL/ELT pipelines using Snowflake, Snowpark, Python, or Spark.
- Integrating Snowflake with cloud storage (S3, Blob) and data ingestion tools (Snowpipe).
- Developing CDC (Change Data Capture) and real-time data processing solutions.
- Designing star schema, snowflake schema, and data vault models in Snowflake.
- Implementing data sharing, secure views, and dynamic data masking.
- Ensuring data quality, consistency, and governance across Snowflake environments.
- Monitoring and optimizing Snowflake warehouse performance (scaling, caching, resource usage).
- Troubleshooting data pipeline failures, latency issues, and query bottlenecks.
- Collaborating with data analysts, BI teams, and business stakeholders to deliver data solutions.
- Documenting data flows, architecture, and technical specifications.
- Mentoring junior developers on Snowflake best practices.

Required Skills & Qualifications:
- 7+ years in database development, data warehousing, or ETL.
- 4+ years of hands-on Snowflake development experience.
- Strong SQL or Python skills for data processing.
- Experience with Snowflake utilities (SnowSQL, Snowsight, Snowpark).
- Knowledge of cloud platforms (AWS/Azure) and data integration tools (Coalesce, Airflow, DBT).
- Certifications: SnowPro Core Certification (preferred).

Preferred Skills:
- Familiarity with data governance and metadata management.
- Familiarity with DBT, Airflow, SSIS & IICS.
- Knowledge of CI/CD pipelines (Azure DevOps).
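For illustration, a minimal sketch of driving Snowflake from Python, touching two of the features named above, Time Travel and clustering (assuming the snowflake-connector-python package; the account, credentials, warehouse, and table are placeholders, not real objects):

```python
# Illustrative only: connect to Snowflake, run a Time Travel query, and set a
# clustering key. All connection parameters and object names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="***",
    warehouse="ANALYTICS_WH",
    database="SALES",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()

    # Time Travel: compare current rows against the table state one hour ago.
    cur.execute("SELECT COUNT(*) FROM orders AT(OFFSET => -3600)")
    print("rows one hour ago:", cur.fetchone()[0])

    # Performance tweak: cluster a large table on its most common predicate.
    cur.execute("ALTER TABLE orders CLUSTER BY (order_date)")
finally:
    conn.close()
```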
Posted 4 weeks ago
11.0 - 15.0 years
0 Lacs
hyderabad, telangana
On-site
As the Data Governance Tooling & Lifecycle Management Lead at McDonald's Corporation in Hyderabad, you will play a crucial role in developing and implementing end-to-end strategies for data governance tooling and processes across the enterprise. Your responsibilities will include owning the architecture, implementation, and administration of enterprise data governance platforms, such as Collibra, and defining governance workflows, metadata curation, and policy enforcement processes. You will work closely with various teams to ensure data is governed, discoverable, and trusted throughout its lifecycle. In this role, you will be responsible for developing and implementing strategies for data lifecycle governance, from ingestion to archival and deletion, while ensuring compliance with regulations and business needs. You will also lead initiatives to automate and visualize end-to-end data lineage across source systems, pipelines, warehouses, and BI tools. Collaborating with legal, compliance, and security teams, you will define and enforce data access, classification, and privacy policies to support regulatory compliance frameworks. To be successful in this role, you should have at least 11 years of experience in data governance, metadata management, or data operations, with a minimum of 3 years of experience in owning enterprise tooling or lifecycle processes. Deep expertise in data governance platforms, metadata and lineage management, cloud platforms such as GCP and AWS, SQL, ETL/ELT pipelines, and compliance practices is required. You should also possess excellent project management and stakeholder communication skills, along with a degree in Data Management, Information Systems, Computer Science, or a related field. Preferred experience includes working in Retail or QSR environments managing governance across global data operations, exposure to data product ownership, and familiarity with APIs and automation scripts. Holding a current GCP Associates or Professional Certification would be an added advantage. This is a full-time, hybrid role based in Hyderabad, India, where you will collaborate with data stewards, engineers, and product teams to ensure governance tooling meets user needs and drives adoption. Your contributions will be vital in reporting on governance adoption, data quality KPIs, and policy coverage to senior leadership and data councils. If you are looking to join a dynamic team at the forefront of innovation in the fast-food industry, this role offers a unique opportunity to make a significant impact on McDonald's global data governance initiatives.,
Posted 1 month ago
5.0 - 10.0 years
0 Lacs
maharashtra
On-site
You are a highly skilled and motivated Lead Data Scientist / Machine Learning Engineer sought to join a team pivotal in the development of a cutting-edge reporting platform. This platform is designed to measure and optimize online marketing campaigns effectively. Your role will involve focusing on data engineering, the ML model lifecycle, and cloud-native technologies.

You will be responsible for designing, building, and maintaining scalable ELT pipelines, ensuring high data quality, integrity, and governance. Additionally, you will develop and validate predictive and prescriptive ML models to enhance marketing campaign measurement and optimization. Experimenting with different algorithms and leveraging various models will be crucial in driving insights and recommendations. Furthermore, you will deploy and monitor ML models in production and implement CI/CD pipelines for seamless updates and retraining. You will work closely with data analysts, marketing teams, and software engineers to align ML and data solutions with business objectives. Translating complex model insights into actionable business recommendations and presenting findings to stakeholders will also be part of your responsibilities.

Qualifications & Skills:

Educational Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Artificial Intelligence, Statistics, or a related field.
- Certifications in Google Cloud (Professional Data Engineer, ML Engineer) are a plus.

Must-Have Skills:
- Experience: 5-10 years with the mentioned skill set and relevant hands-on experience.
- Data Engineering: Experience with ETL/ELT pipelines, data ingestion, transformation, and orchestration (Airflow, Dataflow, Composer).
- ML Model Development: Strong grasp of statistical modeling, supervised/unsupervised learning, time-series forecasting, and NLP.
- Programming: Proficiency in Python (Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch) and SQL for large-scale data processing.
- Cloud & Infrastructure: Expertise in GCP (BigQuery, Vertex AI, Dataflow, Pub/Sub, Cloud Storage) or equivalent cloud platforms.
- MLOps & Deployment: Hands-on experience with CI/CD pipelines, model monitoring, and version control (MLflow, Kubeflow, Vertex AI, or similar tools).
- Data Warehousing & Real-time Processing: Strong knowledge of modern data platforms for batch and streaming data processing.

Nice-to-Have Skills:
- Experience with Graph ML, reinforcement learning, or causal inference modeling.
- Working knowledge of BI tools (Looker, Tableau, Power BI) for integrating ML insights into dashboards.
- Familiarity with marketing analytics, attribution modeling, and A/B testing methodologies.
- Experience with distributed computing frameworks (Spark, Dask, Ray).

Location: Bengaluru
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
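For illustration, a minimal sketch of tracking one training run with MLflow, one of the MLOps tools named above (the synthetic dataset and Ridge baseline are placeholders, not the campaign-measurement models themselves):

```python
# Minimal MLflow run-tracking sketch with a toy scikit-learn model.
# The dataset, model, and metric are illustrative placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=10, noise=0.2, random_state=7)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=7)

with mlflow.start_run(run_name="ridge-baseline"):
    model = Ridge(alpha=0.5).fit(X_train, y_train)
    mae = mean_absolute_error(y_test, model.predict(X_test))

    # Log the hyperparameter, the evaluation metric, and the fitted model
    # so the run can be compared and promoted later.
    mlflow.log_param("alpha", 0.5)
    mlflow.log_metric("mae", mae)
    mlflow.sklearn.log_model(model, artifact_path="model")
    print(f"logged run with MAE={mae:.3f}")
```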
Posted 1 month ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
We are looking for a skilled Data Governance Engineer to take charge of developing and overseeing robust data governance frameworks on Google Cloud Platform (GCP). Your role will involve leveraging your expertise in data management, metadata frameworks, compliance, and security within cloud environments to ensure the implementation of high-quality, secure, and compliant data practices aligned with organizational objectives.

With a minimum of 4 years of experience in data governance, data management, or data security, you should possess hands-on proficiency with Google Cloud Platform (GCP) tools such as BigQuery, Dataflow, Dataproc, and Google Data Catalog. Additionally, a strong command of metadata management, data lineage, and data quality tools like Collibra and Informatica is crucial. A deep understanding of data privacy laws and compliance frameworks, coupled with proficiency in SQL and Python for governance automation, is essential. Experience with RBAC, encryption, and data masking techniques, along with familiarity with ETL/ELT pipelines and data warehouse architectures, will be advantageous.

Your responsibilities will include developing and executing comprehensive data governance frameworks with a focus on metadata management, lineage tracking, and data quality. You will be tasked with defining, documenting, and enforcing data governance policies, access control mechanisms, and security standards using GCP-native services like IAM, DLP, and KMS. Managing metadata repositories using tools such as Collibra, Informatica, Alation, or Google Data Catalog will also be part of your role. Collaborating with data engineering and analytics teams to ensure compliance with regulatory standards like GDPR, CCPA, and SOC 2, and automating processes for data classification, monitoring, and reporting using Python and SQL, will be key responsibilities. Supporting data stewardship initiatives and optimizing ETL/ELT pipelines and data workflows to adhere to governance best practices will also be part of your role.

At GlobalLogic, we offer a culture of caring, emphasizing inclusivity and personal growth. You will have access to continuous learning and development opportunities, engaging and meaningful work, as well as a healthy work-life balance. Join our high-trust organization where integrity is paramount, and collaborate with us to engineer innovative solutions that have a lasting impact on industries worldwide.
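For illustration, a small governance-automation sketch of the kind described above: scanning BigQuery metadata for columns that lack descriptions so stewards can fill them in (assuming the google-cloud-bigquery client library; the project and dataset IDs are placeholders):

```python
# Illustrative governance check: list BigQuery columns with no description.
# Project and dataset IDs are placeholders; output would normally feed a
# data catalog or a stewardship ticket rather than stdout.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")


def columns_missing_descriptions(dataset_id: str) -> list[str]:
    findings = []
    for table_item in client.list_tables(dataset_id):
        table = client.get_table(table_item.reference)
        for field in table.schema:
            if not field.description:
                findings.append(f"{table.table_id}.{field.name}")
    return findings


if __name__ == "__main__":
    gaps = columns_missing_descriptions("curated_sales")
    print(f"{len(gaps)} columns lack descriptions:", gaps[:10])
```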
Posted 1 month ago
7.0 - 11.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Senior Visualization Engineer at IDP Education Services India LLP, you will be instrumental in shaping and implementing the data visualization strategy. Your role will involve designing and developing interactive and scalable dashboards and reports using tools like Tableau/Power BI to effectively communicate insights to stakeholders. You will collaborate closely with data and business stakeholders to enable data-driven decision-making across the enterprise. Your key responsibilities will include translating business requirements into technical specifications and intuitive visualizations, creating reusable datasets to support scalable self-service analytics, and connecting to various internal and external data sources for data integration. You will work towards ensuring accurate data representation and governance alignment in the reporting layer. Moreover, you will engage with key stakeholders such as commercial, product, marketing, and client success teams to define and communicate business requirements for tracking and reporting. Your ability to present insights and prototypes effectively with clear storytelling and data narratives will be crucial in this role. Additionally, you are expected to contribute to standards and best practices in data visualization and BI development, stay updated with industry trends and technology innovations in data and analytics, and identify opportunities for automation, optimization, and simplification of analytics solutions. To be successful in this role, you should have a degree or equivalent qualification and at least 7 years of experience in data visualization and business intelligence. Proficiency in Tableau or equivalent tools, strong SQL skills, and familiarity with data warehousing and analytics concepts are essential. A certification in Power BI/Tableau would be desirable. Basic understanding of data engineering concepts and excellent proficiency in English (both spoken and written) are also required for effective engagement with cross-functional stakeholders and clear conveyance of ideas. Join us at IDP and be part of a global team dedicated to delivering success to students, test takers, and partners worldwide through trusted relationships, digital technology, and customer research. Visit www.careers.idp.com to learn more.,
Posted 1 month ago
4.0 - 8.0 years
0 Lacs
noida, uttar pradesh
On-site
We are looking for a skilled Data Governance Engineer to spearhead the development and supervision of robust data governance frameworks on Google Cloud Platform (GCP). You should have a deep understanding of data management, metadata frameworks, compliance, and security within cloud environments to ensure the adoption of high-quality, secure, and compliant data practices aligned with organizational objectives.

The ideal candidate should possess:
- Over 4 years of experience in data governance, data management, or data security.
- Hands-on expertise with Google Cloud Platform (GCP) tools like BigQuery, Dataflow, Dataproc, and Google Data Catalog.
- Proficiency in metadata management, data lineage, and data quality tools such as Collibra and Informatica.
- Comprehensive knowledge of data privacy laws and compliance frameworks.
- Strong skills in SQL and Python for governance automation.
- Experience with RBAC, encryption, and data masking techniques.
- Familiarity with ETL/ELT pipelines and data warehouse architectures.

Your main responsibilities will include:
- Developing and implementing comprehensive data governance frameworks emphasizing metadata management, lineage tracking, and data quality.
- Defining, documenting, and enforcing data governance policies, access control mechanisms, and security standards utilizing GCP-native services like IAM, DLP, and KMS.
- Managing metadata repositories using tools like Collibra, Informatica, Alation, or Google Data Catalog.
- Collaborating with data engineering and analytics teams to ensure compliance with GDPR, CCPA, SOC 2, and other regulatory standards.
- Automating processes for data classification, monitoring, and reporting using Python and SQL.
- Supporting data stewardship initiatives including the creation of data dictionaries and governance documentation.
- Optimizing ETL/ELT pipelines and data workflows to adhere to governance best practices.

At GlobalLogic, we offer:
- A culture of caring that prioritizes inclusivity, acceptance, and personal connections.
- Continuous learning and development opportunities to enhance your skills.
- Engagement in interesting and meaningful work with cutting-edge solutions.
- Balance and flexibility to help you integrate work and life effectively.
- A high-trust organization committed to integrity and ethical practices.

GlobalLogic, a Hitachi Group Company, is a leading digital engineering partner to world-renowned companies, focusing on creating innovative digital products and experiences. Join us to collaborate on transforming businesses through intelligent products, platforms, and services.
Posted 1 month ago
7.0 - 11.0 years
0 Lacs
karnataka
On-site
As a Databricks Architect, you will play a crucial role in leading the adoption and integration of Databricks as the enterprise AI/ML platform. Your responsibilities will include designing and implementing scalable, secure, and compliant Databricks-based architectures to support advanced analytics, data engineering, and machine learning workloads. You will collaborate with internal infrastructure, security, and compliance teams to ensure solutions align with organizational standards and regulatory requirements. Additionally, you will architect and implement robust data integrations with internal and external sources, enabling efficient data ingestion, transformation, and access for analytics and AI/ML workflows. Your role will also involve developing frameworks and tools that empower data scientists and analysts with self-service model development and deployment capabilities, while ensuring compliance with data sovereignty regulations.

As a key consultant and influencer, you will be expected to provide thought leadership, best practices, and innovative solutions to complex data and AI challenges. You will lead the implementation of a comprehensive ModelOps strategy to manage model lifecycle, governance, versioning, and deployment at scale, and you will optimize Databricks workflows for performance, cost-efficiency, and reliability while enforcing data governance, security, and lineage practices. In addition to technical expertise, you will mentor and upskill team members, promote knowledge sharing, and build high-performing, cross-functional teams focused on excellence in data and AI. You will also support the global deployment of AI models, ensuring compliance with regional data sovereignty laws and organizational policies.

The ideal candidate will have at least 5 years of experience architecting and implementing enterprise-grade data and AI/ML solutions, preferably with Databricks, Apache Spark, and cloud platforms such as AWS, Azure, or GCP. You should possess a deep understanding of data lakehouse architectures, ETL/ELT pipelines, data governance, and security best practices. Strong consulting, stakeholder management, and communication skills are essential, along with the ability to drive innovation and lead technology transformations. A Bachelor's or Master's degree in Computer Science, Engineering, or a related field is required.

Desirable attributes include the ability to define architecture vision and roadmaps, a passion for continuous learning and knowledge sharing, and a commitment to delivering measurable business value and exceptional user experiences. This role requires business-hours support during CST hours (5.30pm - 2.30am IST). If you are a forward-thinking individual with a passion for data and AI, and possess the required skills and experience, we encourage you to apply for this exciting opportunity.
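As a sketch of the lakehouse-layer work such an architecture typically standardises, the PySpark snippet below promotes raw bronze records into a typed, de-duplicated silver table. Paths, column names, and the Parquet sink are assumptions; on Databricks the sink would more likely be a governed Delta table.

```python
# Illustrative only: a bronze-to-silver refinement step in a lakehouse layout.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

# Bronze layer: raw events landed as-is (tiny in-memory sample for brevity).
bronze = spark.createDataFrame(
    [("2024-05-01", "sensor-1", "42.0"), ("2024-05-01", "sensor-2", None)],
    ["event_date", "device_id", "reading"],
)

# Silver layer: typed, de-duplicated, and filtered for downstream ML/analytics.
silver = (
    bronze
    .dropDuplicates(["event_date", "device_id"])
    .withColumn("reading", F.col("reading").cast("double"))
    .filter(F.col("reading").isNotNull())
)

# Partition by date so analytics queries and training jobs prune efficiently.
silver.write.mode("overwrite").partitionBy("event_date").parquet("/tmp/silver/readings")
```

Partitioning by event date keeps downstream reads cheap, which is the kind of performance and cost trade-off this architect role is expected to own.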
Posted 1 month ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
We are looking for a skilled Data Engineer to join our team, working on end-to-end data engineering and data science use cases. The ideal candidate will have strong expertise in Python or Scala, Spark (Databricks), and SQL, building scalable and efficient data pipelines on Azure.

Responsibilities include:
- Designing, building, and maintaining scalable ETL/ELT data pipelines using Azure Data Factory, Databricks, and Spark.
- Developing and optimizing data workflows using SQL and Python or Scala for large-scale data processing and transformation.
- Implementing performance tuning and optimization strategies for data pipelines and Spark jobs to ensure efficient data handling.
- Collaborating with data engineers to support feature engineering, model deployment, and end-to-end data engineering workflows.
- Ensuring data quality and integrity by implementing validation, error-handling, and monitoring mechanisms.
- Working with structured and unstructured data using technologies such as Delta Lake and Parquet within a Big Data ecosystem.
- Contributing to MLOps practices, including integrating ML pipelines, managing model versioning, and supporting CI/CD processes.

Primary skills required:
- Proficiency in the Azure data platform (Data Factory, Databricks).
- Strong skills in SQL and either Python or Scala for data manipulation.
- Experience with ETL/ELT pipelines and data transformations.
- Familiarity with Big Data technologies (Spark, Delta Lake, Parquet).
- Expertise in data pipeline optimization and performance tuning.
- Experience in feature engineering and model deployment.
- Strong troubleshooting and problem-solving skills.
- Experience with data quality checks and validation.

Nice-to-have skills:
- Exposure to NLP, time-series forecasting, and anomaly detection.
- Familiarity with data governance frameworks and compliance practices.
- AI/ML basics, including ML and MLOps integration.
- Experience supporting ML pipelines with efficient data workflows.
- Knowledge of MLOps practices (CI/CD, model monitoring, versioning).

At Tesco, we are committed to providing the best for our colleagues. Total Rewards at Tesco are determined by four principles - simple, fair, competitive, and sustainable. Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays. Tesco promotes programs supporting health and wellness, including insurance for colleagues and their families, mental health support, financial coaching, and physical wellbeing facilities on campus.

Tesco in Bengaluru is a multi-disciplinary team serving customers, communities, and the planet. The goal is to create a sustainable competitive advantage for Tesco by standardizing processes, delivering cost savings, enabling agility through technological solutions, and empowering colleagues. The Tesco Technology team consists of over 5,000 experts spread across the UK, Poland, Hungary, the Czech Republic, and India, covering roles across Engineering, Product, Programme, Service Desk and Operations, Systems Engineering, Security & Capability, Data Science, and more.
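The data quality and validation responsibilities above can be illustrated with a lightweight gate run before a batch is published for downstream feature engineering. The thresholds, column names, and output path are assumptions, not Tesco's actual checks.

```python
# Illustrative only: a simple validation gate before data reaches the curated layer.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()

orders = spark.createDataFrame(
    [(1, "2024-05-01", 250.0), (2, "2024-05-01", 180.0), (3, "2024-05-02", 95.5)],
    ["order_id", "order_date", "amount"],
)

total = orders.count()
null_amounts = orders.filter(F.col("amount").isNull()).count()
duplicate_ids = total - orders.dropDuplicates(["order_id"]).count()

# Fail fast so a broken batch never reaches downstream consumers.
if null_amounts / total > 0.10:
    raise ValueError(f"Too many null amounts: {null_amounts}/{total}")
if duplicate_ids > 0:
    raise ValueError(f"Duplicate order_ids found: {duplicate_ids}")

# Checks passed: publish the batch (hypothetical curated path).
orders.write.mode("overwrite").parquet("/tmp/curated/orders")
```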
Posted 1 month ago
10.0 - 14.0 years
0 Lacs
pune, maharashtra
On-site
As a DataOps Engineer, you will play a crucial role within our data engineering team, working at the intersection of software engineering, DevOps, and data analytics. Your primary responsibility will be to create and manage secure, scalable, production-ready data pipelines and infrastructure that support advanced analytics, machine learning, and real-time decision-making for our clients.

Your key duties will include designing, developing, and overseeing the implementation of robust, scalable, and efficient ETL/ELT pipelines using Python and modern DataOps methodologies. You will incorporate data quality checks, pipeline monitoring, and error-handling mechanisms, and build data solutions using cloud-native AWS services such as S3, ECS, Lambda, and CloudWatch.

You will containerize applications using Docker and orchestrate them via Kubernetes for scalable deployments, and use infrastructure-as-code tools and CI/CD pipelines to automate deployments. You will also design and optimize data models using PostgreSQL, Redis, and PGVector, ensuring high-performance storage and retrieval while supporting feature stores and vector-based storage for AI/ML applications.

Beyond the technical work, you will drive Agile ceremonies such as daily stand-ups, sprint planning, and retrospectives to ensure successful sprint delivery. You will review pull requests (PRs), conduct code reviews, and uphold security and performance standards, collaborating with product owners, analysts, and architects to refine user stories and technical requirements.

To excel in this role, you need at least 10 years of experience in Data Engineering, DevOps, or Software Engineering roles with a focus on data products. Proficiency in Python, Docker, Kubernetes, and AWS (particularly S3 and ECS) is essential. Strong knowledge of relational and NoSQL databases such as PostgreSQL and Redis is required, and experience with PGVector is advantageous. A deep understanding of CI/CD pipelines, GitHub workflows, and modern source control practices is crucial, as is experience working in Agile/Scrum environments with excellent collaboration and communication skills. A passion for writing clean, well-documented, scalable code in a collaborative setting, along with familiarity with DataOps principles - automation, testing, monitoring, and deployment of data pipelines - rounds out the profile.
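As an illustration of the pipeline reliability practices described above, the sketch below wraps an extract-and-load step with logging and a simple retry before uploading to S3. The bucket, key, and source function are hypothetical, and boto3 assumes AWS credentials are configured in the environment.

```python
# Illustrative only: an extract-and-load step with basic monitoring and retries.
import csv
import logging
import time

import boto3

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orders_pipeline")

def extract_orders() -> list[dict]:
    # Stand-in for a query against the source system.
    return [{"order_id": 1, "amount": 250.0}, {"order_id": 2, "amount": 95.5}]

def load_to_s3(path: str, bucket: str, key: str, retries: int = 3) -> None:
    s3 = boto3.client("s3")
    for attempt in range(1, retries + 1):
        try:
            s3.upload_file(path, bucket, key)
            log.info("Uploaded %s to s3://%s/%s", path, bucket, key)
            return
        except Exception:  # a real pipeline would catch botocore exceptions narrowly
            log.exception("Upload attempt %d failed", attempt)
            time.sleep(2 ** attempt)
    raise RuntimeError(f"Upload failed after {retries} attempts")

rows = extract_orders()
with open("/tmp/orders.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=["order_id", "amount"])
    writer.writeheader()
    writer.writerows(rows)

# Hypothetical bucket and key; replace with real targets before running.
load_to_s3("/tmp/orders.csv", bucket="example-data-lake", key="raw/orders/orders.csv")
```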
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
haryana
On-site
As a Data Engineering Specialist, you will be responsible for assessing, capturing, and translating complex business issues into structured technical tasks for the data engineering team. This includes designing, building, launching, optimizing, and extending full-stack data and business intelligence solutions. Your role will involve supporting the build-out of big data environments, with a focus on improving data pipelines and data quality, and working with stakeholders to meet business needs.

You will create data access tools for the analytics and data science team, conduct code reviews, assist other developers, and train team members as required. You will also ensure that the systems you develop comply with industry standards and best practices while meeting project requirements.

To excel in this role, you should have a Bachelor's degree in computer science engineering or equivalent, or relevant experience; certification in cloud technologies, especially Azure, would be beneficial. You should have 2-3+ years of development experience building and maintaining ETL/ELT pipelines across a variety of sources, along with experience in operational programming tasks. Experience with Apache data projects or their cloud platform equivalents and proficiency in programming languages such as Python, Scala, R, Java, Golang, Kotlin, C, or C++ is required.

Your work will involve collaborating closely with data scientists, machine learning engineers, and stakeholders to understand requirements and develop data-driven solutions. Troubleshooting, debugging, and resolving issues in generative AI system development, as well as documenting processes, specifications, and training procedures, will also be part of your responsibilities.

In summary, this role requires a strong background in data engineering, proficiency in cloud technologies, experience with data projects and programming languages, and the ability to collaborate effectively with various stakeholders to deliver high-quality data solutions.
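Airflow is one of the Apache data projects commonly used for the ETL/ELT orchestration this role mentions; the minimal DAG below is an illustrative sketch assuming a recent Airflow 2.x installation, not the team's actual pipeline, and the task bodies are placeholders.

```python
# Illustrative only: a two-step daily ELT DAG with hypothetical task names.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull increments from source systems")

def transform_load():
    print("clean, conform, and load into the analytics layer")

with DAG(
    dag_id="daily_sales_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="transform_load", python_callable=transform_load)

    # Extraction must finish before transformation and load begin.
    extract_task >> load_task
```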
Posted 1 month ago